← All Resources

OpenAI Admits AI Browsers Are Fundamentally Broken

OpenAI just admitted what we already knew: AI browsers are fundamentally broken -- and detection alone can't save you. Prevention is the only answer.

The Admission

OpenAI's December 22, 2025 admission was blunt:

"Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully solved." [1]

Agentic AI systems are fundamentally vulnerable.

The Attack That Breaks Everything

Prompt injection embeds malicious commands in ordinary content -- invisible to humans, executable by AI. [3] OpenAI's own example: a poisoned email makes an agent send a resignation letter when asked for an out-of-office message. [4]

Researchers replicated this across Google Docs and email within days of Atlas's launch. [5] Brave confirmed the vulnerability is systemic, affecting every agentic browser including Perplexity's Comet. [6]

Why This Can't Be Fixed

The web's 30-year security model is simple: browsers render, users act. Content is untrusted. Actions require explicit consent.

Agentic browsers destroy this model. They read content, interpret intent, and act autonomously -- across email, documents, payments, corporate systems. George Chalhoub from UCL captured the core problem:

Prompt injection "collapses the boundary between data and instructions," transforming agents "from a helpful tool to a potential attack vector." [8]

Why Detection Alone Isn't Enough

Legacy security -- SIEM, EDR, anomaly detection -- can't recognize attacks written in plain English. By the time these systems detect suspicious behavior, the AI has already parsed the content, interpreted the instruction, and executed the action. [13]

The Bottom Line

OpenAI admitted it. The UK NCSC confirmed it. Researchers demonstrated it across every major platform. Prompt injection isn't a bug to be fixed -- it's a fundamental property of systems that interpret natural language as instructions.

Traditional security models don't apply. Prevention must lead.

Prevention First Philosophy

At Salience Cyber, we built the Cognition AI Engine to operate at the browser and system data planes -- the layers where content becomes action.

The Salience Cyber Cognition AI Engine stops attacks before execution, without human intervention. Operating at the point where AI agents consume content -- clipboard operations, page rendering, document parsing -- we block adversarial patterns before they reach the AI's decision-making process.

This is prevention-first by design:

  • Smaller Attack Surface -- Agentless architecture operating external to the AI with deterministic logic
  • Less Noise -- Eliminate false positives from detection-first systems trying to parse natural language attacks
  • Leaner Stack -- No SIEM correlation, no EDR alerts, no post-breach forensics -- just prevention at the source

Traditional security stacks detect malicious actions. We prevent malicious content from ever reaching the AI. When the attack is semantic, there's no second chance. By the time detection fires, the damage is done.

We stop attacks before they start.

The era of agentic AI is inevitable. Insecure agentic AI is not.

Learn more at saliencecyber.ai.


References

  1. OpenAI. "Continuously hardening ChatGPT Atlas against prompt injection." OpenAI Blog, 22 Dec 2025.
  2. Bellan, Rebecca. "OpenAI says AI browsers may always be vulnerable to prompt injection attacks." TechCrunch, 22 Dec 2025.
  3. OpenAI. "Continuously hardening ChatGPT Atlas against prompt injection" (section on prompt injection and content-layer attacks). OpenAI Blog, 22 Dec 2025.
  4. OpenAI. "Continuously hardening ChatGPT Atlas against prompt injection" (resignation email demonstration). OpenAI Blog, 22 Dec 2025.
  5. The Register. Coverage of prompt injection and early ChatGPT Atlas agent demos, Oct 2025.
  6. Brave. "Agentic Browser Security: Indirect Prompt Injection in Perplexity Comet." Brave Blog, Aug 2025.
  7. Fortune / AOL. Security feature including Charlie Eriksen (Aikido Security) on AI browser risk, Dec 2025.
  8. Times of India. Coverage quoting George Chalhoub (UCL Interaction Centre) on prompt injection and AI agents, Oct 2025.
  9. UK National Cyber Security Centre. "Prompt injection attacks against generative AI." NCSC Advisory, Dec 2025.
  10. OpenAI. "Continuously hardening ChatGPT Atlas against prompt injection" (LLM-based automated attacker and RL training). OpenAI Blog, 22 Dec 2025.
  11. OpenAI. "Continuously hardening ChatGPT Atlas against prompt injection" (demo of malicious email leading to resignation letter). OpenAI Blog, 22 Dec 2025.
  12. Bellan, Rebecca. "OpenAI says AI browsers may always be vulnerable to prompt injection attacks" (analysis of structural browser risk). TechCrunch, 22 Dec 2025.
  13. UK National Cyber Security Centre. "Prompt injection attacks against generative AI" (risk may never be fully mitigated; focus on likelihood/impact reduction). NCSC Advisory, Dec 2025.
  14. Google. "User Alignment Critic for agentic systems" (blog post on detecting misaligned agent behavior in AI agents). Google Blog, 2025.
  15. Synthesis of OpenAI security communications on Atlas, UK NCSC prompt injection advisory, and Brave's "Security & Privacy in Agentic Browsing" series and related prompt-injection research, 2025.