OpenAI has acknowledged that AI-powered browsers, including its ChatGPT Atlas, face persistent security risks from prompt injection attacks. These attacks occur when malicious instructions are hidden in web pages, emails, or documents to manipulate AI behavior.
In a recent blog post, OpenAI explained that prompt injection is unlikely to be fully eliminated. The company noted that ChatGPT Atlas’s “agent mode” increases the security attack surface, making defenses more challenging. OpenAI emphasized that wide permissions given to AI agents can allow malicious content to influence actions, even with safeguards in place.
OpenAI recommends users limit access granted to AI agents and require confirmations before executing actions, such as sending messages or making payments. Providing specific instructions instead of broad authority can also reduce the risk of attacks.
The company compared prompt injection attacks to traditional scams and social engineering, stating these threats represent a long-term security challenge for AI browsers. OpenAI said addressing this issue will require continuous updates and advanced defense strategies.
Since the launch of ChatGPT Atlas in October, researchers have demonstrated that short text snippets or embedded content can alter AI agent behavior. To strengthen security, OpenAI is using an internal “LLM-based automated attacker.” This system simulates hackers by repeatedly testing AI agents with malicious instructions, identifying vulnerabilities faster than human testing alone.
In one case, OpenAI showed how a hidden email instruction caused the AI to send a resignation message instead of an out-of-office reply. Following security updates, Atlas was able to detect and flag the malicious prompt.
In other news read more about: Goodbye Play Store? OpenAI Launches Built-In App Store Inside ChatGPT
OpenAI stressed that while AI browsers provide useful functionality, the balance between convenience and security risks remains delicate. Users are advised to remain cautious and follow recommended safety measures as AI systems continue to evolve.



