How ChatGPT Atlas Might Steal Your Credit Card Number

On 21 October 2025, OpenAI released ChatGPT Atlas. Which is an Agentic Browser that integrates AI to your browser. The AI will assist on automation, summarize websites, add even add items to your list of ecommerce carts and more. Currently, ChatGPT Atlas are only accessible from macOS.

Preview of Atlas

Hours after the launch of ChatGPT Atlas, Brave — the company behind the privacy-focused Brave browser launched a blog highlighting a critical vulnerability affecting AI-powered browsers like Perplexity’s Comet Browser.

The issue lies in what’s known as a prompt injection — a malicious piece of code/ command that tricks an AI assistant embedded in your browser into executing unintended commands. That could mean exposing sensitive data, manipulating browser actions, or even stealing payment information stored in autofill or session cookies.Brave Post Issuing Perplexity Comet Brower Security VulnerabilitiesBrave Post Issuing Perplexity Comet Brower Security Vulnerabilities

Brave Post on X Highlighting Perplexity Comet Browser Security Vulnerabilites

Prompt Injections

Unlike traditional browsers, Agentic Browsers like ChatGPT Atlas can interpret natural language and perform automated actions — a capability that attackers can exploit through cleverly disguised website content. As Brave researchers put it:

“Indirect prompt injections are a systemic problem facing Comet and other AI-powered browsers.”

if an AI agent inside your browser can read and act on webpage content, then malicious sites can talk directly to your AI — and that’s where things get dangerous.

What makes these attacks especially concerning is their invisibility. Unlike malware downloads or phishing URLs, users see no obvious indicators of compromise. The malicious “payload” is embedded in seemingly harmless web text that only the AI agent processes. Furthermore, because the AI’s internal decision-making is non-transparent, users have no audit trail of what instructions were executed or what data was exposed. This lack of observability turns agentic browsers into potential black boxes of security risk.

How the Attack Works

Setup: An attacker hides malicious instructions inside website content using methods like white text on white backgrounds, invisible HTML comments, or encoded metadata. They may also inject prompts into user-generated content — for instance, Reddit comments or Facebook posts.

Trigger: An unsuspecting user visits the infected webpage and interacts with the browser’s AI assistant — for example, by clicking “Summarize this page” or asking the AI to extract key insights.

Injection: As the AI parses the page, it encounters the hidden malicious text. Because the AI cannot distinguish between genuine content and injected commands, it interprets everything as part of the user’s request.

Exploit: The injected commands then instruct the AI to act maliciously — such as opening the user’s banking site, reading saved passwords, or sending sensitive information to an attacker-controlled server.

This chain of events is known as an indirect prompt injection, because the attack originates from external content (like a webpage or document) that the AI processes while fulfilling a legitimate request.

I suggest reading more about the attack demonstration in Brave’s blog.

Mitigations

To mitigate this, researchers suggest some key mitigations that future systems like ChatGPT Atlas or any Agentic Browsers must implement:

  1. Separate User Intent from Website Content: The browser must clearly distinguish between the user’s trusted commands and the untrusted contents of a webpage when sending context to the AI backend. Page content should always be treated as untrusted input. Even if the model processes both, its resulting output must be handled as potentially unsafe until verified.
  2. Validate Model Outputs for User Alignment: Every AI-generated action should be independently checked against the user’s original intent. If the model suggests an operation that was not explicitly requested — like submitting data or navigating to a new domain — that action should be blocked or require confirmation. This enforces a boundary between “user-trusted” and “model-derived” instructions.
  3. Require Explicit User Interaction for Sensitive Tasks: Security and privacy-critical actions — such as sending emails, transferring data, or bypassing HTTPS errors — must always require direct user confirmation. Regardless of the AI’s internal plan, the final step in executing sensitive actions should be a human-in-the-loop safeguard.
  4. Isolate Agentic Mode from Regular Browsing: Agentic browsing should exist as a distinct, opt-in mode that users cannot accidentally enter. Everyday browsing does not need agentic privileges to access your email, read private data, or perform autonomous actions. By isolating these environments and limiting permissions, the browser can prevent inadvertent exposure of sensitive information.

These four measures together form the foundation of agentic security design — enforcing strict contextual boundaries, verifying intent alignment, and embedding consent at every decision layer.

Public Sentiment

The public response on the new ChatGPT Atlas are skeptical about the precense of Agentic Browser due to security vulnerabilities. Right now, it looks like it mostly depends of the user to carefully watch what agent mode is doing at all times. If you’re an early adopter of ChatGPT Agent or any Agentic Browsers, here’s what you as a user can do to stay safe:

  1. Avoid linking sensitive accounts (e.g., banking, e-commerce, or crypto wallets) until security improvements are confirmed.
  2. Disable automatic AI actions that can modify browser data or fill forms (e.g. payment forms).
  3. Think before you click — remember, prompt injections often hide in everyday web pages or even ads.

Summary

OpenAI’s new ChatGPT Atlas introduces a groundbreaking “Agentic Browser” that allows AI to automate tasks, summarize content, and perform actions online through natural language. But according to Brave researchers, this innovation also opens the door to a new kind of cyber threat — prompt injection attacks.

Unlike traditional exploits that inject malicious code, prompt injections manipulate LLMs. A malicious website can embed hidden text that tricks the AI into leaking sensitive data, executing commands, or bypassing browser safeguards.

To address these vulnerabilities, researchers propose four critical defenses:

  1. Separate user intent from webpage content to prevent confusion between trusted and untrusted data.
  2. Validate AI outputs to ensure they align with the user’s actual intent.
  3. Require user confirmation before performing any sensitive action.
  4. Isolate agentic mode so users cannot accidentally give the AI unrestricted control.

Until these safeguards mature, experts urge early adopters to act cautiously — avoid linking sensitive accounts, disable automatic AI actions, and remain cautious while browsing.

Source:

Leave a Reply