A newly discovered security flaw allows attackers to inject malicious instructions into ChatGPT’s memory, potentially leading to remote code execution, data theft, or system compromise. The issue stems from a Cross-Site Request Forgery (CSRF) exploit that takes advantage of a user’s active ChatGPT session.
How the Exploit Works
When a logged in ChatGPT user clicks a malicious link, the attacker’s page can send a hidden CSRF request using the user’s ChatGPT credentials. This request secretly writes harmful instructions into ChatGPT’s memory.
Later, when the user interacts with ChatGPT, these tainted memories can trigger malicious actions such as generating compromised code, leaking sensitive data, or fetching and executing remote scripts.
Because ChatGPT’s memory is persistent and synced across devices, a single infected account can stay compromised across browsers, workstations, and personal devices.
Atlas Browser: The Weak Point
While this vulnerability can affect any ChatGPT user, it poses the greatest risk to those using OpenAI’s ChatGPT Atlas browser. By default, Atlas keeps users logged in to ChatGPT and currently lacks strong anti phishing protections.
