Security researchers at Tenable disclosed seven attack paths against OpenAI’s ChatGPT that could quietly siphon data from users’ chat histories and “memories.” Most are forms of indirect prompt injection — malicious instructions hidden in web pages, search results, or links that the model later executes as if they came from the user. OpenAI has shipped partial fixes, but researchers warn the broader class of injection risks won’t be “systematically solved” soon.
What’s new
Tenable’s team detailed seven techniques impacting recent ChatGPT models, including GPT-4o and GPT-5, that enable data exfiltration or safety bypasses without the victim realizing it. Highlights include:
- Trusted-site injection during browsing: Hidden instructions in comments or markup get executed when ChatGPT is asked to summarize a page.
- Zero-click injection via search: Simply asking about a site can trigger…
