In the new edition of SecuritySunday, we dive into seven critical vulnerabilities in ChatGPT that allow attackers to exfiltrate your private data directly from conversations and memories.
Tenable Research has published an analysis revealing seven new vulnerabilities and attack techniques in ChatGPT (including GPT-5). Through indirect prompt injections, attackers can manipulate responses and achieve exfiltration of private data from user memories or conversation context, sometimes without any interaction from the victim.
The described vulnerabilities include:
- Indirect prompt injection through trusted websites
- “0-click” injection via indexed pages in search results
- “1-click” injection through https://chatgpt[.]com/?q=…
- Bypassing the url_safe security checkpoint
- Conversation Injection, where SearchGPT injects a prompt into the response and ChatGPT subsequently obeys it
- Markdown that allows hiding malicious content
- Memory injection, which permanently poisons user memory to ensure persistence
The authors demonstrate complete attack vectors (PoC), for example phishing: an article summary is “poisoned” with a comment and exploits url_safe to display a malicious link. Another scenario involves a hidden prompt in a SearchGPT response that injects ChatGPT instructions, which upon subsequent input renders image beacons and exfiltrates data, or an “ask-only” attack where an indexed malicious page manipulates output without user interaction.
From a security perspective, it’s crucial that separating SearchGPT without user context doesn’t prevent subsequent ChatGPT manipulation through conversation context and memories. The url_safe mechanism can be bypassed via bing.com whitelisting and its tracking redirects, turning links into an exfiltration channel. A bug in code rendering allows content to be hidden from users while still being processed by the model.
According to Tenable, the issues were reported to OpenAI and some have been fixed, however, some PoCs remain functional even in GPT-5 and ChatGPT-4o. Prompt injection is a structural LLM problem that likely cannot be quickly eliminated systemically; the vendor should strengthen security mechanisms (e.g., url_safe) to limit damage caused by these attacks and protect users’ private data.
New LANDFALL Spyware Targeting Samsung
The newly described LANDFALL campaign demonstrates how far the sophistication of mobile espionage operations has advanced. Attackers exploited vulnerability CVE-2025–21042 in the libimagecodec[.]quram[.]so library on Samsung Galaxy devices to deliver modular Android spyware through seemingly innocent image files.
The delivery vector was DNG images distributed via WhatsApp. In practice, these were DNG files with an appended ZIP archive at the end. The exploit then unpacked and executed two key components: the b[.]so loader (internally called Bridge Head) and l[.]so, which modifies SELinux policy to gain elevated privileges and ensure persistence. After establishing an HTTPS connection, the C2 infrastructure handled downloading additional modules, turning LANDFALL into a full-featured eavesdropping and data exfiltration tool.
The targeting was highly selective. Specific Galaxy model series appear in the code — S22, S23, S24, Z Fold 4, and Z Flip 4 — suggesting attackers invested in precise compatibility tuning.
LANDFALL fits into the current trend of weaponized images. This year we’ve seen similar attack vectors on Apple platforms (CVE-2025–43300 in ImageIO) combined with a WhatsApp vulnerability (CVE-2025–55177).
Google Reveals PROMPTFLUX Malware That Uses Gemini AI to Continuously Rewrite Its Code
Google GTIG (Google Threat Intelligence Group) described an experimental VBScript dropper called PROMPTFLUX, which uses a hardcoded key to call Gemini 1.5 Flash and has it generate obfuscated variants of its code to bypass signature detection.
“PROMPTFLUX is written in VB Script and communicates with the Gemini API to perform self-modification, likely for the purpose of evading static detection,” stated the Google Threat Intelligence Group (GTIG).
The prompt sent to the model is very specific. The attacker requests VB Script code modifications to bypass antivirus programs and instructs the model to generate only the code itself.
“Although the self-modification function (AttemptToUpdateSelf) is commented out, its presence combined with active logging of AI responses to the file ‘%TEMP%\thinking_robot_log.txt’ clearly indicates the author’s intent to create a metamorphic script that can evolve over time,” Google added.
Vulnerabilities in Microsoft Teams Allow Attackers to Impersonate Colleagues
Researchers from Check Point described four vulnerabilities in Teams that allowed attackers to modify message content without an “Edited” label, manipulate notifications to appear as if from a different sender, change display names in private chats, and spoof caller identity during audio/video calls.
“Together, these vulnerabilities demonstrate how attackers can undermine fundamental trust and transform Teams from a business enabler into a vector for fraud,” Check Point stated.
Microsoft described this flaw as CVE-2024–38197 (CVSS score 6.5) — a spoofing issue affecting the Teams app for iOS that could allow an attacker to change the sender name of a message in the Teams app and potentially trick them into disclosing sensitive information through social engineering.
“Our research shows that attackers no longer need to penetrate systems; they only need to disrupt trust. Organizations must now protect what people believe, not just what systems process. Seeing is no longer believing; believing means verifying,” Check Point concluded.
