How ChatGPT Could Leak Your Private Email Data

Beware of email injection attacks when enabling developer mode and using MCP in ChatGPT.

How ChatGPT’s MCP Could Leak Your Private Data
Beware of the potential risks and danger of using MCP in ChatGPT. Image by

In an experiment conducted by Eito Miyamura from the University of Oxford, it was demonstrated that ChatGPT’s recently released Model Context Protocol (MCP) can be exploited to leak sensitive information to an attacker.

What’s even more concerning is that all the attacker needs is the victim’s email address.

The attack works by sending a calendar invite that contains malicious instructions. Once ChatGPT processes the invite, those instructions are executed automatically, giving the attacker unauthorized access to the victim’s email inbox. This attack is called prompt-injection, which could lead to the exposure of highly sensitive data such as financial reports, corporate trade secrets, bank account details, or even stored passwords.

I’ll break down the full attack process shortly, but first, let’s briefly cover what MCP is.

MCP was introduced by Anthropic in late 2024 as an open protocol that allows large language models (LLMs) to connect and interact with external applications, data sources, and tools. In practice, this means ChatGPT can now directly access and interact with your personal accounts on Gmail, Google…

Leave a Reply