A vulnerability in ChatGPT Deep Research agent allows an attacker to request the agent to leak sensitive Gmail inbox data with a single crafted email, according to Radware.
Deep Research is an autonomous research mode launched by OpenAI in February 2025.
“You give it a prompt and ChatGPT will find, analyze and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst,” is the promise made by the company with this mode.
On September 18, three researchers at Radware shared findings of a new zero-click vulnerability in OpenAI’s Deep Research when the function is connected to Gmail and the user requests sources from the web.
The vulnerability, dubbed ‘ShadowLeak’ by the researchers, allows service-side exfiltration, meaning that a successful attack chain leaks data directly from OpenAI’s cloud infrastructure, making it invisible to local or enterprise defenses.
The attack uses indirect prompt injection techniques by embedding hidden commands in email HTML using techniques like white-on-white text or microscopic fonts, so users remain unaware while the Deep Research agent executes them.
Unlike previous client-side exfiltration attacks (such as AgentFlayer and EchoLeak), which relied on the agent rendering attacker-controlled content in the user’s interface, this service-side leak occurs entirely within OpenAI’s cloud.
The agent’s autonomous browsing tool executes the exfiltration without any client involvement, expanding the threat surface by exploiting backend execution rather than frontend rendering.
ShadowLeak’s Attack Chain
Here’s the breakdown of a successful ShadowLeak attack chain, where the attacker is trying to collect personally identifiable information (PII) from their victim:
- The attacker sends the victim an innocent-looking email with hidden instructions requesting an agent to find the victim’s full name and address in the inbox and open a “public employee lookup URL” with those values as a parameter – with the URL really pointing to an attacker-controlled server
- The victim asks the Deep Research agent to process information and perform tasks from accessing their emails – not knowing that one of their emails contains hidden instructions the agent will detect and possibly follow
- The Deep Research agent processes the attacker’s email, initiates access to the attacker domain and injects the PII into the URL as directed – all this without user confirmation and without rendering anything in the user interface
The Radware researchers noted that it took a long trial-and-error phase with may iterations to craft a malicious email that triggered the Deep Research agent to inject PII into the malicious URL.
For instance, they had to disguise the request as legitimate user requests, force Deep Research to use specific tools, such as browser.open(), which allowed it to make direct HTTP requests, instruct the agent to “retry several times” and instruct the agent to encode the extracted PII into Base64 before appending it to the URL.
Once all these tricks were used, the researchers achieved a 100% success rate in exfiltrating Gmail data using the ShadowLeak method.

Mitigating Service-Side AI Agent Threats
According to Radware, organizations can partially mitigate risks by sanitizing emails before agent processing, removing hidden CSS, obfuscated text and malicious HTML. However, they noted that this measure offers limited protection against attacks that manipulate the agent itself.
A stronger defense is real-time behavior monitoring, where the agent’s actions and inferred intent are continuously checked against the user’s original request. Any deviation, such as unauthorized data exfiltration, can then be detected and blocked before execution.
The Radware researchers reported their findings to OpenAI via the Bugcrowd platform in June 2025.
In August, Radware noted that OpenAI silently fixed the vulnerability. In early September, OpenAI acknowledged the vulnerability and marked it as resolved.