A critical vulnerability chain in Salesforce’s AI-powered AgentForce platform has been discovered by cybersecurity researchers.
The flaw, known as ForcedLeak, carried a severity score of 9.4 and could have allowed attackers to steal sensitive CRM data through indirect prompt injection.
Salesforce has since patched the issue by enforcing Trusted URLs and re-securing an expired domain that attackers could have exploited.
Noma Security, which identified the problem, said the findings highlight how AI agents present an expanded attack surface compared to traditional chatbots.
Understanding the ForcedLeak Vulnerability
Unlike conventional prompt-response systems, AI agents such as AgentForce operate with autonomy. They can plan, execute, and respond based on multiple inputs without human oversight.
Noma Security showed how attackers could embed malicious instructions in Salesforce’s Web-to-Lead forms, stored as customer data. When employees later queried AgentForce, the system processed both legitimate requests and the hidden malicious commands.
“Indirect Prompt Injection is basically cross-site scripting, but instead of tricking a database into doing or divulging things it shouldn’t, the attackers get the inline AI to do it,” Andy Bennett, CISO at Apollo Information Systems, said. “It is like a mix of scripted attacks and social engineering.”
Read more on AI security governance: Why Shadow AI Is the Next Big Governance Challenge for CISOs
The research also found that Salesforce’s Content Security Policy whitelist included an expired domain. Attackers could purchase it cheaply and then use it to exfiltrate CRM data, such as customer contact information, sales pipeline details, and internal communications.
“It’s advisable to secure the systems around the AI agents in use, which include APIs, forms, and middleware, so that prompt injection is harder to exploit and less harmful if it succeeds,” Chrissa Constantine, senior cybersecurity solution architect at Black Duck, said.
Recommended Safeguards
Organizations using Salesforce AgentForce with Web-to-Lead enabled should also:
-
Apply Salesforce patches to enforce Trusted URLs for AgentForce and Einstein AI immediately
-
Audit existing lead data for suspicious submissions containing unusual instructions
-
Enforce strict tool-calling security guardrails and detect prompt injection in real-time
As Bennett noted, AI-driven attacks can move “at the speed of a machine,” making damage both faster and more extensive.
The ForcedLeak disclosure serves as a reminder that businesses adopting autonomous AI must prioritize security governance, continuous testing and strict controls to protect against evolving threats.