Nearly two-thirds (62%) of organizations have experienced a deepfake attack in the past 12 months, according to a new Gartner survey.
These deepfake attacks encompass either social engineering, impersonating someone during a video or audio call with an employee or exploiting automated verification, such as face or voice biometrics.
Akif Khan, senior director at Gartner Research, told Infosecurity that continuous improvements in deepfake technologies mean such threats are only going to grow.
Need for Deepfake Detection Integrated in Everyday Tools
Khan said the most pervasive technique currently in this space is deepfakes being combined with social engineering, such as impersonating an executive to get an employee to transfer a large sum of money into an attacker-controlled account.
“That’s trickier because social engineering is a perpetually reliable thing for attackers to use. When you throw deepfakes in there your employees really are on the frontline of trying to spot something is unusual. You can’t just rely on automated defenses to protect you,” Khan explained.
To protect against this threat, Khan urged organizations to consider emerging technical solutions, in which vendors can bake deepfake detection into tools such as Microsoft Teams or Zoom.
“That’s relatively new, there are not many large-scale production deployments so it still remains to be seen how effective that can really be once its operationalized in an environment,” he cautioned.
In the immediate term, Khan said that some organizations have implemented effective awareness training programs specifically around deepfakes. This includes creating deepfakes of company executives and using them in simulations to employees.
Another aspect is reviewing current business processes in areas such as payment approvals. Khan advised putting in place authorization at the application level to ensure these attacks are detected.
“This means the CFO can phone and ask you to transfer some money, the payment can be set up in the finance application, but then the CFO needs to log on to the finance application, ideally with phishing-resistant multi-factor authentication (MFA), and actually authorize that transaction,” Khan noted.
Targeting of AI Applications
The report, published during the Gartner Security & Risk Management Summit 2025, also found that 32% of organizations have experienced an attack on AI applications leveraging the application prompt in the past 12 months.
These attacks include prompt injection – where attackers generate large language models (LLMs) into generating biased or malicious output.

Khan told Infosecurity that the results are consistent with the conversations Gartner has had with clients around attacks on AI applications.
“Around two-thirds said that they haven’t experienced any attacks – that is a useful sanity check that this is a threat but is it the biggest threat that organizations face? No it’s not. But it is one that they do need to take seriously because we do have approximately 5% of respondents saying they have had a major incident,” he commented.
Khan advised security leaders to focus on several areas to protect AI applications, including shadow AI and how access is managed to company approved or developed tools.
Gartner surveyed 302 cybersecurity leaders in North America, EMEA and Asia/Pacific for the report.