UK Firms Lose Average of £2.9m to AI Risk

British businesses have been urged to prioritize AI governance when adopting the technology in new projects, after new data from EY revealed the average company has lost millions due to unmanaged risks.

The consulting giant polled 100 UK firms as part of a global Responsible AI (RAI) Pulse survey, which was compiled from interviews with 975 C-suite leaders across 21 countries.

Almost all (98%) UK respondents reported losses over the past year due to AI-related risks, with 55% claiming that such risks cost them over $1m (£750,000). The most common risks included regulatory non-compliance (57%), inaccurate or poor-quality training data (53%) and high energy usage impacting sustainability goals (52%).

Average losses were estimated at $3.9m (£2.9m) per company.

The report identified several areas where UK companies could be doing better on governance. For one thing, only 17% of C-suite respondents were able to correctly identify the appropriate controls to manage specific risks such as non-compliance, poor-quality data or cybersecurity vulnerabilities.

The study identified another potential risk factor: almost two-thirds (64%) of those polled allow “citizen developers” – regular employees who independently create and deploy AI agents – to operate in their organization, yet just over half (53%) of these firms had formal policies in place to ensure responsible AI.

Just 34% of HR teams in responding companies said they had begun developing a strategy for managing a human/AI workforce, EY claimed.

Carving Out Competitive Advantage

Matthew Ringelheim, EY UK&I AI & data leader, argued that companies that view responsible AI as a strategic advantage rather than an extra cost will end up leading the pack.

“They will build trust both within and beyond their organization and accelerate speed to market – bringing the latest technologies into production ahead of their competitors,” he added.

“As organizations continue to navigate the complexities of AI integration, prioritizing responsible governance will be essential for driving sustainable growth and maintaining a competitive edge in the market.”

The good news for the organizations surveyed is that 81% claimed to have continuous monitoring in place to ensure AI processes and models stick to responsible AI principles. A similar share said they have incident escalation procedures in place in case an agent behaves unexpectedly.

UK respondents with an oversight committee for their AI efforts reported 35% more revenue growth, a 40% increase in cost savings and a 40% rise in employee satisfaction, the report claimed.

EY recommended that C-suite leaders:

  • Adopt a comprehensive approach to responsible AI – including articulating and communicating the key principles of such an approach, executing them with controls, KPIs and training, and establishing effective governance
  • Improve C-suite knowledge and awareness of responsible AI with targeted training, especially in areas such as appropriate safeguards
  • Identify and manage agentic AI risks by adopting appropriate policies and putting governance and monitoring in place
