Dive Brief:
- Corporate cybersecurity leaders believe AI will be essential to their missions, but few have so far seen big gains from agentic security products, according to a new EY survey.
- With AI governance dominating C-suite agendas, the survey released on Thursday found that companies are making progress in integrating risk management frameworks into their operations, even if those ways of thinking have yet to fully permeate corporate cultures.
- The survey findings prompted EY to make four high-level recommendations to businesses still deciding how to adopt and use AI for cybersecurity.
Dive Insight:
Businesses are avidly pursuing the automation of routine cybersecurity functions, EY found, believing that doing so will address both budgetary and effectiveness concerns around security operations.
“Nearly all security leaders believe AI is a core defensive solution for cybersecurity (96%) and are already deploying AI in cybersecurity operations (95%),” EY said. But two-thirds of executives said they were still testing AI products.
Cybersecurity leaders are at once optimistic about AI and wary of it. Nearly all respondents (99%) predicted that AI would completely overhaul how they defended their networks, but a similarly large number (96%) said AI also posed a major threat because it helped hackers launch fast, sophisticated cyberattacks.
The EY survey involved interviews with 500 senior cybersecurity leaders at companies with at least $500 million in annual revenues, in industries ranging from energy to financial services to healthcare.
On the agentic AI front, EY said the survey “paint[s] a picture of gains not yet realized.” Roughly half of executives using AI for cybersecurity said agentic tools had yielded a return of less than $1 million, and another 12% either didn’t experience or didn’t track returns on investment from agentic AI.
Still, cybersecurity leaders expect AI to soon take over many of their teams’ functions. Topping the list are detecting advanced persistent threats (62% of respondents expected this to happen within the next two years), detecting fraud (58%) and overseeing identity and access management (51%).
Robust governance frameworks could make the difference between success and failure in this gradual handoff to AI, and companies appear to be taking the governance issue seriously. Roughly half of executives told EY that they had already begun implementing governance mechanisms in key AI activities. Still, only 26% said they had fully integrated those governance processes into the work of the business units relying on AI, and only 20% said that governance mindset was embedded in their organizational culture.
The survey did, however, find widespread appreciation for the importance of governance mechanisms. Ninety-seven percent of executives said governance was “essential” to getting value out of AI cybersecurity investments.
Along with governance, human oversight will keep AI from overstepping its bounds and jeopardizing companies’ cybersecurity. Eighty-five percent of executives said they had human-in-the-loop requirements for all major cybersecurity decisions, and nearly all respondents (98%) said agentic tools needed human oversight to pay off.
At the same time, companies don’t have enough employees who can perform this oversight. Among the executives EY surveyed, 90% said they had trouble recruiting and retaining cybersecurity workers who could manage AI products. Roughly the same percentage said their company’s biggest liability was employees who weren’t ready to repel AI-powered cyberattacks.
EY called that “a stark indication that AI does not reduce human risk when the workforce lacks the knowledge to govern it appropriately.”
Given the findings in the survey, EY said companies needed to understand four important realities: budgetary constraints make AI a virtual necessity in cybersecurity; AI’s return on investment depends on companies moving beyond “task-level automation” to fully agentic operations; human oversight of AI is “nonnegotiable”; and strong governance processes underpin trustworthy AI.