As organizations automate more of their environments, channel partners are finding that identity management now serves as the control plane for modern data security, with AI expanding reach and increasing risk.
Separate research released last month by Netwrix and the Cloud Security Alliance points to the same issue: As organizations adopt agentic AI and automate workflows at scale, identity systems no longer act merely as gatekeepers for users. Rather, they now govern how machines, agents, tokens and automated processes access sensitive data continuously and often autonomously.
That convergence is reshaping both attacker behavior and defensive priorities in ways that channel partners must account for when assessing, monitoring and managing customer environments.
From credentials to orchestration
Netwrix’s security outlook forecasts that between 2026 and 2029, attackers will target identity orchestration itself — including federation trust relationships, automated provisioning workflows and misconfigured access logic — instead of stealing individual credentials.
That’s because identity automation now directly determines who or what can access sensitive data. A failure in identity governance is no longer just an authentication problem; it opens the door to data exposure.
Netwrix’s findings dovetail closely with the CSA’s research into non-human identities (NHIs). According to the CSA survey commissioned by Oasis Security, 78% of organizations lack formal policies for creating or removing AI-related identities, while 92% lack confidence that their legacy IAM systems can manage AI-driven access risk at all.
In other words, identity sprawl is accelerating faster than governance models can keep up.
AI increases speed, not autonomy
Despite widespread concern about autonomous AI-driven cyberattacks, both reports suggest the near-term risk is not only more mundane but also more dangerous.
State-sponsored groups are using selective autonomy that partially delegates decisions to AI, Netwrix found. However, most self-directing attack systems remain human-supervised, susceptible to misleading feedback and operationally unreliable.
“While AI is already influencing cybersecurity in meaningful ways, fully autonomous, self-directing AI-driven cyberattacks are unlikely to become a dominant threat in 2026,” researchers wrote.
Instead, attackers are using AI to accelerate familiar techniques such as reconnaissance, impersonation, workflow abuse and privilege escalation.
CSA’s data support this conclusion. More than three-quarters of respondents rated their ability to prevent attacks via NHIs as low or moderate, largely because manual processes and slow remediation can’t keep pace with machine-speed identity creation. Nearly one-quarter of organizations take more than 24 hours to revoke exposed credentials and 30% take more than a day to triage a high-severity credential leak — an eternity in automated environments.
In short, AI is not replacing humans in attacks. Yet. For now, it’s compressing timelines until traditional controls break, a problem that points to governance as the dominant failure point.
Governance gaps
Alongside widespread over-permissioning, CSA found that more than half of organizations report no clear ownership for AI identities. Only 14% of respondents said they have fully automated lifecycle management for AI-related credentials, leaving a large share of identity governance reliant on spreadsheets or ad hoc workflows.
Similarly, Netwrix predicts that, by 2027, AI agents will routinely span identity systems and data environments that were previously managed separately. Static policies and siloed controls won’t survive that convergence.
For managed service providers, these shifts are visible on the ground. Brian F. Ricci, president of MSP Pinnacle Computer Services, said identity and data risks increasingly surface through automated workflows rather than traditional perimeter failures.
“Our clients in healthcare and financial services can’t afford a breach,” Ricci told Channel Dive.
As such, Pinnacle and similar providers are being asked to identify problematic permissions, shadow data and misconfigurations continuously, not just during quarterly audits, Ricci said. The ability to monitor identity-driven access across hybrid environments is becoming central to how MSPs demonstrate value and reduce client exposure.
Defensive AI
Other channel organizations are responding by using AI defensively, both to detect abnormal activity earlier and to manage the operational complexity created by identity sprawl.
Josh Lee, a cybersecurity specialist at Kansas Banker Technologies, said his team uses AI across identity, email and endpoint security to surface anomalies and automate containment before threats escalate. Plus, he said, “we’re actively developing custom AI tools to analyze operational data and help us improve processes, enhance service quality and deliver even stronger security outcomes for our clients.”
The key for channel partners is to recognize that AI can exacerbate weaknesses in existing security systems. Identity sprawl, unclear ownership, slow remediation and fragmented governance no longer qualify as manageable inconveniences. In AI-driven environments, where identities are created and exercised at machine speed, they become direct paths to exposure.
“The threat landscape isn’t only expanding because attackers suddenly have better tools,” said Dirk Schrader, VP of security research at Netwrix. “It’s also expanding because identity security, data security and automation are becoming inseparable. Our research team sees firsthand how misconfigurations and automated workflows create real exposure. Organizations that succeed will be the ones that govern identity and data security together and treat automation as something to be continuously validated, not blindly trusted.”