New KuppingerCole Analysts research, commissioned by Ping Identity, defines how enterprises can govern AI agents at runtime to close emerging authorization gaps
DENVER, April 28, 2026 /PRNewswire/ -- Ping Identity, a leader in securing digital identities for the world's largest enterprises, today announced new Ping Identity-commissioned research from KuppingerCole Analysts. The research finds that AI agents are being deployed into production faster than enterprises can govern them, exposing gaps in identity systems designed for human users.
The report, From AI Agents to Trusted Digital Workers, highlights the challenges of governing AI agents and identifies a critical failure mode emerging in enterprise identity systems as these agents operate at runtime, beyond the reach of traditional access controls.
As organizations move AI agents into production environments, the focus is shifting from managing identity to controlling how identities act across systems, data, and workflows. Identity systems originally designed for human interaction are now being pushed to operate continuously, increasing pressure on existing models and exposing gaps in governance, visibility, and accountability at the moment decisions are executed.
"Enterprises are deploying autonomous AI faster than they can govern it," said Andre Durand, CEO & Founder, Ping Identity. "Identity remains foundational, but in an agentic environment it must operate continuously. Control must be enforced at the moment an action occurs."
Where Traditional Identity Models Break Down for AI Agents
The research describes a failure mode in which AI agents combine individually legitimate permissions in unintended ways, resulting in actions that bypass established controls and cannot be fully traced or governed. This failure mode represents a new class of identity risk in environments where AI agents operate autonomously across enterprise systems.
As the industry is quickly learning, access grants permission. It does not enforce control.
With AI adoption accelerating, organizations face new challenges:
- Delegation opacity and sub-agent spawning, where agent chains become untraceable and break auditability
- Implicit human assumptions in IAM, as OAuth and OIDC models rely on human decision-makers that agents bypass
- Context leakage across systems without continuous re-evaluation of authorization
- New questions around permission inheritance, liability, and enforcement in agent-to-agent interactions
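The delegation-opacity problem above can be illustrated with a minimal sketch. This is not Ping Identity's or KuppingerCole's design; the `AgentToken` class and its fields are hypothetical, loosely inspired by the actor-chain idea in OAuth 2.0 Token Exchange (RFC 8693). The point is only that when each sub-agent spawn appends to a delegation chain instead of minting an opaque new identity, every action remains traceable to the original human principal:

```python
from dataclasses import dataclass, field

@dataclass
class AgentToken:
    """Hypothetical token that carries the full delegation chain,
    so sub-agent actions stay traceable to the originating principal."""
    principal: str                              # human who initiated the workflow
    chain: list = field(default_factory=list)   # agents the request passed through

    def delegate(self, sub_agent: str) -> "AgentToken":
        # Each spawn extends the chain rather than replacing the identity
        return AgentToken(self.principal, self.chain + [sub_agent])

def audit_line(token: AgentToken, action: str) -> str:
    """Render one auditable record of who (via whom) did what."""
    path = " -> ".join([token.principal] + token.chain)
    return f"{path} performed {action}"

# A human delegates to an agent, which spawns a sub-agent:
t = AgentToken("alice").delegate("report-agent").delegate("db-query-agent")
print(audit_line(t, "read:sales_db"))
# -> alice -> report-agent -> db-query-agent performed read:sales_db
```

Without a chained record like this, the report notes, agent chains become untraceable and auditability breaks at the first hand-off.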
The Risk Is Already Materializing
Independent research from KuppingerCole Analysts reinforces the urgency of governing AI-driven identity interactions, noting that AI agents already interact across enterprise identity systems while many IAM approaches remain focused on users and controlled environments.
The research also highlights measurable risk, citing findings from IBM's 2025 Cost of a Data Breach report that show:
- 13% of organizations have experienced AI-related security breaches
- 97% of those breached organizations lacked proper access controls for AI systems
Recent incidents, including enterprise data leaks and prompt injection attacks, demonstrate how gaps in AI governance are already being exploited in real-world environments.
Despite these risks, most identity and access management approaches remain centered on human users and static access decisions, leaving organizations unprepared to govern autonomous systems.
A Framework for Governing Autonomous AI
To address these challenges, KuppingerCole Analysts outlines an independent blueprint for controlling autonomous AI. The approach is grounded in identity, policy-based authorization, governance and oversight, and accountability, extending identity and zero trust principles to support continuous, runtime authorization and governance.
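The distinction between access granted once and control enforced continuously can be sketched in a few lines. The policy table, action names, and context fields below are illustrative assumptions, not part of the KuppingerCole blueprint or any Ping Identity product API; they show only the shape of a runtime check, where the same agent holding the same credential gets different answers as context changes:

```python
# Hypothetical runtime policy: each action is re-evaluated at the moment it
# occurs, against current context, rather than decided once at grant time.
POLICY = {
    "read:crm":   {"max_sensitivity": 2, "requires_human": False},
    "export:crm": {"max_sensitivity": 1, "requires_human": True},
}

def authorize(action: str, context: dict) -> bool:
    """Decide one action at runtime; unknown actions are denied by default."""
    rule = POLICY.get(action)
    if rule is None:
        return False                      # default-deny
    if context.get("sensitivity", 0) > rule["max_sensitivity"]:
        return False                      # data too sensitive for this action
    if rule["requires_human"] and not context.get("human_approved", False):
        return False                      # human accountability not satisfied
    return True

# Same agent, same credential, different runtime answers:
assert authorize("read:crm",   {"sensitivity": 1})
assert not authorize("export:crm", {"sensitivity": 1})   # no human in the loop
assert authorize("export:crm", {"sensitivity": 1, "human_approved": True})
```

The design choice worth noting is default-deny plus per-action context: permission held by the agent is never sufficient on its own; every execution passes through the policy decision point.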
"These trends reflect a broader shift in identity requirements," said Martin Kuppinger, Founder, KuppingerCole Analysts. "As autonomous agents become more prevalent, organizations will need to extend identity and authorization models to maintain control, accountability, and trust across increasingly dynamic environments."
Ping Identity's Identity for AI features are designed to align with these principles, offering capabilities for Runtime Identity, policy-based authorization, and governance controls intended to help organizations manage AI agents across enterprise environments.
KuppingerCole Analysts has also recognized Ping Identity's capabilities in managing AI agents, including the ability to assign unique identities, enforce policy-based access controls, and maintain human accountability in AI-driven processes.
As organizations move from experimentation to operational AI, success will depend on identity enabling safe, governed, and scalable AI execution. Ping Identity, recently recognized as an Overall Leader across multiple KuppingerCole Analysts Leadership Compass reports, including Customer IAM and B2B identity, is applying that leadership to define how enterprises securely govern autonomous AI.
Additional Resources:
- Download the white paper: From AI Agents to Trusted Digital Workers
- Learn more about Identity for AI
About Ping Identity
At Ping Identity, we help organizations secure and manage digital identities across customers, employees, partners, and non-human entities. Whether you're securing millions of users, fighting fraud, simplifying third-party access, or enabling passwordless experiences, establishing trust in every digital moment shouldn't slow you down. Our enterprise identity platform is designed for scale, flexibility, and integration across cloud, hybrid, and on-prem environments. With our Runtime Identity capabilities, Ping enables organizations to adopt AI and automation by continuously verifying identity, context, and intent at every interaction, helping secure and govern AI agents in real time. Learn more at pingidentity.com.
Media Contact
press@pingidentity.com
Follow us on Twitter/X: @PingIdentity
Join us on LinkedIn: Ping Identity
Subscribe to our YouTube Channel: PingIdentityTV
Like us on Facebook: PingIdentityPage
SOURCE Ping Identity