Your IAM Programme Was Built for Humans. It Has No Idea What to Do With Agents.
Non-Human Identities are the new Shadow AI — ungoverned, fast-proliferating, and now the fastest-growing attack vector in enterprise infrastructure.
There is a quiet governance crisis unfolding in enterprise environments, and most boards have not been briefed on it. It does not involve a new threat actor or a zero-day exploit. It involves a category of identity that your organisation is almost certainly creating faster than it can govern — and which your existing Identity and Access Management programme was never designed to handle.
That category is Non-Human Identities. API keys, service accounts, OAuth tokens, certificates, and — increasingly — the credentials used by AI agents operating autonomously across your infrastructure. They do not log in from recognisable locations. They do not follow normal working hours. They do not respond to behavioural anomaly detection tuned for human patterns. And they outnumber your human users by ratios that, until recently, would have seemed implausible.
As of 2025, NHIs outnumber human identities in enterprise environments at a ratio of 144 to 1 — a 44% increase from the year prior.[1] That ratio is accelerating as organisations deploy AI agents at scale. The governance infrastructure being applied to manage them has not kept pace. The result is a structural exposure that sits squarely within the scope of NIS2 Article 21 and, for many organisations, the EU AI Act — but which most compliance programmes have not yet addressed.
This is the new Shadow AI problem. Only this time, it is not employees quietly using unapproved tools. It is the tools themselves acquiring identities, permissions, and access — at machine speed, outside the visibility of your IAM programme.
The Scale of the Problem
Understanding why NHIs represent a governance failure requires appreciating their operational characteristics. Unlike human identities, NHIs bypass multi-factor authentication by design. They operate continuously — including at 3am, from multiple locations simultaneously, authenticating thousands of times per minute. They persist indefinitely without lifecycle management unless someone explicitly deprovisions them. And they are often granted standing privileges far beyond what any specific task requires.
The data on governance maturity is stark. According to CSA research, fewer than one in four organisations have documented and formally adopted policies for creating or removing AI identities.[2] Only 12% report high confidence in their ability to prevent attacks via NHIs.[3] A separate study found that 68% of IT security incidents now involve machine identities, and half of enterprises surveyed have experienced a breach attributable to unmanaged NHIs.[4]
The attack surface extends beyond the agents themselves. Research by Entro Labs found that nearly half of all exposed secrets — the credentials NHIs use to authenticate — were discovered outside code repositories entirely: in CI/CD workflows, collaboration platforms, and, notably, SharePoint, which alone accounted for almost one in five credential exposures.[5] The implication is that the credential sprawl problem is not contained within your development environment. It is distributed across the same productivity tools your organisation runs on.
Within this population, the privilege concentration is particularly concerning. One in twenty AWS machine identities carries full administrator-level permissions.[6] These are not edge cases. They are the architecture of the average enterprise cloud environment, accumulated through years of permissive provisioning and insufficient lifecycle governance.
Why Existing IAM Programmes Fail Here
The core problem is architectural. Traditional IAM was designed around a set of assumptions that NHIs violate systematically.
Human IAM assumes predictable behaviour: employees work defined hours, access systems from consistent locations, and interact with technology in recognisable patterns. Anomaly detection, behavioural baselines, and privileged access review cycles are all calibrated to these patterns. NHIs break every one of them — legitimately. A service account that authenticates thousands of times per minute from multiple IP ranges is not compromised; that is its normal operational profile.
Human IAM assumes a defined lifecycle: onboarding creates the identity, role changes update permissions, offboarding removes access. NHIs are provisioned dynamically, often by developers building pipelines or deploying agents, without equivalent offboarding discipline. The result is credential sprawl: active credentials attached to decommissioned services, orphaned API keys retaining production access, and service accounts whose owners have long since left the organisation.
Human IAM assumes visibility: your HR system feeds your identity provider, which feeds your access management tooling. NHIs proliferate across cloud environments, SaaS platforms, and CI/CD pipelines in ways that no single system of record captures. As the Cloud Security Alliance notes, the challenge is not merely governance — it is that most organisations do not have a complete inventory of what NHIs exist in their environment, let alone what they are permitted to do.[7]
The KuppingerCole Leadership Compass on Non-Human Identity Management puts it plainly: traditional IAM and PAM systems were not designed to handle the dynamic and large-scale nature of these entities. They do not adequately address the security complexities of device and system accounts, and this gap in governance not only poses significant security risks but also hampers operational agility.[8]
The arrival of agentic AI compounds this in a specific and important way. AI agents are not static service accounts. They are autonomous systems that can spawn sub-agents, mint new credentials, and acquire permissions dynamically as they execute multi-step tasks. They operate at machine speed across APIs, databases, and SaaS integrations — often without a human in the loop. The blast radius of a compromised agent credential is not the agent itself; it is every system the agent has permission to touch, and every downstream agent it can reach.
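The blast-radius point can be made concrete. Below is a minimal sketch, assuming a hypothetical in-memory permission graph in which agent and system names are purely illustrative: a breadth-first traversal from one compromised credential enumerates every system it can touch and every downstream agent it can reach.

```python
from collections import deque

# Hypothetical permission graph: identity -> things it can reach.
# Edges to other agents model delegation; edges to systems model access.
PERMISSIONS = {
    "agent-invoicing": ["erp-api", "agent-reporting"],
    "agent-reporting": ["data-warehouse", "email-gateway"],
    "svc-backup":      ["object-storage"],
}

def blast_radius(compromised: str) -> set[str]:
    """Return everything reachable from one compromised credential,
    including systems reached only via downstream agents."""
    seen, queue = set(), deque([compromised])
    while queue:
        node = queue.popleft()
        for target in PERMISSIONS.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen
```

In this toy graph, `blast_radius("agent-invoicing")` returns four nodes, two of which the invoicing agent never accesses directly. That indirection is exactly what human-centric access reviews miss.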
The Regulatory Exposure
For NIS2-covered entities, NHI governance is not a future compliance consideration. It is a current one.
Article 21(2)(i) of the NIS2 Directive explicitly requires the use of multi-factor authentication or continuous authentication solutions, and policies and procedures regarding cryptography and encryption.[9] NHIs cannot satisfy MFA as written: there is no human to hold a second factor, and machine authentication works differently by design. The compliance question is therefore whether your organisation has equivalent controls in place for NHIs: short-lived credentials, certificate-based authentication, zero standing privilege architectures, and automated rotation policies. Most organisations do not.
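One of those equivalent controls, automated rotation, can be expressed as a simple audit check. This is a hedged sketch rather than a reference implementation: the credential record format, field names, and 90-day window are all illustrative assumptions, and a real environment would populate the inventory from cloud provider APIs.

```python
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=90)  # illustrative rotation policy

def rotation_violations(credentials: list[dict]) -> list[str]:
    """Return IDs of static credentials that exceed the rotation window.
    Short-lived, auto-expiring credentials are exempt by design."""
    now = datetime.now(timezone.utc)
    flagged = []
    for cred in credentials:
        if cred["short_lived"]:  # e.g. STS or workload-identity tokens
            continue
        if now - cred["created"] > MAX_CREDENTIAL_AGE:
            flagged.append(cred["id"])
    return flagged
```

The design choice worth noting is the exemption branch: credentials that expire on their own satisfy the control structurally, so the audit only chases the static secrets that accumulate as standing exposure.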
Article 21(2)(d) requires access control and asset management. Applied to NHIs, this means maintaining an inventory of machine identities, assigning accountable human owners to each credential, and enforcing least-privilege access. The 78% of organisations that lack formal NHI provisioning and deprovisioning policies are, in operational terms, running an access control gap of significant regulatory consequence.[10]
The AI Act introduces a parallel and reinforcing obligation. From 2 August 2026, full compliance requirements apply to Annex III high-risk AI systems — which include AI used in critical infrastructure, essential services, and employment decisions.[11] Article 9 of the AI Act mandates a continuous, documented risk management process throughout the AI system lifecycle. For organisations deploying agentic AI in NIS2-covered sectors, the intersection of these two frameworks creates a dual governance obligation: NIS2 governs the security of the network and information systems the agents operate across; the AI Act governs the agents themselves. Neither framework currently exempts the other’s obligations.
The governance gap is therefore not merely a security programme problem. It is a compliance programme problem with two distinct regulatory clocks running simultaneously.
What Boards Are Not Being Told
The board-level conversation about AI has, for most organisations, focused on productivity, risk appetite, and governance frameworks for AI adoption. It has not yet adequately addressed the identity implications of the AI systems already deployed.
Boards understand — at least in principle — that employees using unauthorised AI tools creates Shadow AI risk. What they are rarely told is that the AI tools they have authorised are generating their own credentials, acquiring their own permissions, and operating in ways that existing security controls cannot see or constrain.
The IANS Research report on identity security heading into 2026 ranked ‘Identity Assurance for an AI World’ as the second-highest CISO priority, scoring 4.46 out of 5.[12] Yet the same research found a significant gap between recognition of the problem and organisational capacity to address it — with IAM investment as a proportion of security budget varying from 6% at smaller organisations to 12% at large enterprises, and manual processes still dominant across the market.
The strategic board argument is this: the organisation is creating a new class of digital actor — AI agents — that inherits enterprise access, operates at machine speed, and sits outside the governance model that applies to human access. Each agent represents a credential that, if compromised, provides an attacker with persistent access equivalent to the agent’s permissions, without triggering human-centric security controls. The attack is not sophisticated. The access it provides is.
As BeyondTrust’s analysis frames it, the mandate for the security function is to shift the question from ‘Who has access?’ to ‘What has access to what?’[13] Boards should be asking the same question.
What CISOs Should Do Now
The NHI governance problem does not require a new technology investment as the first step. It requires a governance posture shift, followed by targeted tooling to operationalise it.
• Inventory before you govern. You cannot apply policy to identities you cannot see. The starting point is a comprehensive discovery exercise across cloud environments, SaaS platforms, CI/CD pipelines, and collaboration tools. This is not a one-time exercise — NHI creation is continuous, and discovery must be too.
• Assign human ownership to every NHI. Every machine credential should have a named human accountable for its existence, its permissions, and its deprovisioning. Orphaned credentials — those whose owners have left or whose originating service has been decommissioned — are the most common source of standing privilege that attackers exploit.
• Apply least privilege architecture to machine identities. The one-in-twenty AWS machine identities carrying full administrator access is not an outlier — it is the result of permissive defaults and insufficient review cycles. NHI privilege should be scoped to specific tasks, time-limited where possible, and reviewed on the same cadence as privileged human access.
• Design for zero standing privilege for AI agents. AI agents should receive elevated permissions only for the duration of the specific task that requires them, with automatic revocation on completion. This is not a theoretical architecture — it is the practical application of least privilege to autonomous systems operating at machine speed.
• Brief the board on the regulatory intersection. NIS2 Article 21 access control obligations and the AI Act’s Article 9 risk management requirements together create a documented governance obligation for NHI security. The board should understand that the AI it has authorised sits within a regulatory perimeter that includes identity governance — and that the compliance gap, if not addressed, carries both supervisory and personal liability risk under NIS2 Article 20.
• Treat NHI governance as the next phase of your Shadow AI programme. Shadow AI was about employees creating ungoverned AI usage. NHI proliferation is about authorised AI systems creating ungoverned identity sprawl. The governance logic is the same; the technical response needs to evolve to match.
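The first three steps above — inventory, ownership, least privilege — can be sketched as a single review pass over a consolidated inventory. This is a minimal illustration, assuming a hypothetical record format in which the field names and the wildcard-action convention are invented for the example:

```python
def review_nhi_inventory(nhis: list[dict], active_employees: set[str]) -> dict:
    """One review pass over a consolidated NHI inventory:
    flag orphaned credentials and over-broad privileges."""
    findings = {"orphaned": [], "over_privileged": []}
    for nhi in nhis:
        # Ownership check: every credential needs a named, current human owner.
        if nhi.get("owner") not in active_employees:
            findings["orphaned"].append(nhi["id"])
        # Least-privilege check: a wildcard action signals admin-level scope.
        if "*" in nhi.get("allowed_actions", []):
            findings["over_privileged"].append(nhi["id"])
    return findings
```

The point of the sketch is the shape of the control, not the code: discovery feeds a single inventory, and every governance question — who owns this, what can it do — becomes a mechanical query against that inventory rather than a manual investigation.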
The Governance Moment
Every major shift in enterprise IT architecture has eventually created an identity crisis — not in the existential sense, but in the governance sense. Cloud migration multiplied machine identities beyond what on-premise IAM could track. SaaS adoption distributed credentials across platforms that no single identity provider controlled. Each time, the security function ran behind the architecture, governing the previous model while the new one accumulated ungoverned exposure.
Agentic AI is the next instance of this pattern. The difference is that AI agents do not merely hold credentials passively — they use them autonomously, at speed, across systems, making decisions with real operational and regulatory consequences. The blast radius of a governance failure is no longer measured in data records exposed; it is measured in automated actions taken.
The organisations that will navigate this well are not those with the most sophisticated AI adoption. They are those that treated identity governance as a foundational control rather than a supporting function — and that extended that governance to non-human actors before the incidents made the case for them.
Your IAM programme was built for humans. The question is whether you will extend it to agents before an attacker does it for you.
References
[1] Entro Labs — NHI & Secrets Risk Report H1 2025
[2] CSA — State of Non-Human Identity and AI Security Survey Report
[3] CSA — State of Non-Human Identity and AI Security Survey Report
[4] Obsidian Security — What Are Non-Human Identities: NHI Security Guide
[5] Entro Labs — NHI & Secrets Risk Report H1 2025
[6] Entro Labs — NHI & Secrets Risk Report H1 2025
[7] CSA — Securing the Agentic Control Plane, March 2026
[8] KuppingerCole — Leadership Compass: Non-Human Identity Management
[9] Directive (EU) 2022/2555 — NIS2 Directive, Article 21(2)(i)
[10] MSSP Alert — Security Teams Will Wrestle with Agentic AI, Non-Human Identities in 2026
[11] EU AI Act — Regulation (EU) 2024/1689, Article 9 and Annex III
[12] IANS Research — AI Agents Are Creating an Identity Security Crisis in 2026
[13] BeyondTrust — Agentic AI Security: Securing Shadow AI & Non-Human Identities