




The Explosion of AI Identity: Boom or Doom?
AI identities are multiplying fast, and many organisations still lack the controls to secure them. This is creating a growing identity risk that attackers are increasingly able to exploit.

As organisations around the world continue to pour money into their AI initiatives, we’re at the point where machine identities now outnumber human identities by more than 80 to 1.
Yes, eighty to one.
The problem is that, despite 42% of those machine identities having access to sensitive data, 68% of organisations lack identity security controls for AI. And 47% can’t even secure their shadow AI. In the meantime, we continue to deploy AI, automate workflows, and expand machine identities alongside human ones.
The result? Cybercriminals around the world are rubbing their hands in glee, and for organisations that haven’t applied the necessary controls, the explosion of AI identities is an exceptionally risky oversight.
Boom times for some
It should come as little surprise that leveraging identity compromise is a booming business.
When the Cybersecurity and Infrastructure Security Agency (CISA) examined the networks of 121 critical infrastructure organisations, they found that “90% of the time, initial access was gained via identity compromise.”
CISA also noted that, when combined with other techniques, identity compromise is the primary means of privilege escalation, which in turn enables lateral movement between IT and OT environments.

So, who’s making all these AI identities?
According to CyberArk, as of 2025, AI was the #1 creator of new identities with privileged and sensitive access.
And as so many AI agents (shadow and otherwise) proliferate seemingly unchecked, managing them has already become a significant problem. The increase in machine identities has inadvertently expanded the attack surface. With cybercriminals effortlessly keeping pace with this AI identity boom through weaponised AI, and security teams using AI to reduce threat response times from hours to seconds, we’re effectively trying to out-AI one another – with the stakes growing ever higher.
CyberArk eloquently sums up this moment in time as: “The AI Trifecta: Attacker, Defender and Identity Risk.”
Feeling insecure – you should be
Why are organisations not securing their AI identities? With up to 36% of AI tools used without approval and outside IT departments' control, shadow AI is a fast-growing risk. And you can’t (easily) secure what you don’t know about. Your users can unknowingly expose data, and attackers can corrupt AI processes, biasing behaviour and outcomes.
And where to even start with AI agents?
Statista says that “The number of active AI agents in companies worldwide was forecast to increase to over 2.2 billion in 2030. To compare, in 2025, enterprises around the globe had 28.6 million active AI agents.”
The challenge, suggests CyberArk, is that while traditional IAM solutions can secure the infrastructure and access layers that make up an agent’s attack surface, the model layer is another problem altogether. The model layer is the AI itself. And that can be tricked or hijacked.
At this layer, AI agents can be tricked into executing malicious commands, leaking your data, handing out privileges, or granting unauthorised access at a faster rate than a human ever could. Gartner predicts that by 2028, AI agents will be making 15% of our daily work decisions for us. While these decisions may not be large or strategic, manipulation could still have dire knock-on results for many organisations.
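To make the model-layer risk more concrete, here’s a minimal, hypothetical sketch of one common mitigation: validating an agent’s proposed tool call against an explicit allow-list and per-agent scopes before anything executes. The policy structure and function names are illustrative, not drawn from any particular product.

```python
# Hypothetical guardrail: an AI agent's proposed tool call is validated
# against an explicit allow-list and per-agent scopes *before* execution.
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    agent_id: str
    allowed_tools: set[str] = field(default_factory=set)   # tools the agent may call
    allowed_scopes: set[str] = field(default_factory=set)  # data/permission scopes

@dataclass
class ToolCall:
    tool: str            # e.g. "crm.export_contacts"
    required_scope: str  # e.g. "customers:read"

def authorise_tool_call(policy: AgentPolicy, call: ToolCall) -> bool:
    """Deny by default: the call runs only if both tool and scope are allowed."""
    if call.tool not in policy.allowed_tools:
        return False
    if call.required_scope not in policy.allowed_scopes:
        return False
    return True

# Example: a prompt-injected agent tries to exfiltrate customer data.
policy = AgentPolicy("invoice-bot",
                     allowed_tools={"erp.read_invoice"},
                     allowed_scopes={"invoices:read"})
attack = ToolCall(tool="crm.export_contacts", required_scope="customers:read")
assert authorise_tool_call(policy, attack) is False  # blocked outside the model layer
```

The point of this design is that the check sits outside the model itself, so even a successfully ‘tricked’ agent cannot act beyond what it was explicitly granted.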
Shutting the back door: Consciously locking down machine identities
When is a machine not a machine? When it behaves like a human and has the same level of access to sensitive data. How can you control it? By enforcing both human and machine security controls.
It’s all too easy to ignore the sprawl of machine identities and focus on our human users. But in reality, it’s only by applying the same strict controls to our non-human identities as we do to human users that we can protect our systems. And it needs to be done at scale to accommodate growth.
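What might ‘the same strict controls’ look like in practice? Here’s a minimal sketch, assuming a hypothetical credential service: machine identities receive short-lived, narrowly scoped credentials and the same expiry and scope checks we’d apply to a human session, rather than a static API key that lives forever.

```python
# Hypothetical sketch: treating a machine identity like a human one by issuing
# short-lived, narrowly scoped credentials instead of a static, long-lived key.
import secrets
from datetime import datetime, timedelta, timezone

def issue_machine_credential(identity: str, scopes: list[str],
                             ttl_minutes: int = 15) -> dict:
    """Mint a credential that expires quickly and carries only the scopes requested."""
    return {
        "identity": identity,
        "scopes": scopes,                      # least privilege: nothing implicit
        "token": secrets.token_urlsafe(32),    # opaque secret, never reused
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

def is_valid(credential: dict, required_scope: str) -> bool:
    """The same checks we'd apply to a human session: not expired, scope present."""
    return (datetime.now(timezone.utc) < credential["expires_at"]
            and required_scope in credential["scopes"])

cred = issue_machine_credential("report-generator-agent", ["analytics:read"])
print(is_valid(cred, "analytics:read"))   # True – within TTL and scope
print(is_valid(cred, "payments:write"))   # False – never granted
```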
But what about securing what you can’t see?
Out of the shadows
Identifying and securing shadow AI agents is at least as important as locking down the machine identities you already know about. Shadow agents create significant identity silos (an obvious source of organisational risk), which undermine your best compliance and governance efforts and create an unsanctioned, no-holds-barred playground for attackers.
Luckily, some IAM vendors (for example, Okta) have reshaped our approach with new features that not only allow you to discover these shadowy machine identities but also pinpoint the risk they present, map their ‘blast radius,’ and knock their (mis)configurations into shape. They can integrate AI into your identity security fabric without compromising your visibility, control, or governance.
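To illustrate what mapping a ‘blast radius’ can mean in practice – this is a generic sketch, not any vendor’s actual implementation – you can model identities and resources as a graph and walk every permission edge reachable from the suspect identity:

```python
# Generic illustration (not any vendor's implementation): the "blast radius" of a
# shadow identity is everything reachable by following its permission edges.
from collections import deque

# identity/resource -> things it can reach (e.g. via role grants or API keys)
access_graph = {
    "shadow-agent-7":  ["svc-account-hr", "s3://exports"],
    "svc-account-hr":  ["hr-database"],
    "s3://exports":    [],
    "hr-database":     [],
}

def blast_radius(start: str, graph: dict[str, list[str]]) -> set[str]:
    """Breadth-first walk of everything the identity can directly or indirectly touch."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for target in graph.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(blast_radius("shadow-agent-7", access_graph))
# {'svc-account-hr', 's3://exports', 'hr-database'} – order may vary
```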

Watch this space – it’s not all doom and gloom
Yes, the reality is that AI is rapidly outpacing traditional IAM solutions.
But the good news is that IAM vendors are leading the charge in securing the AI model layer. They’re moving beyond traditional IAM systems that simply aren’t designed to handle the authentication, authorisation, and monitoring required for thousands (or even millions) of these intelligent AI entities.
And naturally, they’re using AI to fight fire with fire.

Is your traditional IAM pulling its weight?
Identity and Access Management (IAM or IDAM) solutions are undeniably foundational to modern security. IAM handles core identity functions (see the sketch after this list), including:
Identity lifecycle management: Creating, updating, and removing user identities for employees as they join the organisation, change roles, or leave.
Authentication: Proving who someone is, using passwords, MFA, SSO, and biometrics.
Authorisation: Determining what your users are allowed to access.
Access control enforcement: Applying those permissions to systems and data.
Audit and compliance: Logging access for security, governance, and standards like ISO 27001.
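As a rough illustration of how those functions fit together – all names and data here are hypothetical, and nothing below is production-grade – this is the authenticate → authorise → enforce → audit flow in miniature:

```python
# Simplified, hypothetical illustration of the IAM flow described above:
# authenticate -> authorise -> enforce -> audit. Not a production implementation.
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("iam.audit")

# Lifecycle-managed user store (plaintext password for brevity; real IAM hashes
# credentials and layers MFA on top) and a simple role-to-permission model.
USERS = {"alice": {"password": "correct-horse", "roles": {"finance"}}}
ROLE_PERMISSIONS = {"finance": {"invoices:read"}, "admin": {"*"}}

def authenticate(username: str, password: str) -> bool:
    """Step 1: prove who someone is."""
    user = USERS.get(username)
    return bool(user and user["password"] == password)

def authorise(username: str, permission: str) -> bool:
    """Step 2: determine what the authenticated user is allowed to access."""
    granted = set()
    for role in USERS.get(username, {}).get("roles", set()):
        granted |= ROLE_PERMISSIONS.get(role, set())
    return permission in granted or "*" in granted

def access_resource(username: str, password: str, permission: str) -> bool:
    """Steps 3 and 4: enforce the decision and log it for audit and compliance."""
    allowed = authenticate(username, password) and authorise(username, permission)
    audit_log.info("user=%s permission=%s allowed=%s", username, permission, allowed)
    return allowed

access_resource("alice", "correct-horse", "invoices:read")   # allowed, and logged
access_resource("alice", "correct-horse", "payroll:write")   # denied, and logged
```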
IAM solutions are designed to protect your organisation from identity compromise and limit what an attacker can achieve. Used in conjunction with a zero-trust architecture, IAM becomes the primary control plane – focused on verification, restriction, and reaction, not blind trust.
But how can a traditional IAM solution protect your (abundance of) AI identities from exploitation? Many struggle to discover AI agents and their credentials in the near-real time required to act before it’s too late, to manage identities that appear, act, and disappear in a puff of cyber-smoke, to rein in the privileges those identities accumulate across tools and APIs, or even to attribute an action to the identity responsible.
(Exploit how, you ask? According to CyberArk, attackers are finding new ways to ‘jailbreak’ AI models. They are manipulating LLMs into secretly extracting and sending users’ personal information, including names, IDs, email addresses, payment details, and more, back to home base. And they’re doing it with considerable success – reaching 100% success rates against some models.)
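One of the gaps above – agents quietly accumulating privileges and long-lived credentials faster than anyone reviews them – can at least be surfaced with a periodic scan. A minimal sketch, assuming a hypothetical inventory of machine credentials:

```python
# Hypothetical sketch: a periodic scan that flags machine identities whose
# credentials are long-lived or whose privileges have quietly accumulated.
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=30)
MAX_PRIVILEGES = 5

machine_identities = [  # assumed to come from your identity inventory / discovery tooling
    {"name": "build-pipeline", "created": datetime(2024, 1, 10, tzinfo=timezone.utc),
     "privileges": ["repo:read", "artifacts:write"]},
    {"name": "forecast-agent", "created": datetime(2023, 6, 2, tzinfo=timezone.utc),
     "privileges": ["crm:read", "crm:write", "billing:read", "billing:write",
                    "hr:read", "admin:impersonate"]},
]

def flag_risky(identities: list[dict]) -> list[str]:
    """Return the names of identities that need rotation or a privilege review."""
    now = datetime.now(timezone.utc)
    flagged = []
    for ident in identities:
        too_old = now - ident["created"] > MAX_CREDENTIAL_AGE
        too_broad = len(ident["privileges"]) > MAX_PRIVILEGES
        if too_old or too_broad:
            flagged.append(ident["name"])
    return flagged

print(flag_risky(machine_identities))  # both flagged: stale credentials and/or privilege sprawl
```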