This recent account of a data breach suffered by consulting giant McKinsey shows that when it comes to AI, implementers are forgetting the very basics of cyber security. As it turns out, a clever autonomous AI agent managed to find the security holes in all of McKinsey’s well-documented APIs in just two hours, which granted it full access to the company’s production database. The data it managed to steal included over 57K user accounts, 46M chat messages, 780K documents, and much more. An interesting bit here is that the attacking agent also selected which target to attack (i.e., McKinsey) on its own… that was new.
Now we could of course decry the malevolent advances of nefarious AI systems, or expose the threat of autonomous agents set loose on our internet ocean. Sure, but when you look at it, McKinsey was lacking even the most basic hygiene: APIs left wide open to the Internet, a whole technology stack susceptible to basic SQL injection attacks, and a complete disregard for basic access control precepts, such as, you know, authentication.
These are all vulnerabilities we’ve known about for a long time now. For instance, we’ve known about SQL injection since Jeff Forristal first published the exploit in Phrack Magazine… back in 1998!
And of course there’s xkcd’s famous “Exploits of a Mom” comic strip, published in October 2007:

xkcd, “Exploits of a Mom”, source: https://xkcd.com/327/
“And I hope you’ve learned to sanitize your database inputs.”
Input sanitization is nowadays considered table stakes in all dev circles, red and white hat alike. And yet, somehow, when building AI systems, implementers forget their basics. Ditto for authentication, apparently.
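For anyone who needs a refresher, the fix has been the same since 1998: never concatenate user input into a SQL string; use parameterized queries instead. A minimal sketch using Python’s standard sqlite3 module (table and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students VALUES ('Alice')")

# Vulnerable: user input is concatenated straight into the SQL string.
# With name = "Robert'); DROP TABLE students;--" this is Little Bobby Tables;
# here it merely breaks the query, but in drivers that allow multiple
# statements it executes the attacker's SQL.
def find_student_unsafe(name):
    return conn.execute(f"SELECT * FROM students WHERE name = '{name}'").fetchall()

# Safe: a parameterized query; the driver treats the input as data, not SQL.
def find_student_safe(name):
    return conn.execute("SELECT * FROM students WHERE name = ?", (name,)).fetchall()

print(find_student_safe("Robert'); DROP TABLE students;--"))  # [] -- no rows, table intact
```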
The current big investments in AI all over the world, and the rush towards agentic systems, make everybody sprint forward, trying to reach that rather undefined and elusive finish line first. It’s the digital gold rush, and some things must be sacrificed for the sake of speed. But why does it have to be security?
AI amnesia and access control
“AI amnesia” can take on many forms. The phenomenon pops up in many, more subtle ways just about everywhere the letters “A” and “I” are used together. Take authorization, for example. It is by far the hardest problem to solve when it comes to Agentic AI (well, once you’re done training your AI model, that is). And the solutions to this problem could be plotted on a spectrum with two extremes…
On one end of the spectrum, you find Identity and Access Management (IAM) practitioners, all very smart and eager to solve AI’s “identity problem”, but also happy to invent their own full-blown frameworks or specifications, oblivious to prior work or to the realities organizations face nowadays.
And on the other end, we find AI specialists, those smart folks we owe the present AI revolution to in the first place. These are the data scientists, mathematicians, researchers and engineers who provide us with the full Agentic stack that the likes of OpenClaw rely on nowadays. It’s not only the millions of AI models themselves (the sheer number of open models available on HuggingFace, for example, is rather mind-boggling), but also the databases, because AI needs data, and memory. For all these very fine folks, “authorization” amounts to one simple thing: Roles. Which, as we’ll see, is a gross oversimplification of the problem…
What is the problem with role-based access control?
Roles have been the prevalent solution for everything authorization since the advent of LDAP-based directories in the late 1990s. The problem with roles is that they proliferate.
Over time, as organizations keep growing and building infrastructure and software, they need more and more new roles to secure access to their brand-new toys. The reason is simple: nobody wants to break anything, so instead of looking at existing roles and how they could be modified and repurposed to fit new systems (a risky proposition), practitioners tend to just create new roles. Easy, no risk. This practice persists to the point where most organizations nowadays suffer from “role explosion”, a state where it is impossible to determine with certainty who has access to what anymore. This state also tends to over-permit access: as Subjects accumulate Roles over time, they also accumulate access entitlements. Role explosion therefore forces enterprises to run costly Access Certification programs, which have been shown to not always be effective.
Role-based access control applied to AI
So now let’s try to apply Roles to Agentic AI and Agentic Workflows. It’s easier to think about something a bit more concrete… Let’s say a big enterprise has 1000 running Agents spread worldwide, with hundreds of MCP servers, each exposing a number of Tools, Resources and Prompts (a realistic use case for, say, big financial institutions).
Now, for proper, rather coarse, high-level access control, you’d have to determine whether the prompting human, using that specific agent skill, can actually use the requested MCP Tool (or even use the agent or skill in the first place). How many Roles do you need for this?
The human, let’s call her Alice, may already be a member of quite a few roles in the organization; maybe she’s a manager and has access to various systems because of her job. Now can any of these Roles be reused by the Agents whose help she needs?
Well, that would depend on the following factors:
- Understanding what Alice actually currently has access to. As seen, this requires some kind of access certification program. Results vary…
- Understanding what each Agent’s permissions are and should be (remember, we have 1000 agents running worldwide, and each agent can call whichever other agent it discovers).
- Being able to map, one for one, each Agent’s required permissions to existing Roles. This means that every time Alice uses an agent, she delegates her access entitlements to that agent, and that agent now has access to the Resources those roles grant. We don’t really want that, do we? Especially since we’re not too sure about Alice’s entitlements in the first place… Those would be very powerful agents indeed.
So instead, we must, typically, create new roles… seems safer here also. How many roles do we need?
There are N Agents, each with M skills that can invoke O MCP Tools/Resources. If you want full granularity down to the tool level, you need N × M × O roles; you can then pinpoint access down to the individual MCP Tool. But then certain employees will need access to different combinations of Agents and Tools… which requires even more roles. Right off the bat you have a role explosion on your hands.
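The arithmetic is brutal even with modest assumptions. Taking the 1000 agents from the scenario above and assuming, purely for illustration, 5 skills per agent and 10 reachable tools per skill:

```python
# Back-of-the-envelope role explosion, using the scenario's 1000 agents
# and illustrative (assumed) values for skills and tools.
N = 1000  # Agents running worldwide
M = 5     # skills per agent (assumption)
O = 10    # MCP Tools/Resources reachable per skill (assumption)

# One role per (agent, skill, tool) triple for tool-level granularity:
tool_level_roles = N * M * O
print(tool_level_roles)  # 50000 roles, before any user-specific combinations

# And every employee needing a distinct *combination* of those grants
# pushes a pure RBAC setup toward yet another composite role.
```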
Using Roles for Agentic AI causes role explosion from the get-go. These new Agentic roles would add to the already dire problem the organization likely faces, rendering the whole setup rather unmanageable.
Solution: Intent-based access control
Everybody should steer clear of Roles and Role-Based Access Control (RBAC). They are a solution from the past.
Instead, favour policy-based systems. A new, emergent authorization model for Agentic AI, Intent-Based Access Control, requires determining the prompting entity’s (human or machine) intent, and matching that intent to the usage intent declared on every Agent skill and MCP Tool.
Matching intent to intended usage may require evaluating proper access rules. Ideally, these rule engines should run locally at the Agent/Tool level, to minimize network traffic as much as possible. Nevertheless, proper governance and compliance require at least centralized policy administration. Managing a central repository of policies makes access provable and regulatory compliance possible.
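In its simplest form, the local check is just a comparison between the caller’s declared intent and the tool’s declared intended uses. A minimal sketch, with entirely hypothetical manifest and function names (real intent-based systems would also verify the intent claim itself):

```python
from dataclasses import dataclass

# Hypothetical declaration: each MCP Tool states its intended usage up front.
@dataclass(frozen=True)
class ToolManifest:
    name: str
    intended_use: frozenset  # the purposes this tool is meant to serve

# A local, in-process rule engine: allow the call only when the caller's
# declared intent matches one of the tool's declared intended uses.
def authorize(caller_intent: str, tool: ToolManifest) -> bool:
    return caller_intent in tool.intended_use

report_tool = ToolManifest("generate_report", frozenset({"quarterly-reporting"}))
print(authorize("quarterly-reporting", report_tool))  # True
print(authorize("bulk-data-export", report_tool))     # False
```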
Additionally, finer-grained controls require proper context in order to make decisions, and this context is likely not all available locally, as it includes data or metadata about the entity and the present situation. The local intent-based rule engine will likely need to fetch data or context from some central store in order to have a fuller picture of each request. Context is better represented as a graph, so accessing a context graph is likely required when implementing certain policies.
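To make the graph idea concrete, here is a toy sketch of a context-aware policy: the “graph” is a plain adjacency map (in practice it would live in a central graph store), and the relationships are invented for illustration.

```python
# Toy context graph (adjacency map); in a real deployment the local rule
# engine would fetch these relationships from a central graph store.
context_graph = {
    "alice":     {"member_of": {"finance-dept"}, "manages": {"q3-project"}},
    "q3-report": {"belongs_to": {"q3-project"}},
}

def can_access(subject: str, resource: str) -> bool:
    # Example policy: a subject may access a resource that belongs
    # to a project the subject manages.
    managed = context_graph.get(subject, {}).get("manages", set())
    owning = context_graph.get(resource, {}).get("belongs_to", set())
    return bool(managed & owning)

print(can_access("alice", "q3-report"))  # True
print(can_access("bob", "q3-report"))    # False: bob is not in the graph
```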
To conclude…
Access control is a complex problem to solve, especially for Agentic AI. Simple, one-size-fits-all solutions like RBAC just won’t work. Proper access control lives on a spectrum, and access to each single Agent skill or Tool needs to be plotted on that spectrum and implemented through provable policies, often using full context.
High-level access to skills may be computable from usage intent alone, but the finer the controls need to be, the more complex the access policies become, to the point where knowing the full context of the request becomes necessary.
For a full Agentic and AI-ready data platform, check out indykite.ai.
This article was first published on Substack.









