In the previous article, we explored what agentic AI is and how these systems move from generating responses to executing actions across systems. This shift changes how software systems behave at a fundamental level, because AI is no longer only producing information, but interacting with tools, systems, and workflows to complete tasks.
Once systems operate in this way, the focus shifts from understanding what agentic AI is to understanding what it means for AI security, risk and governance. When systems can act, interpret intent, and execute across environments, traditional security assumptions begin to break down.
From predictable systems to dynamic AI behaviour
Traditional security models were designed for systems with predictable and bounded behavior. Inputs lead to outputs, and system actions follow defined rules that can be enforced consistently.
Agentic AI systems operate differently. Their behavior is shaped at runtime based on context, tool interactions, and evolving system state. As a result, the same input can lead to different outcomes depending on what the system knows, what tools are available, and how prior steps have unfolded. As a result, system behavior becomes less deterministic, and harder to define in advance using static rules or fixed policies.
The shift from traditional security to agentic security
Traditional security models assume that systems generate outputs, and that any action is taken separately by a user or another controlled system. This works when AI is used for analysis, recommendations, or classification, where outputs remain informational.
Why output-based models break in agentic AI
Agentic AI changes this structure by turning outputs into triggers for execution. A response can directly initiate actions across APIs, tools, and enterprise systems, removing the separation between information and action. This shifts AI from a support function into an active participant in execution flows.
Because of this, security can no longer focus only on what the system produces. It must also account for what the system does as a result of interpreting intent and context. This creates a more fluid security surface where behavior emerges dynamically as the system operates.
How the security surface expands in agentic AI
As agentic systems become more capable, the security surface expands beyond infrastructure into decision-making itself.
These systems process instructions, external data, and contextual signals continuously. If any of these are manipulated or misinterpreted, they can influence downstream behavior in ways that are difficult to detect or isolate in real time.
At the same time, agentic systems often operate across multiple tools and environments with delegated permissions, increasing the range of possible actions. This creates a situation where system behavior emerges from the interaction between intent, context, permissions, and tool access, rather than from a single controlled execution.
Key security risks in agentic AI
Risks in agentic systems do not come from a single failure point. They emerge from the interaction between instructions, context, permissions, and execution pathways. The attack surface extends into the decision process itself, including how context is interpreted and how tools are selected.
Agents operating within enterprise environments can have delegated access and decision-making authority, but like humans, they can introduce risk not only through external compromise, but through misalignment, misuse, or unintended behavior at scale.
Instruction integrity becomes a primary concern. Agents continuously ingest information from users, documents, and external sources. If any of these inputs are manipulated, including through adversarial inputs or prompt injection techniques, they can influence downstream decisions in ways that are difficult to isolate at runtime. This introduces a class of AI security risks where the system executes actions based on compromised or misaligned intent.
Access control introduces another dimension. Agentic systems require permissions to interact with enterprise environments, and these permissions often exceed what a single human action would require. Because agents do not experience natural operational friction, misconfigurations can scale quickly into systemic impact.
Identity further complicates this landscape. Agents frequently act on behalf of users or other systems, which introduces indirect privilege relationships. Without strict boundaries, this can create situations where actions occur with more authority than originally intended.
Tool integration expands the execution surface even further. Each additional system an agent connects to increases the number of possible action pathways, making overall behavior harder to fully predict or constrain.
Finally, memory introduces persistence into the system. Past interactions can influence future decisions, which means that incorrect or manipulated inputs may continue to shape behavior long after the original interaction has ended.
Together, these factors introduce new challenges for AI trust and assurance in enterprise environments.
Why traditional security models are not sufficient for AI
Enterprise security frameworks were designed for environments where system behavior is stable, predictable, and defined in advance. They rely on static permissions, predefined rules, and post-event auditing as the primary mechanisms for control and governance.
In agentic environments, these assumptions no longer hold. Behavior is formed at runtime, influenced by dynamic context, tool outputs, and system state, which makes static policies harder to enforce consistently. Post-event analysis also becomes less effective as a primary control mechanism, since it only explains what happened after actions have already been executed and potentially impacted other systems.
As a result, security can no longer operate as an external layer applied around the system. It needs to move closer to where decisions are formed and actions are initiated.
Next up: How to secure agentic AI
In the next article, we explore how real-time governance, context-based access control, and execution-level guardrails form the foundation for securing agentic AI systems in practice.










