Joakim E. Andresen
January 19, 2026

When power outpaces control: The most severe AI vulnerability yet

A few simple steps were all it took to take over an entire enterprise system. With a universal credential, a handful of publicly discoverable details, and a powerful AI agent, a security researcher discovered a perfect storm: he could escalate privileges, create new admin accounts, and achieve a full takeover.

A critical platform at the core of enterprise operations

Few enterprise platforms sit closer to the operational core of large organizations than ServiceNow. Used to run IT service management and enterprise workflows across roughly 85% of the Fortune 500, it is deeply embedded in how systems, users, and data connect.

Within this environment sits ServiceNow’s Virtual Agent, a conversational interface designed to help employees and customers resolve issues efficiently. Through natural language, it can create tickets, answer questions, and trigger actions across connected systems, from HR requests to security operations.

Notably, interaction with the Virtual Agent is not limited to the ServiceNow interface. Users can also engage it from connected platforms like Slack, allowing the chatbot to act as an entry point from collaboration tools already embedded in daily enterprise operations.

How the exploit worked

The Virtual Agent had exposed an API so third-party services could authenticate and communicate with it. Aaron Costello, Chief of Security Research at AppOmni, discovered that this authentication relied on a shared credential used across all external integrations: “servicenowexternalagent.” Anyone who knew this key could connect as if they were a legitimate integration without additional credentials.

All a user - or an attacker - needed to prove their identity was an email address. No password, no multifactor authentication.

Things got more interesting when ServiceNow introduced a new AI agent, “Now Assist,” which can create new data anywhere in ServiceNow and which the Virtual Agent chatbot was able to engage directly. With a bit of publicly available information and the shared credential, Costello was able to impersonate an admin-level user. He then used the chatbot to direct the Now Assist agent to create a new system account with admin-level privileges.

This resulted in Costello gaining full access to one of the most sensitive enterprise platforms in operation today.
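
To make the flaw concrete, here is a minimal sketch of the broken trust model in Python. Every name below (the function, the user directory, the email address) is a hypothetical stand-in, not ServiceNow's actual API; AppOmni's research documents the real mechanism. What matters is the structure: one shared, guessable credential plus an unverified email claim.

```python
SHARED_CREDENTIAL = "servicenowexternalagent"  # one key for every integration

# Stand-in user directory for the sketch.
USERS = {"it.admin@victim-corp.example": {"roles": ["admin"]}}

def authenticate_external_caller(token: str, claimed_email: str) -> dict:
    """Resolve a third-party integration call to a platform user."""
    # Flaw 1: a single shared credential is accepted from any caller,
    # so knowing the key is enough to pose as a trusted integration.
    if token != SHARED_CREDENTIAL:
        raise PermissionError("unknown integration")
    # Flaw 2: identity is whatever email the caller asserts.
    # No password, no MFA, nothing binds the claim to the caller.
    return USERS[claimed_email]  # full impersonation

# An attacker who knows the shared key and an admin's (often public)
# email address is now "admin". From there, the chatbot's bridge to
# Now Assist -- an agent allowed to create data anywhere -- turns
# impersonation into a persistent admin account.
attacker_session = authenticate_external_caller(
    SHARED_CREDENTIAL, "it.admin@victim-corp.example")
print(attacker_session)  # {'roles': ['admin']}
```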

"It's not just a compromise of the platform and what's in the platform — there may be data from other systems being put onto that platform," he notes, adding, "If you're any reasonably-sized organization, you are absolutely going to have ServiceNow hooked up to all kinds of other systems. So with this exploit, you can also then ... pivot around to Salesforce, or jump to Microsoft, or wherever."
  • Aaron Costello, Chief of Security Research at AppOmni

ServiceNow has since addressed the vulnerability, but the incident highlights the gravity of deploying powerful agents without appropriate security, access controls, and governance.

The agentic security gap

This incident exposes a structural challenge with autonomous agents: the gap between access and action, made worse by both overly broad permissions and weak authentication. In this case, access controls granted agents more privileges than necessary, while minimal authentication and easy user impersonation meant that even those broad permissions could be exploited without significant barriers.

Agentic AI adds a second, subtler risk. Once an agent has access to data, there is often no enforcement to control how that data can be used or what actions can be taken with it. When powerful agents are trusted implicitly and granted broad capabilities, even simple automation can lead to high-impact consequences.

As a result, agents can act autonomously without sufficient guardrails, executing actions because they could, not because they should. That combination of weak authentication, excessive trust, and autonomous capability made a platform-wide compromise possible, even without a single software bug being exploited.

Governing AI agents

Preventing incidents like this requires a shift in how enterprises secure autonomous systems. Rather than taking away autonomy, enterprises should enforce limits on what agents can do with the data they access. Agents don’t just see information; they act on it. That’s why the most critical security checks need to happen at the point of use, ensuring every action is validated against permissions, policies, and context before execution.
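
As a sketch of what point-of-use enforcement can look like, the following illustrative Python validates every proposed agent action against a task-scoped grant before it runs. The names here (TaskGrant, authorize, the action strings) are assumptions for illustration, not any vendor's API.

```python
from dataclasses import dataclass

# Actions that enable privilege escalation; denied regardless of who asks.
HIGH_RISK_ACTIONS = {"user.create", "role.grant", "acl.modify"}

@dataclass
class TaskGrant:
    """Permissions scoped to one task, not to the agent as a whole."""
    allowed_actions: set  # e.g. {"ticket.create"}
    allowed_objects: set  # e.g. {"incident"}

def authorize(grant: TaskGrant, action: str, obj: str) -> None:
    """Point-of-use check: runs before every action, not at login."""
    if action in HIGH_RISK_ACTIONS:
        raise PermissionError(f"{action} is escalation-class: always denied")
    if action not in grant.allowed_actions or obj not in grant.allowed_objects:
        raise PermissionError(f"{action} on {obj} is outside the task grant")

# A ticket-handling grant cannot be bent into account creation,
# which was the pivotal step in the ServiceNow exploit chain:
grant = TaskGrant({"ticket.create", "ticket.update"}, {"incident"})
authorize(grant, "ticket.create", "incident")  # passes silently
try:
    authorize(grant, "user.create", "sys_user")
except PermissionError as e:
    print("denied:", e)
```

With this pattern, even a successfully impersonated user cannot push an agent past the grant issued for the task at hand.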

Effective governance focuses on controlling both what agents can do and how the data they use is applied (a sketch combining these controls follows the list):

  • Dynamic, non-persistent permissions
    Instead of static credentials or standing access, permissions should be granted only for the scope and duration of a specific task. This would have limited the ability to create persistent admin accounts or maintain long-term control.
  • Contextual, real-time evaluation
    Agent permissions should be expressed as narrowly scoped capabilities, defining exactly which actions are allowed, on which objects, and under what conditions. Even if a user were impersonated, actions like privilege escalation or account creation could have been denied based on risk, sensitivity, and live context.
  • Context-rich data foundations
    Relationships, ownership, consent, sensitivity, and purpose must be machine-readable so agents cannot freely apply data across systems or pivot laterally without restriction.
  • Traceable autonomous behavior
    Every decision and action should be recorded with its context, making agent behavior observable and auditable across systems. This enables earlier detection and containment of misuse.
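
The sketch below combines these controls in a few lines of illustrative Python: a grant that expires with its task, a live-context check on every action, and an audit record of each decision. All names are assumptions for illustration, not a real product API.

```python
import json
import time

AUDIT_LOG = []  # every decision recorded, allowed or denied

def issue_grant(task_id: str, actions: set, objects: set, ttl_s: int = 300):
    """Dynamic, non-persistent permission: scoped to one task and
    self-expiring, so there is no standing access to hijack."""
    return {"task": task_id, "actions": actions, "objects": objects,
            "expires": time.time() + ttl_s}

def execute(grant: dict, action: str, obj: str, context: dict):
    """Evaluate the action against the grant and live context,
    then record the decision with full context for auditing."""
    allowed = (time.time() < grant["expires"]
               and action in grant["actions"]
               and obj in grant["objects"]
               and context.get("risk") != "high")  # live signal, e.g. anomaly score
    AUDIT_LOG.append({"task": grant["task"], "action": action, "object": obj,
                      "context": context, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{action} on {obj} denied")
    # ... perform the action against the platform here ...

grant = issue_grant("T-1042", {"ticket.update"}, {"incident"}, ttl_s=120)
try:
    execute(grant, "user.create", "sys_user", {"risk": "high"})
except PermissionError:
    pass
print(json.dumps(AUDIT_LOG[-1], indent=2))  # the denied attempt, with context
```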

When access, governance, and context are combined, enterprises can safely empower AI agents to operate autonomously - unlocking efficiency while containing risk.

Lessons for enterprise AI

The ServiceNow vulnerability shows that simple autonomous agents can create systemic risk when given unchecked power. Addressing this requires both understanding the gap between access and action and putting governance in place that operates in real time.

Solutions like IndyKite’s AgentControl close that gap, enforcing contextual, task-specific permissions and traceable decision-making across agents, data, and APIs. By combining real-time governance with context-rich data, enterprises can turn potentially risky AI agents into effective autonomous systems that act safely, efficiently, and in alignment with organizational intent - ensuring that the power of AI is matched by control.

Learn more about AgentControl here.
