Microsoft Copilot Studio is one of the most visible examples of how easy it has become to create AI agents without writing code. With only a few configuration steps, non-developers can build agents that query internal systems, retrieve documents, and perform actions on real data.
That accessibility also makes agent behaviour easier to examine. In a recent report by Tenable, researchers tested a no-code Copilot agent configured to support a simple business workflow and connected to internal data sources. The agent operated within its assigned permissions and followed explicit instructions about how data should be handled. During normal interaction, however, it was able to expose sensitive information and perform actions outside the intended user context.
This example is not unique to Copilot. It reflects a broader challenge emerging with no-code and agentic systems more generally: how to safely govern autonomous agents operating inside enterprise environments.
Inside the Copilot agent exposure
The Copilot case reveals what many security professionals have been concerned about for some time. Even though the agent operated within the confines of its access and followed the instructions provided by the researchers, it was still able to expose data.
In the documented scenario, a no-code Copilot agent was created to act as a customer-facing assistant for a fictional travel agency. Its responsibilities included viewing bookings, updating reservations, and answering questions about pricing and availability. To support these tasks, the agent was connected to a SharePoint file containing customer records such as names, booking details, and payment information.
The agent’s instructions clearly stated that customer data should be handled on a per-user basis. Despite this, a user interacting with the agent was able to request information associated with other customers and receive it in response, including sensitive data that should have remained scoped to individual accounts.
The interaction reflected normal use of the agent’s configured capabilities. The agent retrieved data available through its connected source and used that data to respond to the request. What was missing was a way for the agent to determine whether the data it could access was appropriate to use in that moment. Once the data was retrieved, use followed implicitly.
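The failure mode described above can be sketched in a few lines. This is an illustrative reconstruction, not code from Copilot Studio or the Tenable report: the tool, record fields, and customer names are all assumptions. The point is that the access check (may the agent read the file?) has already passed, and nothing then asks whether this particular use is appropriate.

```python
# Hypothetical sketch of the exposure pattern; names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Booking:
    customer: str
    details: str
    payment_info: str

# The connected source: every record the agent's connector can reach.
BOOKINGS = [
    Booking("alice", "Rome, 2 nights", "VISA **** 1111"),
    Booking("bob", "Oslo, 5 nights", "AMEX **** 2222"),
]

def lookup_booking(requesting_user: str, customer: str) -> Optional[Booking]:
    """Agent tool: access control has already passed (the agent may read
    the source), but nothing checks whether requesting_user should see
    this customer's record -- the per-user instruction is not enforced."""
    for b in BOOKINGS:
        if b.customer == customer:
            return b  # returned regardless of who is asking
    return None

# "alice" asks about bob's reservation and receives his payment details.
leaked = lookup_booking("alice", "bob")
assert leaked is not None and leaked.payment_info == "AMEX **** 2222"
```

Instructions in the agent's prompt say "handle data per user", but the retrieval path has no mechanism to apply that constraint, so any retrievable record is usable in any response.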
The agentic security gap
This example highlights a structural separation between access and use. Traditional security models focus on determining whether an entity is allowed to access data. Agentic systems introduce an additional consideration: whether retrieved data should be used in a given situation.
Autonomous agents operate at runtime. They retrieve information, combine signals, and act in pursuit of a goal. Access controls define what data is available to them, but they do not shape how that data is applied as decisions are made. Once data enters an agent’s working context, it becomes part of the reasoning process by default.
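A minimal, generic retrieval loop (assumed, not tied to any real framework) makes the default concrete: retrieval is gated by an access check, but everything that passes it is concatenated into the agent's working context, where it is usable in whatever the model produces next.

```python
# Generic sketch of access-without-use-governance; all names are assumed.

def can_access(agent_id: str, doc: dict) -> bool:
    """Classic access check: is this document available to the agent?"""
    return agent_id in doc["readable_by"]

def build_context(agent_id: str, query: str, corpus: list) -> str:
    accessible = [d for d in corpus if can_access(agent_id, d)]
    # No second decision about *use*: all accessible text enters the
    # reasoning context unconditionally.
    return "\n".join(d["text"] for d in accessible) + "\nUser: " + query

corpus = [
    {"text": "Bob's card: AMEX **** 2222", "readable_by": {"agent-1"}},
    {"text": "Public price list", "readable_by": {"agent-1"}},
]
ctx = build_context("agent-1", "What does a booking cost?", corpus)
assert "AMEX **** 2222" in ctx  # sensitive data enters context anyway
```

The access check behaves exactly as designed; the gap is that no component asks whether the card number belongs in the context for this request.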
This separation between access and use is the agentic security gap. Until it is addressed, enterprises will continue to struggle to deploy autonomous agents in environments where sensitive data, regulatory obligations, and trust boundaries matter.
Governing data use at runtime
Addressing the agentic security gap requires a shift in how security and governance are applied. In traditional systems, decisions about data access are made during configuration and enforced based on user roles and attributes. With autonomous agents, the critical decisions occur at the moment of use, as data is accessed and retrieved in real time.
Runtime enforcement focuses on that moment. It asks whether specific data is appropriate for the current request, context, and actor, rather than assuming that availability implies suitability. To do that effectively, governance cannot sit only in policies or documentation. It needs to be encoded into the data itself so that agents can use it to guide their decisions.
With this approach, provenance, sensitivity, ownership, consent, freshness, and usage constraints all become machine-readable signals that can be evaluated as data is retrieved. This allows decisions about use to be made dynamically, based on live context and business logic.
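One way to picture this is records that carry their own governance signals, checked by a policy function at retrieval time. This is a minimal sketch under assumed field names and rules (owner scoping, consent, freshness); it is not a specific product's schema.

```python
# Illustrative only: governance metadata attached to data and evaluated
# at the moment of use. Fields and policy rules are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class GovernedRecord:
    value: str
    owner: str                 # who the data is about
    sensitivity: str           # e.g. "public", "internal", "restricted"
    consented_purposes: set    # purposes the owner has consented to
    updated_at: datetime       # freshness signal

def may_use(record: GovernedRecord, actor: str, purpose: str,
            max_age: timedelta = timedelta(days=90)) -> bool:
    """Runtime check: is *this use* appropriate, given the actor, the
    purpose, and the record's own machine-readable signals?"""
    if record.sensitivity == "restricted" and actor != record.owner:
        return False                      # scope to the data subject
    if purpose not in record.consented_purposes:
        return False                      # honour consent constraints
    if datetime.now(timezone.utc) - record.updated_at > max_age:
        return False                      # stale data is not reused
    return True

card = GovernedRecord(
    value="VISA **** 1111", owner="alice", sensitivity="restricted",
    consented_purposes={"billing"},
    updated_at=datetime.now(timezone.utc),
)

assert may_use(card, actor="alice", purpose="billing")      # allowed
assert not may_use(card, actor="bob", purpose="billing")    # wrong actor
assert not may_use(card, actor="alice", purpose="support")  # no consent
```

Because the signals travel with the data, the same record can be allowed in one interaction and withheld in another, which is exactly the distinction between access and use.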
For enterprises, this reframes the challenge. The focus shifts from locking down access for agents to preparing data so it can be used safely by autonomous systems. When data is connected, contextualised, and governed at the source, agents can operate with greater autonomy without eroding trust boundaries. This is the dual approach behind IndyKite’s AgentControl: granular access control enforced dynamically at runtime, combined with well-governed data that can both enforce restrictions and enable use.