Glossary
We’ve got you covered from A-Z
What is AI agent security?
AI agent security focuses on protecting individual autonomous AI agents and their interactions with data, systems, and users. It governs the agent’s access, enforces policies, and monitors its actions to prevent misuse, exploitation, or unintended harm.
Why it matters: Securing individual AI agents protects data integrity, privacy, and operational reliability, ensuring the agent behaves safely and as intended.
What is AI application security (AIAS)?
AI Application Security (AIAS) protects custom-built AI applications from threats such as prompt injection, rogue agent behavior, and unauthorized access. It uses automated testing, content guardrails, and monitoring to ensure applications behave safely.
Why it matters: AIAS reduces operational and security risks in AI deployments, maintaining integrity, reliability, and compliance.
What is AI assurance?
AI assurance is the ability to verify, monitor, and demonstrate that AI systems operate safely, comply with governance or regulatory requirements, and can be audited. It includes traceability of data inputs, model decisions, and system actions.
Why it matters: Assurance is necessary to meet internal risk controls, satisfy legal standards, and maintain accountability for AI-driven outcomes.
What is AI data governance?
AI data governance is an extension of traditional governance that focuses on managing the unique risks and complexities of AI systems. It ensures that data feeding AI systems is visible, well-understood, and governed by context-aware metadata—capturing provenance, usage constraints, and trust signals. This enables enterprises to safely scale AI, enforce data policies at the point of use, and ensure decisions are grounded in reliable information.
Learn more about AI data governance in our Knowledge Center.
What is AI data security?
AI data security involves making sure the data used by AI systems is reliable, properly managed, and safeguarded from abuse, while also maintaining transparency and trust at every stage. This solid framework allows organizations to deploy AI with confidence, meet regulatory requirements, and protect sensitive data.
What is AI governance?
AI governance is the control framework that governs how data is accessed, interpreted, and used by AI systems. It ensures that AI operates within defined boundaries—enforcing data use restrictions, applying contextual metadata, and maintaining traceability—so enterprises can deploy AI safely, compliantly, and at scale.
Learn more about AI governance in our Knowledge Centre.
What is AI orchestration?
AI orchestration is the process of managing and coordinating multiple AI agents, models, tools, and data sources so they work together seamlessly and efficiently. It ensures that AI components interact correctly, share data appropriately, and respond dynamically to changes in context or environment.
Why it matters: AI orchestration enables organizations to scale autonomous systems safely, optimize performance, and maintain control and compliance across complex AI-driven processes.
What is AI poisoning?
AI poisoning, also known as data poisoning, involves a deliberate and malicious contamination of data to compromise the performance of AI and ML systems. Attackers may inject false, misleading, or manipulated data into the training process to degrade model accuracy, introduce biases, or cause targeted misbehavior in specific scenarios.
Learn more about how to protect against AI poisoning here.
What is AI prompt injection?
Prompt injection is when someone inserts harmful or misleading text into an AI’s input to manipulate how it responds. This can cause the AI to produce incorrect, biased, or even dangerous outputs, or reveal information it shouldn’t. Because prompt injection can make AI behave in unexpected or harmful ways, protecting against it is key to keeping AI systems safe and trustworthy.
Why it matters: Prompt injection can cause AI to produce harmful or unauthorized outputs, making it a critical threat to trust, security, and brand safety.
What is AI-ready data?
AI-ready data is data that is accessible, trustworthy, enriched, and of high quality, ensuring accuracy and relevance for AI applications. In essence, it is information specifically prepared and optimized for use in artificial intelligence and machine learning models. This is crucial because the quality and preparedness of data directly impact the effectiveness, reliability, and fairness of AI systems.
What is AI risk?
AI risk refers to the potential harm or exposure arising from AI systems that are insecure, biased, poorly governed, or used outside their intended context. This can include data leakage, ethical issues, compliance violations, manipulation attacks, or unsafe decision-making.
Why it matters: Unmanaged AI risk can result in reputational damage, financial loss, operational disruption, and regulatory penalties.
What is AI risk mitigation?
AI risk mitigation is the process of identifying, assessing, and reducing the potential threats associated with the development and use of AI systems. It involves proactively managing risks - such as bias, security vulnerabilities, privacy concerns, and unintended behaviors - to ensure AI systems are safe, ethical, reliable, and compliant with regulations.
Why it matters: Proactively reducing AI risks supports safer deployment, regulatory compliance, and long-term trust in AI systems.
What is AI trust?
AI trust is the confidence that AI systems behave reliably, securely, and in alignment with intended goals and constraints. It involves assurance that outputs are explainable, fair, accountable, and consistent over time.
Why it matters: Without trust in AI decisions, organizations cannot safely scale adoption or meet stakeholder, customer, and regulatory expectations.
What is AI usage control?
AI Usage Control (AIUC) enforces organizational policies for safe consumption of AI services, particularly third-party tools. It monitors usage, prevents sensitive data leaks, and controls what topics or outputs employees can generate with AI.
Why it matters: AIUC ensures AI is used responsibly, mitigates insider and operational risks, and maintains compliance with governance policies.
What is a Model Context Protocol?
MCP (Model Context Protocol) allows for seamless integration and communication between AI models and different components, such as tools, data sources, and services. By standardizing how context and capabilities are shared, MCP enables AI to access relevant information, interact with external systems, and perform tasks more effectively and securely.
Why it matters: MCP enables agents to operate more securely and consistently by standardizing how they access tools, data, and capabilities.
What is a multi-agent system?
A multi-agent system is a group of AI agents that interact, collaborate, or compete to achieve individual or collective goals. Such systems often require coordination and communication protocols.
Why it matters: Multi-agent systems enable complex problem-solving and automation at scale, but introduce additional governance, security, and coordination challenges.
What is an agentic application?
An agentic application is software that incorporates one or more AI agents to execute tasks, respond to conditions, or make decisions autonomously.
Why it matters: Agentic applications enhance productivity and responsiveness, but must have clear policies to prevent unintended outcomes.
What is an agentic experience?
An agentic experience refers to interactions where AI agents actively assist or complete tasks for users, providing guidance, recommendations, or actions autonomously. These experiences showcase the potential of agent-native systems to streamline workflows or enhance user-facing services.
Why it matters: Agentic experiences can improve efficiency, personalization, and user satisfaction, but require careful design, transparency, and control to ensure trust and safe adoption.
What is an agentic security risk?
An agentic security risk is the potential harm that arises from the autonomous actions of AI agents, including unauthorized access, data misuse, malicious exploitation, or unintended behavior caused by errors or misconfigurations. These risks emerge from the agent’s ability to act independently, interact with systems, and access sensitive resources.
Why it matters: Understanding agentic risks enables organizations to implement proper safeguards, prevent misuse, and ensure agents operate safely and responsibly.
What is an agentic security solution?
An agentic security solution is a framework, platform, or set of controls designed to protect autonomous AI agents and the environments they operate in. These solutions can govern agent permissions, monitor decision-making, enforce policy boundaries, detect malicious behavior, and prevent misuse of tools or data during autonomous actions.
Why it matters: As AI agents gain more autonomy and system access, dedicated security solutions are critical to ensure safe operation, prevent exploitation, and maintain trust in agent-driven workflows.
What is an agentic security threat?
An agentic security threat is a malicious or accidental event that targets or exploits autonomous AI agents and their ability to act independently. Threats may involve manipulating an agent’s decisions, misusing its tools, triggering unauthorized actions, or causing data leakage or system disruption. Examples include memory poisoning, policy circumvention, or hijacking agent workflows.
Why it matters: Recognizing agent-specific threats is essential for designing secure, resilient AI systems that prevent autonomous agents from being weaponized or acting unpredictably.
What is an agent-native data model?
An agent-native data model is a way of structuring data specifically for AI agents to use efficiently. It emphasizes relationships, context, and attributes that agents need to make autonomous decisions.
Why it matters: Structuring data in an agent-native way ensures agents have the right information at the right time, improving accuracy and reducing errors.
What is an AI agent?
An AI agent is an autonomous software system that perceives its environment, makes decisions, and acts to achieve specific goals on behalf of users, often without human intervention. AI agents can reason, plan, remember, learn, and adapt, enabling them to complete complex tasks with a degree of independence.
Why it matters: AI agents unlock automation and efficiency for complex workflows, but require careful governance and oversight to ensure safe, ethical, and compliant operation.
What is an AI context engine?
An AI context engine is a system that gathers, interprets, and manages contextual information - such as user intent, data sensitivity, environment, and task relevance - to guide AI behavior and decision-making. It integrates diverse data sources, connects related information, and builds a unified understanding that allows AI models and agents to act appropriately within specific circumstances. Unlike basic AI systems that follow predefined rules, a context engine provides a dynamic intelligence layer that enables real-time, context-aware reasoning and more accurate, relevant responses.
Why it matters: By supplying continuous contextual intelligence, an AI context engine ensures AI outputs are precise, secure, and aligned with business objectives and compliance requirements, strengthening both performance and trust in AI systems.
What is an AI-ready data model?
An AI-ready data model is data that has been systematically prepared, structured, and governed so that AI and machine learning systems can use it accurately and efficiently. It is clean, complete, and contextualized, with clear metadata, allowing AI systems to easily find, process, and learn from it to make reliable predictions and decisions.
Why it matters: AI-ready data models ensure AI systems can operate effectively, reduce errors, and deliver scalable, reliable outcomes.
Keep updated
Don’t miss a beat from your favourite identity geeks