Glossary
We’ve got you covered from A-Z
What is AI data governance?
AI data governance is an extension of traditional governance that focuses on managing the unique risks and complexities of AI systems. It ensures that data feeding AI systems is visible, well-understood, and governed by context-aware metadata—capturing provenance, usage constraints, and trust signals. This enables enterprises to safely scale AI, enforce data policies at the point of use, and ensure decisions are grounded in reliable information.
Learn more about AI data governance in our Knowledge Center.
What are AI security threats?
AI security threats are risks and vulnerabilities that target or arise from AI systems, potentially compromising their integrity, confidentiality, or availability. Such threats can include adversarial attacks, data poisoning, model inversion, unauthorized inference, and misuse of AI for malicious purposes.
Read more about these risks and how to mitigate them in the Knowledge Center.
What are Policy Decision Points (PDPs)?
Policy Decision Points are parts of a system that review access requests against set rules and available context, then decide whether to approve or deny the request, sending that decision back to a Policy Enforcement Point (PEP).
Why it matters: PDPs centralize decision logic, allowing for scalable governance and consistent policy enforcement across systems.
What are Policy Enforcement Points (PEPs)?
Policy Enforcement Points are parts of a system that control access by checking each request, asking a Policy Decision Point (PDP) for a decision, and then allowing or blocking the request based on that decision.
Why it matters: PEPs ensure access is consistently enforced in real time, preventing unauthorized actions before they occur.
What are adversarial inputs in AI?
Adversarial inputs are carefully designed changes to data that confuse AI models and cause them to make mistakes. These changes can be subtle and hard for humans to detect, but they exploit weaknesses in the AI’s understanding. Adversarial inputs can reduce the accuracy and reliability of AI, so defending against them is important to ensure AI makes correct and safe decisions.
What are data silos?
Data silos refer to isolated collections of data, such as customer or sales data, within an organization that are not easily accessible or integrated with other data sources. Imagine having separate storage rooms for each department, where each room holds important information, but each department can only access their own storage room. This makes it difficult to get a complete unified view of the entire organization’s data.
What are directory information services?
A directory information service is a centralized database which stores, manages, and provides access to directory data, such as user identities, resources, and access permissions. Picture a company’s phonebook, listing all employees, their contact information, and their roles, helping everyone find the right person quickly.
What are dynamic access tokens?
Dynamic access tokens are temporary, context-aware credentials that grant AI agents or users secure access to resources. They can adjust permissions in real-time based on policies, risk, or environmental factors.
Why it matters: Dynamic tokens improve security by limiting exposure and ensuring access is only granted under the right conditions.
What are knowledge graphs?
A knowledge graph, also known as a semantic network or connected data model, represents a network of real-world entities, made up of nodes, edges and labels, and illustrates the relationships between them - visualized as a graph structure. Imagine a smart map that connects pieces of information together, and shows how things are related. By doing so, we can find unique connections and new insights, which makes it easier to answer complex questions, and provide helpful recommendations.
Why it matters: Mapping relationships between entities uncovers hidden insights, improves recommendations, and supports complex queries.
What does agent-native mean?
Agent-native refers to systems and workflows intentionally designed for autonomous AI agents to operate at the core, instead of being added on top of workflows designed for humans. It reflects a shift toward environments where agents execute tasks, make decisions, and interact with other systems independently, while humans guide overall direction.
Why it matters: Agent-native design ensures agents can operate safely and efficiently, enabling greater automation, scalability, and alignment with business goals.
What is 0Auth2?
OAuth 2.0 is an open standard protocol that allows third-party applications, like a website or application to access the resources of a user without exposing their credentials. For instance, it allows apps to access your data without giving them your password, keeping your information secure.
What is AI agent security?
AI agent security focuses on protecting individual autonomous AI agents and their interactions with data, systems, and users. It governs the agent’s access, enforces policies, and monitors its actions to prevent misuse, exploitation, or unintended harm.
Why it matters: Securing individual AI agents protects data integrity, privacy, and operational reliability, ensuring the agent behaves safely and as intended.
What is AI application security (AIAS)?
AI Application Security (AIAS) protects custom-built AI applications from threats such as prompt injection, rogue agent behavior, and unauthorized access. It uses automated testing, content guardrails, and monitoring to ensure applications behave safely.
Why it matters: AIAS reduces operational and security risks in AI deployments, maintaining integrity, reliability, and compliance.
What is AI assurance?
AI assurance is the ability to verify, monitor, and demonstrate that AI systems operate safely, comply with governance or regulatory requirements, and can be audited. It includes traceability of data inputs, model decisions, and system actions.
Why it matters: Assurance is necessary to meet internal risk controls, satisfy legal standards, and maintain accountability for AI-driven outcomes.
What is AI data security?
AI data security involves making sure the data used by AI systems is reliable, properly managed, and safeguarded from abuse, while also maintaining transparency and trust at every stage. This solid framework allows organizations to deploy AI with confidence, meet regulatory requirements, and protect sensitive data.
What is AI governance?
AI governance is the control framework that governs how data is accessed, interpreted, and used by AI systems. It ensures that AI operates within defined boundaries—enforcing data use restrictions, applying contextual metadata, and maintaining traceability—so enterprises can deploy AI safely, compliantly, and at scale.
Learn more about AI governance in our Knowledge Centre.
What is AI orchestration?
AI orchestration is the process of managing and coordinating multiple AI agents, models, tools, and data sources so they work together seamlessly and efficiently. It ensures that AI components interact correctly, share data appropriately, and respond dynamically to changes in context or environment.
Why it matters: AI orchestration enables organizations to scale autonomous systems safely, optimize performance, and maintain control and compliance across complex AI-driven processes.
What is AI poisoning?
AI poisoning, also known as data poisoning, involves a deliberate and malicious contamination of data to compromise the performance of AI and ML systems. Attackers may inject false, misleading, or manipulated data into the training process to degrade model accuracy, introduce biases, or cause targeted misbehavior in specific scenarios.
Learn more about how to protect against AI poisoning here.
What is AI prompt injection?
Prompt injection is when someone inserts harmful or misleading text into an AI’s input to manipulate how it responds. This can cause the AI to produce incorrect, biased, or even dangerous outputs, or reveal information it shouldn’t. Because prompt injection can make AI behave in unexpected or harmful ways, protecting against it is key to keeping AI systems safe and trustworthy.
Why it matters: Prompt injection can cause AI to produce harmful or unauthorized outputs, making it a critical threat to trust, security, and brand safety.
What is AI risk mitigation?
AI risk mitigation is the process of identifying, assessing, and reducing the potential threats associated with the development and use of AI systems. It involves proactively managing risks - such as bias, security vulnerabilities, privacy concerns, and unintended behaviors - to ensure AI systems are safe, ethical, reliable, and compliant with regulations.
Why it matters: Proactively reducing AI risks supports safer deployment, regulatory compliance, and long-term trust in AI systems.
What is AI risk?
AI risk refers to the potential harm or exposure arising from AI systems that are insecure, biased, poorly governed, or used outside their intended context. This can include data leakage, ethical issues, compliance violations, manipulation attacks, or unsafe decision-making.
Why it matters: Unmanaged AI risk can result in reputational damage, financial loss, operational disruption, and regulatory penalties.
What is AI trust?
AI trust is the confidence that AI systems behave reliably, securely, and in alignment with intended goals and constraints. It involves assurance that outputs are explainable, fair, accountable, and consistent over time.
Why it matters: Without trust in AI decisions, organizations cannot safely scale adoption or meet stakeholder, customer, and regulatory expectations.
What is AI usage control?
AI Usage Control (AIUC) enforces organizational policies for safe consumption of AI services, particularly third-party tools. It monitors usage, prevents sensitive data leaks, and controls what topics or outputs employees can generate with AI.
Why it matters: AIUC ensures AI is used responsibly, mitigates insider and operational risks, and maintains compliance with governance policies.
What is AI-ready data?
AI-ready data is data that is accessible, trustworthy, enriched, and of high quality, ensuring accuracy and relevance for AI applications. In essence, it is information specifically prepared and optimized for use in artificial intelligence and machine learning models. This is crucial because the quality and preparedness of data directly impact the effectiveness, reliability, and fairness of AI systems.
What is AI prompt injection?
AI prompt injection is when someone inserts harmful or misleading text into an AI’s input to manipulate how it responds. This can cause the AI to produce incorrect, biased, or even dangerous outputs, or reveal information it shouldn’t. Because prompt injection can make AI behave in unexpected or harmful ways, protecting against it is key to keeping AI systems safe and trustworthy.
Learn more about how to protect your AI systems here.
Keep updated
Don’t miss a beat from your favourite identity geeks