Glossary

We’ve got you covered from A-Z

What is AgentControl?

AgentControl provides fine-grained, context-aware authorization for AI agents, ensuring they access only the data and systems required for their specific tasks. By dynamically evaluating permissions in real time, it prevents over-permissioned agents from exposing sensitive information or violating policies.

Why it matters: Proper agent access ensures AI agents can perform tasks safely and efficiently, while reducing security risks, preventing unauthorized data use, and maintaining compliance.

What is Attribute Based Access Control (ABAC)?

Attribute Based Access Control (ABAC) is a security approach that uses attributes (such as title, location, or team) to determine access to a resource. A system administrator defines the approved attribute values that grant access.

Learn more here.
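
For illustration, an ABAC decision can be sketched in a few lines of Python. The attribute names ("team", "location") are hypothetical examples, not a real schema:

```python
# Illustrative ABAC check: access is granted only when every attribute
# the resource requires matches the requesting subject's attributes.

def abac_decision(subject: dict, resource: dict) -> bool:
    """Grant access only if all required attributes match."""
    required = resource.get("required_attributes", {})
    return all(subject.get(k) == v for k, v in required.items())

alice = {"title": "analyst", "team": "finance", "location": "EU"}
report = {"required_attributes": {"team": "finance", "location": "EU"}}

print(abac_decision(alice, report))  # True: attributes match
```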

What is AuthZen?

AuthZen (Authorization Enhancement) is a standard by the OpenID Foundation that defines an interoperable protocol for fine-grained authorization. It enables Policy Enforcement Points (PEPs) and Policy Decision Points (PDPs) to work together using rich contextual data to make precise access decisions.

Why it matters: AuthZen helps organizations enforce precise, policy-driven access decisions, improving control, compliance, and interoperability.

What is B2B data sharing?

B2B data sharing involves securely sharing or accessing data from one entity to another for business purposes, for example to enable collaboration, improve services, or create mutual value. This often involves sharing customer insights, supply chain data, or analytics, while ensuring privacy, security, and compliance with regulations.

Why it matters: Secure, well-governed data sharing drives collaboration, innovation, and value creation without exposing sensitive information.

Learn more about B2B data sharing here.

What is ContX IQ?

ContX IQ is an IndyKite product that combines data retrieval and enforcement to enable secure, real-time delivery of data to the right place in the right context. It allows organizations to define business parameters, run contextual queries, and fetch data (without duplication) tailored to specific situations, while simplifying integrations and maintaining access control and consent management.

Why it matters: ContX IQ ensures that data is shared safely, efficiently, and in alignment with policies, reducing engineering overhead and supporting trust in AI-driven processes.

What is Just-in-time access (JIT)?

Just-in-time access (JIT) is the practice of temporarily granting on-demand (privileged) access only when it is needed for a specific task or period. Access is provided dynamically and automatically based on predefined policies and conditions. It's like being handed a temporary key to a room only when you need to go inside: you don't have permanent access, but you can enter when necessary.

Why it matters: Temporary, on-demand access minimizes exposure, reduces the attack surface, and supports compliance.
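
The mechanics can be sketched in Python: a grant is created on demand with an expiry, and every check re-evaluates whether it is still valid. The fixed 60-second window stands in for real policy conditions:

```python
# Minimal just-in-time access sketch (illustrative policy: 60-second TTL).

GRANT_TTL_SECONDS = 60

def grant_access(user: str, resource: str, now: float) -> dict:
    """Issue a temporary grant that expires after GRANT_TTL_SECONDS."""
    return {"user": user, "resource": resource,
            "expires_at": now + GRANT_TTL_SECONDS}

def has_access(grant: dict, user: str, resource: str, now: float) -> bool:
    """Valid only for the right user and resource, and before expiry."""
    return (grant["user"] == user
            and grant["resource"] == resource
            and now < grant["expires_at"])

t0 = 1000.0
g = grant_access("alice", "prod-db", now=t0)
print(has_access(g, "alice", "prod-db", now=t0 + 10))   # True: inside window
print(has_access(g, "alice", "prod-db", now=t0 + 120))  # False: expired
```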

What is Knowledge-based Access Control (KBAC)?

Knowledge-based Access Control (KBAC) leverages contextual and relational data to drive granular authorization decisions. At the core of the IndyKite Identity Platform is the Identity Knowledge Graph, which gathers data from various sources to create an operational data layer. To manage access, KBAC is added, using connected and enriched data to make real-time, context-aware authorization decisions based on your business needs.

Discover our Introduction to Knowledge-based Access Control.

Learn more about KBAC here.
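
As a hedged sketch of the idea, a graph-backed access check can be modeled as a path search over permitted relationship types. The graph here is a plain adjacency mapping, and the relationship names are illustrative, not IndyKite's actual schema:

```python
# KBAC-style sketch: grant access when a path of allowed relationships
# connects the subject to the resource in the knowledge graph.
from collections import deque

EDGES = {
    ("alice", "member_of"): ["finance-team"],
    ("finance-team", "owns"): ["q3-report"],
}

def connected(graph: dict, start: str, target: str, allowed: set) -> bool:
    """Breadth-first search following only allowed relationship types."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for (src, rel), dests in graph.items():
            if src == node and rel in allowed:
                for dst in dests:
                    if dst not in seen:
                        seen.add(dst)
                        queue.append(dst)
    return False

print(connected(EDGES, "alice", "q3-report", {"member_of", "owns"}))  # True
```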

What is LLM security?

LLM security involves safeguarding large language models and their related systems against risks like data leaks, prompt injection attacks, misuse, and unauthorized access. It involves securing the data used to train and interact with the model, as well as the model’s behavior and outputs.

What is OWASP agentic security?

OWASP agentic security is an initiative by the Open Web Application Security Project that provides security guidelines, threat models, and best practices for protecting autonomous AI agents and agentic systems. It focuses on mitigating risks such as unauthorized access, tool misuse, memory poisoning, data leakage, and unsafe autonomous behavior across the agent lifecycle.

Why it matters: Adopting OWASP-aligned principles helps organizations identify and reduce common vulnerabilities in agentic AI, ensuring safer, more trustworthy autonomous systems.

What is Policy Based Access Control (PBAC)?

Policy Based Access Control (PBAC) is an authorization approach that uses predefined rules, or policies, to determine who is granted access to resources and under what conditions. Policies can combine a variety of attributes, such as name, organization, job title, security clearance, creation date, file type, location, time of day, and sensitivity or threat level. Rules are then established to evaluate who is requesting access, what they are requesting access to, and the action being performed.

Learn more here.
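
The who / what / action evaluation can be sketched in Python. Each policy combines attribute conditions on the subject and resource with the actions it covers; the attribute names are hypothetical examples:

```python
# Illustrative PBAC evaluation: a request is permitted when any policy's
# conditions all hold for the subject, the resource, and the action.

def evaluate(policies: list, subject: dict, resource: dict, action: str) -> str:
    """Return "PERMIT" if any policy matches who, what, and which action."""
    for policy in policies:
        if (action in policy["actions"]
                and all(subject.get(k) == v for k, v in policy["subject"].items())
                and all(resource.get(k) == v for k, v in policy["resource"].items())):
            return "PERMIT"
    return "DENY"

POLICIES = [{
    "name": "analysts-read-reports",
    "subject": {"job_title": "analyst", "security_clearance": "internal"},
    "resource": {"file_type": "report"},
    "actions": ["read"],
}]

print(evaluate(POLICIES,
               {"job_title": "analyst", "security_clearance": "internal"},
               {"file_type": "report"}, "read"))  # PERMIT
```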

What is RAG protection?

RAG protection refers to securing Retrieval-Augmented Generation (RAG) systems by controlling how data is accessed and used. It includes fine-grained authorization to make sure only the right information is shared in the right context, helping prevent data leaks or unauthorized access during AI interactions.

Why it matters: RAG systems often access sensitive information, so proper protection prevents data leakage, ensures compliance, and maintains trust in AI-driven responses.

What is RAG security?

RAG security focuses on protecting Retrieval-Augmented Generation (RAG) systems by using technical measures to secure data, prevent unauthorized access, and maintain privacy. It ensures the system and data are safe from misuse.

Learn more by downloading the E-guide: RAG Security.

What is RagProtect?

RagProtect secures Retrieval-Augmented Generation (RAG) systems by providing fine-grained, context-aware authorization. It ensures that only the right information is accessed in the right context, protecting sensitive data and preventing leaks during AI interactions.

Why it matters: RAG systems often handle confidential data; RagProtect safeguards privacy, compliance, and trust in AI-driven outputs.

What is Retrieval Augmented Generation (RAG)?

Retrieval-Augmented Generation (RAG) is a method in AI that combines two steps: first, it finds (retrieves) useful information from sources like databases or documents; then, it uses an AI model to create a response based on that information. This makes the AI’s answers more accurate and helpful, especially for tasks like answering questions or summarizing information.
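
The two steps can be shown in a toy Python sketch: retrieval here is naive keyword overlap rather than real vector search, and the prompt-building stands in for the call to an actual language model:

```python
# Toy RAG sketch: (1) retrieve the most relevant document, (2) build a
# prompt that grounds the model's answer in that retrieved context.

DOCS = [
    "The refund window is 30 days from the date of purchase.",
    "Support is available Monday to Friday, 9am to 5pm.",
]

def retrieve(query: str, docs: list) -> str:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Step 2 would pass this prompt to an LLM for generation."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

context = retrieve("How long is the refund window?", DOCS)
print("30 days" in context)  # True: the right document was retrieved
```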

What is TRiSM?

AI Trust, Risk, and Security Management (TRiSM) is a framework for overseeing the safety, reliability, and compliance of AI systems across their lifecycle. It provides governance, monitoring, and controls to manage AI risks, including bias, misuse, and operational vulnerabilities.

Why it matters: Implementing TRiSM ensures organizations deploy AI responsibly, reduce operational risk, and maintain trust in AI-driven decisions.

What is Trust Scoring?

A method for evaluating the reliability of data based on two aspects, Data Integrity and Data Provenance. Data Integrity measures quality through dimensions such as freshness (how up to date the data is), completeness (whether all expected properties are present), and validity (accuracy of representation). Data Provenance tracks lineage and transparency through origin (where the data came from) and verification (when it was last authenticated). Together, these provide a configurable score indicating overall trustworthiness. The goal of trust scoring is to provide a measurable level of confidence in the data, allowing organizations to make more informed decisions by using data that has been verified as trustworthy.

Why it matters: Trust scoring strengthens governance and accountability by providing a measurable way to assess data reliability before it is used in critical processes or AI systems.
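
One way to picture a configurable score: rate each dimension above (freshness, completeness, validity; origin, verification) from 0 to 1 and combine them with weights. The equal weights below are purely illustrative:

```python
# Illustrative trust score: a weighted average over the Data Integrity
# and Data Provenance dimensions described above. Weights sum to 1.

WEIGHTS = {
    "freshness": 0.2, "completeness": 0.2, "validity": 0.2,  # integrity
    "origin": 0.2, "verification": 0.2,                      # provenance
}

def trust_score(dimensions: dict, weights: dict = WEIGHTS) -> float:
    """Combine per-dimension scores (0..1) into one trust score."""
    return sum(weights[d] * dimensions[d] for d in weights)

record = {"freshness": 0.9, "completeness": 1.0, "validity": 0.8,
          "origin": 1.0, "verification": 0.5}
print(round(trust_score(record), 2))  # 0.84
```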

What is Zero-Trust architecture?

Zero-Trust architecture is a security framework that assumes all users, devices, and transactions are potential threats and that nothing can be trusted implicitly, therefore requiring strict authentication and authorization for every access attempt. A Zero Trust approach is a core pillar of most enterprise cybersecurity strategies, strengthening defense against cyber threats, enhancing data protection, and enabling continuous security monitoring and enforcement across networks and systems.

What is a Model Context Protocol?

MCP (Model Context Protocol) allows for seamless integration and communication between AI models and different components, such as tools, data sources, and services. By standardizing how context and capabilities are shared, MCP enables AI to access relevant information, interact with external systems, and perform tasks more effectively and securely.

Why it matters: MCP enables agents to operate more securely and consistently by standardizing how they access tools, data, and capabilities.
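
To make the standardization concrete, here is a sketch of what an MCP-style tool invocation message looks like. MCP builds on JSON-RPC 2.0; the method and parameter names below follow the published spec as we understand it, but treat this as an illustration rather than a compliant client:

```python
# Sketch of an MCP-style "tools/call" request over JSON-RPC 2.0.
# The tool name and arguments are hypothetical examples.
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request asking the server to run a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = make_tool_call(1, "search_customers", {"query": "overdue invoices"})
print(json.loads(msg)["method"])  # tools/call
```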

What is a connected data model?

Connected data models involve networks of data points or nodes linked through relationships. Knowledge graphs are a popular way to do this, making connections between disparate sources to provide specific insights. They aim to intuitively represent the interconnected world. The real world is flexible, messy and constantly changing. Our relationships and connections are dynamic and are at times incredibly complex and layered, and knowledge graphs are designed to reflect this complexity.

What is a data catalog?

A data catalog is a tool for inventorying and organizing data assets. Capabilities include using machine learning to automatically detect relationships between data assets, with users verifying and resolving any uncertainties found during automated inventory.

What is a data control engine?

A data control engine is an external system that uses metadata and relationships in a graph to enforce enterprise policies, manage governance, and control access across applications and systems. It ensures that rules, trust signals, and usage restrictions travel with the data, allowing operationalized graph data to be used securely and consistently in analytics, AI, and workflows.

Why it matters: Ensuring policies and governance travel with the data prevents misuse, reduces compliance risk, and allows secure, scalable data operations.

What is a data mesh?

Data mesh is a data management approach that supports a domain-led practice for defining, delivering, maintaining, and governing data products. While it’s not yet an established best practice, data mesh helps ensure that data products are easy to find and use by data consumers, such as business users, data analysts, data engineers, or other systems. Additionally, data products must meet terms of service and SLAs, forming a contract between the provider and the consumer.

What is a graph model?

A graph model is a way of structuring data (using a graph database) that represents entities as nodes and the relationships between them as edges. It captures not just values but the connections and dependencies among entities, allowing enterprises to see complex systems, follow chains of interaction, and understand how elements such as customers, accounts, transactions, and products relate to one another.

Why it matters: Understanding and using graph models enables organizations to uncover hidden relationships, maintain context across systems, and make better-informed decisions.
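
The nodes-and-edges idea fits in a few lines of Python. The entity and relationship names (customers, accounts, transactions) are examples, not a prescribed schema:

```python
# Minimal graph model sketch: entities as nodes, relationships as
# directed, labeled edges, and queries as edge-following.

nodes = {"cust-1": {"type": "customer"},
         "acct-9": {"type": "account"},
         "txn-42": {"type": "transaction"}}
edges = [("cust-1", "OWNS", "acct-9"),
         ("acct-9", "POSTED", "txn-42")]

def neighbors(node: str, relation: str) -> list:
    """Follow edges with a given label out of a node."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

# Follow the chain customer -> account -> transaction.
account = neighbors("cust-1", "OWNS")[0]
print(neighbors(account, "POSTED"))  # ['txn-42']
```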

What is a multi-agent system?

A multi-agent system is a group of AI agents that interact, collaborate, or compete to achieve individual or collective goals. Such systems often require coordination and communication protocols.

Why it matters: Multi-agent systems enable complex problem-solving and automation at scale, but introduce additional governance, security, and coordination challenges.

What is a unified data layer?

A unified data layer, also known as connected data, refers to data stored in a graph data model, which captures relationships between data points. This approach excels at understanding dynamic and complex relationships, managing data intuitively, and providing context to otherwise meaningless information. Connected data offers greater flexibility, insight, and speed for data-driven projects, making it a powerful force in the data management landscape.

Why it matters: Centralized, connected data ensures consistency, improves efficiency, and reduces errors across systems and processes.
