Glossary
We’ve got you covered from A-Z
What is graph data?
Graph data is information organized as nodes and edges, where nodes represent entities (people, accounts, devices) and edges represent relationships (ownership, interaction, dependency). Both nodes and edges can carry attributes, allowing the data to include context alongside values. This structure makes relationships explicit and queryable, enabling enterprises to see dependencies, patterns, and connections that traditional tabular data cannot capture.
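The node-and-edge structure above can be sketched in plain Python. This is a minimal illustration, not a real graph database; all entity names and attributes are invented for the example.

```python
# Nodes are entities with attributes; edges are typed relationships.
nodes = {
    "alice":   {"type": "person",  "role": "analyst"},
    "acct-42": {"type": "account", "status": "active"},
    "dev-7":   {"type": "device",  "os": "linux"},
}

edges = [
    ("alice", "owns", "acct-42"),  # (source, relationship, target)
    ("alice", "uses", "dev-7"),
]

def neighbors(node, relation):
    """Return entities connected to `node` by the given relationship type."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

print(neighbors("alice", "owns"))  # ['acct-42']
```

Because relationships are first-class data, queries like "which accounts does this person own?" are a direct lookup rather than a join across tables.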
What is graph data modeling?
Graph data modeling is the process of structuring data as a graph, where entities are represented as nodes and their relationships as edges with attributes. This approach carries context with the data, supports flexibility as business requirements evolve, and enables visibility across connected domains, forming a foundation for analysis, governance, and operational use.
What is graph integration?
Graph integration is the use of a graph model as a shared layer to connect data from multiple systems while preserving the relationships between entities. Rather than moving isolated records, applications operate on a connected structure that reflects how systems, customers, and processes interact. This approach improves consistency, adaptability, and context-rich insights across enterprise applications, while embedding governance and reducing duplication or conflicts.
Why it matters: Integrating data through a graph model preserves context, reduces duplication, and supports faster, more accurate insights across systems.
What is just-in-time access (JIT)?
Just-in-time access (JIT) refers to the process of temporarily granting on-demand (privileged) access only when needed for a specific task or period. Access is provided dynamically and automatically based on predefined policies and conditions. It’s like asking for, and receiving, a temporary key to a room only when you need to go inside. You don't have permanent access, but you can enter when necessary.
Why it matters: Temporary, on-demand access minimizes exposure, reduces the attack surface, and supports compliance.
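The "temporary key" idea can be sketched as follows. This is a simplified illustration of a JIT grant with an expiry, assuming a trivial always-allow policy; names and the in-memory grant store are invented for the example.

```python
import time

grants = {}  # (user, resource) -> expiry timestamp

def request_access(user, resource, ttl_seconds, policy=lambda u, r: True):
    """Grant temporary access if the policy allows it."""
    if policy(user, resource):
        grants[(user, resource)] = time.time() + ttl_seconds
        return True
    return False

def has_access(user, resource):
    """Access is valid only until the grant expires."""
    expiry = grants.get((user, resource))
    return expiry is not None and time.time() < expiry

request_access("alice", "prod-db", ttl_seconds=300)
print(has_access("alice", "prod-db"))  # True while the grant is live
print(has_access("bob", "prod-db"))   # False: no grant was ever issued
```

A production system would evaluate real policies, log every grant, and revoke keys centrally, but the core pattern is the same: access exists only inside a bounded window.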
What is Knowledge-based Access Control (KBAC)?
Knowledge-based Access Control (KBAC) leverages contextual and relational data to drive granular authorization decisions. At the core of the IndyKite Identity Platform is the Identity Knowledge Graph, which gathers data from various sources to create an operational data layer. To manage access, KBAC is added, using connected and enriched data to make real-time, context-aware authorization decisions based on your business needs.
Discover our Introduction to Knowledge-based Access Control.
What is least privilege?
Least privilege is a security concept that restricts user access rights to the minimum level needed to perform the job, based on roles and responsibilities. Benefits include enhanced data security, mitigated risk of unauthorized access, and compliance with regulatory standards for data protection.
What is LLM security?
LLM security involves safeguarding large language models and their related systems against risks like data leaks, prompt injection attacks, misuse, and unauthorized access. It involves securing the data used to train and interact with the model, as well as the model’s behavior and outputs.
What is model inversion?
Model inversion is an attack method targeting AI models, where an attacker infers information about the model's training data by analyzing the model's output. It effectively “reverse-engineers” the model to uncover the data it was trained on, which can lead to exposure of sensitive or private information.
Why it matters: Model inversion can expose private or proprietary data, posing serious risks to privacy, security, and compliance.
What is multi-agent security?
Multi-agent security covers the protection and governance of environments where multiple AI agents interact, collaborate, or compete. It addresses risks that emerge from agent-to-agent coordination, shared tools, and cascading decision chains.
Why it matters: Coordinated agents can create expanded attack surfaces or compounding errors, so multi-agent security is essential to prevent unintended behaviors, exploitation, and systemic failures.
What is ontology?
An ontology is a shared, formal definition of an organization’s core business concepts and the relationships between them. It provides a structured way to represent meaning so that systems can understand not just data, but what that data represents in the real world.
Why it matters: Ontologies give AI systems clear concepts to reason about instead of inferring meaning implicitly. This enables more accurate reasoning, better interoperability, and reduced ambiguity across models, agents, and applications.
What is OWASP agentic security?
OWASP agentic security is an initiative by the Open Web Application Security Project that provides security guidelines, threat models, and best practices for protecting autonomous AI agents and agentic systems. It focuses on mitigating risks such as unauthorized access, tool misuse, memory poisoning, data leakage, and unsafe autonomous behavior across the agent lifecycle.
Why it matters: Adopting OWASP-aligned principles helps organizations identify and reduce common vulnerabilities in agentic AI, ensuring safer, more trustworthy autonomous systems.
What is Policy Based Access Control (PBAC)?
Policy Based Access Control (PBAC) is an authorization approach that grants access based on predefined rules, or policies, which determine who is granted access to resources and under what conditions. Policies can draw on a variety of attributes, such as name, organization, job title, security clearance, creation date, file type, location, time of day, and sensitivity or threat level. These attributes are combined into policies, and rules are established to evaluate who is requesting access, what they are requesting access to, and the action that determines access.
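A minimal sketch of policy evaluation: each policy is a predicate over request attributes, and access is granted if any policy matches (deny by default). The attribute names and rules are illustrative, not a real policy language.

```python
policies = [
    # Analysts may read internal material during business hours.
    lambda req: (req["role"] == "analyst"
                 and req["action"] == "read"
                 and req["sensitivity"] == "internal"
                 and 9 <= req["hour"] < 17),
    # Admins may perform any action.
    lambda req: req["role"] == "admin",
]

def is_allowed(request):
    """Deny by default; allow only if some policy matches the attributes."""
    return any(policy(request) for policy in policies)

req = {"role": "analyst", "action": "read",
       "sensitivity": "internal", "hour": 10}
print(is_allowed(req))  # True
```

Real PBAC engines externalize these rules into a managed policy store so they can be audited and changed without touching application code.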
What is RagProtect?
RagProtect secures Retrieval-Augmented Generation (RAG) systems by providing fine-grained, context-aware authorization. It ensures that only the right information is accessed in the right context, protecting sensitive data and preventing leaks during AI interactions.
Why it matters: RAG systems often handle confidential data; RagProtect safeguards privacy, compliance, and trust in AI-driven outputs.
What is RAG protection?
RAG protection refers to securing Retrieval-Augmented Generation (RAG) systems by controlling how data is accessed and used. It includes fine-grained authorization to make sure only the right information is shared in the right context, helping prevent data leaks or unauthorized access during AI interactions.
Why it matters: RAG systems often access sensitive information, so proper protection prevents data leakage, ensures compliance, and maintains trust in AI-driven responses.
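One way to picture fine-grained authorization in a RAG pipeline: retrieved documents are filtered against the caller's entitlements before they ever reach the language model. This is a toy sketch with invented documents and a simple role check, not a description of any particular product.

```python
documents = [
    {"id": 1, "text": "Public product overview",
     "allowed_roles": {"employee", "contractor"}},
    {"id": 2, "text": "Confidential revenue figures",
     "allowed_roles": {"finance"}},
]

def authorized_retrieve(query, caller_role):
    """Retrieve matching documents, then keep only those the caller may see."""
    hits = [d for d in documents if query.lower() in d["text"].lower()]
    return [d for d in hits if caller_role in d["allowed_roles"]]

print([d["id"] for d in authorized_retrieve("revenue", "employee")])  # []
print([d["id"] for d in authorized_retrieve("revenue", "finance")])   # [2]
```

The key property: the authorization check happens at retrieval time, so unauthorized content can never leak into the model's context or its generated answer.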
What is RAG security?
RAG security focuses on protecting Retrieval-Augmented Generation (RAG) systems by using technical measures to secure data, prevent unauthorized access, and maintain privacy. It ensures the system and data are safe from misuse.
Learn more by downloading the E-guide: RAG Security.
What is real-time data visibility?
Real-time data visibility essentially means live information. It refers to the capability of accessing and analyzing data as it is generated or updated, providing immediate insights into current business operations or conditions. It’s like watching a sports game on your phone: you can see the score and plays as they happen. For a company, real-time visibility into its data offers many opportunities, such as facilitating proactive decision-making and improving overall efficiency.
Why it matters: Immediate access to live data enables faster decision-making, proactive management, and better operational control.
What is Retrieval Augmented Generation (RAG)?
Retrieval-Augmented Generation (RAG) is a method in AI that combines two steps: first, it finds (retrieves) useful information from sources like databases or documents; then, it uses an AI model to create a response based on that information. This makes the AI’s answers more accurate and helpful, especially for tasks like answering questions or summarizing information.
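The two steps can be sketched with stand-in components: a naive keyword retriever and a placeholder "generate" function standing in for a real LLM call. The documents and wording are invented for the example.

```python
knowledge_base = [
    "Graph data represents entities as nodes and relationships as edges.",
    "Least privilege restricts access to the minimum needed for a task.",
]

def retrieve(question):
    """Step 1: find documents sharing words with the question."""
    words = set(question.lower().split())
    return [doc for doc in knowledge_base
            if words & set(doc.lower().split())]

def generate(question, context):
    """Step 2: placeholder for an LLM that answers from the context."""
    return f"Q: {question}\nBased on: {' '.join(context)}"

context = retrieve("What are nodes and edges?")
print(generate("What are nodes and edges?", context))
```

In a real system, step 1 typically uses vector similarity search over embeddings rather than keyword overlap, and step 2 is a call to a language model with the retrieved passages in its prompt.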
What is secure model coordination?
Secure model coordination ensures multiple AI models or agents collaborate safely, sharing data and instructions while maintaining integrity, privacy, and compliance.
Why it matters: Proper coordination prevents errors, miscommunications, or malicious interference in multi-model environments.
What is structured data?
Structured data is information that is carefully organized according to well-defined schemas, often stored in databases with rows and columns. Organizations invest in governance practices such as catalogs, metadata tagging, accuracy checks, and lineage tracking to ensure reliability. Examples include financial records, CRM entries, transaction logs, and operational metrics.
Why it matters: Structured data has been the cornerstone of enterprise analytics for decades because it is predictable, consistent, and easy to query. It supports reporting, forecasting, simulations, and traditional machine learning models. However, it does not capture the depth of context and nuance found in unstructured data, which is critical for AI-native applications and LLM-driven workflows.
What is technical debt?
Technical debt refers to the total cost incurred by inadequate architecture or software development. This may stem from deliberate decisions to prioritize speed over design, but it is more often the result of short-sighted, siloed software decisions made without a view to the broader architecture. Legacy solutions that have become obsolete over time, but are embedded in ways that make them difficult to remove, also contribute to an organization’s technical debt.
What is the AI lifecycle?
The AI lifecycle is the end-to-end process of developing, deploying, and maintaining an AI system. It includes stages such as problem definition, data preparation, model training, evaluation, deployment, monitoring, and ongoing governance to ensure performance, accuracy, and compliance over time.
Why it matters: Managing the full lifecycle ensures AI systems remain accurate, secure, compliant, and aligned with business goals over time.
What is TRiSM?
AI Trust, Risk, and Security Management (TRiSM) is a framework for overseeing the safety, reliability, and compliance of AI systems across their lifecycle. It provides governance, monitoring, and controls to manage AI risks, including bias, misuse, and operational vulnerabilities.
Why it matters: Implementing TRiSM ensures organizations deploy AI responsibly, reduce operational risk, and maintain trust in AI-driven decisions.
What is trusted data use?
Trusted data use is the practice of using data with confidence that it is accurate, reliable, complete, and governed. It ensures that data meets defined quality, security, and compliance standards, allowing it to be used safely across systems, analytics, and AI.
Why it matters: Trusted data use enables consistent, compliant, and confident decisions, ensuring that data-driven systems operate safely and effectively.
What is trust fabric?
Trust fabric is a concept primarily driven by Microsoft, defined as a real-time approach to securing access that is adaptive and comprehensive. A trust fabric authenticates identities, verifies access conditions, checks permissions, encrypts the communication channel, and monitors for security breaches, all evaluated continuously in real time.
What is Trust Scoring?
Trust scoring is a method for evaluating the reliability of data based on two aspects: Data Integrity and Data Provenance. Data Integrity measures quality through dimensions such as freshness (how up to date the data is), completeness (whether all expected properties are present), and validity (accuracy of representation). Data Provenance tracks lineage and transparency through origin (where the data came from) and verification (when it was last authenticated). Together, these provide a configurable score indicating overall trustworthiness. The goal of trust scoring is to provide a measurable level of confidence in the data, allowing organizations to make more informed decisions by using data that has been verified as trustworthy.
Why it matters: Trust scoring strengthens governance and accountability by providing a measurable way to assess data reliability before it is used in critical processes or AI systems.
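One simple way to make such a score "configurable" is a weighted average over the dimensions named above. The weights and the 0-1 scale here are assumptions for illustration, not a prescribed scheme.

```python
# Integrity and provenance dimensions with illustrative weights (sum to 1).
WEIGHTS = {
    "freshness": 0.25, "completeness": 0.25, "validity": 0.20,  # Data Integrity
    "origin": 0.15, "verification": 0.15,                       # Data Provenance
}

def trust_score(dimensions):
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS)

record = {"freshness": 0.9, "completeness": 1.0, "validity": 0.8,
          "origin": 1.0, "verification": 0.6}
print(round(trust_score(record), 3))  # 0.875
```

Tuning the weights lets an organization decide, for example, that freshness matters more than provenance for operational dashboards but less for audit evidence.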