Glossary
We’ve got you covered from A-Z
What is context-aware enforcement?
Context-aware enforcement is a type of dynamic enforcement that applies access and security policies specifically based on real-time contextual information, such as user identity, device status, location, and behavior. By evaluating the circumstances surrounding each request, it ensures that access or actions are only allowed when appropriate.
Why it matters: By using contextual intelligence, organizations can prevent unauthorized actions while enabling legitimate system use, enhancing security without disrupting productivity.
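A minimal sketch in Python of how such an evaluation might look; the context fields, rules, and roles below are illustrative assumptions, not any particular product's API:

    # Illustrative sketch: each request is evaluated against real-time
    # context, not just the user's identity. All rules are assumptions.
    def allow_request(user, context):
        if not context.get("device_compliant"):
            return False  # block unmanaged devices
        if context.get("country") not in user["allowed_countries"]:
            return False  # block unexpected locations
        if context.get("failed_logins", 0) > 3:
            return False  # block suspicious behavior
        return user["role"] in {"admin", "analyst"}

    # Same user, different context, different decision.
    alice = {"role": "analyst", "allowed_countries": {"SE", "NO"}}
    print(allow_request(alice, {"device_compliant": True, "country": "SE"}))   # True
    print(allow_request(alice, {"device_compliant": False, "country": "SE"}))  # False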
What is context-aware security?
Context-aware security is the practice of using situational information to enhance security decisions in real time. It takes into account factors like user location, device type, time of access, and network conditions to dynamically adjust access controls and security measures. This approach enables more adaptive and precise protection - reducing the risk of threats while still allowing legitimate users to access what they need.
Why it matters: Adaptive, context-based security reduces threats more precisely than static rules, strengthening protection without hindering productivity.
What is context-based access control (CBAC)?
Context-based access control (CBAC) is a dynamic security model that makes adaptive, risk-aware access decisions by evaluating multiple real-time situational factors, such as user behavior, device health, location, and network conditions, instead of relying solely on static rules.
Why it matters: CBAC enhances security while maintaining operational efficiency, ensuring sensitive resources are accessed safely and appropriately.
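A minimal sketch of how several situational signals might be combined into a risk-aware decision; the signal names, weights, and thresholds are illustrative assumptions:

    # Illustrative sketch: contextual signals are weighted into a risk
    # score, and the access decision adapts to that score.
    SIGNAL_WEIGHTS = {"unknown_device": 0.4, "new_location": 0.3,
                      "off_hours": 0.2, "unusual_volume": 0.5}

    def cbac_decision(signals, threshold=0.5):
        risk = sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))
        if risk >= 1.0:
            return "deny"
        if risk >= threshold:
            return "step_up_auth"  # e.g. require MFA instead of a hard deny
        return "allow"

    print(cbac_decision({"off_hours": True}))                        # allow
    print(cbac_decision({"new_location": True, "off_hours": True}))  # step_up_auth

Note the middle option: stepping up authentication instead of hard-denying is one way risk-aware decisions preserve operational efficiency.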
What is contextual access control?
Contextual access control is a dynamic security approach that grants or denies access based on real-time contextual factors, rather than just static attributes like user identity. It evaluates variables such as role, location, device, time, and activity to assess the risk of each access request.
Why it matters: Considering the circumstances of each request reduces the risk of unauthorized access while allowing legitimate users to operate efficiently, supporting both security and usability.
What is contextualized data?
Contextualized data refers to information that is enriched with relevant context, such as time, location, environmental conditions, historical trends, or external events, to provide deeper insights and greater understanding. Traditional databases can’t capture context; connected data models can, in the form of relationships to other data points, attributes, and metadata. Contextualized data provides a richer view that can enhance workflows for identity and access management, threat detection, predictive models, and personalization.
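A minimal sketch of the connected-data idea, where an event gains meaning through its relationships to other nodes; the graph structure and node names are illustrative assumptions:

    # Illustrative sketch: a login event is contextualized by following
    # its relationships to the device and user it involves.
    graph = {
        "login_42": {"user": "alice", "device": "laptop_9", "location": "Oslo"},
        "laptop_9": {"managed": True, "last_patched": "2024-03-01"},
        "alice":    {"role": "analyst", "usual_locations": ["Oslo", "Bergen"]},
    }

    def enrich(event_id):
        event = graph[event_id]
        return {**event,
                "device_ctx": graph[event["device"]],  # follow relationship
                "user_ctx": graph[event["user"]]}      # follow relationship

    print(enrich("login_42"))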
What is data access?
Data access refers to a user’s permitted ability to retrieve, manipulate, or interact with data stored in a system or database. Put simply, it’s like having a key to a safe where information is stored, allowing you to view, change, or use the data according to your permissions.
What is data assurance?
Data assurance is the process of validating that data is accurate, complete, governed, and appropriate for use in applications, analytics, or AI systems. It includes evaluating provenance, quality, consistency, and usage permissions.
Why it matters: AI and decision systems are only as reliable as the data they rely on, and assured data reduces the likelihood of incorrect, biased, or non-compliant outcomes.
What is data classification?
Data classification involves organizing data into categories to enhance its usability and security. This process simplifies data retrieval and is crucial for risk management, compliance, and data security efforts.
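A minimal sketch of rule-based classification, tagging values by sensitivity so security controls can act on the labels; the patterns and categories are illustrative assumptions:

    import re

    # Illustrative rules: label values that match known PII patterns.
    RULES = [
        (re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"), "PII:email"),
        (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "PII:ssn"),
    ]

    def classify(value):
        for pattern, label in RULES:
            if pattern.search(value):
                return label
        return "unclassified"

    print(classify("Contact jon@example.com for details"))  # PII:email
    print(classify("Order #1234 shipped"))                  # unclassified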
What is data enablement?
Data enablement is the practice of empowering an organization to realize the full potential of its data. It involves ensuring that data is properly integrated, managed, and delivered to the right users in a meaningful way, so it can be used effectively to drive decision-making and innovation.
Why it matters: Effective data enablement allows organizations to leverage their full data potential for innovation and growth.
What is data entity matching?
Data entity matching is the task of determining whether two entity descriptions refer to the same real-world entity. By identifying, linking, and merging similar or identical entities across different datasets, you can create a unified and accurate representation. The goal is a cohesive dataset that enables clearer insights and more informed decision-making.
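A minimal sketch of a matching rule using Python’s standard difflib; the fields, similarity measure, and threshold are illustrative assumptions:

    from difflib import SequenceMatcher

    def similarity(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    # Illustrative rule: two records refer to the same entity if their
    # emails match exactly or their names are sufficiently similar.
    def is_same_entity(rec1, rec2, threshold=0.85):
        if rec1.get("email") and rec1.get("email") == rec2.get("email"):
            return True
        return similarity(rec1["name"], rec2["name"]) >= threshold

    crm = {"name": "Jon A. Smith", "email": "jon@example.com"}
    billing = {"name": "Jonathan Smith", "email": "jon@example.com"}
    print(is_same_entity(crm, billing))  # True: merge into one entity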
What is data fabric?
A data fabric is a design framework for creating flexible and reusable data pipelines, services, and semantics. It uses data integration, active metadata, knowledge graphs, profiling, machine learning, and data cataloging. A data fabric shifts the dominant approach to data management from “build to suit” for each dataset and use case to “observe and leverage”.
What is data governance?
Data governance is a framework of rules and guidelines for how everyone should handle and use information in a company to keep it accurate, secure, and useful.
Why it matters: Strong governance ensures data is secure, compliant, and used responsibly, reducing risk while enabling business value.
What is data integration?
Data integration involves practices, techniques, and tools to ensure consistent access and delivery of data across different areas and structures within a company. It aims to meet the data needs of all applications and business processes efficiently.
Why it matters: Seamless data integration ensures consistency, reduces duplication, and supports accurate, organization-wide insights.
What is data lineage?
Data lineage refers to the lifecycle and journey of data from origin to destination: from its creation to how it has been edited, transformed, and used. Data lineage is critical for knowing how data can and should be used (compliance), how it was generated, and how trustworthy it is. This becomes particularly important when using data for insights or for machine learning and large language models.
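A minimal sketch of how lineage steps might be recorded so the journey can be audited later; the record structure is an illustrative assumption, not a standard:

    from datetime import datetime, timezone

    lineage = []

    def record_step(dataset, operation, actor):
        # Append an auditable record of what happened to the data.
        lineage.append({
            "dataset": dataset,
            "operation": operation,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })

    record_step("customers_raw", "ingested from CRM export", "etl_job_7")
    record_step("customers_clean", "deduplicated and normalized", "etl_job_8")
    for step in lineage:
        print(step["dataset"], "<-", step["operation"])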
What is data management?
Data management refers to the collection, organization, protection, and utilization of an organization’s data, and is a core enabler of modern businesses. Data is considered a company’s most critical and valuable asset; however, without tooling to effectively manage and make use of it, that asset is worthless. Data management technologies include master data management, customer data platforms, data unification platforms, and data integration platforms. Every system in use at an enterprise collects data, so a clear data management strategy is crucial to manage, govern, and make use of all of it in a safe, secure, and compliant way.
What is data poisoning?
Data poisoning, also known as AI poisoning, involves a deliberate and malicious contamination of data to compromise the performance of AI and ML systems. Attackers may inject false, misleading, or manipulated data into the training process to degrade model accuracy, introduce biases, or cause targeted misbehavior in specific scenarios.
Why it matters: Poisoned data can corrupt models, degrade performance, or lead to manipulated outcomes, undermining reliability and safety.
Learn how to protect against data poisoning here.
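A minimal sketch of the effect, using a toy threshold classifier and synthetic numbers; nothing here reflects a real model or dataset:

    # Illustrative sketch: a few mislabeled training samples shift the
    # decision boundary of a trivial mean-based classifier.
    def fit_threshold(samples):
        spam = [x for x, y in samples if y == "spam"]
        ham = [x for x, y in samples if y == "ham"]
        return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

    clean = [(0.9, "spam"), (0.8, "spam"), (0.1, "ham"), (0.35, "ham")]
    poisoned = clean + [(0.05, "spam"), (0.1, "spam")]  # attacker-injected

    print(fit_threshold(clean))     # ~0.54: sensible boundary
    print(fit_threshold(poisoned))  # ~0.34: the 0.35 ham now scores as spam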
What is data profiling?
Data profiling involves statistical analysis of datasets (structured and unstructured, internal and external) to give business users insight into data quality and to identify data quality issues. Profiling also checks data against established rules from rules management.
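A minimal sketch of profiling a single column; the data and the quality rule are illustrative assumptions:

    from statistics import mean

    ages = [34, 29, None, 41, -3, 38]  # column with quality issues
    values = [v for v in ages if v is not None]

    profile = {
        "count": len(ages),
        "nulls": ages.count(None),
        "min": min(values),
        "max": max(values),
        "mean": round(mean(values), 1),
    }
    # Rule from rules management, e.g. "age must be between 0 and 120"
    violations = [v for v in values if not 0 <= v <= 120]
    print(profile, "violations:", violations)  # flags the -3 value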
What is data provenance?
Data provenance is a historical record of source data, a way to understand the journey of data throughout the organization. Data provenance plays a crucial role in understanding the quality of your data and ensuring its veracity. It’s like a detailed travel log for data, showing where it came from, where it has been, and how it has changed over time.
Why it matters: Knowing data’s origin and transformations ensures accuracy, trust, and compliance across systems and applications.
What is data risk scoring?
Data risk scoring is a method of rating the potential risk of different kinds of information based on how sensitive the data is and how likely it is to be accessed by someone who shouldn't have it.
Why it matters: Understanding potential data risks allows organizations to prioritize mitigation and protect sensitive information.
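A minimal sketch of a scoring formula combining sensitivity with exposure likelihood; the scale, values, and datasets are illustrative assumptions:

    # Illustrative scale: a higher classification means more sensitive data.
    SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3, "restricted": 4}

    def risk_score(classification, exposure_likelihood):
        # exposure_likelihood in [0, 1]: chance of unauthorized access
        return SENSITIVITY[classification] * exposure_likelihood

    datasets = [("marketing_site", "public", 0.9),
                ("customer_pii", "restricted", 0.4)]
    for name, cls, exposure in sorted(
            datasets, key=lambda d: risk_score(d[1], d[2]), reverse=True):
        print(name, round(risk_score(cls, exposure), 2))
    # customer_pii (1.6) outranks marketing_site (0.9): prioritize it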
What is data transformation?
Data transformation refers to the process of converting data from one format or structure into another, often done to facilitate analysis, integration, or storage. It’s like reshaping Lego bricks so they fit together better in your creation, or so you can build something new and useful.
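A minimal sketch of a structural transformation, reshaping flat rows into the nested form a target system expects; the field names and shapes are illustrative assumptions:

    # Illustrative sketch: flat CSV-style rows become nested records.
    rows = [("jon@example.com", "Jon", "Smith", "2024-01-15")]

    transformed = [
        {
            "contact": {"email": email, "name": f"{first} {last}"},
            "signup_date": date,  # already ISO-8601, kept as-is
        }
        for email, first, last, date in rows
    ]
    print(transformed)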
What is data trust scoring?
Data trust scoring assesses the reliability of data against defined standards, providing instant insight into how much you can trust your data.
What is data veracity?
Data veracity refers to the accuracy, quality, and reliability of data, which determine its suitability for decision-making and analysis. The higher the data veracity, the more trustworthy and better performing your AI can be, for instance.
Why it matters: Reliable, high-quality data is critical for accurate insights, trustworthy AI, and informed decision-making.
What is data visibility in AI?
Data visibility in AI involves having a clear and complete understanding of the data used by AI models - where it originates, how it’s processed, and how it influences the AI’s decisions. This helps organizations ensure data quality, maintain accountability, and make better, more transparent AI-driven decisions.
Why it matters: Understanding how data shapes AI behavior enables transparency, fairness, and responsible governance across AI systems.
What is data visibility?
Data visibility refers to the ability to view and understand data across systems or platforms. It ensures users can easily locate, access and interpret data, providing transparency and supporting informed decision-making.
Why it matters: Greater visibility supports informed decision-making, improves accountability, and helps detect gaps or risks in how data is used.