Appendix E: Glossary of Key Terms
This glossary is written for executive readers. Its aim is not exhaustive technical precision but clear, consistent use of terms across the book.
Terms
- AI governance: the policies, processes, roles, and controls used to manage AI responsibly.
- Audit readiness: the ability to produce current evidence showing how an AI system was approved, monitored, and controlled.
- Bias: a pattern of unfair or unjustified difference in outcomes across people or groups.
- Data drift: change in data patterns over time that can weaken model performance or relevance.
- Evidence pack: the minimum record set used to support approval, review, and assurance for an AI system.
- Explainability: the ability to make an AI system’s behavior or output understandable enough for the context.
- Global explanation: an explanation of which factors generally influence a model’s outputs across many cases, rather than in a single decision.
- High-risk system: an AI system used in a context where legal, safety, rights, or major operational consequences justify stronger controls.
- Human oversight: the practical ability for people to review, challenge, override, or stop AI use.
- Inventory: a maintained record of AI systems, owners, purposes, and risk status.
- Model-agnostic method: an explanation or evaluation method that works across different model types.
- Model-specific method: an explanation or evaluation method tied to a particular model family.
- Monitoring: post-deployment observation of system behavior, incidents, drift, complaints, and control breaches.
- Out-of-context risk: the risk that a model is used in conditions different from those it was designed or validated for.
- Post-hoc explainability: an explanation produced after a model is trained to help interpret a more complex system.
- Residual risk: the level of risk left after controls are applied.
- Trustworthy AI: AI that can be used responsibly because it is lawful, well-governed, technically robust, and subject to meaningful oversight.