Appendix A: AI Governance Checklist for Boards
Boards do not need to inspect every model. They do need enough structure to challenge management credibly. The checklist below is designed for board committees, executive committees, and supervisory bodies reviewing AI exposure.
Board Checklist
- Do we have a current inventory of material AI systems and high-impact use cases?
- Has management classified those systems by risk and business significance?
- Is there a named accountable owner for each material AI deployment?
- Do we know which decisions remain human-led and where override rights exist?
- What evidence standard do we require before approval, scaling, or renewal?
- How are incidents, complaints, overrides, and near misses reported upward?
- What AI dependencies do we have on key vendors or external platforms?
- What contractual rights do we have for audit, change notification, and incident response?
- Which AI systems would create the greatest legal, operational, or reputational exposure if they failed?
- When did the board last review AI controls, incidents, and remediation actions?
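The inventory, ownership, and risk-classification questions above lend themselves to a simple machine-readable record that management can maintain and report against. The sketch below is purely illustrative; the field names, risk tiers, and red-flag rules are assumptions, not a standard.

```python
from dataclasses import dataclass

# Hypothetical sketch of one entry in a board-level AI inventory.
# Field names and risk tiers are illustrative assumptions.
@dataclass
class AISystemRecord:
    name: str
    owner: str          # named accountable owner (empty string = red flag)
    risk_tier: str      # e.g. "high", "medium", "low"
    human_led: bool     # does a human retain the final decision?
    vendor: str = ""    # external platform dependency, if any
    open_incidents: int = 0

def board_red_flags(inventory: list[AISystemRecord]) -> list[str]:
    """Return simple warnings a reporting pack could surface upward."""
    flags = []
    for rec in inventory:
        if not rec.owner:
            flags.append(f"{rec.name}: no accountable owner")
        if rec.risk_tier == "high" and not rec.human_led:
            flags.append(f"{rec.name}: high-risk system with no human-led decision")
    return flags

inventory = [
    AISystemRecord("credit-scoring", owner="", risk_tier="high", human_led=False),
    AISystemRecord("doc-search", owner="CIO", risk_tier="low", human_led=True),
]
print(board_red_flags(inventory))
```

A structure like this lets the red flags in the next section be checked mechanically rather than discovered in hindsight; real programmes would add evidence standards, override logs, and vendor contract terms per system.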
Red Flags for Boards
- Management cannot produce a reliable AI inventory.
- AI is being used in sensitive decisions without clear ownership.
- Governance exists on paper but not in operational review routines.
- Vendor systems are material, but management has little visibility into updates or controls.
- Reporting focuses on innovation activity rather than risk, incidents, and evidence.
Minimum Reporting Pack
For material AI use, boards should usually expect:
- a list of material systems and their owners
- risk classification and rationale
- major incidents, complaints, or control breaches
- remediation status and open issues
- significant upcoming deployments or changes in use
- material vendor or regulatory developments