The Executive Five-Year Agenda
Leadership teams need a forward view that is concrete enough to guide action but flexible enough to survive technical and regulatory change. Over the next five years, the decisive issue will not be whether organisations can access AI tools at all. It will be whether they can turn scattered access into governed capability.
That shift is now visible across sectors. NIST’s AI RMF and the Generative AI Profile both frame AI as a lifecycle governance issue rather than a one-time procurement decision.[2], [3] OECD’s work on SMEs shows that adoption gaps persist and that smaller organisations need a different support model from large firms.[16], [17] OECD’s public-sector work shows many governments still struggle to move from pilots to reliable operational use because skills, data, procurement, and legacy systems remain weak points.[23] At the national level, OECD’s compute-capacity work makes the strategic point: ambition without infrastructure and implementation capacity creates a policy blind spot.[24]
The practical consequence is that a serious five-year agenda has to answer two questions at once:
- what must we do now to stop unmanaged AI use from creating avoidable exposure?
- what capabilities must become durable by the end of the period?
The Next 12 Months
The immediate objective is not to become an AI leader in every domain. It is to establish visibility, control, and a minimum evidence standard before AI use spreads faster than oversight.
In the next 12 months, most leadership teams should:
- build or refresh an AI inventory that includes material tools, workflows, vendors, and owners
- classify systems by use case, consequence of failure, and dependency profile
- stop unmanaged adoption in sensitive workflows and define clear restrictions on unsanctioned use
- name accountable owners for material AI systems and escalation routes for incidents or overrides
- define minimum evidence requirements before wider rollout, including testing, review, and monitoring
- set a reporting rhythm for board, executive, or equivalent oversight review
This phase is about visibility, containment, and decision discipline. The point is not to create a perfect governance architecture in year one. It is to stop the organisation from being surprised by its own use of AI.
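The inventory-and-classification steps above can be sketched as a minimal record structure. This is an illustrative sketch only: the field names, tier labels, and escalation rules are assumptions for the example, not a standard, and a real inventory would live in whatever register or GRC tooling the organisation already uses.

```python
from dataclasses import dataclass, field
from enum import Enum

class Consequence(Enum):
    """Consequence of failure for the workflow the system supports."""
    LOW = 1
    MODERATE = 2
    HIGH = 3

@dataclass
class AISystemRecord:
    """One entry in an AI inventory (illustrative fields, not a standard)."""
    name: str
    use_case: str
    vendor: str
    owner: str                       # named accountable owner
    consequence: Consequence         # consequence of failure
    external_dependency: bool        # relies on a third-party model or platform
    evidence: list = field(default_factory=list)  # testing, review, monitoring artifacts

    def review_tier(self) -> str:
        # Higher consequence or an external dependency implies tighter oversight.
        if self.consequence is Consequence.HIGH:
            return "board-visible"
        if self.consequence is Consequence.MODERATE or self.external_dependency:
            return "executive-review"
        return "standard"

# Hypothetical example entry.
record = AISystemRecord(
    name="claims-triage-assistant",
    use_case="customer claims triage",
    vendor="ExampleVendor",
    owner="Head of Claims Operations",
    consequence=Consequence.HIGH,
    external_dependency=True,
)
print(record.review_tier())  # -> board-visible
```

The design point is that the classification fields drive the oversight tier mechanically, so the reporting rhythm follows from the inventory rather than from case-by-case judgement.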
The Next 24 Months
Once the organisation has visibility and basic controls, the next priority is to make AI repeatable as a managed capability rather than a sequence of disconnected pilots. This is where many organisations stall. NIST’s governance model implies ongoing lifecycle control, and OECD’s public-sector work shows that implementation failure often comes from weak operational capacity rather than lack of strategic intent.[2], [23]
In the next 24 months, management should focus on:
- standard approval and escalation processes rather than ad hoc exception handling
- stronger vendor governance, contract discipline, and change-notification rights
- active monitoring for drift, incidents, complaints, overrides, and material context changes
- clearer data-readiness, traceability, and documentation practices
- targeted capability building in the business, operations, and oversight functions, not only in technical teams
- use-case prioritisation tied to measurable economic, service, research, or institutional value
This phase is about moving from experimentation to governable scale. If the organisation cannot show how it reviews changes, tracks incidents, and pauses weak systems, it is not scaling capability. It is scaling exposure.
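The discipline of restricting or pausing weak systems can be made concrete as a simple monitoring gate. The signal names and thresholds below are assumptions chosen for illustration; any real policy would calibrate them per system and tier.

```python
# Illustrative monitoring gate: decide whether a deployed AI system may keep
# running. Signal names and thresholds are assumptions, not a standard.

def deployment_decision(incidents_per_1k: float,
                        override_rate: float,
                        drift_detected: bool,
                        material_context_change: bool) -> str:
    """Return 'continue', 'restrict', or 'pause' for a monitored system."""
    if drift_detected and material_context_change:
        # Compounding signals: stop use and re-review before continuing.
        return "pause"
    if incidents_per_1k > 5.0 or override_rate > 0.20:
        # Elevated incident or human-override rates: scale back scope
        # and trigger the named escalation route.
        return "restrict"
    return "continue"

print(deployment_decision(1.2, 0.05, False, False))  # -> continue
```

The value of a gate like this is less the specific numbers than the fact that "continue" is a decision with evidence behind it, not a default.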
Investment Discipline
AI spending should be judged with the same discipline as other strategic investments, but with a broader cost lens than many teams apply at pilot stage.
Leadership should expect investment cases to include:
- model and platform access cost
- infrastructure and deployment cost
- integration and workflow redesign cost
- monitoring, review, and governance cost
- vendor dependency and exit cost
- obsolescence risk if a frontier or hyperscaler-backed provider closes the capability gap quickly
- realistic assumptions about adoption, error reduction, and measurable value creation
This matters because AI can appear inexpensive when only the model or software line item is counted. The real economic question is whether the organisation creates enough value after infrastructure, oversight, and change costs are included. For countries and large systems, the same logic extends to compute, energy, cloud concentration, and trusted-access dependencies.[24]
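The full-cost point can be shown with a small worked example. The cost categories mirror the list above; the figures themselves are made-up placeholders, not benchmarks.

```python
# Illustrative full-cost view of an AI investment case.
# Categories mirror the investment-case list; all figures are placeholders.

def net_annual_value(gross_value: float, costs: dict) -> float:
    """Value created after ALL cost lines, not just the model line item."""
    return gross_value - sum(costs.values())

costs = {
    "model_and_platform_access": 120_000,
    "infrastructure_and_deployment": 80_000,
    "integration_and_workflow_redesign": 150_000,
    "monitoring_review_governance": 60_000,
    "vendor_dependency_and_exit_reserve": 40_000,
}

# Counting only the model line item makes the case look attractive...
print(500_000 - costs["model_and_platform_access"])   # -> 380000
# ...while the full-cost view is much thinner.
print(net_annual_value(500_000, costs))               # -> 50000
```

The gap between the two numbers is the point: a case that clears the model-only hurdle can still be marginal once integration, oversight, and exit costs are counted.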
The Next Five Years
Over the longer horizon, the competitive question shifts. The issue is no longer whether the organisation can use AI at all. The issue is whether it can build durable capability while maintaining trust, legitimacy, and strategic room to manoeuvre.
Over five years, leadership teams should aim to build:
- a repeatable governance operating model
- a credible internal evidence standard for higher-impact systems
- stronger resilience against vendor concentration and opaque dependencies
- organisational capability to redesign work around AI rather than merely layer tools on top
- a differentiated position based on trust, reliability, and disciplined execution
- institutional ability to stop, replace, or redirect systems when the external environment changes
This phase is about institution building, not just technology adoption. Over a five-year window, strong organisations become easier to distinguish from weak ones. Strong organisations can explain where AI is used, why it is there, who owns it, how it is monitored, and what dependencies it creates. Weak organisations mostly have pilots, vendors, and slide decks.
The Agenda Changes by Leadership Level
The five-year agenda should not be identical for every leadership context. The horizon is the same; the governing problem is not.
SMEs
OECD’s 2025 work suggests the five-year challenge for SMEs is not to mirror enterprise AI programs. It is to close readiness gaps without overbuilding overhead.[16], [17] For SMEs, the agenda should usually be:
- a short list of durable workflow gains
- safe vendor use and explicit data-handling rules
- enough internal competence to challenge supplier claims
- avoidance of brittle dependence on tools the firm cannot supervise or replace
Cooperatives and Mutuals
For cooperatives, the right agenda is inseparable from legitimacy. The International Cooperative Alliance’s principles imply that five-year success cannot be measured only in automation efficiency.[18] Cooperative leaders should focus on:
- member transparency where AI affects service, pricing, lending, or support
- reviewable use of AI in decisions that affect access or treatment
- governance that preserves trust in the cooperative model rather than weakening it
Research Institutions
Research leaders should use the five-year period to normalise AI-assisted research without weakening integrity. NSF guidance on responsible conduct of research and ICMJE publication rules point in the same direction: humans remain accountable for integrity, disclosure, and attribution.[19], [20] Over five years, research institutions should build:
- clear disclosure rules for AI use in writing, analysis, coding, and experimental workflows
- reproducibility standards for material AI-assisted steps
- governed protection of confidential, clinical, proprietary, or export-controlled data
- durable review paths for dual-use and high-consequence research applications
Large Enterprises
Large enterprises should use the next five years to reduce unmanaged adoption, build reusable governance and platform layers, and concentrate investment where scale or differentiation is real. Their core problem is fragmentation, so their agenda should emphasise:
- portfolio visibility across business units
- tiered governance rather than one uniform gate
- stronger vendor leverage and platform standards
- deeper redesign of work, not just tool distribution
Public-Sector Institutions
UNESCO and OECD guidance imply that the public-sector five-year agenda has to be capability-oriented, not merely compliance-oriented.[21], [22], [23] Public institutions should prioritise:
- lawful service redesign rather than novelty deployments
- procurement maturity and stronger implementation capacity
- citizen-facing uses that preserve reviewability and challenge rights
- moving useful tools out of pilot limbo and into governable operations
National Leadership
National leaders should treat the next five years as a capacity-building window. OECD’s compute-capacity and policy-observatory work suggest that countries increasingly need joined-up policy across talent, research, infrastructure, public-sector use, and resilience.[24], [25] The national agenda should therefore cover:
- talent and research capability
- compute, cloud, data, and energy infrastructure
- secure public-sector adoption
- industrial competitiveness and firm adoption
- resilience, security, and democratic legitimacy
- clear decisions on which external dependencies are acceptable and which are strategically dangerous
In each setting, the right five-year plan is the one that strengthens capability without accepting hidden dependency or avoidable loss of control.
Board Agenda
Boards and executive committees should expect recurring attention in four areas:
- material AI risk exposure
- major deployments and use-case expansion
- incidents, complaints, and remediation
- strategic capability gaps in data, talent, governance, or vendor dependence
The board does not need to manage AI directly. It does need confidence that management can see the real exposure, challenge weak assumptions, and intervene when AI use outpaces control. In other settings, the equivalent question applies to governing boards, trustees, ministers, or cabinet-level leadership: is there credible oversight of the systems that matter most?
What Good Looks Like
By the end of a credible five-year agenda, the organisation should be able to say:
- we know where AI is used
- we know which uses matter most
- we know who is accountable
- we can show what evidence supports deployment
- we can slow, pause, or stop systems when the context changes
- we understand which external dependencies we have chosen and why
That is the difference between AI ambition and durable AI capability.
Leadership Lens
The executive agenda should balance urgency with discipline. Waiting too long creates strategic weakness. Moving too fast without evidence, governance, organisational readiness, or infrastructure realism creates avoidable exposure. Over the next five years, the leaders who matter most will not be the ones who sounded most ambitious in year one. They will be the ones who built capability that still works under pressure.
Key Questions for Leaders
- What should we do in the next 12 months that will still matter in five years?
- Which capabilities must become durable internal strengths?
- Which dependencies are we accepting today that may constrain us later?
- Where do we need board-level or equivalent oversight now rather than later?