PM Takeaways

• The AI project charter serves the same purpose as any project charter — authorise the work, define scope and governance, establish accountability. What makes it different is not the format but the questions it must answer that conventional project charters never ask: Why AI rather than an alternative? Who might be harmed? What decisions will never be fully automated? What data is this system permitted to learn from?

• NIST AI RMF MAP 1.5 identifies the go/no-go decision as an explicit outcome of the mapping function: after understanding the context, intended use, and risk profile of a proposed AI system, organisations should have sufficient information to make an initial decision about whether to design, develop, or deploy it. The charter is the document that captures that decision and the reasoning behind it. A charter that cannot answer the go/no-go questions is not ready.

• EU AI Act Article 14 requires that human oversight for high-risk AI systems be designed into the system — including the technical measures enabling oversight persons to understand system limitations, detect anomalies, interpret outputs, and override or halt the system. These are architecture requirements, and they must be resolved during chartering. A human oversight section that says only ‘there will be human review’ is not sufficient: the charter must specify who, with what authority, under what conditions, and with what technical capability to act.

• EU AI Act Article 10 requires that training, validation, and testing data for high-risk AI systems be subject to data governance practices covering origin, collection processes, preprocessing operations, and bias assessment. The charter’s data strategy section must address these questions before the project begins — not because they are abstract governance concerns but because a data problem discovered during development is expensive and a data problem discovered post-deployment may be irreversible.

• NIST AI RMF MAP 1.1 requires that the intended use of an AI system be well-specified and finite, and that the system be analysed for whether it provides net benefit accounting for both benefits and costs. This maps directly to the ‘Why AI?’ charter section. An AI justification that addresses only benefits and not trade-offs, costs, and alternatives does not meet the standard the MAP function sets. The charter is the right place to document that analysis — before sunk cost makes an honest assessment harder.
If you have chartered projects before, you know the mechanics: document the purpose, define what success looks like, establish scope, assign accountability, and get the sponsor to sign it. None of that changes for AI projects. What changes is the set of questions the charter must answer before it can be considered complete.
The AI project charter is your first and best opportunity to surface the considerations that are unique to AI — before the architecture is set, before the data is collected, before a team is ramped and sunk costs make an honest assessment politically difficult. The charter is the place to answer questions that would never appear on a conventional software or infrastructure project: Why AI rather than a simpler approach? What decisions will this system make, and which of those decisions should never be fully automated? Who might be harmed if the system is wrong, and how will they raise a concern? What data is this project permitted to use, and what data governance is required before training begins?
These are not compliance exercises. They are the questions that, left unanswered at chartering, will surface as expensive problems during development or as governance failures after deployment. The charter exists to surface them at the cheapest possible moment.
The Fundamentals Don’t Change
Before covering what’s different, it’s worth being explicit about what isn’t. If you have a charter template that works for you, do not throw it out. Extend it.
Your AI project charter still needs all of the following, and these elements should be addressed with the same rigour you would bring to any project:
• Business case and objectives. Why is the organisation doing this? What problem does it solve? What does success look like, and how will it be measured? For AI projects, this section sets the standard against which the ‘Why AI?’ justification is evaluated: if the business objective can be achieved with a rules-based system or a simpler tool, that should be apparent here.
• Scope. What is in scope, what is explicitly out of scope, and where are the boundaries? AI projects have a particular tendency to scope creep toward additional use cases and user populations — a clear scope boundary at chartering limits this.
• Stakeholders. Who is involved in the project, who benefits from it, who is accountable for outcomes? Note that the AI-specific section on affected parties (below) extends this beyond the conventional stakeholder definition.
• Authority. Who can make decisions? Who approves scope changes? Who has authority to escalate or stop the project? For AI projects, authority questions include who can order the system taken offline if a production incident occurs — this must be named in the charter and reflected in the operational design.
• Constraints. Budget, timeline, technology, organisational, and regulatory constraints that bound the project.
• Assumptions and risks. What is the project betting on? What could go wrong? The AI risk register (see related article) is a deep treatment of AI-specific risks; the charter should reference it and note the highest-priority risks identified at chartering.
What the AI Project Charter Adds
1. AI Justification: Why This Approach for This Problem
Traditional projects do not require justification for the technology choice. AI projects do — and the justification must be substantive, not assumed.
NIST AI RMF MAP 1.1 explicitly requires that the intended purpose of an AI system be well-specified, and that the system be analysed for whether it provides net benefit accounting for both benefits and potential costs. The Playbook is direct: context mapping should include discussion of non-AI and non-technology alternatives, particularly in settings where the context is narrow enough to manage without AI’s potential negative impacts. The charter is the place to document that analysis — and to explain why it concluded that AI was the right choice.
The AI justification section should address three questions:
• Why AI or ML for this specific problem? What makes this problem suitable for a machine learning approach rather than rules-based logic, expert systems, or human judgment? Is the problem characterised by patterns in data that are too complex or numerous for explicit rules? Is there a clear signal in the training data? Is the performance ceiling of a non-AI approach inadequate for the use case?
• What trade-offs are being accepted? Every AI system involves trade-offs that do not arise in deterministic software: accuracy versus interpretability, performance versus fairness, speed versus human oversight, capability versus robustness. These trade-offs must be made explicit in the charter — not because they are necessarily disqualifying, but because implicit trade-offs become hidden assumptions that cause problems later.
• What is the cost-benefit including potential harms? NIST MAP 3.1 and MAP 3.2 require analysis of both benefits and costs of AI system deployment. The charter should include a realistic assessment of what happens when the system is wrong, who bears the cost of errors, and whether the benefit of automation justifies the harm profile. A charter that presents only business benefits and does not acknowledge the harm side of the ledger is not a complete analysis.
The AI justification section is a forcing function. Many AI projects fail not because the technology fails but because the problem was not suitable for an AI approach, or because the trade-offs were not acceptable and were never honestly examined. A charter that cannot answer these questions clearly is a signal to slow down before building.
2. Human Oversight: Architecture Decisions Made at the Start
Traditional software does what it is programmed to do. AI systems make predictions and generate outputs that may be wrong in ways that are hard to predict and difficult to detect. Human oversight is the mechanism by which those errors are caught before they cause harm — and for high-risk AI systems under the EU AI Act, it is a mandatory design requirement, not a policy preference.
EU AI Act Article 14(1) requires that high-risk AI systems be designed so they can be effectively overseen by natural persons during the period in which they are in use. Article 14(4) specifies five capabilities that human oversight must enable: understanding the system’s capacities and limitations; remaining aware of the risk of automation bias; correctly interpreting the system’s outputs; deciding in any particular situation to disregard, override, or reverse an output; and intervening in or halting the system’s operation. These are design requirements. They cannot be retrofitted after the architecture is set — which is why they must be resolved in the charter.
Article 26(2) requires that deployers assign human oversight to named natural persons who have the necessary competence, training, and authority. ‘Human review will be in place’ is not a human oversight specification. The charter must answer the following:
| Charter Question | Why It Must Be Answered at Chartering |
| --- | --- |
| What level of human involvement will the system operate under? (Fully automated, human-in-the-loop, human-on-the-loop, human-in-command) | This determines the system’s architecture. Human-in-the-loop review changes the workflow, timing, and technical interface requirements. Getting to the right answer after the architecture is set is expensive. |
| Who will be assigned oversight, and what competence and authority do they need? | EU AI Act Article 26(2) requires that oversight be assigned to persons with necessary competence, training, and authority. The charter must name the role (or the individual) and confirm that the required competence and authority actually exist in the organisation. |
| What decisions should never be fully automated, regardless of system confidence? | Some decisions — those affecting fundamental rights, liberty, safety, or significant financial consequences for individuals — require human judgment as a matter of principle, not as a fallback when the system is uncertain. These must be identified and protected in the charter before the system is designed around the assumption of automation. |
| What are the override conditions and mechanisms? | Article 14(4)(d) requires that oversight persons be enabled to decide to disregard, override, or reverse system outputs. Article 14(4)(e) requires that they be able to halt the system. Both require technical mechanisms that must be designed in from the start. The charter should specify what triggers an override and what technical capability enables it. |
| How will automation bias be managed? | Article 14(4)(b) requires that oversight persons remain aware of the tendency to over-rely on AI outputs. This is a training requirement, an interface design requirement, and a process requirement. The charter should address how the system design and operational processes will counteract automation bias rather than reinforce it. |
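The routing and override questions in the table have direct architectural consequences, which a small sketch can make concrete. This is a minimal illustration only, assuming a hypothetical policy in which decisions on a never-automate list always go to a human and low-confidence outputs are routed to a named reviewer; the decision types, threshold, and role name are all invented for the example, not taken from any standard.

```python
from dataclasses import dataclass

# Hypothetical never-automate list: these decision types always require
# human judgment, regardless of model confidence (human-in-command).
NEVER_AUTOMATE = {"credit_denial", "account_closure"}

@dataclass
class Decision:
    decision_type: str
    model_output: str
    confidence: float

def route(decision: Decision, confidence_threshold: float = 0.90) -> str:
    """Return who acts on this decision: 'auto' or 'human_review'."""
    if decision.decision_type in NEVER_AUTOMATE:
        return "human_review"   # protected decision: never fully automated
    if decision.confidence < confidence_threshold:
        return "human_review"   # low confidence: human-in-the-loop review
    return "auto"               # automated, monitored human-on-the-loop

class OversightControls:
    """Override and halt mechanisms of the kind Article 14(4)(d)-(e)
    requires to exist; this implementation is purely illustrative."""
    def __init__(self, reviewer_role: str):
        self.reviewer_role = reviewer_role  # named role from the charter
        self.halted = False
        self.audit_log: list[dict] = []

    def override(self, decision: Decision, new_output: str, reason: str) -> str:
        self.audit_log.append({"type": "override", "was": decision.model_output,
                               "now": new_output, "reason": reason})
        return new_output

    def halt(self, reason: str) -> None:
        self.halted = True
        self.audit_log.append({"type": "halt", "reason": reason})
```

The point of the sketch is not the code itself but that every branch in it corresponds to a charter question: which decisions sit on the never-automate list, what threshold triggers review, and who holds the reviewer role.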
3. Data Strategy: Surface the Data Problems Before Development Starts
Traditional projects use data. AI projects depend on data in ways that create risks with no direct analogy in conventional software development — because data problems in AI do not manifest as build errors or test failures. They manifest as systematic performance disparities, embedded bias, or post-deployment incidents that trace back to decisions made during data collection and preprocessing that were never documented.
EU AI Act Article 10(2) requires that training, validation, and testing data for high-risk AI systems be subject to governance practices covering: the design choices underlying data selection; the collection processes and origin of data; any preprocessing operations such as annotation, labelling, and cleaning; the assumptions made about what the data represents; assessment of the availability and suitability of the data; examination for possible biases that may affect health, safety, or fundamental rights; and identification of data gaps. Every one of these is a question for the charter’s data strategy section. A data problem discovered during model training is expensive to correct; a data problem discovered post-deployment may require taking the system offline.
The charter’s data strategy section should address:
• What data will train the system? Source, volume, the time period it covers, and the population it represents. Is the data representative of the population the system will be deployed against? Are there known gaps between the training population and the deployment population?
• Is this data appropriate for this use? EU AI Act Article 10(2)(b) requires documentation of the original purpose of data collection. Data collected for one purpose may not be suitable for training an AI system serving a different purpose — either because of consent limitations, because the data does not represent the deployment context, or because it encodes patterns from a different time period or user population.
• Who owns the data, and under what terms? Licensing, intellectual property, and privacy rights must be established before training begins, not discovered during a compliance review after the model is trained. If the organisation does not have clear rights to use the data for AI training, that is a blocking issue.
• What bias examination has been done or is planned? Article 10(2)(f) and (g) require examination for biases likely to affect health, safety, or fundamental rights, and appropriate measures to detect, prevent, and mitigate those biases. The charter should document what bias assessment methodology will be used and who is responsible for it.
• How will data lineage be maintained? Can the team trace a production decision back to the training data that shaped it? NIST AI RMF emphasises scientific integrity and documentation of data decisions throughout the lifecycle. Without lineage tracking established from the start, this becomes structurally impossible to reconstruct later.
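The data strategy questions above can be captured as a structured record from the start of the project rather than reconstructed later. A minimal sketch, assuming a hypothetical provenance record loosely mirroring the Article 10(2) governance points — the field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Illustrative provenance record for one training dataset,
    loosely mirroring EU AI Act Article 10(2) governance points."""
    name: str
    source: str                      # origin and collection process
    original_purpose: str            # why the data was originally collected
    time_period: str                 # what period the data covers
    population: str                  # who the data represents
    preprocessing: list = field(default_factory=list)  # annotation, labelling, cleaning
    known_gaps: list = field(default_factory=list)     # identified data gaps
    bias_assessment: str = "not yet performed"
    usage_rights_confirmed: bool = False

def blocking_issues(rec: DatasetRecord) -> list:
    """Return charter-level blockers that must be resolved before training."""
    issues = []
    if not rec.usage_rights_confirmed:
        issues.append("usage rights not confirmed")
    if rec.bias_assessment == "not yet performed":
        issues.append("bias assessment not planned or done")
    return issues
```

A record like this makes the lineage question answerable by construction: if each dataset entering training carries its own provenance entry, tracing a production decision back to its data becomes a lookup rather than a forensic exercise.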
4. Affected Parties: Extend Beyond the Stakeholder Register
Traditional stakeholder analysis focuses on people who are involved in, responsible for, or benefit from the project. AI projects affect people who may have no involvement with the project team at all — people who are subject to the system’s decisions or outputs, who may experience disproportionate impact, who have no mechanism to raise a concern unless one is deliberately built.
NIST AI RMF MAP 1.2 requires engagement with a diverse team and with people external to the team that developed or deployed the system, including end users and potentially impacted communities. The Playbook is explicit that this engagement helps identify known and foreseeable negative impacts related to intended use, and anticipates risks of use beyond intended use. EU AI Act Article 27 requires that deployers of high-risk AI systems (public bodies and certain private entities) conduct a Fundamental Rights Impact Assessment prior to deployment — which includes identifying the categories of persons and groups likely to be affected.
The charter should address affected parties before the FRIA is required — because the affected party mapping shapes the design of the system, the testing methodology, and the feedback mechanisms. At chartering, the questions are:
• Who is subject to this system’s decisions or outputs? End users are the obvious answer. But in many deployments, the affected party is not the user of the system — it is the person the system is making a decision about. A credit scoring model’s affected parties are loan applicants, not the bank’s loan officers. A recruitment screening system’s affected parties are candidates, not hiring managers.
• Who might be disproportionately affected? Which groups could experience different outcomes from the system? Are those groups represented in the training data? Are their interests represented in the design process? The stakeholder register should include affected communities, not only organisational stakeholders.
• What is the feedback mechanism? How will affected parties raise concerns about the system’s outputs? How will they contest decisions? NIST MEASURE 3.3 requires that feedback processes for end users and impacted communities to report problems and appeal system outcomes be established and integrated into AI system evaluation metrics. This mechanism must be designed and resourced; it does not exist by default.
If the team cannot identify affected parties at chartering, the project is not ready to proceed. Affected party identification is not a deliverable that can be deferred to the impact assessment phase: it shapes scope, design, and testing in ways that cannot be retrofitted.
5. Regulatory and Compliance Context
Traditional projects operate within known regulatory frameworks. AI projects face a landscape of evolving requirements that may change between the charter date and the deployment date, and that vary significantly by use case, jurisdiction, and the risk classification of the specific system being built.
The regulatory section of the charter must document:
• What regulations apply? EU AI Act, sector-specific requirements (financial services, healthcare, employment), applicable national law, and any contractual obligations to customers or partners that impose AI governance standards. The answer ‘we will check this later’ is not acceptable — regulatory applicability must be determined before scope, budget, and timeline are established, because compliance activities have cost and schedule implications.
• What is the system’s risk classification? Under the EU AI Act, the classification determines the compliance pathway: prohibited systems may not be built at all; high-risk systems face the full Articles 8–17 requirements including risk management, technical documentation, data governance, human oversight, transparency, accuracy, and conformity assessment; limited-risk systems face transparency obligations; minimal-risk systems face no specific obligations. The classification must be documented at chartering, along with the reasoning and evidence supporting it.
• What compliance activities are required, and when? For high-risk AI systems: risk management system (Article 9); data governance (Article 10); technical documentation (Article 11); record-keeping (Article 12); transparency and instructions for use (Article 13); human oversight provisions (Article 14); accuracy, robustness, and cybersecurity requirements (Article 15); conformity assessment (Article 43); registration (Article 49); and post-market monitoring (Article 72). Each of these is a project deliverable. They must appear in the scope, the schedule, and the budget.
• Who owns regulatory monitoring? AI regulation is evolving. The EU AI Act is complemented by sector-specific guidance, implementing acts, harmonised standards, and codes of practice that will continue to develop throughout the project lifecycle. A named person must be responsible for monitoring these developments and flagging changes that affect the project.
6. Success Criteria: Beyond ‘Does It Work?’
Traditional projects measure success by whether the deliverable functions as specified. AI projects require a broader definition of success — because a system that is accurate but unexplainable, performant but unfair, or technically compliant but operationally unmonitorable is not a successful AI project.
The charter should define acceptance criteria across all of the following dimensions, with specific metrics where possible:
| Success Dimension | What the Charter Should Specify |
| --- | --- |
| Technical performance | Accuracy, precision, recall, F1 score, latency, and throughput targets. These should be specified at the use-case level, not as abstract model metrics. What performance is required for the system to be operationally viable? |
| Fairness | Performance disaggregated by relevant demographic and contextual subgroups. A system that achieves aggregate accuracy targets but performs significantly worse for a specific population is not meeting its fairness criteria. The subgroups to be tested should be identified in the charter based on the affected party mapping. |
| Robustness | Performance under edge cases, distribution shift, and adversarial conditions. What happens when the system receives inputs that are unusual, corrupted, or deliberately crafted to cause errors? EU AI Act Article 15 requires that high-risk AI systems be as resilient as possible to errors, faults, and inconsistencies arising from interaction with people or other systems. |
| Explainability | Can decisions be understood by those who need to understand them? This includes the oversight persons who must interpret outputs, the affected parties who may request an explanation, and the regulators who may review the system. The level of explainability required should be specified in the charter based on the use case and regulatory context. |
| Operational viability | Can the system be monitored, maintained, and updated? Is there a plan for detecting performance degradation in production? Who is responsible for retraining decisions? What is the process for taking the system offline if a serious incident occurs? A system that cannot be safely operated post-deployment is not complete. |
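The fairness dimension in particular lends itself to a concrete acceptance check that can be written down at chartering. A minimal sketch, assuming accuracy as the metric and a hypothetical acceptance rule with a per-subgroup floor plus a maximum allowed gap between the best- and worst-performing subgroup — both thresholds are placeholders the charter would set:

```python
def subgroup_accuracy(records):
    """records: iterable of (subgroup, predicted, actual) triples.
    Returns accuracy computed separately for each subgroup."""
    totals, correct = {}, {}
    for group, pred, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (pred == actual)
    return {g: correct[g] / totals[g] for g in totals}

def fairness_gate(records, min_accuracy=0.85, max_gap=0.05):
    """Charter-style acceptance check: every subgroup meets the accuracy
    floor, and the spread between best and worst subgroup stays within
    max_gap. Returns (passed, per-subgroup accuracies)."""
    acc = subgroup_accuracy(records)
    worst, best = min(acc.values()), max(acc.values())
    return worst >= min_accuracy and (best - worst) <= max_gap, acc
```

Writing the gate this way forces the charter to name the subgroups (the keys that will appear in the accuracy dictionary) and the thresholds, which is exactly the specification the fairness row of the table asks for.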
7. AI Governance: Who Reviews, Who Approves, Who Escalates
Governance for an AI project is not the same as governance for a conventional software project, because the decisions that require governance attention are different. The questions that must be escalated — was this training data appropriate? is this system performing fairly? should this system be suspended? — require judgment that combines technical, ethical, legal, and operational expertise. The governance structure must be designed to bring those perspectives together.
The charter’s governance section should specify:
• Who reviews AI-specific milestone decisions? Which decisions require review beyond the normal project approval chain? Candidates include: the decision to proceed after the initial go/no-go assessment; the decision to approve training data; the decision to approve deployment after test results; the decision to retrain after distribution shift; the decision to suspend the system after an incident.
• What is the escalation path for ethical concerns? If a team member identifies a potential harm that is not addressed by existing governance structures, who do they escalate to? The charter should name a path, not assume that the standard project issue log is adequate for concerns about potential discrimination or rights violations.
• What external reviews or approvals are required? For high-risk AI systems, EU AI Act Article 43 requires conformity assessment before the system is placed on the market or put into service. Depending on the system type, this may require involvement of a notified body. The charter should confirm whether third-party conformity assessment is required, and if so, at what point in the project lifecycle it must be completed.
The Charter as a Go/No-Go Gate
The charter is not just documentation. It is a decision. NIST AI RMF is explicit: after completing the MAP function, organisations should have sufficient contextual knowledge about AI system impacts to inform an initial go/no-go decision about whether to design, develop, or deploy the AI system.
A well-constructed AI charter surfaces the conditions that should trigger a no-go or a fundamental redesign before the project begins. The following conditions warrant stopping or significantly revising before proceeding:
| Condition | Governance Implication |
| --- | --- |
| No clear justification for why AI is the right approach, or the justification does not account for trade-offs and alternative options | Proceed only if the AI justification can be made complete. Starting an AI project without a defensible answer to ‘why AI?’ is a governance failure, not a planning oversight. |
| Training data is unavailable, legally constrained, or insufficiently representative for the intended deployment population | Proceed only if the data problem can be resolved before training begins. Do not proceed on the assumption that it will be sorted out during development. EU AI Act Article 10 requirements cannot be met retroactively. |
| Affected parties cannot be identified, or known affected parties have no mechanism to raise concerns | Proceed only after affected party mapping is complete and a feedback mechanism is designed. An AI project that cannot identify who might be harmed is not ready to build. |
| Regulatory requirements have not been assessed, or the compliance scope exceeds available resources | Proceed only after the regulatory classification is confirmed and the compliance activities are scoped, resourced, and scheduled. A high-risk AI project without a compliance plan is not a viable project. |
| Human oversight needs are incompatible with operational requirements or business objectives | This is a fundamental conflict that must be resolved before development begins. If the business case requires a level of automation that the regulatory framework prohibits or that the risk profile does not support, either the business case or the project must change. |
| The team cannot articulate measurable success criteria that include fairness, robustness, and explainability | Proceed only after success criteria are defined. A project without acceptance criteria for the dimensions that matter for AI cannot determine whether it has succeeded. |
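Taken together, these conditions amount to a checklist the charter must pass before work starts. A minimal sketch of the gate as code — the condition names are paraphrased summaries, and the yes/no inputs would come from the chartering review itself:

```python
# Go/no-go conditions paraphrased from the discussion above; each maps
# to a yes/no answer produced during the chartering review.
GO_NO_GO_CONDITIONS = [
    "ai_justification_complete",   # why AI, trade-offs, alternatives examined
    "training_data_viable",        # available, legal, representative
    "affected_parties_mapped",     # identified, feedback mechanism designed
    "regulatory_scope_confirmed",  # classification and compliance plan resourced
    "oversight_compatible",        # human oversight fits the business case
    "success_criteria_defined",    # incl. fairness, robustness, explainability
]

def charter_gate(review: dict) -> tuple:
    """Return (go, unresolved): go is True only if every condition holds.
    Unanswered conditions count as unresolved, not as passes."""
    unresolved = [c for c in GO_NO_GO_CONDITIONS if not review.get(c, False)]
    return (len(unresolved) == 0, unresolved)
```

The useful property is the default: a condition nobody has assessed blocks the gate, which matches the principle that an unexamined risk is not the same as an absent one.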
Better to discover these during chartering than during development. The cost of a no-go at charter is a planning exercise. The cost of a no-go at deployment, or a governance failure after deployment, is substantially higher.
AI Charter Additions: Quick Reference
Add these sections to your existing charter template. Standard charter elements — business case, scope, stakeholders, authority, constraints, assumptions and risks — remain required.
| Section | Key Questions to Answer |
| --- | --- |
| AI Justification | Why AI rather than rules-based or manual alternatives? What trade-offs are being accepted (accuracy vs. interpretability, performance vs. fairness)? What is the cost-benefit including potential harms? |
| Human Oversight | What level of human involvement? Who is assigned oversight with what competence and authority? What decisions are never fully automated? What are the override conditions and mechanisms? How is automation bias addressed? |
| Data Strategy | What data trains the system? Is it appropriate for this use? Who owns it under what terms? What bias examination is required? How is data lineage maintained? |
| Affected Parties | Who is subject to the system’s outputs (not only who uses the system)? Who might be disproportionately affected? What is the feedback and contestation mechanism? |
| Regulatory Context | What regulations apply? What is the risk classification and supporting rationale? What compliance activities are required, with what schedule and budget? Who monitors for regulatory changes? |
| Success Criteria | Technical performance targets; fairness criteria by subgroup; robustness under edge cases and distribution shift; explainability requirements; operational viability and monitoring plan. |
| Governance | Who reviews AI-specific milestone decisions? What is the escalation path for ethical concerns? What external reviews or conformity assessments are required, and when? |
Right-Sizing for Your Situation
How formally you address these additions depends on the risk level of the system, the maturity of your organisation’s AI governance, and the regulatory context you operate in. A low-risk internal tool needs a lighter-weight charter than a high-risk AI system making consequential decisions about individuals. But all AI projects benefit from working through these questions at the start — the value is not the documentation, it is the thinking.
• Greenfield — AI Charter Playbook. For PMs without formal AI governance. A one-page charter extension covering the seven AI-specific sections at the level of detail appropriate for lower-risk systems. Includes worked examples for each section and a simplified go/no-go checklist. Designed to add governance without adding bureaucracy.

• Emerging — AI Charter Playbook. For PMs building repeatable processes. A full charter template with guidance on each AI-specific section, including the EU AI Act regulatory classification framework and NIST AI RMF MAP function mapping. Includes a structured go/no-go gate with documented decision criteria and a data strategy section aligned to Article 10 requirements.

• Established — AI Charter Playbook. For PMs in organisations with formal AI governance. How to align the AI project charter with existing organisational AI policies, ethics review boards, and regulatory approval gates. Covers conformity assessment planning for high-risk AI systems, FRIA (Fundamental Rights Impact Assessment) initiation at chartering, and integration with enterprise risk management frameworks.
Framework References
• EU AI Act (Official Journal, 12 July 2024) — Article 9 (risk management system as a continuous iterative process throughout the entire lifecycle, including identification and analysis of known and reasonably foreseeable risks, and assessment under intended use and reasonably foreseeable misuse); Article 10(2) (data governance requirements for training, validation, and testing data: design choices, data collection processes and origin, the original purpose of data collection, preprocessing operations, assumptions made about what the data represents, assessment of availability and suitability, examination for biases likely to affect health/safety/fundamental rights, and identification of data gaps); Article 14 (human oversight requirements for high-risk AI systems: designed to be effectively overseen by natural persons; five specific oversight capabilities required under Article 14(4): understanding capacities and limitations, awareness of automation bias, correct interpretation of outputs, ability to override or disregard outputs, ability to halt the system); Article 26(2) (deployers shall assign human oversight to natural persons with necessary competence, training, and authority); Article 27(1) (Fundamental Rights Impact Assessment for deployers who are public bodies or private entities providing public services, prior to deployment: description of processes, period of use, categories of affected persons and groups, specific risks of harm, implementation of human oversight measures, and measures to be taken if risks materialise); Article 43 (conformity assessment required before high-risk AI system is placed on the market or put into service)
• NIST AI RMF 1.0 (NIST AI 100-1, 2023) — MAP 1.1 (intended purpose of AI system should be well-specified and finite; system analysed for net benefit accounting for both benefits and costs; analysis should demonstrate that AI is on net better suited to accomplish the task than alternatives including non-AI options; required to consider and document risk avoidance or reduction measures); MAP 1.2 (engagement with diverse internal team and external collaborators, end users, and potentially impacted communities; helps identify known and foreseeable negative impacts related to intended use and anticipates risks of use beyond intended use); MAP 1.5 (go/no-go decision as explicit outcome of the MAP function: after completing the MAP function, organisations should have sufficient contextual knowledge to inform an initial go/no-go decision about whether to design, develop, or deploy the AI system; context mapping includes discussion of non-AI and non-technology alternatives); MAP 3.1 and MAP 3.2 (analysis of benefits and costs of AI system, including potential harms; risk tolerances informing go/no-go decisions remain independent from vested financial or reputational interests); MEASURE 3.3 (feedback processes for end users and impacted communities to report problems and appeal system outcomes shall be established and integrated into AI system evaluation metrics)
• NIST AI RMF Playbook (2023) — MAP 1.1 Suggested Actions: context mapping includes examination of intended purpose and impact, concept of operations, deployment setting, end user and operator expectations, potential negative impacts to individuals/groups/communities/organisations/society, and unanticipated contextual factors; identify whether non-AI or non-technology alternatives would lead to more trustworthy outcomes; MAP 1.5: risk tolerance levels establish maximum allowable risk above which the system will not be deployed; articulate and analyse trade-offs across trustworthiness characteristics; document decisions, risk-related trade-offs, and system limitations for off-label purposes
This article is part of AIPMO’s PM Practice series. See also: AI Risk Classification | The PM’s Guide to NIST AI RMF