PM Takeaways

• The OECD AI Principles (OECD/LEGAL/0449) are the original source document for most of what you encounter as 'trustworthy AI' language in regulations and standards. The EU AI Act's requirements for transparency, human oversight, and risk management trace directly to the Principles. NIST AI RMF's structure maps closely to them. ISO 42001's management system requirements reflect them. Understanding the Principles helps you understand why downstream requirements are what they are — which makes them easier to implement and explain to stakeholders.

• The Recommendation was meaningfully updated in 2024, not merely reaffirmed. Four substantive changes affect how PMs apply the principles to current projects: misinformation and disinformation were added to the human rights principle (directly relevant to generative AI deployments); the misuse framing was expanded to cover uses outside the intended purpose and intentional or unintentional misuse (not just foreseeable misuse); the robustness principle was strengthened to explicitly require that systems can be overridden, repaired, and decommissioned by human interaction; and traceability and risk management text was moved to and elaborated under the accountability principle. Environmental sustainability was also added as an explicit consideration.

• The Recommendation's definition of 'AI actors' is deliberately broad: organisations and individuals that play an active role in the AI system lifecycle, including those that deploy or operate AI. This definition means the Principles apply to your project team and your organisation as deployer, not just to the upstream model developer or technology vendor. The obligations are not delegated away by using a third-party model or platform.

• The five principles are explicitly described as complementary and to be considered as a whole. They are not a checklist where satisfying one discharges the others. A system that is technically robust but operates without meaningful human oversight does not satisfy Principle 4. A system with excellent transparency disclosures but whose outputs perpetuate discrimination does not satisfy Principle 2. Apply them together, and use them to pressure-test design decisions from multiple angles.
When the EU AI Act, NIST AI RMF, ISO 42001, and national AI strategies reference ‘trustworthy AI’, they are drawing from a common source: the OECD Recommendation on Artificial Intelligence (OECD/LEGAL/0449). Adopted in May 2019, updated in 2023 and again in 2024, it was the first intergovernmental standard on AI. The G20 adopted equivalent principles at the Osaka Summit the same year. Over 45 countries have now adhered to the Recommendation.
The Principles are not legally binding — they are a Recommendation of the OECD Council, not a treaty or regulation. But their influence on binding frameworks is substantial and direct. Understanding them gives PMs something more useful than a compliance checklist: a principled basis for making design decisions, explaining governance requirements to teams, and evaluating whether a project is actually doing what it claims.
This article covers the structure of the Recommendation, each of the five principles with their 2024 updates, the ‘AI actors’ definition that determines who they apply to, and a mapping from each principle to concrete project deliverables.
Structure of the Recommendation
The Recommendation has two substantive sections. The first sets out five values-based principles for responsible stewardship of trustworthy AI, addressed to all AI actors. The second sets out five recommendations for national policies and international co-operation, addressed to adherent governments. For PMs, the first section is the operationally relevant one; the second provides context for why governments are building regulatory frameworks the way they are.
The Recommendation also establishes a shared vocabulary that has been adopted across most major frameworks:
• AI system: A machine-based system that, for a given set of objectives, infers from inputs how to generate outputs — such as predictions, recommendations, decisions, or content — that can influence physical or virtual environments. The 2023 update clarified that this covers generative AI systems producing content, and that objectives may be explicit or implicit.
• AI system lifecycle: Plan and design; collect and process data; build and adapt models; test, evaluate, verify and validate; deploy; operate and monitor; retire/decommission. The Recommendation explicitly notes that phases are often iterative and not necessarily sequential, and that decommissioning decisions can occur at any point in the operation and monitoring phase.
• AI actors: Organisations and individuals that play an active role in the AI system lifecycle, including those that deploy or operate AI. Stakeholders is the broader category, encompassing all those affected by AI systems, directly or indirectly; AI actors are a subset.
• AI knowledge: Skills and resources — data, code, algorithms, models, research, know-how, training programmes, governance, processes, and best practices — required to understand and participate in the AI system lifecycle, including managing risks.
The definition of AI actors is significant for PMs. It explicitly includes deployers and operators — the organisations and individuals who put AI systems into production and run them. The obligations of the Principles apply to your organisation and your project team, not only to the company that trained the model or built the platform you are deploying on.
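For teams that tag deliverables and governance records in tooling, the lifecycle vocabulary above can be captured directly in code. A minimal Python sketch; the class name is an assumption, while the phase labels follow the Recommendation's lifecycle definition:

```python
from enum import Enum

class AILifecyclePhase(Enum):
    """Phases from the OECD 'AI system lifecycle' definition.
    Phases are iterative and not necessarily sequential."""
    PLAN_AND_DESIGN = "plan and design"
    COLLECT_AND_PROCESS_DATA = "collect and process data"
    BUILD_AND_ADAPT_MODELS = "build and adapt models"
    TEST_EVALUATE_VERIFY_VALIDATE = "test, evaluate, verify and validate"
    DEPLOY = "deploy"
    OPERATE_AND_MONITOR = "operate and monitor"
    RETIRE_DECOMMISSION = "retire/decommission"

# Example: tag a project deliverable with the phase it belongs to.
deliverable = {
    "name": "bias testing results",
    "phase": AILifecyclePhase.TEST_EVALUATE_VERIFY_VALIDATE,
}
```

Tagging artefacts this way keeps the shared vocabulary consistent across project records, which matters later when accountability requires tracing a decision back to the phase where it was made.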
The 2023 and 2024 Updates
Many summaries of the OECD AI Principles reference the 2019 text. The Recommendation has been revised twice since then, and both revisions made substantive changes that PMs working on current projects should understand.
2023 update — AI system definition: The definition of ‘AI system’ was revised to explicitly cover generative AI systems that produce content, clarify that objectives may be implicit as well as explicit, acknowledge that AI systems can continue to evolve after deployment, and align terminology with other international processes.
2024 update — substantive principle changes: The 2024 revision at OECD Ministerial level made the following changes to the principles themselves:
• Misinformation and disinformation were added to the human rights principle, specifically in the context of generative AI, framing information integrity as a human rights concern alongside the existing enumeration of non-discrimination, privacy, dignity, and autonomy.
• The misuse framing in the human rights principle was broadened from ‘foreseeable misuse’ to cover risks arising from uses outside of intended purpose, intentional misuse, or unintentional misuse — a meaningful expansion given the range of ways AI systems are repurposed in practice.
• The robustness and safety principle was updated to explicitly require that if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely by human interaction. This is now an explicit design requirement in the Principles, not an implied one.
• Traceability and risk management text was moved to and elaborated under the accountability principle, recognising it as the most appropriate location for these concepts.
• Responsible business conduct was emphasised throughout the lifecycle, including co-operation with suppliers of AI knowledge and AI resources — acknowledging the supply chain dimension of AI governance.
• An explicit reference to environmental sustainability was introduced for the first time.
The 2024 update also clarified transparency requirements, specifying what meaningful information AI actors should provide and for whom, and underscored the need for interoperable governance environments across jurisdictions given the proliferation of national AI policy initiatives.
The Five Principles
1.1 Inclusive Growth, Sustainable Development and Well-Being
Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet. The Recommendation specifies concrete dimensions: augmenting human capabilities and enhancing creativity; advancing inclusion of underrepresented populations; reducing economic, social, gender, and other inequalities; and protecting natural environments. The 2024 update added environmental sustainability as an explicit consideration alongside well-being and inclusive growth.
This principle establishes the positive obligation — the reason for building AI systems at all. It asks whether the system creates value beyond operational efficiency, whether that value is distributed broadly or narrowly, and whether it creates negative externalities for people or environments that are not the direct beneficiaries.
PM implication: Your project justification should address more than cost savings or process speed. Who benefits, and how? Are there groups who might be excluded from the benefits or disproportionately affected by the system’s operation? Does the system have environmental costs — energy, water, compute — worth weighing against its benefits? These are not rhetorical questions for an ethics statement. They belong in the business case and the project charter’s benefit realisation section.
1.2 Respect for the Rule of Law, Human Rights and Democratic Values, Including Fairness and Privacy
AI actors should respect the rule of law, human rights, democratic and human-centred values throughout the AI system lifecycle. The Recommendation enumerates the specific values at stake: non-discrimination and equality; freedom; dignity; autonomy of individuals; privacy and data protection; diversity; fairness; social justice; and internationally recognised labour rights.
The 2024 update added two elements that are directly relevant to current deployments. First, addressing misinformation and disinformation amplified by AI, while respecting freedom of expression — framing information integrity as a human rights concern, not only a content moderation problem. Second, the misuse framing was expanded: AI actors should implement safeguards including human agency and oversight to address risks from uses outside of intended purpose, intentional misuse, or unintentional misuse.
PM implication: Human oversight is not optional under this principle — it is named explicitly as a required safeguard. Systems that affect decisions about people’s employment, access to services, creditworthiness, or other substantive interests must include mechanisms for humans to understand, review, and contest those decisions. For generative AI deployments, the 2024 update also requires thinking about the information integrity risk of the system’s outputs — not just whether outputs are accurate in a narrow technical sense, but whether they could contribute to the spread of misinformation at scale.
1.3 Transparency and Explainability
AI actors should commit to transparency and responsible disclosure regarding AI systems. The Recommendation specifies that meaningful information should be provided, appropriate to the context and consistent with the state of the art, across three dimensions: to foster a general understanding of AI systems including capabilities and limitations; to enable those affected by an AI system to understand the outcome and, where possible, to contest it; and to allow AI actors to understand how and where AI systems are used.
The Recommendation draws a distinction between transparency — disclosure of relevant information about the system — and explainability — the ability to provide meaningful explanations of specific outputs to those affected. Not all systems need to be equally explainable in every context; what is required is appropriate explainability given the stakes of the decisions the system influences.
PM implication: Two separate design questions follow from this principle. First, disclosure: who needs to know that AI is involved, and what do they need to know about its capabilities and limitations? This applies to users, affected parties, regulators, and internal stakeholders. Second, contestability: for decisions that significantly affect people, can those people understand the basis for the decision well enough to challenge it? These are UX and architecture requirements, not documentation tasks.
1.4 Robustness, Security and Safety
AI systems should be technically robust and developed and tested in a way that is consistent with their intended purpose, including under conditions of reasonably foreseeable misuse. They should minimise unintended or unexpected outputs; be resilient to adversarial attacks, unintended inputs, and changing conditions; and avoid producing inaccurate outputs in a way that could harm people.
The 2024 update added the explicit override-repair-decommission requirement: if AI systems risk causing undue harm or exhibit undesired behaviour, they can be overridden, repaired, and/or decommissioned safely by human interaction. This is now a named design requirement in the Principles, establishing that every AI system must have a defined safe-shutdown and remediation pathway. A system that cannot be stopped or corrected without undue difficulty does not satisfy this principle regardless of how well it performs under normal conditions.
PM implication: Three questions translate this principle into project work. First, what conditions does the system need to function correctly under — and have you tested those? Include edge cases and adversarial conditions in your test plan, not just happy-path scenarios. Second, what happens when the system fails or produces harmful outputs? Document failure modes and the response procedure. Third, and specifically required by the 2024 update: how do you override, repair, or decommission the system if needed, and is that process documented and tested? Override and decommission procedures are project deliverables, not afterthoughts.
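One way to make the override requirement concrete in a deployment is a documented, tested kill switch with a manual fallback path. A minimal Python sketch, assuming an environment-variable flag; the flag and function names are illustrative, not drawn from any standard:

```python
import os

def ai_path_enabled() -> bool:
    """Operational kill switch: operators can disable the AI path at
    runtime (no redeploy) by setting AI_PATH_DISABLED=1."""
    return os.environ.get("AI_PATH_DISABLED", "0") != "1"

def route_decision(case, model_predict, manual_queue):
    """Send the case to the model only while the AI path is enabled;
    otherwise fall back to the documented manual process."""
    if ai_path_enabled():
        return "model", model_predict(case)
    return "manual", manual_queue(case)
```

The specific mechanism matters less than the properties the principle asks for: the fallback path exists, it is documented, and the switch is exercised in testing rather than discovered during an incident.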
1.5 Accountability
AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles and the context. The Recommendation specifies that this requires appropriate mechanisms to ensure responsibility is attributed and that legal liability frameworks are applicable.
The 2024 update moved the traceability and risk management text to this principle and elaborated it. Traceability means maintaining records of how the system was designed, trained, tested, and deployed, and how it evolves over time, in a way that supports accountability attribution. Risk management means implementing systematic processes throughout the lifecycle to identify, assess, and address risks — not as a pre-deployment gate, but as a continuous practice.
PM implication: Accountability starts with knowing who is responsible for what. A RACI matrix for AI system decisions — not just project tasks — is a direct implementation of this principle. Decision logs throughout the lifecycle (data governance decisions, design choices, risk management findings, post-deployment incidents) create the traceability the principle requires. And ongoing risk management, maintained past deployment, is what distinguishes a compliant governance approach from a compliance exercise performed once before go-live.
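A decision log that supports this kind of traceability needs little more than a consistent record shape. A minimal Python sketch; the field names and the example entry are illustrative assumptions, not prescribed by the Recommendation:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionLogEntry:
    """One traceability record: what was decided, by whom, in which
    lifecycle phase, and why."""
    decision: str
    owner: str            # accountable role from the RACI matrix
    lifecycle_phase: str  # OECD lifecycle phase the decision belongs to
    rationale: str
    logged_on: date = field(default_factory=date.today)

# Hypothetical example entry.
log = [
    DecisionLogEntry(
        decision="Exclude postcode from training features",
        owner="Data governance lead",
        lifecycle_phase="collect and process data",
        rationale="Postcode acted as a proxy for protected "
                  "characteristics in bias testing",
    ),
]
```

Maintained past deployment, a log of this shape is what lets an auditor attribute a specific outcome to a named role and a recorded rationale, which is the substance of the accountability principle.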
From Principles to Project Deliverables
The principles are abstract by design — they are meant to apply across sectors, jurisdictions, and use cases. But they produce concrete project requirements when translated through the 'AI actors as deployers' lens. The following maps each principle to the project artefacts that give it substance:
1.1 Inclusive growth, well-being
Business case with benefit distribution analysis: who benefits, and are any groups excluded or disproportionately affected? Environmental cost consideration in technical design. Workforce impact assessment where applicable.

1.2 Human rights, rule of law, fairness
Bias testing results across relevant subgroups. Human oversight design: who reviews, intervenes, and can contest decisions? Recourse mechanism for affected parties. For generative AI: information integrity risk assessment.

1.3 Transparency and explainability
User disclosure plan: who is informed, what they are told, and when. Explainability requirements by stakeholder group (users, affected parties, regulators, internal reviewers). Contestability mechanism for consequential decisions.

1.4 Robustness, security, safety
Test plan covering edge cases, adversarial conditions, and foreseeable misuse. Failure mode documentation and incident response procedure. Override, repair, and decommission procedure — documented and tested.

1.5 Accountability
RACI matrix for AI system decisions and governance responsibilities. Decision log maintained throughout lifecycle. Risk register with post-deployment monitoring. Traceability record for data, design, and deployment decisions.
You do not need separate ‘OECD compliance’ activities. You need project management activities that reflect these values — and when stakeholders or auditors ask why you have done things a particular way, the Principles provide the authoritative rationale.
Why the Principles Matter Even Without EU Exposure
PMs sometimes treat the OECD AI Principles as background context for EU-regulated projects only. That undersells their utility. Three reasons to apply them regardless of jurisdiction:
They explain downstream requirements. When you encounter a regulatory or standard requirement that seems arbitrary or over-specified — a particular logging obligation, a prescribed human oversight capability, a required disclosure format — the Principles usually explain why. Requirements that are grounded in principles are easier to implement faithfully because you understand the intent, and easier to scope correctly because you know what the requirement is actually trying to achieve.
They surface questions that checklists miss. A risk classification checklist tells you whether your system is high-risk. The inclusive growth principle asks whether your system distributes its benefits fairly. The human rights principle asks whether your system could contribute to misinformation at scale. The accountability principle asks whether someone will be responsible when things go wrong. These questions are not on most regulatory checklists, but they are the ones that determine whether an AI project actually serves the people it claims to.
They are becoming organisational expectations. Most large organisations and many smaller ones have AI ethics commitments that mirror the OECD framework. Demonstrating that your project reflects these principles is increasingly what ‘responsible AI’ looks like in internal governance reviews, procurement processes, and public accountability contexts. The Principles give you a principled vocabulary for those conversations.
Right-Sizing for Your Situation
Greenfield — OECD Principles Playbook
For PMs without a formal AI governance process. How to incorporate the five principles into your project when no one is requiring it — practical questions to ask at initiation, design, and deployment that ensure each principle is addressed without adding unnecessary overhead.

Emerging — OECD Principles Playbook
For PMs building repeatable processes across multiple projects. How to create project templates and review checkpoints that embed the principles by default — so that the questions get asked consistently rather than project by project.

Established — OECD Principles Playbook
For PMs in organisations with formal AI ethics commitments. How to demonstrate alignment with the Principles in governance reviews and audits, map organisational AI policies to the OECD framework, and use the Principles as the authoritative rationale for project governance decisions.
Framework References
• OECD Recommendation on Artificial Intelligence (OECD/LEGAL/0449, adopted May 2019, revised November 2023 and May 2024) — Section 1 (five principles: 1.1 inclusive growth; 1.2 human rights and democratic values; 1.3 transparency and explainability; 1.4 robustness, security and safety; 1.5 accountability), definitions (AI system, AI system lifecycle, AI actors, AI knowledge, stakeholders). Source framework for all five principles and 2024 revision content.
• NIST AI RMF 1.0 (NIST AI 100-1, 2023) — GOVERN, MAP, MEASURE, MANAGE functions. Operationalises OECD accountability and robustness principles into a lifecycle risk management structure.
• UNESCO Recommendation on the Ethics of AI (2021) — Eleven values and principles with global development emphasis. Expands the OECD framework with explicit treatment of environmental sustainability, cultural diversity, and data protection.
• EU AI Act (Regulation (EU) 2024/1689) — Articles 9, 13, 14, 15, 17. Translates OECD principles into binding obligations: risk management (accountability), transparency and instructions for use (transparency), human oversight (human rights), accuracy and robustness (robustness).
This article is part of AIPMO’s Frameworks series. See also: AI Impact Assessments | The PM’s Guide to NIST AI RMF | ISO 42001 for Project Managers | AI Risk Classification