IMP - ISO 42001 for Project Managers

ISO 42001 is a certifiable AI management system standard — not a risk framework or a checklist. Its AI-specific requirements in Clauses 6 and 8 generate concrete PM deliverables: risk assessments, impact assessments, lifecycle controls, and supply chain governance.

By AIPMO
13 min read

   

PM Takeaways

       ISO 42001 is a management system standard — same pattern as ISO 9001 or ISO 27001. Ten clauses, ‘shall’ language, an Annex A controls catalogue you select from based on your risk profile, and third-party certification available. The AI-specific substance lives in Clauses 6 and 8 and the two annexes.

       ISO 42001 distinguishes providers (who develop and place AI systems on the market) from deployers (who put AI systems into use). Most PMs on internal AI projects are on the deployer side — and deployer obligations differ meaningfully from provider obligations. The standard also recognises ‘affected persons’ whose interests must be accounted for even though they aren’t AI actors.

       The two most significant PM deliverables in ISO 42001 are the AI risk assessment (Clause 6.1.2) and the AI impact assessment (Clause 6.1.4). They are not the same document — risk asks what could go wrong with the system, impact asks who could be harmed by it. Both feed the Annex A controls selection, and neither closes at deployment.

       An ISO 42001-certified management system provides documented evidence for many of the EU AI Act’s Article 17 quality management requirements. For organizations pursuing high-risk AI conformity, certification is a practical complement to the regulatory pathway — not a substitute for it.

If you’ve worked with ISO standards before — ISO 9001 for quality management, ISO 27001 for information security — you know what to expect from the structure. A defined high-level format across ten clauses. Requirements written in ‘shall’ language. A controls catalogue you select from based on your context and risk profile. Third-party certification available. Continual improvement built in by design.

ISO/IEC 42001:2023 applies that same pattern to AI. Published in December 2023, it is the first international standard for AI management systems. It defines requirements for an Artificial Intelligence Management System (AIMS): the documented policies, processes, roles, and controls through which an organization governs the responsible development, provision, and use of AI.

For PMs, the question is not whether to pursue organizational certification — that is a leadership decision. The question is what the standard requires at the project level, and how to ensure your project generates the evidence and documentation an AIMS-based organization needs. This article explains the structure, the AI-specific requirements that produce project deliverables, the deployer versus provider distinction, and the relationship between ISO 42001 and other frameworks your project may already be navigating. 

Structure: HLS and AI-Specific Clauses

ISO 42001 follows ISO’s High-Level Structure (HLS) — the same ten-clause template used by ISO 9001, ISO 27001, and other management system standards. Clauses 1–3 cover scope, references, and terms. Clauses 4–10 contain the requirements. This shared structure allows organizations with existing ISO management systems to integrate an AIMS without duplicating governance infrastructure.

The ten clauses map to the Plan-Do-Check-Act cycle:

•       Clause 4 (Context) — Define the organization’s purpose and context for AI use. Identify internal and external factors (regulatory, competitive, ethical) affecting the AIMS. Understand the needs of interested parties, including affected persons. Establish the AIMS scope. PM input: project context documentation, stakeholder register including affected communities.

•       Clause 5 (Leadership) — Top management establishes the AI policy, assigns roles with authority and accountability, and ensures resources are available. PM input: the project charter must reference AI policy commitments. Role assignments (model owner, data owner, deployment owner) should align with policy-defined responsibilities.

•       Clause 6 (Planning) — The AI-specific core. Risk assessment (6.1.2), impact assessment (6.1.4), AI objectives (6.2), and planning for changes (6.3). PM input: risk register, impact assessment, project objectives. The most significant PM deliverables live here.

•       Clause 7 (Support) — Resources, competence, awareness, communication, and documented information. Requires that personnel have or can obtain the AI-related competencies they need. PM input: training and communication plan, documentation management. Competency gaps are a resource risk.

•       Clause 8 (Operation) — AI system lifecycle controls (8.3), supply chain and third-party obligations (8.4), and data governance requirements (8.5). PM input: lifecycle control documentation from design through decommissioning, vendor governance artifacts.

•       Clause 9 (Performance evaluation) — Monitoring, measurement, internal audit, and management review. Establishes what to measure and how often to review it. PM input: monitoring plan, KPIs defined before deployment, post-deployment review schedule.

•       Clause 10 (Improvement) — Nonconformity and corrective action, continual improvement. Requires a process for identifying and responding to AIMS failures. PM input: incident log, lessons learned, change request process for AI systems.

The AI-specific substance — what makes 42001 different from a generic management system standard — is concentrated in Clauses 6 and 8, and in the two annexes. Those are the sections that generate project-level deliverables unique to AI. 

Clause 6: Planning — Where Project Deliverables Are Generated

Clause 6 is where ISO 42001 becomes distinctly AI-specific. It contains four AI-related requirements that translate directly into project artifacts.

6.1.2 — AI Risk Assessment

The AI risk assessment identifies what could go wrong with the AI system and evaluates likelihood and consequence. The standard requires the assessment to be documented, repeatable, and reviewed when conditions change. It feeds the selection of controls from Annex A.

AI risk assessment differs from conventional project risk management in several important ways. The risk register is not closed at project go-live — it extends through the operational life of the system. Risks include technical failure modes specific to AI (model drift, distributional shift, adversarial attack), operational risks (misuse, over-reliance, shadow deployment), and accountability risks (unclear ownership when something goes wrong). Likelihood is not simply a probability estimate; for AI systems deployed at scale, a low-probability failure with high reach can be more consequential than a high-probability failure affecting few users.

PM implication: The AI risk assessment is a formal project deliverable — not the project manager’s risk register, but a document that will be reviewed in an AIMS audit. It needs an owner (typically not the PM, who manages the project risk register as a separate artifact), a review trigger, and a documented linkage to the controls selected. Building both documents and clarifying their relationship is a scoping and governance task for project initiation.
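The reach-versus-likelihood point can be made concrete with a simple risk record structure. This is an illustrative sketch only — ISO 42001 prescribes no data model or scoring formula, and every field name and weight below is an assumption:

```python
import math
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One entry in an AI risk assessment (illustrative; not prescribed by ISO 42001)."""
    risk_id: str
    description: str
    likelihood: int        # 1 (rare) .. 5 (frequent)
    consequence: int       # 1 (minor) .. 5 (severe)
    reach: int             # estimated number of people exposed per occurrence
    owner: str             # accountable owner, typically not the PM
    review_triggers: list = field(default_factory=list)
    annex_a_controls: list = field(default_factory=list)  # traceability to selected controls

    def score(self) -> float:
        # Weight consequence by reach so a low-likelihood, high-reach failure
        # can outrank a frequent failure that affects few people.
        return self.likelihood * self.consequence * math.log10(max(self.reach, 10))

drift = AIRiskEntry("R-01", "Model drift degrades scoring accuracy",
                    likelihood=2, consequence=4, reach=100_000, owner="Model owner",
                    review_triggers=["quarterly review", "data source change"],
                    annex_a_controls=["A.6", "A.7"])
glitch = AIRiskEntry("R-02", "UI mislabels confidence band",
                     likelihood=4, consequence=3, reach=50, owner="Product owner")

# The rare but wide-reach risk outranks the frequent, narrow one.
assert drift.score() > glitch.score()
```

The review triggers and control references in the record are what distinguish this artifact from a conventional project risk register entry: they keep the assessment alive past go-live and link it to the Annex A selection.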

6.1.4 — AI Impact Assessment

The AI impact assessment evaluates the system’s effects on individuals, communities, and society. It is distinct from the risk assessment: risk asks what could go wrong for the organization; impact asks who might be harmed and how, including people who are not the direct users or customers of the system.

The standard specifies five impact domains that the assessment should consider: sustainability (effects on the environment and natural resources), economics (employment, access to financial services, economic inclusion), governance (effects on legislative processes, democratic institutions, national security), health and safety (access to healthcare, physical and psychological harm), and norms and values (misinformation, bias, effects on cultures and communities).

This breadth is intentional and consequential. A payroll automation system that affects fewer employees than planned is a project scope risk. The same system that produces systematic pay errors for a demographic group is an AI impact issue. The impact assessment is the mechanism that makes the latter visible before deployment.

PM implication: The impact assessment requires engaging with stakeholders who may not appear on a conventional stakeholder register — people who will be affected by the system’s outputs but who are not customers, users, or sponsors. This is a stakeholder analysis and communication planning task. The assessment also needs a review trigger: if the system’s use cases, data, or deployment context change, the impact assessment must be revisited.
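A lightweight completeness check over the five domains can catch gaps before review. The structure below is a hypothetical sketch — the standard names the domains but does not define any schema or validation logic:

```python
# The five impact domains named in the standard; the dict layout and
# validation rules are illustrative assumptions, not prescribed.
IMPACT_DOMAINS = ["sustainability", "economics", "governance",
                  "health_and_safety", "norms_and_values"]

def validate_impact_assessment(assessment: dict) -> list[str]:
    """Return a list of gaps: unassessed domains, missing review triggers,
    or missing affected-persons analysis."""
    gaps = [f"domain not assessed: {d}" for d in IMPACT_DOMAINS
            if d not in assessment.get("domains", {})]
    if not assessment.get("review_triggers"):
        gaps.append("no review trigger defined (use-case, data, or context change)")
    if not assessment.get("affected_persons"):
        gaps.append("affected persons not identified")
    return gaps

draft = {
    "system": "payroll automation",
    "domains": {"economics": "systematic pay errors for a demographic group",
                "norms_and_values": "bias in pay adjustments"},
    "affected_persons": ["employees outside the direct user base"],
}
print(validate_impact_assessment(draft))
# Flags the three unassessed domains and the missing review trigger.
```

A check like this belongs in the assessment template rather than in code, but the logic is the same either way: every domain gets a considered answer or a documented reason it does not apply.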

6.2 — AI Objectives

The standard requires the organization to establish measurable AI objectives consistent with the AI policy and to plan how to achieve them. Objectives must be documented, communicated, monitored, and updated. This is the mechanism that connects the policy commitments in Clause 5 to the operational controls in Clauses 8 and 9.

PM implication: AI objectives belong in the project business case and success criteria. They are not the same as project success metrics (on-time, on-budget, in-scope) — they include governance outcomes such as bias within defined thresholds, explainability requirements met for defined user groups, and monitoring alerts responded to within defined timelines. Define these at initiation. They become the baselines for Clause 9 performance monitoring. 
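To make “measurable” concrete, those governance outcomes can be expressed as monitored thresholds. A minimal sketch, in which the metric names and numbers are hypothetical rather than taken from the standard:

```python
# Illustrative AI objectives as thresholds; "_max" metrics must stay at or
# below the threshold, "_min" metrics at or above it.
objectives = {
    "bias_disparity_max": 0.02,        # max allowed outcome disparity between groups
    "explanation_coverage_min": 0.95,  # share of decisions with a user-facing explanation
    "alert_response_hours_max": 24,    # time to respond to monitoring alerts
}

def evaluate(measured: dict) -> dict:
    """Compare measured values against objective thresholds for Clause 9 review."""
    results = {}
    for key, threshold in objectives.items():
        value = measured[key]
        ok = value <= threshold if key.endswith("_max") else value >= threshold
        results[key] = ("pass" if ok else "FAIL", value, threshold)
    return results

measured = {"bias_disparity_max": 0.03,
            "explanation_coverage_min": 0.97,
            "alert_response_hours_max": 12}
for name, (status, value, threshold) in evaluate(measured).items():
    print(f"{name}: {status} (measured {value}, threshold {threshold})")
```

Whatever the actual metrics are, defining them with thresholds at initiation is what turns a policy commitment into something Clause 9 monitoring can act on.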

Clause 8: Operations — Lifecycle Controls and Supply Chain

8.3 — AI System Lifecycle Controls

Clause 8.3 requires controls throughout the AI system lifecycle: from problem definition and data acquisition through design, development, testing, deployment, monitoring, and decommissioning. The standard does not prescribe a specific development methodology but requires documented evidence that controls are applied at each lifecycle phase.

The controls applicable to each phase are selected from Annex A based on the risk and impact assessment findings. An organization deploying a higher-risk system will apply more controls and generate more documentation than one deploying a lower-risk system. This risk-proportionate approach means there is no universal lifecycle checklist — the applicable controls depend on your specific system and context.

PM implication: Your project plan must include lifecycle control activities as deliverables, not just system development activities. For each phase — data acquisition, model development, testing, deployment — there should be an identifiable control (or documented rationale for exclusion) traceable to the risk and impact assessment. This is the audit trail an AIMS certification audit will look for.
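That audit trail can be checked mechanically: every lifecycle phase carries either a selected control or a documented exclusion rationale. The sketch below follows the phase names in the text; the data layout and control labels are assumptions:

```python
# Illustrative lifecycle traceability check; phase names follow the article,
# the plan structure and control labels are hypothetical.
LIFECYCLE_PHASES = ["problem_definition", "data_acquisition", "design",
                    "development", "testing", "deployment", "monitoring",
                    "decommissioning"]

def audit_trail_gaps(plan: dict) -> list[str]:
    """Phases with neither a selected control nor an exclusion rationale."""
    gaps = []
    for phase in LIFECYCLE_PHASES:
        entry = plan.get(phase, {})
        if not entry.get("controls") and not entry.get("exclusion_rationale"):
            gaps.append(phase)
    return gaps

plan = {
    "data_acquisition": {"controls": ["data provenance record"]},
    "testing": {"controls": ["pre-deployment evaluation report"]},
    "decommissioning": {"exclusion_rationale": "system retired under vendor contract"},
}
print(audit_trail_gaps(plan))
# Five phases still have neither a control nor a documented exclusion.
```

In practice this lives in the project plan rather than in code, but the rule is the same: no phase is silently uncovered.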

8.4 — Supply Chain and Third-Party AI

ISO 42001 explicitly addresses AI supply chain governance. When an organization uses externally developed AI components — pre-trained models, third-party APIs, AI platforms — it remains responsible for ensuring those components meet the AIMS requirements to the extent they affect the organization’s AI systems. The standard requires documented information on the controls applicable to external providers and mechanisms for evaluating and monitoring their performance.

This is not a simple pass-through. An organization cannot discharge its AIMS obligations by pointing to a vendor’s terms of service or model card. The deployer must understand what controls the provider has applied, what information the provider has documented, and what residual obligations the deployer carries for the deployed system.

PM implication: Third-party AI vendor selection should include a governance due diligence step aligned to ISO 42001 Clause 8.4 requirements. Procurement criteria should include questions about the vendor’s own governance documentation (model cards, data datasheets, system cards, risk documentation), the controls they have applied, and what information they will make available to support your AIMS requirements. Vendor governance artifacts are project deliverables, not just commercial terms.
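A due-diligence record for that procurement step might look like the following. The checklist items paraphrase the Clause 8.4 themes discussed above; the specific questions and the scoring are assumptions, not text from the standard:

```python
# Hypothetical vendor governance due-diligence checklist aligned to the
# Clause 8.4 themes in the text; items and scoring are illustrative.
DUE_DILIGENCE_ITEMS = [
    "model card or equivalent documentation provided",
    "training data provenance described",
    "applied controls disclosed",
    "incident notification mechanism agreed",
    "information sharing supports our AIMS evidence needs",
]

def vendor_readiness(answers: dict) -> tuple[int, list[str]]:
    """Count evidenced items and list open gaps for the procurement record."""
    open_items = [item for item in DUE_DILIGENCE_ITEMS if not answers.get(item)]
    return len(DUE_DILIGENCE_ITEMS) - len(open_items), open_items

answers = {DUE_DILIGENCE_ITEMS[0]: True, DUE_DILIGENCE_ITEMS[2]: True}
score, gaps = vendor_readiness(answers)
print(f"{score}/{len(DUE_DILIGENCE_ITEMS)} items evidenced; open: {gaps}")
```

The open items become procurement conditions or residual risks in the project risk register — either way, they are documented rather than assumed away.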

8.5 — Data Governance

Clause 8.5 establishes data governance requirements for AI systems: data quality, data provenance, data security, and data bias controls. Data used for training, validation, and testing must be documented and managed in ways that support the risk and impact assessment findings. For systems using personal data, data governance must align with applicable privacy requirements.

PM implication: Data governance is not a separate workstream that runs in parallel to the AI project. It is a project delivery requirement with documented artifacts. The data specification (what data is used, where it comes from, what biases may be present, what cleaning has been applied) feeds the risk assessment, the impact assessment, and the system card. Build data governance activities into the project schedule with explicit owners and review points. 

Annex A: The Controls Catalogue

Annex A contains 38 controls organised across nine control objectives (A.2–A.10). These controls are not all mandatory — the organization selects applicable controls based on the risk and impact assessment outputs and documents the selection rationale in a Statement of Applicability (SoA). Any control can be excluded if the organization documents why it does not apply.

•       A.2 Policies for AI — AI policy content, review, and communication requirements.

•       A.3 Internal organization — Roles, responsibilities, and cross-functional coordination.

•       A.4 Resources for AI systems — Computational, data, and human resource governance.

•       A.5 Assessing impacts of AI systems — Impact assessment process and scope (aligned to Clause 6.1.4).

•       A.6 AI system lifecycle — Controls from data acquisition through decommissioning (aligned to Clause 8.3).

•       A.7 Data for AI systems — Data quality, provenance, bias, and security controls (aligned to Clause 8.5).

•       A.8 Information for interested parties — Disclosure obligations to users, affected persons, and regulators.

•       A.9 Use of AI systems — Controls governing intended use, misuse prevention, and user guidance.

•       A.10 Third-party and customer relationships — Obligations when AI components are supplied by or to other parties (aligned to Clause 8.4).

Annex B provides implementation guidance on AI concepts and applications — it is informative, not normative, but it is practically useful for understanding how the standard’s requirements apply to different AI system types.

PM implication: The Statement of Applicability is an organizational artifact, not a project artifact. But each project should be able to identify which Annex A controls are triggered by its risk and impact assessments, and which project deliverables provide the evidence for those controls. Building a project-level control mapping at initiation helps ensure the project generates the right evidence and avoids surprises in AIMS audits. 
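One way to build that project-level mapping is a simple table from triggered control domains to the deliverables that evidence them. An illustrative sketch — the domain labels follow Annex A, but the deliverable names are hypothetical:

```python
# Hypothetical project-level mapping from triggered Annex A domains to the
# project deliverables that evidence them.
control_mapping = {
    "A.5 Assessing impacts": ["AI impact assessment v1.2"],
    "A.6 AI system lifecycle": ["test report", "deployment checklist"],
    "A.7 Data for AI systems": ["data specification", "bias analysis"],
    "A.8 Information for interested parties": [],  # triggered, but no evidence yet
}

def missing_evidence(mapping: dict) -> list[str]:
    """Triggered control domains with no project deliverable behind them."""
    return [domain for domain, deliverables in mapping.items() if not deliverables]

print(missing_evidence(control_mapping))
# A gap surfaced at initiation is cheaper than a surprise in the AIMS audit.
```

The mapping also gives the AIMS owner a clean handover artifact: each certified control can point back to the project deliverables that evidence it.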

Provider vs. Deployer: Which Obligations Apply to Your Project?

ISO 42001 recognises that organizations operate in different roles in the AI value chain. The obligations under the standard differ based on role, and many organizations operate in multiple roles simultaneously.

•       Provider — Organizations that develop AI systems and place them on the market or into service. Subject to the full range of Clause 8.3 lifecycle controls, documentation requirements for the systems they develop, and supply chain governance for any components they use in development. Most AI vendors, model developers, and platform builders are providers.

•       Deployer — Organizations that put AI systems into use in a professional context. Subject to obligations around intended use, impact assessment, user guidance, monitoring, and human oversight. The PM running an internal AI deployment project is typically working on the deployer side. Deployers remain responsible for ensuring the systems they deploy are used within intended scope and that effects on affected persons are managed.

•       Affected persons — Not an AI actor role but an explicit category in the standard. The AIMS must account for the interests of people affected by AI system outputs, even if they are not users or customers. Impact assessments must consider this population. Stakeholder registers should include or reference affected persons even where direct engagement is not feasible.

Many organizations are both providers and deployers. A financial services firm that develops a credit scoring model for internal use is both. The applicable obligations depend on the role for each specific AI system, not on the organization’s general profile. 

ISO 42001 and the EU AI Act

ISO 42001 has a direct and practical relationship with EU AI Act compliance. The EU AI Act’s Article 17 requires providers of high-risk AI systems to establish a quality management system covering documented policies and processes across the full lifecycle, from design through post-market monitoring. Annex VI and Annex VII of the EU AI Act specify the conformity assessment procedures, and Annex VII explicitly references quality management system assessment.

An ISO 42001-certified management system is not a substitute for EU AI Act conformity assessment — the two have different scope and different requirements. But certification provides documented evidence of many of the quality management requirements Article 17 imposes, and reduces the evidence-gathering burden in a conformity assessment. For organizations pursuing high-risk AI system deployment under the EU AI Act, ISO 42001 certification is a practical complement to the regulatory pathway rather than an alternative to it.

NIST has published a crosswalk between ISO 42001 and the NIST AI RMF confirming substantial structural overlap between the two frameworks. Organizations that have implemented the AI RMF have typically satisfied many ISO 42001 requirements in substance. The difference is that ISO 42001 provides a certifiable structure that an AI RMF implementation does not. 

ISO 42001 vs. NIST AI RMF: A PM Comparison

Both frameworks address AI governance, but they serve different purposes and generate different obligations. Understanding the distinction helps PMs navigate situations where both apply.

•       Type — ISO 42001: certifiable management system standard. NIST AI RMF: voluntary risk management framework.

•       Language — ISO 42001: ‘shall’ (mandatory requirements). NIST AI RMF: ‘should’ / ‘may’ (outcomes and suggested actions).

•       Structure — ISO 42001: ten HLS clauses plus the Annex A controls catalogue. NIST AI RMF: four functions (GOVERN, MAP, MEASURE, MANAGE) plus Profiles.

•       Certification — ISO 42001: yes, third-party audit by an accredited certification body. NIST AI RMF: no, self-assessed alignment.

•       AI-specific focus — ISO 42001: management system around AI policy, risk, impact, lifecycle, and supply chain. NIST AI RMF: risk identification, measurement, and management throughout the AI lifecycle.

•       Origin and scope — ISO 42001: international (ISO/IEC), global applicability. NIST AI RMF: United States (NIST), voluntary national framework.

•       EU AI Act alignment — ISO 42001: QMS evidence supports EU AI Act Article 17 requirements. NIST AI RMF: referenced in EU AI Act recitals but not formally mapped to Article 17.

•       Starting point for PMs — ISO 42001: high; the HLS structure is familiar if you’ve worked with ISO 9001 or 27001. NIST AI RMF: high; GOVERN/MAP/MEASURE/MANAGE maps well to PM lifecycle thinking.

The two frameworks are complementary. Many organizations use the NIST AI RMF for practical day-to-day risk management and ISO 42001 for the management system structure that supports certification. Building your project governance against both simultaneously is practical rather than redundant — most project-level deliverables satisfy requirements in both. 

The ISO 42001 Family

ISO 42001 is part of a broader family of AI-related ISO/IEC standards. The standard explicitly references several companion standards that provide additional guidance:

•       ISO/IEC 23894:2023 — AI risk management guidance. Provides detailed guidance on the risk management processes that ISO 42001 requires but does not fully specify. Practically useful for developing the AI risk assessment methodology required by Clause 6.1.2.

•       ISO/IEC 24028:2020 — AI trustworthiness. Addresses the trustworthiness properties of AI systems including accuracy, robustness, reliability, security, privacy, fairness, and explainability. Provides the technical vocabulary for specifying trustworthiness requirements in risk and impact assessments.

•       ISO/IEC 42005 — AI system impact assessment (in development). Expected to provide detailed guidance on the AI impact assessment process required by ISO 42001 Clause 6.1.4 and Annex A.5.

•       ISO/IEC 42006 — Requirements for AI certification bodies (in development). Establishes what accredited certification bodies must demonstrate to conduct ISO 42001 certification audits.

ISO 42001 is also listed alongside the NIST AI RMF in the NIST AI Standards Plan (NIST AI 100-5) as one of the primary frameworks for which conformity assessment standards are being developed — confirming its growing role in the international AI standards landscape. 

Right-Sizing for Your Situation

Full ISO 42001 certification requires organizational commitment beyond any single project. But the standard’s requirements apply at project level regardless of whether your organization is pursuing certification — because they define what responsible AI governance looks like in practice.

Greenfield — ISO 42001 Playbook

For PMs in organizations without formal AI governance. How to apply ISO 42001 principles to your project — risk assessment, impact assessment, lifecycle controls, and monitoring — even without organizational certification. Practical templates for each deliverable.

Emerging — ISO 42001 Playbook

For PMs building repeatable AI governance across multiple projects. How to create project templates that align with ISO 42001 requirements, manage the Statement of Applicability at project level, and build evidence trails that will support a future certification process.

Established — ISO 42001 Playbook

For PMs in organizations pursuing or maintaining ISO 42001 certification. How to ensure each project generates the documented evidence the AIMS requires, integrate project artifacts into the AIMS documentation system, and prepare for internal audits.

 

Framework References

•       ISO/IEC 42001:2023 (International Organization for Standardization, December 2023) — Clauses 4–10, Annex A (controls catalogue), Annex B (AI concepts guidance). AI management system requirements, AI risk and impact assessment, lifecycle controls, supply chain governance, performance evaluation.

•       ISO/IEC 23894:2023 (International Organization for Standardization, 2023) — Full standard. AI risk management guidance; provides detailed methodology for Clause 6.1.2 risk assessment process.

•       ISO/IEC 24028:2020 (International Organization for Standardization, 2020) — Full standard. AI trustworthiness properties providing technical vocabulary for specifying requirements in risk and impact assessments.

•       NIST AI RMF 1.0 (NIST AI 100-1, 2023) — GOVERN, MAP, MEASURE, MANAGE functions; NIST-ISO 42001 crosswalk. Complementary risk management framework; substantial structural overlap with ISO 42001 requirements.

•       EU AI Act (Regulation (EU) 2024/1689) — Article 17, Annexes VI and VII. Quality management system requirements for high-risk AI providers; ISO 42001 certification supports Article 17 compliance evidence.

 

This article is part of AIPMO’s Frameworks series. See also: AI Impact Assessments | The PM’s Guide to NIST AI RMF | OECD AI Principles: The Framework Behind t