
AI Impact Assessments: Running Them Like a PM

An AI impact assessment asks who could be harmed by this system — not just what could go wrong with it. Those are different questions. Here's what the frameworks actually require, why the PM needs to own it, and how to run one that produces decisions rather than paperwork.

By AIPMO · 11 min read

  

PM Takeaways

•       An AI impact assessment is not an AI risk assessment. The risk assessment asks what could go wrong with the system. The impact assessment asks who could be harmed — and a system can perform exactly as designed and still produce discriminatory outcomes, displace workers, or enable decisions that individuals can’t contest.

•       Timing is where impact assessments most often fail. Done at go-live, they become post-hoc justification rather than decision-making tools. The frameworks are consistent: iterate — first pass at planning, detailed assessment during design, validation before deployment, review after launch.

•       Under the EU AI Act, fundamental rights impact assessments are a legal obligation for certain deployers, not a recommended practice. Article 27 requires completion before first deployment and notification to the market surveillance authority. For organisations in scope, this is a go/no-go gate with a statutory deadline.

•       The PM is the right person to own this — not because PMs have ethics expertise, but because the assessment depends on project context that lives in the PM’s head. Compliance teams provide the framework. Someone still has to drive the process. That role looks exactly like PM work.

•       The affected population is not the same as the user population. A recruitment tool is used by recruiters; the people affected are job applicants. Getting this distinction right is the hardest part of any impact assessment — and the most important one.

Impact assessments have been standard practice in regulated industries for decades. Environmental impact assessments, privacy impact assessments, health impact assessments — the pattern is well established. When a decision carries risk, you characterise the risk systematically before committing. You document what you found and what you’re doing about it. You create a record that can be reviewed if things go wrong.

AI impact assessments follow the same logic, applied to AI systems. The question they answer is not ‘does this system work?’ — that’s what testing is for. The question is ‘what could happen to people as a result of this system existing and being used?’ That includes intended effects, unintended effects, and effects on people who never interact with the system but whose lives are shaped by its outputs.

This article covers what an AI impact assessment actually is, why the PM needs to own it, what the major governance frameworks require, and how to run one that produces useful decisions rather than paperwork. 

What an Impact Assessment Is — and What It’s Not

The AIGP Body of Knowledge describes the AI impact assessment as building on privacy impact assessments as a foundation, then extending to AI-specific risks: algorithmic bias, lack of explainability, unintended behavioural effects, and harms that emerge only after deployment. That framing is useful because it establishes what the privacy assessment misses.

Privacy impact assessments focus on data: what personal information is collected, how it’s stored, who has access, whether individuals can exercise rights over it. That remains relevant for AI systems. But AI introduces risks that have nothing to do with data handling. A model can be trained entirely on anonymised data and still produce outputs that systematically disadvantage a demographic group. A decision support tool can handle personal data impeccably and still be used in ways that individuals have no ability to challenge or even know about. The impact assessment covers the full downstream effect of the system’s existence, not just its data practices.

The difference between an impact assessment and a risk register is worth being clear about. The risk register captures threats to the project and the organisation: schedule risk, budget risk, technical risk, reputational risk. The impact assessment captures threats to people outside the project — individuals affected by the system’s outputs, communities touched by its decisions, workers whose roles it displaces, users who may be harmed by errors they have no way to detect. The two documents are related but they are not the same thing, and one does not substitute for the other. 

What the Frameworks Actually Require

Several major AI governance frameworks address impact assessments, and they do so in meaningfully different ways. Understanding the differences matters for scoping what you need to produce.

NIST AI RMF — MAP Function

The NIST AI RMF positions impact analysis within its MAP function, which covers context establishment and risk identification. MAP 5 specifically requires characterising impacts on individuals, groups, communities, organisations, and society — documenting the likelihood and magnitude of each identified impact, including both potentially beneficial and harmful effects. This is explicitly broader than user-facing risks; it extends to people who are affected by the system without interacting with it directly.

The framework also requires documenting practices and personnel for ongoing engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts after deployment. The impact assessment doesn’t close at go-live. Unanticipated effects that emerge in production need to be captured and fed back into the assessment.
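
To make the MAP 5 pattern concrete, here is a minimal sketch of what one characterised impact could look like as a record. This is an illustration only, not a NIST-defined schema; every field name and scale below is an assumption.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative scales only; the NIST AI RMF does not prescribe a schema.
class Direction(Enum):
    BENEFICIAL = "beneficial"
    HARMFUL = "harmful"

@dataclass
class ImpactRecord:
    """One characterised impact in the MAP 5 pattern: who, what, how likely, how severe."""
    affected_population: str      # individuals, groups, communities, organisations, society
    description: str
    direction: Direction
    likelihood: str               # e.g. "rare" / "possible" / "likely"
    magnitude: str                # e.g. "minor" / "moderate" / "severe"
    interacts_with_system: bool   # MAP 5 also covers people who never touch the interface
    feedback_channel: str         # how post-deployment effects reach the team (MAP 5.2)

# Example entry for a recruitment screening tool:
impacts = [
    ImpactRecord(
        affected_population="job applicants",
        description="Qualified applicants screened out by patterns learned from biased data",
        direction=Direction.HARMFUL,
        likelihood="possible",
        magnitude="severe",
        interacts_with_system=False,
        feedback_channel="applicant appeal and complaint process",
    ),
]
```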

EU AI Act — Article 27 Fundamental Rights Impact Assessment

The EU AI Act creates a binding obligation, not a recommendation. Article 27 requires deployers that are public bodies or private entities providing public services, as well as deployers of certain Annex III high-risk systems (notably creditworthiness assessment and risk pricing for life and health insurance), to conduct a fundamental rights impact assessment before first deployment.

The assessment must contain: a description of the deployer’s processes in which the system will be used; the period of time and frequency of use; the categories of persons and groups likely to be affected in that specific context; specific risks of harm to those persons or groups; a description of how human oversight will be implemented; and the measures to be taken if identified risks materialise. Once completed, the results must be submitted to the relevant market surveillance authority.

This is a project deliverable with a statutory deadline. It must exist before the system is deployed. It cannot be an internal document produced by the ethics team and filed away — it has to be notified externally. For organisations in scope, this is a go/no-go gate.
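
The six required content elements map naturally onto a structured record, which makes the go/no-go gate checkable. A minimal sketch, with field names that paraphrase Article 27 rather than quote it:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FundamentalRightsImpactAssessment:
    """The six content elements Article 27 requires, paraphrased."""
    deployer_processes: str           # processes in which the system will be used
    period_and_frequency: str         # period of time and frequency of use
    affected_categories: list[str]    # persons and groups likely to be affected in context
    specific_risks: list[str]         # specific risks of harm to those persons or groups
    human_oversight: str              # how human oversight will be implemented
    risk_response_measures: list[str] # measures to take if identified risks materialise

def may_deploy(fria: Optional[FundamentalRightsImpactAssessment],
               authority_notified: bool) -> bool:
    # Article 27 gate: assessment completed and results notified to the
    # market surveillance authority before first deployment.
    return fria is not None and authority_notified
```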

Canada’s Algorithmic Impact Assessment

Canada’s Directive on Automated Decision-Making, in force since 2019, introduced the first legally binding AI-specific instrument of its kind. The Algorithmic Impact Assessment (AIA) is mandatory for federal departments deploying automated decision systems. It’s a 65-question risk questionnaire covering rights and freedoms, health and well-being, economic interests, environmental sustainability, and data risks, with 41 additional mitigation questions.

The AIA produces an impact level from I (little to no impact) to IV (very high impact), which then determines the mitigation requirements under the directive — the type of peer review required, the extent of human oversight in decisions, and the notification obligations. Critically, it must be completed twice: at the beginning of the design phase and again before production deployment. It must also be reviewed and updated on a scheduled basis and whenever system functionality or scope changes. It’s a living document, not a one-time gate.
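
Mechanically, the AIA behaves like a scored questionnaire mapped onto tiers that unlock obligations. A simplified sketch of that pattern follows; the thresholds and mitigation entries are invented for illustration, not the directive's actual bands:

```python
def impact_level(score: int, max_score: int) -> str:
    """Map a questionnaire score to an impact level I-IV, in the spirit of the AIA.
    The percentage bands here are illustrative only."""
    pct = score / max_score
    if pct <= 0.25:
        return "I"     # little to no impact
    if pct <= 0.50:
        return "II"
    if pct <= 0.75:
        return "III"
    return "IV"        # very high impact

# The level then drives tiered requirements: type of peer review,
# extent of human oversight in decisions, and notification obligations.
ILLUSTRATIVE_MITIGATIONS = {
    "I":  {"peer_review": "none required", "human_oversight": "spot checks"},
    "IV": {"peer_review": "independent expert review",
           "human_oversight": "human makes the final decision"},
}

print(impact_level(48, 120))  # -> "II"
```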

Canada’s model is worth studying even by organisations with no Canadian regulatory exposure, because it demonstrates a mature approach to what impact assessment governance looks like in practice: scored, tiered, required to be published, and mandated to be refreshed. It is the most operationally detailed publicly available AIA framework.

UNESCO Recommendation on the Ethics of AI

UNESCO’s Recommendation calls for governments to introduce frameworks for impact assessments that identify and assess benefits, concerns, and risks of AI systems, as well as appropriate risk prevention, mitigation, and monitoring measures. It specifically calls out impacts on human rights and fundamental freedoms, and requires that assessments be carried out continuously and systematically in a manner proportionate to the relevant risks.

UNESCO’s framing is broader than the others: it situates impact assessment as a societal-level governance tool, not just an organisational compliance process. For PMs, the practical implication is that the scope of ‘who is affected’ is wider than the direct user base or even the immediate community. AI systems deployed at scale can have systemic effects that no single impact assessment captures. 

Scope: Who Is Actually Affected?

Getting the affected population right is where most impact assessments fail. It’s also the thing that’s hardest to do from inside the project team.

The project team knows the users — the people who will interact with the system directly. They know the customers — the business units or external clients who commissioned the work. What they often don’t know, because those people are invisible in the project plan, are the individuals whose lives are shaped by the system’s outputs without ever touching the interface.

A few questions that force the right scope:

•       Who receives a decision or recommendation from this system, and do they know AI was involved?

•       Who is affected by that decision downstream — family members, communities, third parties?

•       Are there groups who are disproportionately likely to be affected, either as heavier users or because they’re over-represented in the data the system was trained on?

•       Are there groups who might be systematically excluded from the system’s benefits, even if they are not harmed by it directly?

•       What happens to the people whose jobs this system changes or eliminates?

These questions don’t all need exhaustive answers. But they need to be asked, and the findings — including the gaps in what you were able to determine — need to be documented. An impact assessment that can’t answer the question ‘who is affected?’ with specificity is not an impact assessment. It is a form with boxes ticked. 
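
One way to keep these answers, and the gaps, on the record is to capture each population as a structured entry. A minimal sketch under my own assumptions; no framework mandates these fields:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AffectedGroup:
    """One population touched by the system, with the scoping answers documented."""
    name: str
    interacts_directly: bool                  # users vs. people affected without touching the UI
    aware_of_ai_involvement: Optional[bool]   # None = unknown; record the gap, don't hide it
    downstream_of: Optional[str]              # whose decision reaches them
    disproportionate_exposure: Optional[str]
    open_questions: list[str] = field(default_factory=list)

# Recruitment tool example: the user population is not the affected population.
groups = [
    AffectedGroup("recruiters", True, True, None, None),
    AffectedGroup(
        "job applicants", False, None,
        downstream_of="recruiter shortlisting decisions",
        disproportionate_exposure="groups over-represented in historical hiring data",
        open_questions=["Do rejected applicants learn that AI was involved?"],
    ),
]
```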

The Impact Domain Categories

The major governance frameworks converge on a similar set of impact domains, even if they express them differently. ISO 42001’s Clause 6.1.4 and Annex A.5, Canada’s AIA, and the EU AI Act’s Article 27 all require assessment across overlapping territory:

•       Rights and freedoms: Does the system affect people’s ability to access services, appeal decisions, or exercise legal rights? Does it create or exacerbate discrimination on protected grounds?

•       Health and safety: Could the system’s outputs cause physical or psychological harm? Are there safety implications for the environment in which the system operates?

•       Economic interests: Does the system affect access to credit, employment, insurance, or essential services? Who bears the economic cost of errors?

•       Privacy and autonomy: Beyond the privacy impact assessment, does the system produce outputs that individuals have no ability to understand, contest, or opt out of?

•       Environmental sustainability: What are the environmental costs of the system’s development, training, and ongoing operation? Are these proportionate to the benefits?

•       Social norms and institutions: At scale, does the system reinforce or undermine trust in institutions, democratic processes, or community norms? Does it enable or amplify misinformation?

•       Labour and workforce: Does the system change the nature of work for the people in the affected environment? Are there workforce transition impacts that need to be addressed?

Not all domains are equally relevant for every system. A low-risk internal productivity tool doesn’t need a rigorous assessment of its effects on democratic institutions. A high-risk decision system affecting access to financial services or employment probably does. Scope the assessment to the risk level — but document the scoping decision, including which domains you considered and why you concluded they were not material. 
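
Documenting the scoping decision can be as light as one record per domain with a materiality call and a rationale that a reviewer can later inspect. A sketch with illustrative values:

```python
from dataclasses import dataclass

@dataclass
class DomainScoping:
    """The scoping decision is itself part of the assessment record."""
    domain: str
    material: bool
    rationale: str  # why the domain is or is not in scope for this system

# Internal productivity tool example: narrow scope, but the reasoning is on record.
scoping_decisions = [
    DomainScoping("Rights and freedoms", False,
                  "No decisions about individuals; outputs reviewed by staff before use"),
    DomainScoping("Social norms and institutions", False,
                  "Internal tool; no public-facing outputs at scale"),
    DomainScoping("Labour and workforce", True,
                  "Changes the drafting workflow for the documentation team"),
]
```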

Timing: Iterate, Don’t Checkpoint

The governance frameworks are consistent on timing in a way that project teams often resist: impact assessment should be iterative, not a one-time checkpoint. The Canada AIA requires completion twice — at design phase and again pre-production. The EU AI Act requires updating the assessment whenever the deployer determines that relevant factors have changed. The NIST AI RMF builds ongoing impact monitoring into the MAP and MANAGE functions as a continuous activity.

The reason is practical. An assessment done during planning is necessarily speculative — the technical architecture isn’t fixed, the training data isn’t finalised, the deployment environment isn’t known. It’s still worth doing because it surfaces questions that should inform design decisions. An assessment done only at planning will miss risks that emerge from the actual implementation choices the team makes along the way.

A workable cadence for most projects: a first-pass during project initiation to surface high-level impact areas and inform the risk register; a detailed assessment during design when the technical architecture and data sources are known; a validation assessment before go-live confirming that mitigations are in place; and a post-deployment review at a defined interval (three to six months is common) to capture effects that only become visible in production.

This is not four separate documents. It is one living assessment that gets revised. Each revision should be dated and retained, because the revision history is itself evidence of responsible governance. 
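
In code terms, this could be a single assessment object accumulating dated revisions, one per pass in the cadence above. A sketch; the phase names are mine:

```python
from dataclasses import dataclass, field
from datetime import date

PHASES = ("initiation", "design", "pre-go-live", "post-deployment")

@dataclass
class Revision:
    phase: str
    revised_on: date
    summary: str  # what changed since the last pass, and why

@dataclass
class LivingAssessment:
    """One document, revised at each phase; the history itself is governance evidence."""
    system_name: str
    revisions: list[Revision] = field(default_factory=list)

    def revise(self, phase: str, summary: str) -> None:
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.revisions.append(Revision(phase, date.today(), summary))

assessment = LivingAssessment("screening-tool")
assessment.revise("initiation", "High-level impact areas surfaced; fed the risk register")
assessment.revise("design", "Data sources fixed; applicant bias risk detailed, mitigations planned")
```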

Why the PM Has to Own It

Impact assessments fail when they’re handed to a compliance team with a delivery deadline. The compliance team will produce a document. It will often be a good document. But if it wasn’t built iteratively from the information inside the project, it won’t be connected to the actual decisions being made — and a finding that isn’t connected to a decision is just a finding.

The project manager has the context that makes an impact assessment useful. The business justification. The stakeholder map. The data governance decisions. The technical constraints. The timeline. The go/no-go criteria. An effective impact assessment is built from that information, not written separately from it.

This doesn’t mean the PM writes the assessment alone. It means the PM drives it: sets up the process, identifies who needs to contribute what, coordinates the cross-functional inputs (legal, data, engineering, operations, user research), keeps it on schedule, and ensures the findings actually reach decision-makers in time to influence decisions. Compliance teams provide the framework. Ethics advisors flag concerns the team’s inside view misses. But someone has to run the work. That role looks exactly like PM work.

The output should be treated like any other project deliverable: an owner, a delivery date, review criteria, and a defined path to approval or escalation. An impact assessment with no owner is an impact assessment that won’t get done. 

The Findings That Actually Matter

A useful impact assessment produces two types of findings: things the project can fix, and things the project can’t fix.

Fixable findings — a bias risk that can be addressed by adjusting the training data, a transparency gap that can be closed by adding a disclosure to the user interface, a contestability gap that can be addressed by adding a human review step — generate action items. They feed the risk register. They get tracked. The mitigation gets implemented and tested, and the next iteration of the assessment validates that it worked.

Unfixable findings are harder. Sometimes an impact assessment reveals that a system, as designed, will produce a harm that the project cannot mitigate within its current scope, timeline, or budget. That finding needs to go to a decision-maker with authority to change the scope, delay the launch, or in some cases, stop the project. A PM who buries an unfixable finding in an appendix has not protected the organisation. An unfixable finding that reaches a decision-maker with authority and documented rationale gives the organisation the chance to make a defensible choice. That is what the assessment is for.

Escalating findings is not a failure. It’s the mechanism by which AI governance actually works. 
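
The routing rule is simple enough to state as code. A sketch of the two paths, with illustrative names; the one path it deliberately does not offer is burying the finding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    description: str
    mitigable_in_scope: bool  # fixable within current scope, timeline, and budget?
    proposed_mitigation: Optional[str] = None

def route(finding: Finding) -> str:
    """Fixable findings become tracked action items; unfixable ones escalate."""
    if finding.mitigable_in_scope:
        return f"risk register: track and test mitigation ({finding.proposed_mitigation})"
    return "escalate: decision-maker with authority to change scope, delay, or stop"

print(route(Finding("Bias risk in training data", True, "rebalance dataset and re-test")))
print(route(Finding("Harm not mitigable within current scope", False)))
```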

Right-Sizing for Your Situation

Greenfield — AI Impact Assessment Playbook

For PMs building governance from scratch with limited support. A minimum viable approach that covers the essential impact domains, can be completed without dedicated compliance resources, and produces documentation that will hold up to scrutiny — including what ‘good enough’ looks like when you’re the only one doing this.

Emerging — AI Impact Assessment Playbook

For PMs creating repeatable processes in organisations with growing AI awareness. A structured approach to stakeholder engagement, cross-functional coordination, tiered assessment scope by system risk level, and building artefacts that others can reuse across projects.

Established — AI Impact Assessment Playbook

For PMs integrating AI projects into existing governance frameworks. Guidance on navigating Article 27 notification obligations, aligning with ISO 42001 Clause 6.1.4 and Annex A.5, coordinating with legal and compliance on regulated systems, and managing the assessment across a multi-project portfolio.


 

Framework References

•       NIST AI RMF 1.0 (NIST AI 100-1, 2023) — MAP 5 (impact characterisation), MAP 5.1 (likelihood and magnitude of impacts), MAP 5.2 (ongoing stakeholder feedback). Impact assessment scope, population, and continuous monitoring requirements.

•       EU AI Act (Regulation (EU) 2024/1689) — Article 27 (fundamental rights impact assessment), Article 26 (deployer obligations), Annex III (high-risk system categories). Mandatory FRIA requirements, content, notification obligations, and deployer scope.

•       Canada Directive on Automated Decision-Making — Algorithmic Impact Assessment (2024 edition). 65-question mandatory risk assessment for federal automated decision systems; four impact levels with tiered mitigation requirements; two-stage assessment (design phase and pre-production).

•       ISO/IEC 42001:2023 — Clause 6.1.4 (AI impact assessment requirement), Annex A.5 (controls for assessing impacts of AI systems). Organisational obligation to assess and document AI system impacts as part of AIMS planning.

•       UNESCO Recommendation on the Ethics of AI (2021) — Policy Area 1 (Ethical Impact Assessment), paragraphs 50–52. Member state obligations to introduce impact assessment frameworks; continuous and proportionate monitoring requirements.

 

This article is part of AIPMO’s PM Practice series. See also: The PM’s Guide to NIST AI RMF | ISO 42001 for Project Managers | AI Risk Registers