
Stakeholder Engagement for AI Projects: Beyond the Usual Suspects

AI systems affect people who never appear on a traditional stakeholder register — and under the EU AI Act, informing them and giving them a mechanism to contest decisions is a legal obligation. Here's how to build a stakeholder approach that goes beyond the usual suspects.

By AIPMO
14 min read

 

PM Takeaways

       Your stakeholder register is incomplete if it only includes people involved in the project — NIST AI RMF explicitly distinguishes end users (who interact with the system) from affected individuals and communities (who experience impacts without ever touching it). Both categories require active identification and planned engagement.

       EU AI Act Article 27 requires deployers of high-risk AI systems to perform a Fundamental Rights Impact Assessment identifying the specific categories of natural persons and groups likely to be affected before deployment — this is a structured stakeholder identification requirement, not optional due diligence.

       Participatory engagement is not a one-time exercise — NIST AI RMF MAP 5.1 and 5.2 require that practices and personnel for regular engagement with relevant AI actors and integration of feedback are in place and documented across the entire lifecycle, not only at the design phase.

       The positionality of your team is a governance input, not a DEI exercise — NIST AI RMF explicitly identifies team composition as a risk factor because builders’ blind spots directly shape what gets tested, what gets surfaced as a risk, and who gets heard in design decisions.

       EU AI Act Article 86 gives affected persons the right to obtain a clear and meaningful explanation of any AI-based decision that produces legal effects or significantly affects their health, safety, or fundamental rights — designing the explanation mechanism is a project deliverable, not a post-deployment afterthought.

Every PM knows stakeholder management. Identify stakeholders, assess their influence and interest, develop engagement strategies, communicate throughout the project. The basics apply to AI projects too — but the stakeholder landscape is fundamentally different.

AI systems affect people who never appear on a traditional stakeholder register. The customer service rep whose calls are now monitored by AI. The loan applicant who never meets a human decision-maker. The job seeker filtered out by an algorithm. These aren’t users, sponsors, or team members — but your system’s decisions may shape their lives. And under the EU AI Act, informing them of that fact, and giving them a mechanism to contest it, is a legal obligation, not a courtesy. 

The Expanded Stakeholder Universe

Traditional stakeholder analysis focuses on people involved in the project. AI projects require thinking about people affected by the system. These are categorically different groups, and conflating them is one of the most common governance failures in AI project management.

Traditional Project Stakeholders

These still matter and your existing engagement approach applies:

•       Sponsor: Provides funding and authority; accountable for strategic direction

•       Business owner: Accountable for business outcomes and operational impact

•       Project team: Builds the system; carries the most direct influence over design decisions

•       End users: Operate the deployed system; primary source of usability feedback

•       IT/Operations: Maintains and supports the system; critical input on integration and monitoring feasibility

•       Legal/Compliance: Advises on regulatory requirements; must be engaged early, not only at gate reviews

AI-Specific Stakeholders

These must be added to your register. NIST AI RMF explicitly distinguishes between end users and affected individuals and communities, and treats both as requiring active identification and engagement throughout the AI lifecycle.

•       Affected individuals: People subject to AI decisions who may never interact with the system directly — loan applicants, job seekers, benefits recipients. EU AI Act Article 86 gives them a right to explanation of decisions that significantly affect them.

•       Affected communities: Groups that may experience differential impacts — often visible only through disaggregated performance analysis, not individual feedback. UNESCO’s AI Ethics Recommendation requires measures to allow meaningful participation by marginalised groups.

•       Data subjects: People whose data trains or feeds the system. They may have consented to data collection for a different purpose than the AI use you are now implementing — a consent and provenance question, not only a privacy one.

•       Domain experts: Provide context that your technical team cannot generate internally. NIST AI RMF MAP 1.6 notes that targeted consultation with subject matter experts can help identify potential negative impacts not previously considered.

•       AI ethics/governance: Internal review and oversight function — if your organisation has one. If it doesn’t, this is a gap to flag to your sponsor, not a gap to work around.

•       Regulators: External oversight, sector-specific compliance obligations, and in some cases mandatory pre-deployment notification — EU AI Act Article 27 requires deployers to notify market surveillance authorities of fundamental rights impact assessment results.

•       Downstream deployers: Organisations that integrate your AI into their own systems inherit outputs you produced. Their use cases may create impacts you did not anticipate when you built the system.
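The affected-communities entry above notes that differential impact is often visible only through disaggregated performance analysis, not individual feedback. A minimal sketch of what that analysis looks like in practice, using hypothetical field names (`group`, `approved`) and made-up numbers:

```python
from collections import defaultdict

def disaggregated_rates(decisions, group_key="group", outcome_key="approved"):
    """Per-group favourable-outcome rates from a list of decision records.

    Surfaces differential impact that aggregate metrics hide: a system can
    look healthy overall while one group sees markedly worse outcomes.
    """
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for record in decisions:
        totals[record[group_key]] += 1
        favourable[record[group_key]] += bool(record[outcome_key])
    return {g: favourable[g] / totals[g] for g in totals}

# Hypothetical decision log: 60% approval overall looks unremarkable,
# but disaggregation shows group B fares far worse than group A.
log = (
    [{"group": "A", "approved": True}] * 5
    + [{"group": "A", "approved": False}]
    + [{"group": "B", "approved": True}]
    + [{"group": "B", "approved": False}] * 3
)
rates = disaggregated_rates(log)   # {"A": ~0.83, "B": 0.25}
```

No individual complaint surfaces this pattern; only the grouped view does, which is why affected communities require planned identification rather than reactive feedback.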

 

Identifying Affected Parties

The hardest stakeholders to identify are often the most important: people who will be affected by your system but have no direct relationship with your project. They won’t show up in your project management tool, and they won’t attend your steering committee. You have to find them.

Questions to Surface Affected Parties

Work through these questions systematically at project initiation, and revisit them as scope evolves.

•       Decision scope: What decisions will this system make or influence? Who is subject to those decisions? Who benefits from favourable decisions, and who is harmed by unfavourable ones?

•       Data sources: Whose data trains the system? Whose data feeds it in operation? Did those people consent to this use of their data, or was it collected for a different purpose?

•       Differential impact: Could certain groups experience systematically different outcomes? Who is underrepresented in training data? Who might be disadvantaged by design assumptions built into the system?

•       Downstream effects: Who else uses the system’s outputs? What decisions do they make based on those outputs? Who is affected by those downstream decisions, and does your project have visibility of that chain?
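The question set above can be carried as a living checklist rather than a one-off workshop artefact. A hedged sketch that treats an unanswered dimension as an initiation blocker; the dimension names mirror the list above, and the sample answers are illustrative:

```python
# Questions to surface affected parties, keyed by dimension. Wording is
# condensed from the checklist above; the structure is an illustration,
# not a prescribed template.
AFFECTED_PARTY_QUESTIONS = {
    "decision_scope": "What decisions will the system make or influence, and who is subject to them?",
    "data_sources": "Whose data trains and feeds the system, and did they consent to this use?",
    "differential_impact": "Could certain groups experience systematically different outcomes?",
    "downstream_effects": "Who uses the system's outputs, and who is affected downstream?",
}

def unanswered(answers):
    """Dimensions with no documented answer: blockers for initiation sign-off."""
    return sorted(d for d in AFFECTED_PARTY_QUESTIONS if not answers.get(d, "").strip())

answers = {
    "decision_scope": "Screens loan applications; all applicants are subject to the decision.",
    "data_sources": "Historical application data; consent scope under review.",
}
gaps = unanswered(answers)
# gaps == ['differential_impact', 'downstream_effects']
```

Revisiting the checklist as scope evolves then amounts to re-running it against the updated answers.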

Worked Example: AI Hiring Tool

The same system has multiple, distinct affected party categories — each requiring a different engagement approach.

•       End users: Recruiters using the tool day-to-day — traditional stakeholders, engaged through user research and training

•       Affected individuals: All job applicants screened by the system, including those who never know they were filtered out

•       Affected communities: Groups underrepresented in training data; people with non-traditional career paths; candidates from geographies or backgrounds the training data didn’t adequately cover

•       Data subjects: Employees whose historical performance data trained the model — they did not consent to being training data for a hiring algorithm

•       Domain experts: Industrial-organisational psychologists; employment lawyers; labour relations specialists

•       Regulators: Equal employment opportunity bodies; state civil rights agencies; data protection authorities

EU AI Act Article 27 frames this as a structured requirement: prior to deploying a high-risk AI system, deployers must perform a Fundamental Rights Impact Assessment that describes the categories of natural persons and groups likely to be affected, and the specific risks of harm likely to have an impact on their fundamental rights. The categories above are exactly what that assessment must produce. 
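One way to keep these categories from collapsing back into a conventional register is to make the NIST category an explicit, validated field on each register entry. A sketch under assumed names; nothing here is a prescribed schema:

```python
from dataclasses import dataclass

# NIST AI RMF distinguishes end users from affected individuals and
# communities; carrying the category as a validated field keeps affected
# parties from being silently dropped. All names here are illustrative.
CATEGORIES = {
    "end_user", "affected_individual", "affected_community",
    "data_subject", "domain_expert", "regulator", "downstream_deployer",
}

@dataclass
class StakeholderEntry:
    name: str
    category: str            # one of CATEGORIES
    engagement_level: str    # inform / consult / involve / collaborate / empower
    engagement_method: str

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown stakeholder category: {self.category}")

def missing_categories(register):
    """Categories with no planned engagement at all. Each gap should be an
    explicit decision, not an accidental omission."""
    return sorted(CATEGORIES - {entry.category for entry in register})

register = [
    StakeholderEntry("Recruiters", "end_user", "involve", "user research"),
    StakeholderEntry("Job applicants", "affected_individual", "inform",
                     "Article 86 explanation mechanism"),
]
gaps = missing_categories(register)   # data subjects, regulators, etc. still unaddressed
```

A register built this way makes the Article 27 question ("which categories of persons are likely to be affected?") answerable directly from project records.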

Engagement Methods

Different stakeholders require different engagement approaches, different levels of participation, and different timing within the project lifecycle. The AIGP Body of Knowledge recommends a structured process that includes evaluating salience, determining the appropriate level of engagement, and selecting engagement methods accordingly.

Engagement Levels

Not all stakeholders require the same depth of engagement. The appropriate level depends on their interest in the system’s outcomes, the degree to which they are affected, and the organisation’s ability to meaningfully incorporate their input.

•       Inform: One-way communication about the project and system. Use for low-influence stakeholders and for general awareness of a system’s existence and purpose.

•       Consult: Gather input, feedback, or concerns; you retain decision authority. Use for subject matter experts and affected communities providing input on design choices.

•       Involve: Active participation in specific decisions; input shapes outcomes. Use for key stakeholders with legitimate interests in how the system is designed.

•       Collaborate: Partnership in design and implementation; shared decision-making. Use for high-influence, high-interest stakeholders and governance bodies.

•       Empower: Delegate decision authority to the stakeholder. Use for ethics review boards and formal oversight bodies with sanctioned authority.
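Level selection is ultimately a judgment call, but making the default heuristic explicit keeps it reviewable. A hedged sketch: the rough 0–2 scales and thresholds below are illustrative assumptions, not a formula from any framework:

```python
def suggest_level(influence, affectedness, has_formal_authority=False):
    """First-pass engagement level from rough 0-2 (low/medium/high) scores.

    A starting point for the register, to be overridden by judgment; the
    thresholds are illustrative, not prescribed by NIST or the AI Act.
    """
    if has_formal_authority:                    # ethics boards, sanctioned oversight
        return "empower"
    if influence >= 2 and affectedness >= 2:    # shared decision-making warranted
        return "collaborate"
    if affectedness >= 2:                       # heavily affected: input must shape outcomes
        return "involve"
    if influence >= 1 or affectedness >= 1:     # gather input; retain decision authority
        return "consult"
    return "inform"                             # one-way awareness communication

applicants = suggest_level(influence=0, affectedness=2)          # "involve"
ethics_board = suggest_level(0, 0, has_formal_authority=True)    # "empower"
```

The value of writing the heuristic down is that its outputs can be challenged; a level assigned tacitly cannot.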

NIST AI RMF warns against treating participation as a perfunctory exercise — “participation washing” where communities are consulted but their input doesn’t demonstrably shape outcomes. Organisational transparency about the purpose and goal of the engagement, and what decisions it will and will not influence, is a prerequisite for engagement that has governance value.

Methods by Stakeholder Type

•       Internal teams and sponsors: Standard project communication — status reports, steering committee, design reviews, and formal decision gates. No change from conventional PM practice.

•       End users: User research and usability testing during design; training and change management pre-deployment; structured feedback channels post-deployment. Feedback must be integrated into system updates, not only acknowledged.

•       Affected individuals and communities: Focus groups with representative populations; public comment periods for high-impact systems; community advisory boards with ongoing engagement (not one-off consultation); user studies conducted with informed consent and appropriate compensation per NIST AI 600-1.

•       Domain experts: Consultation during design to identify impacts not visible to the technical team; review of system outputs and edge cases during testing; participation in TEVV activities where their domain knowledge is material to interpreting results.

•       Regulators: Proactive engagement where mandatory (EU AI Act Article 27 notification); documentation and transparency practices that satisfy disclosure obligations; structured incident reporting on the cadence required by applicable regulation.

 

The Positionality Exercise

NIST AI RMF recommends examining your own team’s perspective and potential blind spots as a formal part of stakeholder engagement. This is not a DEI exercise bolted onto governance — it is a risk identification exercise. The people who build a system determine what gets tested, whose use cases are treated as the default, and which edge cases are visible enough to be addressed.

A team that is demographically similar, professionally homogeneous, or geographically concentrated will systematically miss certain failure modes — not through negligence, but because those failure modes are invisible from their vantage point. NIST AI 600-1 explicitly requires demographically and interdisciplinarily diverse AI red teams because team composition directly affects what gets found.

•       Who is on our team, and what perspectives do we bring? Surfaces the default assumptions that are likely to be embedded invisibly in design decisions — about who the user is, what a normal input looks like, what a good outcome means.

•       Whose perspectives are missing? Surfaces the populations whose use cases are most likely to produce edge cases, failure modes, or disparate outcomes that the current team won’t anticipate.

•       What assumptions are we making about users and affected parties? Surfaces unstated design decisions that should be explicit and reviewed — assumptions about literacy, language, internet access, document types, workflow patterns.

•       How might our backgrounds shape the system’s design? Surfaces embedded values that the system will operationalise, often without anyone explicitly deciding that it should.

•       Are we representative of the populations affected by this system? Surfaces whether the team has standing to make design decisions on behalf of the affected population, or whether external input is necessary to fill the gap.

The point is not to achieve perfect representativeness — it is to recognise blind spots systematically and actively seek perspectives that are absent. Where gaps are identified, they become a project input: who else needs to be consulted, and at what stage? 

Feedback Mechanisms

Stakeholder engagement does not end at deployment. You need ongoing mechanisms for people to provide feedback, report problems, and contest decisions. NIST AI RMF MEASURE 3.3 requires that feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics — meaning feedback channels are a measurement input, not a customer service function.

Structured Feedback Approaches

NIST AI 600-1 identifies four approaches that serve different purposes and deployment stages.

•       Participatory engagement: Focus groups, user studies, surveys — used in early design stages to gather input on intended use, potential impacts, and design choices. Best carried out by personnel with expertise in qualitative methods who can translate community feedback for technical audiences.

•       Field testing: Structured evaluation of how people interact with the system in realistic conditions, with real users rather than internal testers. Surfaces issues that controlled testing misses. Requires informed consent and human subjects protections where applicable.

•       Red-teaming: Structured adversarial exercises to identify potential harms. NIST AI 600-1 requires demographically diverse red teams because team composition determines what gets found. Results require analysis before incorporation into governance decisions.

•       Production feedback channels: Ongoing mechanisms for users and affected parties to report problems after deployment. Must be accessible, offer anonymity options where retaliation is a risk, include a defined response process, and close the loop with those who raised issues.

Feedback Channel Design Requirements

A feedback channel that people cannot access, or fear using, or that produces no visible response, has no governance value. Design choices matter.

•       Accessibility: Can affected parties actually reach the channel? Is it available in relevant languages, accessible to people with disabilities, reachable without requiring system account creation?

•       Anonymity options: Will people fear retaliation for reporting problems? Employees reporting AI-related concerns about workplace systems face particular risk. Anonymity mechanisms must be genuine, not nominal.

•       Response process: How will feedback be triaged, assessed, and addressed? Who owns the response? What is the SLA? A feedback channel with no documented response process is a liability, not a governance mechanism.

•       Closure loop: How will you communicate back to those who raised issues? Affected parties who report a problem and hear nothing will not report again — and silence signals the channel is performative.

•       Integration with system updates: How does feedback inform model updates, policy changes, or operational adjustments? MEASURE 3.3 requires feedback to be integrated into evaluation metrics, not siloed in a support queue.
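These design elements become auditable when the channel is modelled as data: every report carries an SLA clock, an owner, and a closure flag, so "no documented response process" and "no closure loop" turn into checkable conditions. A sketch; the field names and the 10-day SLA are assumptions for illustration only:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class FeedbackReport:
    channel: str                        # e.g. "web_form", "hotline" (illustrative)
    summary: str
    anonymous: bool
    received_at: datetime
    owner: Optional[str] = None         # response process: triage assigns an owner
    resolved_at: Optional[datetime] = None
    reporter_notified: bool = False     # closure loop back to the reporter

def sla_breaches(reports, now, sla=timedelta(days=10)):
    """Unresolved reports past the response SLA: a governance signal,
    not a support-queue statistic."""
    return [r for r in reports if r.resolved_at is None and now - r.received_at > sla]

def open_closure_loops(reports):
    """Resolved reports where the reporter was never told the outcome."""
    return [r for r in reports if r.resolved_at is not None and not r.reporter_notified]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
reports = [
    FeedbackReport("web_form", "wrongly rejected application", False,
                   received_at=now - timedelta(days=14)),
    FeedbackReport("hotline", "possible biased outcome", False,
                   received_at=now - timedelta(days=2),
                   owner="ops", resolved_at=now - timedelta(days=1)),
]
breached = sla_breaches(reports, now)    # the 14-day-old unresolved report
unclosed = open_closure_loops(reports)   # resolved, but reporter never notified
```

Counts of breached SLAs and open closure loops are exactly the kind of quantity MEASURE 3.3 expects to flow into system evaluation metrics.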

 

Communication Planning

AI projects require communication plans that address transparency and explainability obligations — not just project status. Regulatory frameworks define what you must communicate, to whom, and when. These are project deliverables, not optional transparency gestures.

Required Communications by Framework

•       Notification that AI is involved in a decision (EU AI Act Recital 93): deployers of high-risk AI systems “should, when they make decisions or assist in making decisions related to natural persons, inform the natural persons that they are subject to the use of the high-risk AI system.” This includes the intended purpose and type of decisions.

•       Right to explanation (EU AI Act Article 86): affected persons have the right to obtain a clear and meaningful explanation of the role of the AI system in any decision that produces legal effects or significantly affects their health, safety, or fundamental rights. The deployer must provide this on request.

•       Worker notification (EU AI Act Recital 92): employers have an information obligation to workers or their representatives regarding planned deployment of high-risk AI systems at the workplace. This is in addition to existing national labour law obligations.

•       Regulator notification (EU AI Act Article 27): deployers who perform a Fundamental Rights Impact Assessment must notify the results to the relevant market surveillance authority.

•       Incident communication (NIST AI RMF MANAGE 4.1; EU AI Act post-market monitoring obligations): how will you communicate when things go wrong, to users, affected parties, and regulators, and on what timeline?

Tailoring Communication by Audience

The same information must be communicated differently to different audiences. What an executive needs is not what an affected individual needs, and conflating the two satisfies neither.

•       Executives: Risk profile, compliance status, business impact, residual risks accepted — sufficient to support informed governance decisions without requiring technical depth.

•       Technical teams: System specifications, performance metrics against defined thresholds, known limitations, monitoring requirements — sufficient to build, maintain, and improve the system responsibly.

•       End users: How to use the system effectively, how to override or escalate, when the system’s output should not be trusted without verification — practical operational guidance, not a manual.

•       Affected parties: What the system does, how it affects them specifically, what the grounds were for a decision about them, and how to contest that decision — EU AI Act Article 86 sets the standard: clear and meaningful, not technical.

•       Regulators: Technical documentation, TEVV results, fundamental rights impact assessment, incident reports, and compliance evidence — in the form and on the timeline that applicable law requires.

 

Continuous Engagement

Stakeholder engagement for AI is not a phase that ends — it is a continuous process. NIST AI RMF is explicit: participatory engagement is not a one-time exercise and is best carried out from the very beginning of AI system commissioning through the end of the lifecycle.

The reasons engagement must be ongoing are structural, not procedural:

•       AI systems can drift and change behaviour over time — communities that were not experiencing adverse outcomes at deployment may experience them after model updates or data distribution shifts

•       New affected parties may emerge as use expands beyond the original deployment context

•       Regulations and societal expectations evolve; an engagement approach that satisfied requirements at launch may not satisfy them two years later

•       Incidents reveal previously unknown impacts — post-incident engagement with affected parties is a governance obligation, not a PR exercise

•       Affected communities may not surface concerns immediately; trust must be built before concerns are shared, which takes time and consistent follow-through

NIST AI RMF MEASURE 4.3 requires that measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, are identified and documented. Consultation that doesn’t feed back into measurement has no governance value.

•       Regular review of feedback channels: Documented assessment of whether feedback is being received, triaged, and addressed — and whether the volume or content signals emerging issues not visible in performance metrics.

•       Periodic consultation with affected communities: Updated understanding of how the system is actually experienced, not how it was designed to be experienced — these diverge over time.

•       Post-incident engagement with impacted parties: Explanation of what happened, what is being done, and how affected parties can seek redress — required by EU AI Act complaint handling obligations.

•       Annual stakeholder reassessment: Identification of new affected parties as scope evolves; reassessment of whether existing engagement methods are still reaching the right people in the right ways.

 

Right-Sizing for Your Situation

The depth of stakeholder engagement should match system risk. A low-risk internal productivity tool requires different engagement than a public-facing system making consequential decisions about individuals’ employment, credit, housing, or healthcare.

Greenfield — Stakeholder Engagement Playbook

For PMs without formal AI stakeholder engagement processes. Simplified stakeholder mapping focused on identifying affected parties beyond the project team, establishing basic feedback channels, and documenting who you consulted and what they said. Covers the EU AI Act Article 27 Fundamental Rights Impact Assessment stakeholder identification requirement for high-risk systems.

Emerging — Stakeholder Engagement Playbook

For PMs building repeatable processes. Full stakeholder analysis framework, engagement planning templates calibrated to stakeholder type and engagement level, feedback mechanism design with integration into MEASURE function metrics, and positionality exercise guide.

Established — Stakeholder Engagement Playbook

For PMs in organisations with formal governance. How to integrate AI stakeholder engagement with existing enterprise stakeholder management frameworks, compliance processes, and community engagement infrastructure — including portfolio-level approaches for organisations running multiple AI systems affecting overlapping communities.


 

Framework References

•       NIST AI RMF 1.0 (NIST AI 100-1, 2023) — MAP 1.6 (targeted consultation with subject matter experts to identify potential negative impacts not previously considered); MAP 5.1 and 5.2 (practices and personnel for regular engagement with relevant AI actors and integration of feedback; documentation required; must be in place across the lifecycle, not only at design); MEASURE 3.3 (feedback processes for end users and impacted communities to report problems and appeal outcomes must be established and integrated into AI system evaluation metrics); MEASURE 4.3 (performance improvements or declines based on consultations with affected communities must be identified and documented); participatory engagement guidance (not a one-time exercise; risk of ‘participation washing’; organisational transparency about purpose required; personnel with expertise in qualitative methods required)

•       NIST AI 600-1: Generative AI Profile (2024) — Participatory engagement methods (focus groups, field testing, structured public feedback); AI red-teaming diversity requirement (demographically and interdisciplinarily diverse teams required; monoculture teams miss context-specific vulnerabilities); informed consent and human subjects requirements for field testing and production feedback collection

•       EU AI Act (Official Journal, 12 July 2024) — Article 27 (Fundamental Rights Impact Assessment: required prior to deployment by specified deployers of high-risk AI; must describe categories of affected persons, specific risks to fundamental rights, human oversight measures, and complaint mechanisms; results must be notified to market surveillance authority); Recital 92 (worker information obligations regarding planned AI deployment at the workplace); Recital 93 (deployers must inform affected natural persons that they are subject to a high-risk AI system, including intended purpose, type of decisions, and right to explanation); Article 86 (right to explanation: affected persons subject to high-risk AI decisions with legal effects or significant impact on health, safety, or fundamental rights have the right to clear and meaningful explanation of the AI system’s role and the main elements of the decision)

•       UNESCO Recommendation on the Ethics of AI (2021) — Article 47 (participation of different stakeholders throughout the AI system lifecycle is necessary for inclusive governance; measures must allow meaningful participation by marginalised groups and communities; governance frameworks should account for shifts in technologies and emergence of new stakeholder groups)

•       AIGP Body of Knowledge (IAPP, 2024) — Structured stakeholder engagement process: evaluating stakeholder salience, determining appropriate level of engagement, establishing engagement methods; positionality exercise as a formal governance input to identify team blind spots and missing perspectives

•       PMI — Guide to Leading and Managing AI Projects (CPMAI, 2025) — Phase I (stakeholder identification including affected parties as a project initiation deliverable); Phase V (stakeholder feedback integration into model evaluation and monitoring)

 

This article is part of AIPMO’s PM Practice series. See also: The AI Project Charter | AI Impact Assessments | Human Oversight in AI Systems