
AI Risk Classification: A Project Scoping Exercise

The EU AI Act's four-tier risk framework is a project scoping tool, not just a compliance checklist. This article covers all four tiers, the Article 6(3) downward self-classification mechanism, the profiling override that closes the most common gap, and how classification drives planning decisions.

By AIPMO
11 min read

 

PM Takeaways

       Risk classification under the EU AI Act is a scoping tool, not just a compliance exercise. Getting it right at initiation determines stakeholder scope, documentation requirements, testing rigor, timeline, and whether the project is viable at all. A high-risk classification that surfaces mid-execution is far more expensive than one identified during chartering.

       The EU AI Act’s high-risk framework has a frequently missed nuance: Annex III use-case categories are a necessary but not sufficient condition for high-risk classification. Article 6(3) provides four conditions under which an Annex III system is not high-risk — narrow procedural tasks, improvement of a previously completed human activity, pattern detection without replacing human assessment, and preparatory tasks. The exception has a hard floor: any Annex III system that profiles natural persons is always high-risk regardless of Article 6(3).

       Article 6(3) self-classification downward is not free. A provider who determines that an Annex III system is not high-risk must document that assessment before placing the system on the market, register the reasoning in the EU database under Article 49(2), and provide documentation to national competent authorities on request. Market surveillance authorities can challenge the classification under Article 80, and misclassification to circumvent requirements triggers the full penalty regime under Article 99.

       Classification can change. A system that qualifies for the Article 6(3) narrow-procedural exception today may lose that classification if the scope expands — for example, if a preparatory-task tool is subsequently redesigned to influence the outcome of the human assessment it was meant to support, or if a pattern-detection system is configured to automatically act on its findings. Build classification review into your change control process, and treat scope changes in Annex III use-case areas as potential classification-trigger events.

       The EU AI Act’s framework is useful for non-EU projects too. The four-tier structure — prohibited, high-risk, limited risk, minimal risk — gives teams a principled vocabulary for harm-based scoping conversations, even where EU law does not apply directly. The prohibited category in particular maps well onto organizational red lines that most AI governance policies need regardless of jurisdiction.

Every project needs scope definition. For AI projects, that means answering not just ‘what are we building?’ but ‘how risky is what we’re building, and to whom?’ The answer shapes everything from governance overhead to stakeholder engagement to go/no-go decisions.

The EU AI Act’s risk classification framework provides a structured, legally grounded basis for that question. Even for projects outside EU regulatory scope, the framework is worth using: it forces systematic thinking about potential harms to natural persons, it maps those harms to governance obligations with real consequence, and it gives teams a shared vocabulary for scoping conversations that often stall on vague notions of ‘responsible AI.’

This article walks through the classification structure as defined in the Act, covers the Article 6(3) downward self-classification mechanism that the original tier descriptions usually omit, explains the profiling override that closes the most common compliance gap, and maps classification outcomes to project planning decisions. 

The Four-Tier Classification Structure

The EU AI Act classifies AI systems into four risk categories. Each carries a different set of obligations. Classification is determined by the system’s intended purpose and use case — not by its technical architecture, training data, or model type.

Unacceptable Risk — Prohibited (Article 5)

These practices are banned outright. The prohibitions have applied since 2 February 2025. There are no compliance pathways for prohibited systems — they cannot be placed on the market or put into service in the EU under any conditions:

•       Subliminal, manipulative, or deceptive techniques that distort behaviour and impair informed decision-making, causing significant harm

•       Exploitation of vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm

•       Biometric categorisation systems that infer sensitive attributes — race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation — from biometric data (with a narrow exception for lawful labelling or filtering of lawfully acquired datasets)

•       Social scoring: evaluating or classifying individuals based on social behaviour or personal traits in ways that cause detrimental or disproportionate treatment in unrelated contexts

•       Assessing the risk of an individual committing a criminal offence based solely on profiling or personality traits, without objective verifiable facts directly linked to criminal activity

•       Compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage

•       Inferring emotions of individuals in workplaces or educational institutions, except for medical or safety purposes

•       Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to three narrowly defined exceptions: targeted search for missing or trafficked persons; preventing specific, substantial, and imminent threat to life or foreseeable terrorist attack; identifying suspects in serious crimes

PM implication: If your project involves any of these practices, stop. The prohibition is absolute — no risk management system, no conformity assessment, no documentation pathway makes a prohibited system compliant. Redesign the use case or cancel the project.

High Risk (Articles 6, 8–17)

High-risk AI systems are authorised but subject to a mandatory compliance regime before they can be deployed. Two categories qualify as high-risk under Article 6:

Article 6(1) — Safety components in regulated products: AI systems intended as a safety component of, or themselves constituting, a product covered by EU harmonisation legislation listed in Annex I, where that product is required to undergo third-party conformity assessment. This covers AI in medical devices, in vitro diagnostics, civil aviation safety systems, vehicles, agricultural machinery, lifts, toys, and radio equipment. These systems must satisfy both the sector-specific conformity assessment requirements and the AI Act’s requirements under Articles 8–17.

Article 6(2) — Annex III use-case categories: AI systems falling within the eight application areas listed in Annex III. These are: (1) biometric identification and categorisation of natural persons; (2) management and operation of critical infrastructure; (3) education and vocational training; (4) employment, workers management, and access to self-employment; (5) access to essential private and public services and benefits; (6) law enforcement; (7) migration, asylum, and border control; (8) administration of justice and democratic processes.

The compliance obligations for high-risk AI systems under Articles 8–17 include: a risk management system maintained throughout the lifecycle (Article 9); data governance over training, validation, and testing datasets (Article 10); technical documentation maintained for ten years (Article 11); automatic logging and record-keeping (Article 12); transparency and instructions for use (Article 13); five specific human oversight capabilities (Article 14); and accuracy, robustness, and cybersecurity requirements (Article 15). Before deployment, providers must complete conformity assessment and register in the EU database.
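Because each article maps to a concrete project deliverable, the obligations above lend themselves to a simple tracking structure. A minimal Python sketch — the deliverable names and status values are assumptions for illustration, not official terminology:

```python
# Illustrative tracker mapping the Articles 8-17 obligations summarised
# above to project deliverables. Names and statuses are assumptions.

HIGH_RISK_DELIVERABLES = {
    "Article 9":  "risk management system maintained through the lifecycle",
    "Article 10": "data governance for training, validation, and test sets",
    "Article 11": "technical documentation, 10-year retention",
    "Article 12": "automatic logging and record-keeping",
    "Article 13": "transparency and instructions for use",
    "Article 14": "human oversight capabilities",
    "Article 15": "accuracy, robustness, and cybersecurity measures",
}

def open_items(status: dict) -> list:
    """Return the articles whose deliverable is not yet marked 'done'."""
    return [art for art in HIGH_RISK_DELIVERABLES if status.get(art) != "done"]

# Articles still open when only the risk management system is complete:
print(open_items({"Article 9": "done", "Article 10": "in progress"}))
```

A structure like this makes the compliance scope visible in the project plan from day one rather than surfacing it during execution.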

PM implication: High-risk classification means compliance work is a project deliverable, not a governance aspiration. Budget, schedule, and resource plans must account for risk management system development, data governance documentation, technical documentation maintenance, conformity assessment, and post-deployment monitoring. For most Annex III systems, self-assessment under Annex VI is the conformity assessment pathway — no notified body is required. The work is substantial; the administrative process is self-administered.

Limited Risk (Article 50)

AI systems that interact directly with people or generate certain types of content carry transparency obligations, but not the full high-risk compliance regime. The specific obligations under Article 50 are:

•       Chatbots and conversational AI: Deployers must ensure that users are informed they are interacting with an AI system, unless this is obvious from the context.

•       Emotion recognition systems and biometric categorisation systems: Providers and deployers must inform natural persons who are exposed to the system.

•       Deepfakes and AI-generated or manipulated content: Providers must ensure that AI-generated images, video, audio, and text are labelled as artificially generated or manipulated in a machine-readable format, with a visible marking for audio-visual content.

•       AI-generated text on matters of public interest: Must be disclosed as AI-generated, unless the text has undergone substantial human review or editorial control.

PM implication: Limited risk requires disclosure mechanisms built into the user experience, not a compliance program. The design question is: how and when does the system notify users of AI involvement? Build this into UX requirements and test it explicitly.

Minimal Risk

AI systems that do not fall into the above categories. Spam filters, recommendation engines, content moderation tools, most internal business process automation, and video game AI all fall here. The EU AI Act imposes no obligations on minimal-risk systems.

PM implication: No regulatory requirements does not mean no governance. Organisational AI policies, stakeholder expectations, and reputational risk considerations may still apply. Minimal-risk classification is a floor, not a ceiling. 

The Article 6(3) Mechanism: Annex III Does Not Always Mean High Risk

This is the most commonly omitted element in risk classification summaries. Article 6(3) provides that an AI system falling within an Annex III use-case category shall not be considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights — including by not materially influencing the outcome of decision-making. This is not a policy aspiration; it is a statutory carve-out with defined conditions.

The downward classification is available where any of the following conditions is met:

•       (a) Narrow procedural task: The system performs a bounded, well-defined task within a larger human-administered process. Example: an Annex III employment-context tool that extracts structured data from unstructured CVs for display to a human recruiter, without ranking, scoring, or filtering candidates.

•       (b) Improvement of a previously completed human activity: The system augments or improves the output of an assessment that a human has already completed independently. Example: a proofreading or formatting tool applied to a draft assessment already written by a human assessor.

•       (c) Pattern detection without replacing human assessment: The system detects patterns or deviations in prior decision-making data, and is not configured to replace or influence the human assessment without proper human review. Example: an analytics dashboard that flags statistical anomalies in historical hiring decisions for HR review, without making recommendations.

•       (d) Preparatory task: The system performs a task that is preparatory to an assessment under an Annex III use case. Example: a document classification tool that organises files before a human caseworker reviews them for a benefits eligibility determination.
There is a hard override that closes the most common gap: regardless of whether any Article 6(3) condition is met, an Annex III system is always high-risk if it performs profiling of natural persons. The Act defines profiling as automated processing of personal data to evaluate aspects of a natural person’s life — work performance, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements. Any Annex III system that scores, ranks, or categorises individuals against these dimensions is high-risk, period.
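The interaction between the four conditions and the profiling override can be captured in a small decision function. A minimal Python sketch — the condition names and function are illustrative, not an official implementation of the Act:

```python
# Sketch of the Article 6(3) decision logic with the profiling override.
# Condition names are illustrative shorthand for the four carve-outs.

ARTICLE_6_3_CONDITIONS = {
    "narrow_procedural_task",             # 6(3)(a)
    "improves_completed_human_activity",  # 6(3)(b)
    "pattern_detection_only",             # 6(3)(c)
    "preparatory_task",                   # 6(3)(d)
}

def annex_iii_high_risk(performs_profiling: bool, conditions_met: set) -> bool:
    """Return True if an Annex III system remains high-risk."""
    if performs_profiling:
        # Profiling of natural persons overrides every 6(3) condition.
        return True
    # Any one qualifying condition supports downward self-classification.
    return not (conditions_met & ARTICLE_6_3_CONDITIONS)

# CV data-extraction tool, no ranking or scoring: not high-risk
print(annex_iii_high_risk(False, {"narrow_procedural_task"}))  # False
# Same tool after adding candidate scoring (profiling): high-risk
print(annex_iii_high_risk(True, {"narrow_procedural_task"}))   # True
```

Note how the profiling check comes first: no combination of 6(3) conditions can rescue a system that profiles natural persons.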

The administrative consequence of invoking Article 6(3) is also frequently overlooked. A provider who determines that an Annex III system is not high-risk must: document the assessment before placing the system on the market; register the documentation in the EU database under Article 49(2); and provide documentation to national competent authorities on request. Market surveillance authorities can challenge the classification under Article 80. If a system is found to have been misclassified to circumvent the compliance requirements, the provider faces fines under Article 99.

The practical implication for PMs: Article 6(3) is a documented, challengeable position, not a self-issued exemption. If you intend to rely on it, the classification reasoning must be written down before deployment, kept updated as the system evolves, and defensible on its merits. 

Classification Drives Project Decisions

Risk classification is most useful when it happens at initiation, because the classification outcome determines the shape of the project — not just its governance overhead. The following maps classification to planning decisions:

•       Stakeholder scope — Minimal risk: IT and business owner. Limited risk: UX, legal, communications. High risk: legal, compliance, affected user groups, data governance, potentially external stakeholders and regulators. Prohibited: stop.

•       Documentation — Minimal risk: standard project documentation. Limited risk: transparency disclosure records. High risk: technical documentation demonstrating Articles 8–17 compliance, data governance records, risk management logs, 10-year retention.

•       Testing — Minimal risk: functional QA. Limited risk: disclosure mechanism testing. High risk: accuracy, bias, robustness, adversarial testing, validation against intended purpose across subgroups, documented TEVV program.

•       Timeline and budget — High-risk compliance work (risk management system, data governance, technical documentation, conformity assessment, EU database registration) adds material time and cost. Identify this at chartering, not during execution.

•       Go/no-go — Classification is a viability test. If the project falls in the prohibited tier, stop. If it falls in the high-risk tier and the organisation cannot meet the compliance requirements, change the use case, scope down to a non-high-risk configuration, or do not proceed.

•       Change control — Scope changes in Annex III use-case areas, changes that add profiling functionality, and changes that extend the system's decision influence are classification-trigger events. Route them through classification review, not just standard change control.
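The change-control triggers just described can be encoded as a simple intake check so that qualifying change requests are routed automatically. A hedged sketch with hypothetical field names:

```python
# Hypothetical helper for routing change requests to classification
# review. Trigger names mirror the classification-trigger events above
# and are illustrative, not drawn from any standard schema.

CLASSIFICATION_TRIGGERS = (
    "expands_annex_iii_scope",
    "adds_profiling",
    "extends_decision_influence",
)

def needs_classification_review(change_request: dict) -> bool:
    """True when a change must go through classification review,
    not just standard change control."""
    return any(change_request.get(t, False) for t in CLASSIFICATION_TRIGGERS)

print(needs_classification_review({"adds_profiling": True}))  # True
print(needs_classification_review({"ui_copy_update": True}))  # False
```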

 

Classification Checklist: Working Through It at Initiation

Apply these questions during project initiation or as part of an AI Impact Assessment. Work through them in order — each positive answer determines the outcome.

1. Does the system use subliminal manipulation, social scoring, banned biometrics, untargeted facial scraping, workplace/school emotion inference, or real-time public biometric identification for law enforcement? If yes: Prohibited. Stop — no compliance pathway exists.

2. Is the system a safety component of, or itself a product governed by, EU product harmonisation legislation in Annex I, and does that product require third-party conformity assessment? If yes: High risk under Article 6(1). Articles 8–17 obligations apply plus sector conformity assessment.

3. Does the system fall within any of the eight Annex III use-case categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)? If yes: Potentially high risk. Proceed to Questions 4 and 5.

4. Does the system perform profiling of natural persons (automated processing to evaluate work performance, economic situation, health, preferences, behaviour, location)? If yes: High risk regardless of anything else. Article 6(3) does not apply. Articles 8–17 obligations apply.

5. Does the system perform only a narrow procedural task, improve a previously completed human activity, detect patterns without replacing human assessment, or perform only a preparatory task? If yes: Potentially not high risk under Article 6(3). Document the reasoning, register in the EU database under Article 49(2), and review whenever scope changes.

6. Does the system interact directly with users, generate synthetic content, or infer emotions? If yes: Limited risk under Article 50. Transparency and disclosure obligations apply.

7. None of the above? Minimal risk. No EU AI Act obligations, but organisational governance may still apply.

A few practical notes on applying the checklist. First, classification is based on intended purpose — the use case the system is designed and deployed for, not the use case it theoretically could be used for. A general-purpose large language model is not inherently high-risk; a deployment of that model for automated employment screening is. Second, context matters: the same underlying model deployed in two different organisational contexts may have two different classifications. Third, if classification is genuinely borderline, the conservative choice is high-risk, because the documentation burden of a wrongly downward-classified system is far greater than the compliance burden of a correctly classified one. 
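The ordered checklist can be expressed as a single decision function for initiation-stage scoping. A Python sketch under the assumption of simple yes/no inputs — the field names are hypothetical, and this is a scoping aid, not legal advice:

```python
# The initiation checklist as an ordered decision function. Evaluation
# order matters: prohibited practices first, then Article 6(1), then
# Annex III with the profiling override, then Article 50, else minimal.

def classify(system: dict) -> str:
    if system.get("prohibited_practice"):
        return "prohibited"                          # Q1: stop
    if system.get("annex_i_safety_component"):
        return "high-risk (Article 6(1))"            # Q2
    if system.get("annex_iii_use_case"):
        if system.get("performs_profiling"):
            return "high-risk (profiling override)"  # Q4 beats Q5
        if system.get("article_6_3_condition_met"):
            # Q5: document, register under Article 49(2), review on change
            return "not high-risk (Article 6(3))"
        return "high-risk (Article 6(2))"            # Q3, no carve-out
    if system.get("interacts_with_users") or system.get("generates_synthetic_content"):
        return "limited risk (Article 50)"           # Q6
    return "minimal risk"                            # Q7

print(classify({"annex_iii_use_case": True, "performs_profiling": True}))
# high-risk (profiling override)
```

The structure makes the two key orderings explicit: prohibited practices short-circuit everything, and the profiling check sits above the Article 6(3) check, so a profiling system can never reach the carve-out branch.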

Right-Sizing for Your Situation

The formality of classification work should match your context. A team doing a first-ever AI project in an Annex III area needs a guided framework; an organisation running a portfolio of AI projects needs a classification process integrated into standard intake and approval workflows.

Greenfield — AI Risk Classification Playbook

For PMs running their first AI project or working without a formal AI governance process. A practical classification checklist, worked examples across the Annex III categories, guidance on when Article 6(3) applies and when it doesn’t, and a summary of what high-risk classification actually requires you to do.

Emerging — AI Risk Classification Playbook

For PMs building a repeatable classification process across multiple projects. How to create a classification decision record template, route classification through change control, and maintain classification defensibility over a system’s lifecycle.

Established — AI Risk Classification Playbook

For PMs in organisations with existing AI governance functions. How to integrate EU AI Act classification into existing intake and approval workflows, coordinate with legal and compliance on Article 6(3) assessments, and connect classification outcomes to the broader AI Impact Assessment process.


 

Framework References

•       EU AI Act (Regulation (EU) 2024/1689) — Article 5 (prohibited practices), Article 6 (classification rules: 6(1) Annex I safety components; 6(2) Annex III use cases; 6(3) downward self-classification conditions; profiling override), Article 7 (Commission power to amend Annex III), Articles 8–17 (high-risk system requirements), Article 49(2) (registration obligation for self-classified non-high-risk systems), Article 50 (limited-risk transparency obligations), Article 80 (market surveillance challenge to non-high-risk classification), Article 99 (penalties for misclassification). Core framework for all four tiers and the Article 6(3) mechanism.

•       EU AI Act Annex III — Eight high-risk use-case categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.

•       NIST AI RMF 1.0 (NIST AI 100-1, 2023) — GOVERN 1.1, MAP 1.1, MAP 1.5, MAP 2.1. Organisational risk tolerance, context establishment, intended use specification, and go/no-go framing as a complementary governance layer for projects regardless of EU jurisdictional scope.

 

This article is part of AIPMO’s Frameworks series. See also: AI Impact Assessments | EU AI Act: Implementation Timeline | The PM’s Guide to NIST AI RMF | ISO 42001 for Project Managers