
What the EU AI Act Means for Your Project Timeline

The EU AI Act's August 2026 general application date is a statutory deadline, not a target. Here's what high-risk AI systems must actually deliver, how the conformity assessment pathways work, what the transitional provisions mean for systems already in service, and where the penalties sit.

By AIPMO · 15 min read

 

PM Takeaways

       The EU AI Act entered into force on 1 August 2024 with a phased application schedule governed by Article 113. The sequence is: prohibited practices banned from 2 February 2025; GPAI obligations applied from 2 August 2025; the main high-risk Annex III requirements apply from 2 August 2026 (the general application date for the regulation); Annex I product-safety high-risk AI (medical devices, vehicles, machinery) applies from 2 August 2027. These are not aspirational milestones — they are statutory application dates. If your high-risk AI system will be operating after 2 August 2026, compliance is a precondition for legal operation in the EU.

       The conformity assessment pathway for most high-risk AI systems under Annex III (points 2–8) is self-assessment under Annex VI — no notified body involvement required. This is often misunderstood. The high-compliance-cost scenario (third-party notified body assessment under Annex VII) applies to Annex III point 1 systems (biometrics, remote biometric identification) and to Annex I product-safety systems. For employment, education, credit, essential services, and most other Annex III use cases, the provider conducts internal conformity assessment and issues an EU Declaration of Conformity. The work is substantial; the pathway is self-administered.

       Article 111 contains an important transitional provision: high-risk AI systems already placed on the market or put into service before 2 August 2026 are not retroactively required to comply — unless they undergo significant changes in design from that date onwards. This means a system deployed before the deadline operates under a grandfathering provision. However, any significant modification triggers the full compliance obligation. PMs planning deliberate pre-deadline deployment to avoid compliance should be aware that subsequent significant changes to the system will void the exemption.

       The EU AI Act applies extraterritorially. Under Article 2, it covers providers placing AI systems on the EU market regardless of where those providers are established, and systems whose outputs are used in the EU. A company headquartered in the US, UK, or elsewhere that deploys an AI system accessed by EU users, or whose outputs affect EU residents, is within scope. There is no carve-out for non-EU providers. The practical implication is that EU compliance planning is required for any system with EU exposure — it cannot be treated as a market-access concern only for EU-based organisations.

       Non-compliance fines under Article 99 are set at three levels: up to €35 million or 7% of worldwide annual turnover (whichever is higher) for violations of the prohibited practices under Article 5; up to €15 million or 3% for violations of provider, deployer, notified body, and transparency obligations; and up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information to authorities. These are maximum figures; actual fines take into account the nature and gravity of the infringement, the degree of responsibility, and mitigating actions taken. SME and start-up fines are capped at the lower of the percentage or absolute-amount thresholds.

Project managers understand constraints. Budget, scope, resources, time — every project operates within boundaries. For AI projects with EU exposure, the EU AI Act adds a statutory constraint that cannot be negotiated: regulatory deadlines. The Act is not a framework with flexible implementation; it is a directly applicable EU Regulation with fixed application dates, mandatory requirements, and financial penalties for non-compliance.

The EU AI Act was published in the Official Journal on 12 July 2024 (Regulation (EU) 2024/1689) and entered into force on 1 August 2024. It is the world’s first binding horizontal AI regulation — covering AI systems across sectors and use cases, not just specific applications. Understanding the timeline is the minimum required to determine whether your project has EU AI Act obligations, and if so, when those obligations become binding.

This article maps the implementation timeline from first principles, explains what the compliance activities for high-risk AI systems actually involve, clarifies the conformity assessment pathways that are often misunderstood, and identifies the transitional provisions that affect projects already in flight. 

The Statutory Timeline: Article 113

The EU AI Act’s application schedule is set out in Article 113. The regulation entered into force on 1 August 2024. Its general application date is 2 August 2026. Specific provisions apply earlier or later:

•       1 August 2024: Regulation enters into force. The EU AI Office is established to oversee implementation, in particular for GPAI model providers.

•       2 February 2025 (6 months): Chapter I (general provisions) and Chapter II (prohibited AI practices) apply. AI literacy obligations (Article 4) apply. Prohibited systems must have been discontinued by this date.

•       2 August 2025 (12 months): Chapter III Section 4 (notifying authorities and notified bodies), Chapter V (GPAI model obligations), Chapter VII (governance), and Chapter XII (penalties) apply. Member states must have designated national competent authorities. GPAI providers must comply with the obligations in Articles 53–55 (transparency, copyright policy, training data summaries; systemic-risk models face additional safety and security obligations).

•       2 August 2026 (24 months): General application date. High-risk AI systems under Annex III must comply with the full Article 8–17 requirements. Transparency obligations for limited-risk AI systems under Article 50 apply. Registration in the EU database (Article 49) is required.

•       2 August 2027 (36 months): High-risk AI systems covered by EU product safety legislation under Annex I (medical devices, vehicles, machinery, lifts, toys, etc.) must comply. These systems are already subject to sector-specific conformity assessment, and the AI Act requirements are integrated into those existing procedures.

One additional transitional provision, under Article 111(3), affects GPAI providers specifically: providers of GPAI models that were already placed on the market before 2 August 2025 have until 2 August 2027 to achieve compliance — an extended period acknowledging the practical difficulty of retrofitting compliance obligations onto models already in deployment.
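For date-gated planning checks, the Article 113 schedule reduces to a simple lookup. The sketch below is a planning aid only: the category labels and function name are ours, not the Regulation's, and it encodes the headline dates rather than the transitional provisions.

```python
from datetime import date

# Headline Article 113 application dates (category labels are ours).
APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),   # Chapter II bans
    "gpai_obligations":     date(2025, 8, 2),   # Articles 53-55
    "annex_iii_high_risk":  date(2026, 8, 2),   # general application date
    "annex_i_high_risk":    date(2027, 8, 2),   # product-safety AI
}

def obligations_in_force(on: date) -> list[str]:
    """Return the obligation categories that are binding on a given date."""
    return [label for label, applies in APPLICATION_DATES.items() if on >= applies]

print(obligations_in_force(date(2026, 9, 1)))
# -> ['prohibited_practices', 'gpai_obligations', 'annex_iii_high_risk']
```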

What ‘High-Risk’ Means: Annex I vs. Annex III

The EU AI Act uses two annexes to define high-risk AI systems, and the distinction between them matters for compliance planning because the conformity assessment pathways differ.

Annex I — Product-safety AI (August 2027 deadline): AI systems embedded as safety components in products governed by existing EU product harmonisation legislation. This includes medical devices, in vitro diagnostics, aviation, vehicles, agricultural and forestry machinery, marine equipment, railway interoperability systems, lifts, and toys. These systems were already subject to sector-specific conformity assessment processes. The EU AI Act integrates its requirements into those existing processes, and the combined assessment typically requires a notified body.

Annex III — High-impact use case AI (August 2026 deadline): AI systems meeting the risk threshold in eight application areas: biometric identification and categorisation; management of critical infrastructure; educational and vocational training; employment, workers management, and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum, and border control; administration of justice and democratic processes.

Not every AI system in these categories is automatically high-risk. Article 6(3) provides that an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights of natural persons — including by not materially influencing the outcome of decision-making. Providers can self-classify downwards and register the reasoning in the EU database; the classification is subject to challenge by market surveillance authorities. 
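At chartering, the classification decision can be sketched as code. Everything below is illustrative: the area labels paraphrase the eight Annex III categories, and the poses_significant_risk flag stands in for the substantive Article 6(3) analysis, which is a legal judgement rather than a boolean.

```python
# Annex III high-risk areas (labels are our paraphrases of the eight categories).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_democratic_processes",
}

def classify_annex_iii(area: str, poses_significant_risk: bool) -> str:
    """Simplified Annex III / Article 6(3) classification sketch (not legal advice)."""
    if area not in ANNEX_III_AREAS:
        return "outside Annex III (check Annex I product-safety scope separately)"
    if not poses_significant_risk:
        # Article 6(3) derogation: document the reasoning and register it;
        # market surveillance authorities can challenge the classification.
        return "self-classified as not high-risk (document and register reasoning)"
    return "high-risk (Articles 8-17 apply from 2 August 2026)"
```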

The Compliance Pathway: What High-Risk AI Systems Must Do

For Annex III high-risk AI systems with an August 2026 application date, the compliance obligations under Articles 8–17 are substantial but well-defined. The following are required before the system is placed on the EU market or put into service:

Article 9: Risk Management System

A documented risk management system that operates as a continuous iterative process throughout the AI system’s entire lifecycle. Not a one-time risk assessment — a system. Required components include: identification and analysis of known and reasonably foreseeable risks associated with the intended purpose and under conditions of reasonably foreseeable misuse; estimation and evaluation of those risks; adoption of appropriate risk management measures; testing to ensure the system performs as intended and meets the requirements of the Regulation. Article 9(2)(b) explicitly requires consideration of risks that may emerge from foreseeable misuse, not just intended use. This is a design requirement, not a documentation exercise.

Article 10: Data Governance

Training, validation, and testing data must be subject to data governance practices addressing: the design choices underlying data collection; the origin and collection processes; preprocessing operations (annotation, labelling, cleaning, enrichment); assumptions made about what the data represents; assessment of availability and suitability; examination for biases likely to affect health, safety, or fundamental rights; and identification of data gaps. Article 10(3) requires that data be relevant, sufficiently representative, and as free as possible from errors in view of the intended purpose.

Article 11 and Annex IV: Technical Documentation

Technical documentation must be drawn up before the system is placed on the market or put into service and must be kept up to date throughout the system’s lifecycle. It must contain all information necessary to assess compliance, including: a general description of the system and its intended purpose; a detailed description of system elements and development process; information on monitoring, functioning, and control; a description of the risk management system; information about post-market monitoring plans; and performance metrics disaggregated by relevant subgroups. Technical documentation must be maintained for ten years after the system is placed on the market (Article 18).

Article 12: Record-Keeping and Logging

High-risk AI systems must be designed and developed to automatically generate logs to the extent technically feasible, so that risk situations can be identified and post-market monitoring supported. For remote biometric identification systems (Annex III point 1(a)), Article 12(3) sets a minimum: recording of each use of the system (start and end date/time), the reference database against which input data was checked, the input data that led to a match, and the identity of the natural persons involved in verifying the results. Deployers must retain logs for a minimum of six months.
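In practice, the logging requirement becomes a log schema fixed at design time. The dataclass below is one possible shape, not a prescribed format: the field names are ours, and the fields mirror the Article 12(3) minimum for biometric systems, which is a sensible baseline for other high-risk systems as well.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UseLogRecord:
    """One possible Article 12 log record shape (field names are ours)."""
    use_started_at: datetime        # start date/time of this use of the system
    use_ended_at: datetime          # end date/time of this use
    reference_database: str | None  # database input data was checked against, if any
    input_data_ref: str             # pointer to the input data behind the result
    verified_by: str | None         # natural person who verified the result, where relevant
```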

Articles 13 and 14: Transparency and Human Oversight

High-risk AI systems must be accompanied by instructions for use in appropriate format, covering: the system’s intended purpose; level of accuracy and performance metrics; known and foreseeable circumstances that may lead to risks; technical capabilities for output explanation; and human oversight measures required. Human oversight must be designed into the system so that natural persons assigned to oversight can understand the system’s capacities and limitations, remain aware of automation bias, interpret outputs correctly, override or disregard outputs, and halt the system. These are architecture requirements to be resolved at design, not training topics to be addressed at deployment.

Article 15: Accuracy, Robustness, and Cybersecurity

High-risk AI systems must achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. They must be resilient to errors, faults, and inconsistencies arising from interaction with people or other systems. For systems that continue to learn after deployment, measures must prevent learning processes from giving rise to outputs that contradict the system’s initial performance or the requirements of the Regulation.

Article 43: Conformity Assessment

This is the step most frequently misunderstood in compliance planning. The Act defines two conformity assessment procedures — internal control (Annex VI) and notified-body assessment (Annex VII) — and most Annex III systems do not require a notified body:

•       Annex III points 2–8 (employment, education, essential services, law enforcement, migration, justice, etc.): Providers follow the internal control procedure in Annex VI. No notified body involvement. The provider conducts the assessment, verifies compliance with all Article 8–17 requirements, draws up technical documentation, and issues an EU Declaration of Conformity (Article 47). The CE marking is affixed (Article 48).

•       Annex III point 1 (biometric identification and categorisation, including remote biometric identification): Providers must use the procedure in Annex VII involving a notified body, unless they have fully applied harmonised standards, in which case they may opt for the Annex VI internal control procedure. For law enforcement, immigration, and asylum biometric applications, the relevant national market surveillance authority acts as the notified body.

•       Annex I product-safety systems: Providers follow the conformity assessment procedure specified in the relevant sector legislation, with AI Act requirements integrated into that process. These typically involve notified bodies.

For the vast majority of Annex III high-risk AI systems — the employment, education, credit, healthcare triage, and essential services use cases that most PMs encounter — conformity assessment is a self-administered process. It is work-intensive because it requires demonstrating compliance with Articles 9–15, but it does not require engaging a third-party notified body and does not depend on the availability of approved notified bodies for scheduling purposes.
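For planning purposes, the pathway choice reduces to a small amount of branching. The sketch below compresses Article 43; the names are ours, and it omits nuances such as the exact harmonised-standards conditions.

```python
def conformity_pathway(annex_i: bool, annex_iii_point: int | None,
                       harmonised_standards_applied: bool = False) -> str:
    """Planning-level sketch of the Article 43 pathway decision (not legal advice)."""
    if annex_i:
        return "sector-legislation procedure, typically involving a notified body"
    if annex_iii_point == 1 and not harmonised_standards_applied:
        return "Annex VII: notified-body assessment"
    if annex_iii_point is not None:
        return ("Annex VI: internal control (self-assessment), then "
                "EU Declaration of Conformity and CE marking")
    return "not high-risk: no conformity assessment required"
```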

Article 49: EU Database Registration

Before placing a high-risk Annex III system on the market, providers must register it in the EU database maintained by the Commission. The registration must include: the provider’s identity and contact details; the AI system’s name, type, and intended purpose; the member states in which the system is placed on the market or put into service; and a copy of the EU Declaration of Conformity. Public authority deployers must also register use of third-party high-risk AI systems, including a summary of their Fundamental Rights Impact Assessment findings.

Article 72: Post-Market Monitoring

Compliance does not end at deployment. Providers must establish a post-market monitoring system that actively and systematically collects, documents, and analyses relevant data on system performance throughout the system’s lifetime. The monitoring plan forms part of the technical documentation. Article 73 requires reporting of serious incidents — incidents resulting in death or serious harm to health, serious and irreversible disruption to critical infrastructure, or infringements of fundamental rights — to market surveillance authorities immediately once a causal link is established, and in any event within 15 days of awareness (shortened to 10 days where the incident involves a death, and to two days for widespread incidents or critical-infrastructure disruption). 
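For an incident-response runbook, the deadlines can be captured in a minimal lookup. The labels below are our simplifications of the legal definitions, and these are outer limits: reporting is expected immediately once a causal link is established.

```python
# Outer reporting limits in days after awareness (Article 73; labels are ours).
INCIDENT_REPORTING_DEADLINE_DAYS = {
    "serious_incident": 15,               # Article 73(2) general ceiling
    "death": 10,                          # Article 73(4)
    "widespread_or_critical_infra": 2,    # Article 73(3)
}
```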

Transitional Provisions: Systems Already in Service

Article 111 contains a critical transitional provision that affects planning for systems currently deployed or under development. High-risk AI systems already placed on the market or put into service before 2 August 2026 are not required to comply with the Act’s high-risk requirements — unless, from that date, those systems undergo significant changes in their design.

The implication is a grandfathering mechanism for existing deployments: if your high-risk system is operating before the deadline and does not undergo significant design changes, it continues to operate under the pre-Act legal framework. However, any significant change — new intended use, significant modification to the algorithm or training data, material change to the human oversight design — triggers the full compliance obligation from the date of that change.

PMs evaluating whether to accelerate deployment ahead of the August 2026 deadline to secure the transitional exemption should be clear-eyed about what that strategy implies: it means committing not to make significant design changes, which constrains the system’s future development roadmap for as long as you want to rely on the exemption. For systems with planned iterative development, the exemption may be less durable than it appears.
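The date logic of the exemption is mechanical even though "significant change" is not. A sketch, with that legal judgement abstracted into a date parameter:

```python
from datetime import date

GENERAL_APPLICATION = date(2026, 8, 2)  # Article 113 general application date

def must_comply(placed_on_market: date,
                significant_change: date | None = None) -> bool:
    """Article 111(2) transitional sketch; whether a change is 'significant'
    is a legal judgement that this function takes as given."""
    if placed_on_market >= GENERAL_APPLICATION:
        return True   # no grandfathering: comply before placing on the market
    if significant_change is not None and significant_change >= GENERAL_APPLICATION:
        return True   # a significant design change voids the exemption
    return False      # grandfathered while the design stays materially unchanged
```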

There is one additional transitional provision specifically for public authorities: under Article 111(2), providers and deployers of high-risk systems intended to be used by public authorities must comply by 2 August 2030, regardless of when the system was placed on the market — a further extended timeline reflecting the procurement and implementation cycles of government technology. 

Extraterritorial Scope: The EU AI Act Is Not Just for EU Organisations

Article 2 defines the regulation’s scope of application. It covers:

•       Providers placing AI systems on the EU market or putting them into service in the EU, regardless of where those providers are established.

•       Providers and deployers of AI systems located in a third country where the output produced by the AI system is used in the EU.

•       Importers and distributors of AI systems.

•       Product manufacturers placing an AI system on the market or into service with their own product under their name or trademark.

The practical implication is that EU AI Act obligations apply to any organisation whose AI system is accessed by EU users or whose outputs affect EU residents — regardless of whether that organisation is based in the EU. A US or UK company deploying a high-risk AI system to EU customers faces the same obligations as an EU-based provider. There is no territorial safe harbour for non-EU providers.
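The scope test is deliberately insensitive to where the provider is established. A minimal sketch (parameter names are ours) makes the point explicit:

```python
def eu_ai_act_in_scope(placed_on_eu_market: bool, output_used_in_eu: bool,
                       provider_location: str = "anywhere") -> bool:
    """Article 2 scope sketch: provider establishment never enters the decision."""
    _ = provider_location  # deliberately unused: there is no non-EU carve-out
    return placed_on_eu_market or output_used_in_eu
```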

Non-EU providers without an EU establishment are required to designate an authorised representative in the EU (Article 22) before placing a high-risk AI system on the EU market. The authorised representative carries the regulatory interface role: accepting responsibilities on behalf of the provider, providing the compliance documentation to national competent authorities on request, and cooperating with market surveillance. 

GPAI Models: Obligations Already in Effect

If your project deploys a general-purpose AI model — a large language model, a multimodal foundation model, or another model trained on broad data and capable of a wide range of tasks — the upstream provider of that model has been subject to GPAI obligations since 2 August 2025.

Under Article 53, GPAI model providers must: draw up and maintain technical documentation; draw up and provide documentation to downstream providers; put in place a policy to comply with EU copyright law; and publish a sufficiently detailed summary of the content used for training. The two documentation obligations are waived for models released under free and open-source licences (unless the model poses systemic risk); the copyright policy and training-content summary apply to all providers, open-source included. For GPAI models with systemic risk — those trained with compute above 10^25 FLOPs, or those designated as such by the Commission — additional obligations under Article 55 apply: model evaluation including adversarial testing; assessment and mitigation of systemic risks; serious-incident tracking and reporting; and adequate cybersecurity protection.

The GPAI Code of Practice, developed by the AI Office with input from over 1,000 industry, civil society, and academic representatives, was finalised in July 2025. Providers who sign the Code and adhere to it benefit from a presumption of compliance with Articles 53 and 55 requirements. As of late 2025, major AI providers including several large US-based foundation model companies had signed the Code.

PM implication: If your project uses a commercially available GPAI model via API, your vendor’s compliance posture affects your own compliance position. If you build a high-risk AI system on top of the upstream provider’s model, Article 25 places you, as the downstream provider, under the full high-risk obligations: you can rely on the documentation the GPAI provider makes available, but responsibility for the overall system’s compliance stays with you. Request GPAI technical documentation from your providers and build provider compliance verification into your vendor management process. 

Penalties: What Non-Compliance Actually Costs

Article 99 sets out three fine levels, with member states responsible for implementing penalty regimes within these ceilings:

•       Prohibited AI practices (Article 5), including social scoring, banned biometric applications, manipulation, and exploitation of vulnerabilities: up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher.

•       Violations of provider obligations (Article 16), deployer obligations (Article 26), notified body requirements, and transparency obligations (Article 50): up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher.

•       Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request: up to €7,500,000 or 1% of total worldwide annual turnover, whichever is higher.

For SMEs, including start-ups, fines are capped at the lower of the percentage or absolute-amount thresholds — a deliberate concession in Article 99(6) to avoid disproportionate penalties on smaller operators.
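The ceiling arithmetic is simple enough to encode. A sketch (function name ours) that computes the ceiling only, not an actual fine:

```python
def fine_ceiling_eur(worldwide_turnover_eur: float, absolute_cap_eur: float,
                     pct: float, is_sme: bool = False) -> float:
    """Article 99 ceiling: higher of the two amounts for most operators,
    lower of the two for SMEs and start-ups (Article 99(6))."""
    pct_amount = worldwide_turnover_eur * pct
    if is_sme:
        return min(absolute_cap_eur, pct_amount)
    return max(absolute_cap_eur, pct_amount)

# Example: Article 5 violation, 200m EUR turnover.
print(fine_ceiling_eur(200e6, 35e6, 0.07))               # 35000000.0
print(fine_ceiling_eur(200e6, 35e6, 0.07, is_sme=True))  # 14000000.0
```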

These are maximum figures. Actual enforcement will vary by member state and the circumstances of the infringement. National competent authorities are required to consider mitigating factors including: the gravity and duration of the infringement; the degree of responsibility; mitigating actions taken; good-faith compliance efforts; and whether the infringer cooperated with the investigation. A provider that can demonstrate a structured compliance program and documented risk management will be in a materially different position than one with no governance framework. 

Building the Timeline Into Your Project

Compliance is not a phase at the end of your project — it is work distributed across the entire lifecycle. The following maps compliance activities to project phases:

•       Initiation and chartering: Determine whether the system is high-risk under Annex I or Annex III. If borderline, document the classification reasoning (needed for Article 6(3) self-classification or as a defence against challenge). Assess extraterritorial scope: are EU users or EU-resident affected parties in scope? Identify applicable obligations and deadlines. Plan the conformity assessment pathway (Annex VI self-assessment vs. Annex VII notified body). Confirm whether a FRIA (Article 27) is required.

•       Design: Design human oversight capabilities to meet Article 14(4)(a)–(e); these are architecture decisions that cannot be retrofitted. Design logging and record-keeping to meet Article 12. Begin technical documentation (Article 11/Annex IV); it is a living document, not a pre-deployment deliverable. Establish the data governance framework for Article 10 compliance.

•       Development and testing: Execute the Article 9 risk management program, including bias examination and mitigation. Document data provenance, preprocessing decisions, and representativeness assessments (Article 10). Conduct TEVV (testing, evaluation, verification, and validation) to demonstrate Article 15 accuracy, robustness, and cybersecurity. Keep the technical documentation up to date.

•       Pre-deployment (conformity assessment gate): Complete the Annex VI internal conformity assessment (or Annex VII notified body assessment if applicable). Draw up the EU Declaration of Conformity (Article 47). Affix the CE marking (Article 48). Register in the EU database (Article 49). For public authority deployers, complete the FRIA (Article 27) and notify the market surveillance authority. Designate an EU authorised representative if the provider is not EU-established (Article 22).

•       Post-deployment (ongoing): Execute the post-market monitoring plan (Article 72). Maintain logs per Article 12; deployers retain logs for a minimum of six months. Report serious incidents to market surveillance authorities within the Article 73 deadlines (15 days generally; 10 days for a death; two days for widespread incidents or critical-infrastructure disruption). Update technical documentation as the system evolves. Reassess compliance if significant design changes are made.

 

Right-Sizing for Your Situation

The formality of compliance planning should match the risk level of your system and the proximity of the August 2026 deadline. A high-risk Annex III system deploying to EU customers in 2026 requires a compliance program that is resourced, scheduled, and tracked as project work. A minimal-risk internal tool with no EU exposure requires none of this. The risk classification decision, made at chartering, determines the compliance scope.

Greenfield — EU AI Act Timeline Playbook

For PMs without formal compliance processes or EU AI Act experience. A practical classification checklist: does your system fall under Annex III? What are the application dates that matter? What does a self-assessment under Annex VI actually involve? A readable guide to understanding whether and when the Act applies to your project, with a simplified compliance activity list for Annex III point 2–8 systems.

Emerging — EU AI Act Timeline Playbook

For PMs building repeatable compliance processes or managing first EU AI Act-scoped projects. How to integrate regulatory milestones into project planning. Includes: a risk classification framework with worked examples across Annex III categories; an Article 9 risk management system design template; an Article 10 data governance checklist; a technical documentation outline (Annex IV); and a conformity assessment timeline with buffer planning guidance.

Established — EU AI Act Timeline Playbook

For PMs in organisations with formal compliance functions and existing legal/compliance team engagement on EU AI Act. Covers: coordinating compliance activities across the project lifecycle with legal and compliance; GPAI provider due diligence and downstream compliance documentation; FRIA design and notification process; post-market monitoring plan structure; incident reporting workflow; and harmonised standard adoption strategy (ISO 42001 crosswalk).


 

Framework References

•       EU AI Act (Regulation (EU) 2024/1689) — Articles 2, 4, 5, 6(3), 8–17, 18, 22, 25, 43, 47, 49, 72, 73, 99, 113; Annexes I, IV, VI, VII. Territorial scope, prohibited practices, high-risk requirements, conformity assessment pathways, post-market monitoring, penalties, and implementation timeline.

•       EU AI Act Annex III — Eight high-risk use-case categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.

•       EU AI Act Articles 53–55 — GPAI model obligations: transparency, copyright compliance, training data summaries; additional safety and security obligations for systemic-risk models above 10^25 FLOPs.

•       GPAI Code of Practice (EU AI Office, July 2025) — Voluntary compliance tool for Articles 53 and 55; adherence creates a presumption of conformity for GPAI model providers.

This article is part of AIPMO’s Frameworks series. See also: AI Impact Assessments | AI Risk Classification | The PM’s Guide to NIST AI RMF