
AI Governance for U.S. Projects: What Actually Applies?

The US has no comprehensive federal AI law. What applies to your project depends on your sector, your use cases, and which states your users are in. Here's how to navigate a landscape built from executive orders, voluntary standards, existing law, and accelerating state regulation.

By AIPMO · 16 min read

 

PM Takeaways

•       The United States has no comprehensive federal AI law. Federal governance operates through executive orders (which primarily apply to federal agencies and contractors, not the general private sector), voluntary frameworks such as the NIST AI RMF, and existing law interpreted and applied by agencies such as the FTC and EEOC. Understanding what ‘the law’ requires for your AI project means starting with what sector you’re in and what laws already apply, not looking for an AI-specific statute to follow.

•       NIST AI RMF 1.0 is voluntary but is the primary reference standard for AI risk governance in the US private sector. Its voluntary status does not mean it is optional in practice: enterprise customers, government contractors, and insurance underwriters increasingly expect documented NIST AI RMF alignment. More importantly, the GOVERN, MAP, MEASURE, and MANAGE functions provide a structured governance approach that reduces legal exposure under existing laws even where no AI-specific compliance requirement exists.

•       The most significant US enforcement precedent on algorithmic discrimination is EEOC v. iTutorGroup (2023), in which the EEOC sued a company whose automated hiring system rejected female applicants aged 55 and older and male applicants aged 60 and older. The case settled for USD 365,000. The principle it establishes is straightforward: the use of an algorithmic system to make or screen employment decisions does not change the employer’s obligations under Title VII, the ADA, or the ADEA. If your AI system makes or influences employment decisions, existing federal civil rights law is already in scope.

•       Colorado’s Artificial Intelligence Act (effective June 30, 2026) is the most comprehensive state AI law yet enacted in the United States. It requires developers and deployers of high-risk AI systems — defined by the consequential decisions they make, including housing, employment, education, healthcare, and credit — to implement risk management programs and conduct impact assessments to prevent algorithmic discrimination. If your system makes consequential decisions affecting Colorado residents, you have compliance obligations regardless of where your organisation is incorporated.

•       State AI legislation is accelerating. In 2024 alone, 700 AI legislative proposals were introduced across US states. The federal government’s attempt to impose a ten-year moratorium on state-level AI legislation enforcement was rejected by the Senate in a 99-1 vote. The practical implication for PMs is that the compliance landscape will continue to grow more complex, and governance frameworks built on voluntary standards today are the infrastructure that makes compliance with mandatory requirements tractable when they arrive.

If you are managing AI projects in the United States, the most common early question is: what law governs this? The answer is structurally different from managing AI projects in the European Union, where a single regulation — the EU AI Act — provides a comprehensive framework with defined risk tiers, mandatory requirements, and an enforcement structure. In the US, there is no equivalent.

What exists instead is a layered combination of executive orders primarily applicable to federal agencies, voluntary standards without legal mandate, sector-specific agency guidance extending existing law to AI contexts, and an accelerating body of state legislation that varies by jurisdiction and use case. Understanding the US AI governance landscape is not about finding the applicable AI statute: it is about understanding how multiple overlapping sources of obligation interact for your specific system, sector, and states of operation.

This article maps that landscape for project managers. It explains what the federal framework actually consists of, what the key state laws require, which existing laws already apply to AI systems, and what practical governance steps make sense in an environment where voluntary standards are the floor, not the ceiling.

 

The Federal Landscape: Voluntary, Sectoral, and Evolving

No Omnibus AI Law

The United States has no single federal statute that comprehensively governs AI development, deployment, or risk management in the private sector. This reflects a deliberate policy approach: the US has traditionally preferred market-driven self-regulation over prescriptive government mandates when addressing emerging technology risks, prioritising competitive innovation over regulatory uniformity.

Federal AI governance comes from three main sources, each operating differently and covering different actors:

•       Executive orders. Presidential directives have addressed AI policy across multiple administrations, but these primarily govern how federal agencies and government contractors develop, acquire, and use AI. They do not directly regulate private sector AI projects unless your work involves federal contracts or government programs.

•       Agency guidance. Federal agencies have issued interpretive guidance clarifying how their existing statutory authorities apply to AI. This is enforcement-oriented: it tells you what the FTC, EEOC, FDA, and SEC already have authority to act on, not what new requirements AI-specific law creates.

•       Voluntary frameworks. NIST AI RMF 1.0 is the primary federal contribution to AI governance for the private sector. It is explicitly voluntary, designed to be rights-preserving, non-sector-specific, and use-case agnostic — but it carries significant practical weight as the reference standard against which responsible AI governance practice is measured.

Executive Orders: Government AI, Not Private Sector Mandates

Three executive orders frame the current federal posture:

•       EO 13960 (2020) — Promoting Trustworthy AI in Federal Government. Established AI principles for federal agency use (accuracy, reliability, explainability, safety, security, accountability) and laid groundwork for subsequent federal AI governance. PM implication: applies directly to federal agency projects; signals the governance principles federal procurement will reflect.

•       EO 14110 (2023) — Safe, Secure, and Trustworthy AI (Biden). A comprehensive order directing federal agencies on AI risk management, civil rights, workforce impacts, and international coordination. Directed NIST standards development, EEOC and DOJ civil rights enforcement guidance, and sector-specific agency action. PM implication: though largely rescinded in 2025 under EO 14179, this order produced agency guidance and enforcement activity that remains in effect — including EEOC guidance on algorithmic discrimination.

•       EO 14179 (2025) — Removing Barriers to American AI Leadership (Trump). Revoked EO 14110, directed development of an AI Action Plan, and signalled a shift toward reducing regulatory friction on AI development. PM implication: federal posture now prioritises innovation over precaution at the policy level, but enforcement activity by agencies under existing law continues regardless of executive policy direction.

The key PM implication across all three orders: unless your project involves federal contracts, federal procurement, or federal agency programs, executive orders do not directly regulate your AI project. They signal the direction of federal policy and influence what government customers will expect — but they are not legal requirements for private sector deployments.

Agency Guidance: Existing Law Already Applies

Federal agencies have been clear: the use of AI does not exempt organisations from legal obligations that apply to their sector. Agency guidance does not create new law; it clarifies how existing law applies in an AI context. The enforcement implications, however, are concrete.

The most instructive case is EEOC v. iTutorGroup, the first landmark algorithmic discrimination enforcement action under existing federal civil rights law. The EEOC sued iTutorGroup after its automated hiring platform was found to automatically reject female applicants aged 55 and older, and male applicants aged 60 and older, in violation of the Age Discrimination in Employment Act. The case settled for USD 365,000 in 2023. The company’s use of an automated system was not a defense — the discrimination embedded in the algorithm’s decision logic was the discrimination under the statute.

This is the enforcement principle that PMs need to internalise: if your AI system performs a function that is already regulated — employment screening, credit decisioning, benefit eligibility, medical diagnosis, financial advice — the regulatory framework governing that function applies to your AI system. The following table summarises the key agency positions across sectors:

•       Consumer protection (FTC Act, Section 5). Deceptive or unfair AI practices — including deceptive AI personas, unfair algorithmic pricing, and false claims about AI capabilities — are enforceable under existing FTC authority. The FTC has stated that AI does not create a legal exemption from consumer protection obligations.

•       Employment (Title VII, ADA, ADEA). Algorithmic discrimination in hiring, promotion, termination, and compensation violates civil rights law regardless of whether the decision is made by a person or a system. EEOC v. iTutorGroup (2023, USD 365,000 settlement) is the primary precedent. The EEOC’s AI and Algorithmic Fairness Initiative addresses bias, adverse impact, and reasonable accommodation obligations for AI-driven employment decisions.

•       Credit and housing (Equal Credit Opportunity Act, Fair Housing Act). Discriminatory lending or housing decisions made by AI systems are subject to the same disparate impact analysis as human decisions. The CFPB has issued guidance on adverse action notice requirements when AI is used in credit decisioning.

•       Healthcare (HIPAA, FDA device regulations). AI systems processing protected health information are covered by HIPAA’s privacy and security rules. AI-based diagnostic or treatment decision support tools that meet the definition of a medical device are regulated under FDA’s Software as a Medical Device framework.

•       Finance and securities (Securities Exchange Act, investment adviser rules). AI-related material risks require disclosure under SEC rules. The SEC’s 2024 guidance addressed conflicts of interest in AI-powered investment recommendations and predictive data analytics tools used by investment advisers and broker-dealers.

•       Civil rights in federal programs (Title VI, Section 504, Executive Order 12250). AI systems used in federal government programs, including benefits administration, law enforcement, and civil enforcement, are subject to federal civil rights law. EO 14110’s civil rights directives to DOJ, HHS, and USDA produced sector-specific compliance guidance that remains operative.

NIST AI RMF: The Practical Standard for Private Sector Governance

The NIST AI Risk Management Framework (NIST AI 100-1, January 2023) is the primary federal contribution to AI governance for the US private sector, and it is voluntary. The NIST AI RMF is explicitly designed to be rights-preserving, non-sector-specific, and use-case agnostic — its goal is to be applicable to organisations of all sizes across all sectors.

The voluntary nature of the NIST AI RMF does not mean it is optional in practice. Several dynamics make NIST AI RMF alignment a de facto requirement for many organisations:

•       Federal procurement. OMB guidance on federal agency AI governance references NIST AI RMF alignment. Federal contractors and government technology suppliers face increasing expectations around documented AI risk management.

•       Enterprise customer requirements. Large enterprise customers, particularly in regulated industries, are beginning to require supplier attestations of AI governance maturity. NIST AI RMF provides the reference structure against which those attestations are evaluated.

•       Legal due diligence. Documented NIST AI RMF implementation creates an evidentiary record of responsible governance practice that is relevant in enforcement actions, litigation, and regulatory examinations under existing law. A company that cannot demonstrate structured AI risk management has fewer defenses when an algorithmic harm is alleged.

•       Insurance and cyber risk. As cyber insurance and technology errors-and-omissions policies begin to address AI-related risks, insurers are looking at governance frameworks as indicators of risk maturity. NIST AI RMF alignment is likely to become a rating factor.

The Framework’s four functions — GOVERN, MAP, MEASURE, and MANAGE — provide the structural backbone for a practical AI risk management program. GOVERN establishes the policies, roles, and accountability structures. MAP identifies the context, affected parties, and risk categories for each AI system. MEASURE assesses risks against defined criteria. MANAGE implements mitigations, monitors in production, and maintains response readiness. These functions apply regardless of whether formal compliance requirements exist. See the PM’s Guide to NIST AI RMF for a full treatment of implementing the framework across the project lifecycle.
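To make the four functions concrete, here is a minimal sketch of how a project team might track per-system governance status as structured data. NIST does not prescribe any schema; every field name and category below is an illustrative assumption.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the NIST AI RMF does not define a data
# schema. All field names and categories here are assumptions.

@dataclass
class RMFSystemRecord:
    system_name: str
    # GOVERN: policies, roles, accountability
    accountable_owner: str
    policies_reviewed: bool
    # MAP: context, affected parties, risk categories
    use_case: str
    affected_parties: list[str] = field(default_factory=list)
    risk_categories: list[str] = field(default_factory=list)
    # MEASURE: risk assessment results against defined criteria
    measurements: dict[str, float] = field(default_factory=dict)
    # MANAGE: mitigations and production monitoring status
    mitigations: list[str] = field(default_factory=list)
    monitoring_in_place: bool = False

record = RMFSystemRecord(
    system_name="resume-screener-v2",
    accountable_owner="VP, People Operations",
    policies_reviewed=True,
    use_case="employment screening",
    affected_parties=["job applicants"],
    risk_categories=["algorithmic discrimination"],
    measurements={"impact_ratio_sex": 0.91},
    mitigations=["human review of automated rejections"],
    monitoring_in_place=True,
)
print(record.risk_categories)
```

The value of even a simple record like this is that each function leaves an inspectable trace, which is precisely the evidentiary posture the rest of this article argues for.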

NIST has also published AI 600-1 (Generative AI Profile, 2024) and NIST AI 100-5 (Agentic AI Standards Plan, 2025) as extensions of the core framework for GenAI-specific and agentic AI risk considerations. For projects involving large language models or autonomous AI agents, these documents extend the MAP function’s risk taxonomy to include GenAI-specific concerns such as hallucination, data privacy in training pipelines, and the governance implications of multi-agent orchestration.

 

State-Level Regulation: The Patchwork Is Accelerating

While federal action remains limited to voluntary frameworks and sector-specific agency guidance, states are moving faster. In 2024, 700 AI legislative proposals were introduced across the US, with 45 states, Puerto Rico, Washington D.C., and the US Virgin Islands introducing AI bills. Thirty-one states, Puerto Rico, and the US Virgin Islands enacted legislation or resolutions. The federal government’s attempt to impose a ten-year moratorium on state-level AI legislation enforcement was introduced as part of the One Big Beautiful Bill Act in 2025 — and was rejected by the Senate on a 99-1 vote. State AI regulation is not being preempted; it is accelerating.

For PMs, this means that the question ‘what law applies?’ must be answered at the state level based on where affected individuals are located, not where your company is incorporated. The following are the most significant state-level requirements currently in effect or taking effect in 2026:

Colorado AI Act (Effective June 30, 2026)

Colorado’s Artificial Intelligence Act, enacted in May 2024, is the most comprehensive state-level AI regulation in the United States. Its core obligation: developers and deployers of high-risk AI systems must implement risk management programs and conduct impact assessments to prevent algorithmic discrimination in consequential decisions.

Key definitional scope:

•       High-risk AI system. Any AI system making or substantially influencing consequential decisions about a Colorado consumer.

•       Consequential decision. A decision that has a material effect on access to or the cost of housing, employment, credit, education, healthcare, insurance, or a legal service or process.

Developer obligations include: providing deployers with documentation on known or foreseeable risks of algorithmic discrimination, training data used, and evaluation metrics. Deployer obligations include: implementing a risk management program, conducting impact assessments before deployment and annually thereafter, providing notice to consumers when a consequential decision is made, and providing a mechanism for consumers to appeal and correct information. The Colorado Attorney General has enforcement authority.

PM implication: If your AI system makes or influences consequential decisions affecting Colorado residents, the Colorado AI Act applies regardless of where you are based. The impact assessment requirement is a project deliverable, not a post-deployment exercise. It must be completed before deployment and updated annually.
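As a sketch of how the assessment cadence might be tracked as a project control, the snippet below flags when an assessment is due. The Act does not prescribe tooling; the field names and the 365-day reading of ‘annually’ are assumptions for illustration.

```python
from datetime import date, timedelta

# Sketch of the "before deployment and annually thereafter" cadence.
# The 365-day interval and function shape are assumptions, not statute.

def impact_assessment_due(last_assessment: date | None, today: date) -> bool:
    if last_assessment is None:
        return True  # pre-deployment assessment not yet on record
    return today - last_assessment >= timedelta(days=365)

print(impact_assessment_due(None, date(2026, 6, 30)))             # True
print(impact_assessment_due(date(2026, 5, 1), date(2026, 9, 1)))  # False
```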

Illinois: AI in Employment Decisions (Effective January 1, 2026)

Illinois House Bill 3773, effective January 1, 2026, amends the Illinois Human Rights Act to address AI use in employment. Key requirements for employers using AI in hiring, promotions, or termination decisions affecting Illinois workers: notice to employees and applicants that AI is being used; prohibition on AI systems that result in discrimination based on protected characteristics; and data collection obligations related to AI demographic impact.

PM implication: If your AI system is used for employment decisions affecting Illinois workers, notice and non-discrimination obligations apply as of January 2026. This is a mandatory requirement, not a best practice.

New York City: Local Law 144 (In Effect)

NYC Local Law 144 requires bias audits for automated employment decision tools used in hiring and promotion decisions affecting New York City workers. Employers and employment agencies using covered tools must: obtain and publish an annual bias audit from an independent auditor; provide notice to candidates and employees that an automated employment decision tool is being used; and disclose the tool’s characteristics and the job qualifications it evaluates.

PM implication: If your AI hiring or promotion tool affects NYC workers, the bias audit and notice obligations are already in effect. The audit must be completed before the tool is deployed for covered decisions, and results must be made publicly available.
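LL144 bias audits turn on selection rates and impact ratios per demographic category: each category’s selection rate divided by the rate of the most-selected category. A minimal sketch of that arithmetic, with invented categories and counts, might look like this:

```python
# Sketch of the impact-ratio arithmetic at the core of LL144-style
# bias audits. Categories and counts below are invented for illustration.

def selection_rates(selected: dict[str, int],
                    applicants: dict[str, int]) -> dict[str, float]:
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

applicants = {"male": 400, "female": 600}
selected = {"male": 80, "female": 90}

rates = selection_rates(selected, applicants)  # male 0.20, female 0.15
print(impact_ratios(rates))                    # {'male': 1.0, 'female': 0.75}
```

An independent auditor’s report covers more than this calculation, but the ratio is the headline number the published audit must disclose.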

California: A Sectoral Mosaic

California has taken a sectoral rather than comprehensive approach, passing multiple laws addressing specific AI risks. The PM’s exposure depends on the use case:

•       AB-2885 (AI Definitions, 2024). Defines ‘artificial intelligence’ in California law as a machine-based system that can, for a given set of objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Provides the definitional basis for subsequent legislation.

•       AB-3030 (Healthcare AI Disclosures, 2024). Requires healthcare providers to disclose to patients when AI is used to generate clinical communications. PM implication: AI tools generating patient-facing communications in healthcare settings require explicit disclosure.

•       SB-926 / SB-981 (Non-consensual Intimate Images, 2024). Criminalise non-consensual AI-generated intimate imagery. PM implication: generative AI tools must prohibit this use in terms of service and implement safeguards.

•       AB-2355 / AB-2839 (Political AI Disclosures, 2024). Require disclosure labels on AI-generated content in political advertisements and on deceptive materials distributed before elections. PM implication: AI tools used in political communications require disclosure mechanisms.

•       AB-2602 / AB-1836 (Digital Replicas, 2024). Require consent for AI-generated replicas of actors and deceased performers for entertainment purposes. PM implication: AI tools that replicate individuals’ likenesses require consent frameworks.

The Federal Preemption Question

A recurring question in US AI policy is whether federal law will eventually preempt state AI regulation, creating a uniform national framework. The 99-1 Senate vote against the AI legislation moratorium in 2025 is the clearest signal that federal preemption is not imminent. States have demonstrated both the political will and the legislative capacity to regulate AI independently, and the Senate’s overwhelming rejection of preemption reinforces that this will continue.

The governance implication: do not build your AI compliance strategy around the expectation that a future federal AI law will simplify the landscape. Plan for a multi-jurisdictional environment and build governance processes that can accommodate new requirements as they emerge at the state level.

 

Practical Governance for the US Context

The absence of comprehensive federal AI requirements does not mean governance is optional. It means governance choices fall more directly on project teams. The following five practices provide a defensible foundation in a patchwork environment:

1. Start with the Laws You Already Face

Before asking ‘what AI law applies,’ ask ‘what laws govern my sector, and do they apply to what my AI system does?’ Employment, healthcare, finance, housing, and consumer-facing applications are all already subject to regulatory frameworks. EEOC v. iTutorGroup established that algorithmic implementation of discriminatory decision logic violates the same laws that would prohibit a human from making the same decision. Your AI system does not operate in a legal vacuum — it inherits the regulatory framework of the function it performs.

2. Map Your Jurisdictions Before You Build

State AI laws apply based on where affected individuals are located, not where your company is headquartered. Before finalising scope, identify which states’ residents will be subject to your system’s consequential decisions. Colorado residents trigger the Colorado AI Act. Illinois workers trigger HB 3773. NYC candidates trigger Local Law 144. A system with national reach may be subject to all three simultaneously. Jurisdiction mapping is a chartering-phase activity, not a legal review step at the end of development.
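One way to operationalise jurisdiction mapping at chartering is a simple lookup from affected jurisdictions and use cases to known obligations. The hypothetical sketch below covers only the three laws discussed in this article, is deliberately incomplete, and is not legal advice.

```python
# Hypothetical lookup from (jurisdiction, use case) to the state
# obligations discussed in this article. Incomplete by design.

OBLIGATIONS = {
    ("CO", "consequential_decision"):
        "Colorado AI Act: risk management program + impact assessment",
    ("IL", "employment_decision"):
        "Illinois HB 3773: notice + non-discrimination",
    ("NYC", "employment_decision"):
        "NYC Local Law 144: independent bias audit + candidate notice",
}

def applicable(jurisdictions: set[str], use_cases: set[str]) -> list[str]:
    return [obligation for (juris, use), obligation in OBLIGATIONS.items()
            if juris in jurisdictions and use in use_cases]

# A nationally deployed hiring tool can trigger all three at once:
for item in applicable({"CO", "IL", "NYC"},
                       {"employment_decision", "consequential_decision"}):
    print(item)
```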

3. Implement NIST AI RMF as Your Governance Backbone

NIST AI RMF provides the structured risk management approach that is most likely to remain relevant as the regulatory landscape continues to evolve. Because it is use-case agnostic and non-sector-specific, it creates a governance infrastructure that can accommodate new mandatory requirements as they arrive — whether from additional state legislation, federal agency enforcement guidance, or eventual federal AI legislation. Organisations that have implemented the NIST AI RMF’s GOVERN, MAP, MEASURE, and MANAGE functions before mandatory requirements arrive are not starting from scratch when they need to comply: they are mapping existing practices to new obligations.

4. Document Your Governance

In the US’s enforcement-oriented approach to AI governance, documentation of risk management decisions creates the evidentiary record that matters when questions arise. This includes: impact assessments documenting affected parties and mitigation decisions; bias testing results and the methodology used; data governance decisions during the training data selection process; human oversight design decisions; and post-deployment monitoring results. Documentation does not prevent enforcement action, but its absence significantly weakens any defense that the organisation acted responsibly. The NIST AI RMF’s transparency and documentation practices operationalise this across the AI lifecycle.
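As an illustration of what ‘documentation’ can mean in practice, a structured decision record might look like the sketch below. The fields mirror the categories listed above but are assumptions, not a prescribed schema from any statute or standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Illustrative decision-log entry; field names are assumptions that
# mirror the documentation categories discussed in the text.

@dataclass
class GovernanceDecision:
    system: str
    decision: str
    rationale: str
    evidence: list[str]
    decided_by: str
    recorded_at: str

entry = GovernanceDecision(
    system="credit-scoring-v3",
    decision="Exclude ZIP code as a model feature",
    rationale="Proxy risk for protected characteristics under ECOA",
    evidence=["bias_test_results_2025Q3.pdf"],
    decided_by="Model Risk Committee",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(entry), indent=2))
```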

5. Build Flexibility for the Landscape That’s Coming

The US AI regulatory landscape will continue to evolve faster than any single project lifecycle. New state laws will create new requirements. Federal agency guidance will clarify existing enforcement positions. Enterprise customers will raise their expectations of supplier AI governance. Build your governance processes with parameterised scope — not customised to the minimum of what’s required today, but structured to accommodate new requirements without a complete rebuild. The Colorado AI Act’s impact assessment framework, Illinois HB 3773’s notice requirements, and NYC Local Law 144’s bias audit obligations are the leading edge of a converging national standard.

 

Right-Sizing for Your Situation

How much governance structure is appropriate depends on your use case risk level, jurisdictions of operation, and organisational maturity. A low-risk internal productivity tool needs a lighter touch than a system making consequential decisions about employment, credit, or healthcare access. But all US AI projects benefit from the five practices above — the value is not bureaucracy, it is building governance that scales as the landscape changes.

Greenfield — U.S. AI Governance Playbook

For PMs without formal requirements or AI governance programs. A practical governance starter covering NIST AI RMF essentials, jurisdiction mapping, existing law coverage, and documentation practices. Designed for teams building their first structured AI governance approach without a compliance mandate.

Emerging — U.S. AI Governance Playbook

For PMs in regulated sectors or systems with multi-state deployment. Covers Colorado AI Act impact assessment structure, Illinois and NYC employment AI obligations, EEOC bias audit guidance, and integrating voluntary NIST AI RMF implementation with existing compliance frameworks. Includes a jurisdiction-by-jurisdiction obligation mapping template.

Established — U.S. AI Governance Playbook

For PMs in organisations with mature compliance functions and enterprise customer governance obligations. How to align AI project governance with enterprise risk management, federal procurement AI requirements, and the legal due diligence documentation that supports enforcement defense. Includes NIST AI RMF to legal obligation crosswalk.

Become a member →

 

Framework References

•       NIST AI RMF 1.0 (NIST AI 100-1, January 2023) — The primary federal voluntary framework for AI risk governance in the US private sector. Designed to be rights-preserving, non-sector-specific, and use-case agnostic. Four core functions: GOVERN (policies, roles, accountability), MAP (context, affected parties, risk categories), MEASURE (risk assessment against defined criteria), and MANAGE (mitigation implementation, production monitoring, incident response). Explicitly voluntary; practical weight derives from federal procurement expectations, enterprise customer requirements, and its role as the evidentiary reference for due diligence in enforcement contexts. The AI RMF Roadmap identifies alignment with international standards as a top NIST priority, including crosswalks to ISO 42001, OECD AI Principles, and EU AI Act requirements.

•       NIST AI 600-1 — Generative AI Profile (2024). Extension of the core AI RMF to address risks specific to generative AI systems including large language models. Expands the MAP function’s risk taxonomy to cover hallucination, data privacy in training pipelines, CBRN information hazards, homogenisation, intellectual property concerns, and human-AI configuration risks specific to GenAI deployments.

•       EEOC v. iTutorGroup (2023, USD 365,000 settlement) — First major EEOC enforcement action establishing that algorithmic discrimination in employment decisions violates Title VII, the ADA, and the ADEA regardless of whether the discriminatory logic is implemented by a human decision-maker or an automated system. The EEOC’s AI and Algorithmic Fairness Initiative, launched under EO 14110, issued guidance on adverse impact analysis for AI hiring tools, reasonable accommodation obligations for AI screening systems, and the equal employment obligations that apply when AI is used for selection, performance evaluation, or compensation decisions.

•       Colorado Artificial Intelligence Act (enacted May 2024, effective June 30, 2026) — Most comprehensive US state AI law. Applies to developers and deployers of high-risk AI systems making or substantially influencing consequential decisions (housing, employment, credit, education, healthcare, insurance, legal services) affecting Colorado consumers. Developer obligations: documentation of known discrimination risks, training data, and evaluation methodology. Deployer obligations: risk management program; impact assessment before deployment and annually thereafter; consumer notice of consequential decisions; consumer appeal and correction mechanism. Enforcement by Colorado Attorney General.

•       Illinois House Bill 3773 (effective January 1, 2026) — Amends the Illinois Human Rights Act to address employer use of AI in employment decisions (hiring, promotions, terminations) affecting Illinois workers. Requires: notice to employees and applicants; prohibition on discriminatory AI systems; data collection on demographic impact of AI employment decisions.

•       NYC Local Law 144 (in effect) — Requires employers and employment agencies using automated employment decision tools for hiring or promotion decisions affecting NYC workers to: obtain an annual independent bias audit before deployment; publish audit results publicly; provide notice to candidates and employees of tool use and characteristics evaluated.

•       Executive Order 14110 (October 2023, largely rescinded 2025 by EO 14179) — Directed EEOC, DOJ, DOL, and sector agencies to issue AI civil rights enforcement guidance, coordinate on algorithmic discrimination, and develop sector-specific compliance requirements. Though rescinded as executive policy, the enforcement guidance and sector agency positions produced under EO 14110 remain operative under the agencies’ independent statutory authorities.

 

This article is part of AIPMO’s Frameworks series. See also: AI Risk Classification | EU AI Act Timeline | The PM’s Guide to NIST AI RMF