
EU AI Act Compliance Requirements for US Companies: A 2026 Practical Guide

The European Union’s AI Act is the world’s first comprehensive AI regulation—and it does not stop at Europe’s borders. If your company builds, deploys, or distributes AI systems that reach EU users, you are subject to its requirements regardless of where you are headquartered. With prohibited practices already in force since February 2025 and the critical high-risk system deadline approaching on August 2, 2026, this guide explains what US companies need to know and do, from risk classification to enforcement timelines to a concrete compliance roadmap.

1. What Is the EU AI Act and Why Should US Companies Care?

The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, establishing a harmonized legal framework for artificial intelligence across all 27 EU member states. It is the first binding, horizontal AI regulation enacted by any major jurisdiction, and its influence is already shaping regulatory conversations in the US, UK, Canada, and beyond.

For US companies, the critical point is extraterritorial reach. The AI Act applies to any provider that places an AI system on the EU market or puts it into service within the EU—regardless of where the provider is established. It also applies to deployers (users of AI systems) located in the EU, and to providers and deployers outside the EU whose system’s output is “used in the Union.” In practice, this means any US company whose AI-powered product or service is accessible to EU customers falls within scope.

The penalties are substantial and structured in three tiers under Article 99. Violations of prohibited practices carry fines of up to €35 million or 7% of global annual turnover, whichever is higher. Other regulatory violations can result in fines of up to €15 million or 3% of global turnover. Supplying misleading information to authorities carries fines of up to €7.5 million or 1% of global turnover. SMEs benefit from a cap set at the lower of the percentage or fixed amount. For context, a US technology company with $10 billion in annual revenue could face a theoretical maximum penalty of $700 million for the most serious violations.
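
To make the tier structure concrete, here is a minimal Python sketch of the Article 99 ceilings described above. The figures mirror the text; the function is illustrative, not a legal calculator.

```python
# Illustrative sketch of the Article 99 penalty ceilings described above.
# Figures follow the three tiers in the text; this is not legal advice.

def max_fine_eur(turnover_eur: float, tier: str, is_sme: bool = False) -> float:
    """Return the theoretical maximum fine for a given violation tier.

    tier: 'prohibited' (Art. 5 violations), 'other' (most other obligations),
          or 'misleading_info' (incorrect information supplied to authorities).
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    percentage_based = turnover_eur * pct
    # Standard rule: whichever is higher. SMEs are capped at whichever is lower.
    return min(fixed, percentage_based) if is_sme else max(fixed, percentage_based)

# Example from the text: ~$10B annual turnover, worst tier -> ~$700M ceiling.
print(f"{max_fine_eur(10_000_000_000, 'prohibited'):,.0f}")  # 700,000,000
```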

The regulation establishes the European AI Office within the European Commission, now staffed with over 125 personnel across six operational units. The AI Office holds exclusive authority over general-purpose AI providers and will gain the power to impose fines directly starting August 2, 2026. US companies should expect enforcement to be active and well-resourced, modeled on the GDPR enforcement infrastructure that has already resulted in billions of euros in fines against American technology firms. Notably, the AI Act references the GDPR over 30 times, and approximately 90% of high-risk AI deployments are expected to trigger dual compliance obligations under both regulations.

2. Risk Classification Framework

The EU AI Act organizes AI systems into four risk tiers. The tier determines the compliance obligations that apply to your system. Correctly classifying each AI system in your portfolio is the essential first step in any compliance program.

| Risk Tier | Regulatory Treatment | Examples |
| --- | --- | --- |
| Unacceptable Risk | Banned outright. Article 5 defines eight prohibited practices, in force since February 2, 2025 | Subliminal manipulation; exploitation of vulnerable groups; social scoring; criminal risk prediction based solely on profiling; untargeted facial image scraping; emotion recognition in workplaces and schools; biometric categorization by sensitive attributes (race, religion, sexual orientation); real-time remote biometric identification for law enforcement (with narrow exceptions) |
| High Risk | Permitted, but subject to strict compliance requirements, including conformity assessment, registration, and ongoing monitoring | AI in recruitment and HR decisions, credit scoring, insurance underwriting, medical devices, critical infrastructure management, law enforcement, immigration and border control |
| Limited Risk | Subject to transparency obligations; users must be informed they are interacting with AI | Chatbots and conversational AI, emotion recognition systems (outside prohibited contexts), deepfake generators, AI-generated content |
| Minimal Risk | No mandatory requirements; voluntary codes of conduct encouraged | Spam filters, AI-powered video games, inventory management systems, basic recommendation engines |

One subtlety that catches many US companies off guard: the high-risk category is defined not only by Annex III of the regulation (which lists specific use cases) but also by whether the AI system is a safety component of a product covered by existing EU harmonization legislation, such as medical devices, machinery, or automotive regulations. A US company selling an AI-enabled industrial sensor to a European manufacturer may find itself subject to high-risk obligations even if the AI component seems straightforward. Additionally, Article 10(5) provides a narrow exception allowing the processing of sensitive personal data (such as race or ethnicity) specifically for bias detection and correction purposes in high-risk AI systems—a significant intersection with GDPR obligations that requires careful legal navigation.
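
A rough first-pass screen can encode these two routes into the high-risk tier: an Annex III use case, or a safety component of a product covered by EU harmonization legislation. The sketch below is a simplification; the area labels are paraphrases of the Annex III headings, and it ignores the Article 6(3) derogation for systems that do not pose a significant risk, so treat it as a triage aid, not a classification.

```python
# Hedged sketch of the two routes into the high-risk tier described above.
# Area labels paraphrase Annex III headings; the Article 6(3) derogation
# is deliberately not modeled. Real classification requires legal analysis.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services_and_credit", "law_enforcement",
    "migration_and_border_control", "justice_and_democratic_processes",
}

def is_high_risk(use_case_area: str | None, is_safety_component: bool) -> bool:
    """Rough first-pass screen; not a substitute for counsel."""
    return is_safety_component or (use_case_area in ANNEX_III_AREAS)

print(is_high_risk("employment", False))  # True: Annex III use case
print(is_high_risk(None, True))           # True: safety-component route
```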

3. Timeline: When Do Requirements Take Effect?

The EU AI Act uses a phased enforcement schedule, giving companies time to achieve compliance in stages. However, several deadlines have already passed, and the most impactful obligations take effect in 2026.

| Date | Milestone | What It Means for US Companies |
| --- | --- | --- |
| February 2, 2025 | Prohibited practices ban and AI literacy requirements in effect | All eight prohibited AI practices under Article 5 must cease immediately; organizations must ensure staff have sufficient AI literacy |
| August 2, 2025 | GPAI model obligations and penalty regime in effect | Foundation model providers must comply with transparency, documentation, and copyright rules; the three-tier penalty framework becomes enforceable; systemic-risk providers face additional requirements |
| August 2, 2026 | High-risk AI system obligations apply in full; AI Office gains fining power | Full compliance required for all high-risk systems listed in Annex III, including conformity assessments, registration in the EU database, and post-market monitoring |
| August 2, 2027 | Legacy GPAI models must reach full compliance | General-purpose AI models placed on the market before August 2025 must meet all GPAI obligations; AI components in products regulated under EU product safety legislation must satisfy all requirements |
| August 2, 2030 | Legacy public authority AI systems must comply | AI systems deployed by or on behalf of public authorities before August 2026 must achieve full compliance, which matters for US companies selling to European government agencies |

For US companies reading this in 2026, the most urgent deadline is August 2, 2026, when the full suite of high-risk AI obligations comes into force and the AI Office gains direct fining authority. Companies that have not yet begun their compliance programs have approximately five months to conduct an AI inventory, classify their systems, and implement the required technical and organizational measures.

4. Key Compliance Requirements for High-Risk AI Systems

High-risk AI systems face the most demanding compliance requirements. US companies deploying AI in areas such as human resources, credit decisioning, or healthcare must build and document systems that satisfy all of the following obligations. Standards development is ongoing: CEN-CENELEC is drafting seven harmonized standards, with the first expected to be published in 2026. While ISO/IEC 42001 (AI Management Systems) has been published, it does not cover all AI Act requirements, so companies cannot rely on it as a sole compliance path.

Risk Management System

Providers must establish and maintain a continuous, iterative risk management system throughout the AI system’s entire lifecycle. This includes identifying and analyzing known and reasonably foreseeable risks, estimating residual risks after mitigation, and testing against those risks. The risk management system must be documented and updated as the system evolves.
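
As one way to operationalize this, a lifecycle risk register can track each identified risk, its mitigation, and the residual risk remaining after mitigation, revised as the system evolves. The sketch below assumes a simple 1-to-5 scoring scale and illustrative field names; the Act prescribes the process, not this schema.

```python
# Hedged sketch of a lifecycle risk register matching the obligations above:
# each entry records an identified risk, its mitigation, and the residual
# risk after mitigation. The 1-5 scoring scale is an assumed convention.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    severity: int            # 1 (low) .. 5 (high), illustrative scale
    likelihood: int          # 1 .. 5
    mitigation: str
    residual_severity: int   # re-scored after the mitigation is applied
    residual_likelihood: int

    @property
    def residual_score(self) -> int:
        return self.residual_severity * self.residual_likelihood

register = [
    RiskEntry("R-001", "Score drift on out-of-distribution applicants",
              severity=4, likelihood=3,
              mitigation="Quarterly re-validation on fresh labeled data",
              residual_severity=4, residual_likelihood=1),
]
# Surface entries whose residual risk still warrants escalation.
needs_review = [r for r in register if r.residual_score >= 8]
print([r.risk_id for r in needs_review])
```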

Data Governance

Training, validation, and testing datasets must meet quality criteria addressing relevance, representativeness, accuracy, and completeness. Providers must examine datasets for potential biases and implement appropriate measures to detect, prevent, and mitigate them. Article 10(5) grants a narrow exception allowing the processing of sensitive personal data specifically for bias detection—a critical provision given that approximately 90% of high-risk deployments will also fall under GDPR. Data governance practices must be documented, including descriptions of data sources, collection methodologies, and any preprocessing or labeling procedures.
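
For the bias-examination duty, one common first check is to compare positive-label rates across demographic groups (a disparate impact style ratio). The sketch below uses made-up hiring data, and the 0.8 cutoff echoes the US "four-fifths" convention rather than anything in the AI Act; real bias auditing needs domain-appropriate metrics and, where sensitive attributes are processed, the Article 10(5) safeguards.

```python
# Illustrative dataset bias screen: compare positive-label rates across
# groups. The data, group labels, and 0.8 cutoff are all assumptions for
# the example; the cutoff is a US "four-fifths" heuristic, not an AI Act
# threshold.
from collections import defaultdict

def label_rate_by_group(records: list[dict], group_key: str, label_key: str) -> dict:
    counts, positives = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r[group_key]] += 1
        positives[r[group_key]] += int(bool(r[label_key]))
    return {g: positives[g] / counts[g] for g in counts}

data = [
    {"group": "A", "hired": 1}, {"group": "A", "hired": 1},
    {"group": "A", "hired": 0}, {"group": "B", "hired": 1},
    {"group": "B", "hired": 0}, {"group": "B", "hired": 0},
]
rates = label_rate_by_group(data, "group", "hired")
ratio = min(rates.values()) / max(rates.values())
if ratio < 0.8:
    print(f"review for bias: rates={rates}, impact ratio={ratio:.2f}")
```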

Technical Documentation

Comprehensive technical documentation must be prepared before the system is placed on the market and kept up to date. This includes a general description of the system, detailed information about development methodology, design specifications, system architecture, training procedures, and validation and testing results. The documentation must be sufficient for authorities to assess the system’s compliance. Providers should note that the forthcoming CEN-CENELEC harmonized standards will likely define more granular documentation templates.

Transparency & Human Oversight

High-risk systems must be designed to be sufficiently transparent that deployers can interpret the system’s output and use it appropriately. Instructions for use must accompany the system. Human oversight measures must allow natural persons to effectively oversee the system’s operation, including the ability to decide not to use the system, override its output, or intervene in its operation.

Accuracy, Robustness & Cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, as declared in their accompanying documentation. They must be resilient against errors, faults, and inconsistencies, and designed with appropriate technical redundancy solutions. Cybersecurity measures must protect against unauthorized access, data poisoning, model manipulation, and adversarial attacks throughout the system’s lifecycle.

5. General-Purpose AI (GPAI) Model Obligations

The EU AI Act creates a distinct regulatory category for general-purpose AI models—foundation models and large language models that can be adapted for a wide range of downstream tasks. This category is particularly relevant for US companies that develop or fine-tune foundation models, as these obligations have been enforceable since August 2, 2025. The GPAI Code of Practice, published on July 10, 2025, provides the operational framework that companies are expected to follow.

All GPAI model providers, regardless of risk level, must comply with a baseline set of requirements:

  • Training data summary: Providers must prepare and maintain a sufficiently detailed summary of training content using a mandatory template, updated every six months. This summary must be comprehensive enough to allow copyright holders to exercise their opt-out rights under the EU Copyright Directive. This requirement creates a direct tension with the trade secret protection that many US AI companies rely on to safeguard their data curation processes.
  • Copyright compliance: Providers must implement a policy to comply with EU copyright law, including the text and data mining provisions of the Digital Single Market Directive and respecting opt-out reservations made by rights holders. Compliance requires demonstrating that training data acquisition respected these opt-outs—a significant operational burden for models trained on web-scale data. (A minimal opt-out check appears in the sketch after this list.)
  • Technical documentation: Providers must draw up and maintain technical documentation of the model, including its training and testing process and evaluation results, and make it available to the AI Office and national authorities upon request.
  • Downstream provider information: GPAI model providers must make available to downstream providers (companies that build products on top of the model) sufficient information and documentation to enable those downstream providers to comply with their own obligations under the AI Act.
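
Picking up the copyright bullet above: one widely used machine-readable signal for text-and-data-mining opt-outs is robots.txt. The sketch below uses Python's standard-library robots.txt parser; the crawler user-agent is an assumed example, and robots.txt is only one of several opt-out mechanisms rights holders may rely on.

```python
# Minimal TDM opt-out check via robots.txt using the standard library.
# "ExampleAICrawler" is an assumed user-agent; robots.txt is one common
# opt-out signal among several (metadata tags, terms of service, etc.).
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_mine(page_url: str, user_agent: str = "ExampleAICrawler") -> bool:
    """Check whether robots.txt permits fetching page_url for this agent."""
    parts = urlsplit(page_url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # performs a network request for robots.txt
    return rp.can_fetch(user_agent, page_url)

# Usage (fetches robots.txt over the network):
# print(may_mine("https://example.com/articles/post-1"))
```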

Systemic Risk: Additional Obligations for High-Capability Models

GPAI models that pose systemic risk—currently defined as models trained with more than 10^25 FLOPs of computation, or designated by the European Commission based on other criteria—face additional requirements:

  • Model evaluation and adversarial testing: Standardized evaluations must be performed, including red-teaming exercises, to identify and mitigate systemic risks such as large-scale disinformation, catastrophic misuse, and significant impacts on critical infrastructure.
  • Systemic risk assessment and mitigation: Providers must assess and mitigate reasonably foreseeable systemic risks, documenting the process and providing results to the AI Office.
  • Serious incident reporting: Any serious incident must be reported to the AI Office without undue delay and in any event within two weeks, along with corrective measures taken or envisaged.
  • Adequate cybersecurity: The model and its physical infrastructure must be protected with an appropriate level of cybersecurity.
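
A quick screen against the 10^25 FLOPs presumption can use the common estimate that dense transformer training consumes roughly 6 × parameters × training tokens of compute. That heuristic comes from the scaling-law literature, not from the regulation, and the Commission may designate models on other criteria regardless of compute.

```python
# Rough screen against the 10**25 FLOPs systemic-risk presumption above,
# using the common ~6 * parameters * tokens estimate of dense transformer
# training compute. The heuristic is an approximation, not part of the
# regulation; designation can also happen on other criteria.

SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}", flops > SYSTEMIC_RISK_FLOPS)  # ~6.3e24 -> below threshold
```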

Industry Response: Who Has Signed the Code of Practice?

The industry response to the GPAI Code of Practice has been mixed and reveals important strategic divisions. By August 2025, 26 AI providers had signed the Code. OpenAI, Google, and Anthropic are among the signatories. Microsoft has indicated it “probably will sign.” Meta, however, has explicitly refused to sign, publicly calling the Code “legally uncertain.” US companies should monitor these developments closely, as the Code of Practice is expected to become the de facto compliance benchmark even for non-signatories once the AI Office begins exercising its fining authority.

6. Practical Compliance Roadmap for US Companies

Building an EU AI Act compliance program is a cross-functional effort that requires coordination among legal, engineering, product, and data science teams. The following steps provide a structured approach.

  1. Conduct a comprehensive AI inventory. Catalog every AI system and model your organization develops, deploys, or distributes. Include internal tools, customer-facing products, and third-party AI components embedded in your products. For each system, document its purpose, data inputs, decision outputs, and the jurisdictions where it operates or produces effects. Given that the AI Act references the GDPR over 30 times, coordinate this inventory with your existing GDPR data processing records. (A minimal inventory record sketch appears after this list.)
  2. Classify each system by risk tier. Map every inventoried AI system to the four-tier risk framework. Pay particular attention to systems that make or support decisions about individuals (hiring, lending, insurance, law enforcement) as these are most likely to fall into the high-risk category. Screen all systems against the eight prohibited practices in Article 5, as these have been enforceable since February 2, 2025. Engage legal counsel with EU regulatory expertise for borderline cases.
  3. Perform a gap analysis. For each high-risk system, compare your current development and deployment practices against the specific requirements in Articles 9–15 of the AI Act. Identify gaps in risk management, data governance, documentation, transparency, human oversight, and cybersecurity. Assess dual compliance obligations where GDPR applies alongside the AI Act—approximately 90% of high-risk deployments will trigger both. Prioritize gaps by severity and the effort required to remediate.
  4. Build or update technical documentation. Create documentation packages that meet the requirements set out in Annex IV of the regulation. This includes detailed descriptions of design choices, training data, model architecture, validation methodology, and performance metrics. Monitor the seven CEN-CENELEC harmonized standards under development, as the first are expected for publication in 2026 and will likely define more specific documentation formats. Treat this as an ongoing process, not a one-time deliverable.
  5. Prepare for conformity assessment. Determine whether your high-risk system requires third-party conformity assessment or qualifies for self-assessment. Most Annex III systems allow self-assessment, but certain biometric and critical infrastructure systems require a notified body. Establish internal review processes that mirror the assessment criteria.
  6. Implement ongoing post-market monitoring. Design and document a post-market monitoring system proportionate to the nature and risks of your AI system. This must include mechanisms for collecting and analyzing data on the system’s performance, logging capabilities for traceability, and procedures for taking corrective action when risks materialize.
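
To make steps 1 and 2 concrete, the sketch below models one inventory record per AI system with a risk-tier field, so classification and gap analysis can be driven from a single catalog. The field names and the example system are illustrative assumptions, not a prescribed schema.

```python
# Illustrative inventory record for roadmap steps 1-2. Field names and the
# example system are assumptions, not a schema mandated by the AI Act.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    data_inputs: list[str]
    decision_outputs: list[str]
    jurisdictions: list[str]  # where it operates or produces effects
    third_party_components: list[str] = field(default_factory=list)
    gdpr_processing_record: str | None = None  # link to Art. 30 record, if any
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Rank applicants for recruiter review",
        data_inputs=["resumes", "job descriptions"],
        decision_outputs=["ranking score"],
        jurisdictions=["US", "EU"],
        risk_tier=RiskTier.HIGH,  # employment use case (Annex III)
    )
]
# Pull the systems that drive the EU gap analysis in step 3.
eu_high_risk = [s for s in inventory
                if "EU" in s.jurisdictions and s.risk_tier is RiskTier.HIGH]
print([s.name for s in eu_high_risk])
```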

Many US companies find it practical to align their AI Act compliance program with existing frameworks—NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001 (AI Management Systems), and SOC 2 controls—rather than building a parallel compliance structure from scratch. However, ISO/IEC 42001, while published, does not cover all AI Act requirements; companies should treat it as a foundation rather than a complete solution. The NIST AI RMF categories (Govern, Map, Measure, Manage) map reasonably well to the AI Act’s requirements, though the EU regulation imposes more prescriptive documentation and conformity assessment obligations.
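
As a starting point for that alignment, a coarse crosswalk from the NIST AI RMF functions to AI Act articles can seed the gap analysis. The mapping below is a rough, non-authoritative pairing based on the obligations discussed in this guide; verify the article references with counsel before relying on them.

```python
# Rough, non-authoritative crosswalk from NIST AI RMF functions to EU AI
# Act obligations. Article references are coarse pairings and should be
# verified before use in a formal gap analysis.
NIST_AI_RMF_TO_AI_ACT = {
    "Govern":  ["Art. 17 quality management system"],
    "Map":     ["Art. 6 / Annex III risk classification",
                "Art. 9 risk identification"],
    "Measure": ["Art. 10 data and data governance",
                "Art. 15 accuracy, robustness, cybersecurity"],
    "Manage":  ["Art. 9 risk mitigation", "Art. 72 post-market monitoring"],
}

for fn, articles in NIST_AI_RMF_TO_AI_ACT.items():
    print(f"{fn}: {', '.join(articles)}")
```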

7. Intersection with US IP Strategy

The EU AI Act creates direct and sometimes uncomfortable intersections with US intellectual property strategy. The regulation’s mandatory disclosure requirements fundamentally challenge the trade secret approach that many US AI companies have relied on, making patent protection an increasingly attractive alternative for securing competitive advantage.

Patent Filing Strategy

The AI Act’s documentation requirements effectively mandate that high-risk AI providers disclose detailed information about system architecture, training methodology, and design choices. Because these mandatory disclosures erode the secrecy that trade secret protection depends on, patenting may become the safer path for protecting AI innovations deployed in the EU market. The required compliance documentation often provides a well-structured record of inventive steps that can directly support patent applications. However, timing is critical: companies should file patent applications before or simultaneously with regulatory submissions, as publicly accessible compliance documentation could constitute prior art that limits future patent claims. Coordinating patent filing timelines with the AI Act compliance documentation schedule is no longer optional—it is a strategic necessity.

Trade Secret Protection Under Pressure

The GPAI transparency requirements create a structural conflict with trade secret protection. The mandatory training data summary, updated every six months using a standardized template, forces disclosure of information that many US AI companies have treated as core trade secrets. The obligation to respect EU Copyright Directive opt-outs further requires transparency about data sourcing practices. Companies must find the balance between disclosing enough to satisfy the AI Act and preserving protectable trade secrets. Practical approaches include disclosing data categories and sources at a level of abstraction that satisfies regulators while preserving the specific curation methodology, weighting schemes, and quality filtering criteria as protected trade secrets. However, the trend is clear: mandatory disclosures are narrowing the scope of what can be maintained as a trade secret, and companies should proactively shift their most valuable innovations toward patent protection before compliance deadlines force disclosure.

Licensing Implications for AI Models Deployed in the EU

The AI Act’s copyright compliance requirements and downstream provider information obligations have significant implications for AI model licensing. Companies that license foundation models to European customers must ensure their license terms accommodate the downstream provider’s need to receive and pass through compliance documentation. This may require restructuring existing licensing agreements to include compliance data sharing provisions, defining responsibility allocation for ongoing monitoring obligations, and addressing liability for non-compliant uses of the model by downstream deployers. The fact that 26 AI providers have signed the GPAI Code of Practice also creates market pressure: licensees may increasingly require Code of Practice compliance as a contractual condition. US companies with substantial EU patent portfolios should also consider how AI Act compliance intersects with FRAND licensing commitments, particularly for AI systems that implement standard-essential technology.

Quantify Your IP Position

EU AI Act compliance decisions should be informed by the value of your underlying IP. Use our Patent Damages Estimator to model the financial impact of your patent portfolio and make informed decisions about disclosure, licensing, and enforcement strategies in the EU market.



Disclaimer: This article is for educational and informational purposes only and does not constitute legal advice. EU AI Act compliance involves complex regulatory analysis that requires qualified professional guidance. Consult a licensed attorney with expertise in EU technology regulation for advice on specific matters.