The EU AI Act and What It Means for US Manufacturers

[Image: European flag, illustrating the topic of the blog, the EU AI Act]

TL;DR: The European Union’s Artificial Intelligence Act (EU AI Act, Regulation (EU) 2024/1689) is the world’s first comprehensive, horizontal AI law. It applies even to non-EU companies whenever AI systems or their outputs are used in the EU. Core rules roll out in phases through 2026–2027, with outright bans on certain practices already in force. Penalties are hefty and can reach the higher of €35 million or 7 percent of global turnover for the worst violations. The first part of this blog lays out the basics; the second explains whether, and if so how, the Act applies to manufacturers using AI.

Update: Draft changes from Brussels would simplify parts of GDPR and push back enforcement of some AI Act obligations. For manufacturers using AI in production, this shifts timelines rather than removing the need to understand how their systems are classified and documented. Here is an article in The Verge about that topic.

The EU AI Act: A Quick Overview

The EU AI Act, adopted in 2024, marks the world’s first comprehensive legal framework for artificial intelligence, with significant implications for U.S. companies that develop, distribute, or deploy AI-based solutions in the European Union. The Act is designed to address risks posed by both generative AI and more traditional machine learning applications, including those used in manufacturing environments. It imposes new requirements and potential penalties that American businesses operating globally have to know about.

What Is the EU AI Act?

The EU AI Act establishes a risk-based set of rules that classify AI systems according to their level of risk: unacceptable, high, limited, and minimal. Unacceptable risk systems are banned altogether, while high-risk systems are subject to robust obligations. Limited risk systems must meet transparency requirements, and minimal-risk systems face few if any legal restrictions.

For U.S. companies, this means any AI tool offered in the EU – whether as a provider, distributor, or user – may be subject to these regulations, regardless of whether the company has physical offices in Europe.

Key Requirements and Who Is Impacted

The Act’s jurisdictional reach is broad. U.S.-based AI providers, importers, distributors, and even those who use AI tools and their outputs in EU markets must comply. That extraterritorial hook means a U.S. vendor serving an EU manufacturer, or generating decisions used in the EU, is in scope even if all infrastructure is in the States. The obligations differ depending on the company’s role and the risk profile of its AI system. Non-compliance can get very expensive fast, with fines of up to €35 million or 7% of annual global turnover, whichever is higher.

Risk-Based Structure of the EU AI Act

The Act groups AI into four tiers based on risk. Here is a short overview and some examples to help you determine which risk level might be applicable to you.

  • Unacceptable risk – the highest category – is outright banned. In this category you find applications such as social scoring by public authorities, biometric categorization by sensitive traits, or emotion recognition in workplaces and schools. These prohibitions are already in force.
  • High risk – includes AI that either is a safety component of a regulated product (e.g., under EU product-safety laws) or AI used in listed “Annex III” use cases such as employment, access to essential services, education, critical infrastructure, and specified biometric systems.
  • Limited risk – applies to applications like chatbots, where transparency rules apply. You have to, for example, clearly tell people that they are interacting with a chatbot and label AI-generated or manipulated content (e.g., synthetic images or audio) so users aren’t misled.
  • Minimal risk – applies to everyday AI tools (spell-checkers, content filters, simple recommendation widgets) that don’t trigger specific legal obligations. The EU encourages voluntary best practices to promote safety and trust without adding compliance burden.
[Image: EU AI Act risk categories overview]

Examples of all four categories can be found here.

What About Generative AI and GPAI?

General-purpose AI models, also known as foundation models, are large AI models trained on broad, diverse data that can be adapted to many downstream tasks across domains like language, vision, and code. These models are often integrated into many different AI systems. Under the EU AI Act, the company that provides the GPAI model in the EU must meet GPAI obligations, e.g., publishing model information, copyright-related safeguards, technical documentation, and a host of other steps.

While this responsibility lies with the provider, a company integrating a GPAI model into a product for the EU market will likely “inherit” some duties or need attestations from its supplier.

When Do the EU AI Act Rules Start Applying?

The EU AI Act entered into force on August 1, 2024; however, the obligations phase in on the following schedule:

  • February 2, 2025: bans on “prohibited” practices and the AI-literacy provisions took effect.
  • August 2, 2025: governance rules and obligations for general-purpose AI (GPAI) models begin (with enforcement ramping to 2026–2027).
  • August 2, 2026: most obligations for high-risk AI systems apply.
  • August 2, 2027: extended deadline for certain high-risk AI embedded in regulated products and GPAI already on the market. The Commission has rejected calls to delay these dates.

Why and How the EU AI Act Matters to Manufacturers

While generative AI comes to mind first and foremost when reading about the EU AI Act, the Act applies to all forms of AI, including so-called classical or mature AI such as machine learning. It is therefore relevant for manufacturers using classical AI on the shop floor.

For factories, the question generally boils down to classification (Are we “high-risk”?) and what obligations follow if the answer is yes.

Let’s have a look at how to approach this question.

How to Think About “High-Risk” on the Shop Floor

There are two main routes to “high-risk” status:

  • Safety component of a regulated product. If your AI function is part of a machine or product covered by EU product-safety legislation and it performs a safety role, your AI can be high-risk. Example: a vision system that actively controls machine motion to prevent human injury would likely be treated as a safety component. That triggers the high-risk regime, including conformity assessment and CE marking before the system is placed on the EU market.
  • “Annex III use cases”. AI used for worker management and access to employment is expressly listed. Example: If you deploy AI to rank, evaluate, allocate shifts, or make other consequential HR decisions about operators on the line, assume high-risk and prepare accordingly.
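The two routes above can be sketched as a first-pass triage helper. This is a minimal, illustrative sketch only: the function name, questions, and outputs are assumptions for this post, and a real classification still needs review by a qualified professional.

```python
def triage_high_risk(is_safety_component: bool,
                     covered_by_eu_product_safety_law: bool,
                     annex_iii_use_case: bool) -> str:
    """Rough first-pass classification for a shop-floor AI use case (illustrative)."""
    # Route 1: safety component of a product covered by EU product-safety legislation
    if is_safety_component and covered_by_eu_product_safety_law:
        return "likely high-risk (safety component route)"
    # Route 2: listed Annex III use case, e.g., employment and worker management
    if annex_iii_use_case:
        return "likely high-risk (Annex III route)"
    # Otherwise: check limited-risk transparency duties and document your reasoning
    return "review for limited/minimal risk and transparency duties"

# Example: a vision system that interlocks with a robot to prevent injury
print(triage_high_risk(is_safety_component=True,
                       covered_by_eu_product_safety_law=True,
                       annex_iii_use_case=False))
```

A shift-assignment tool would instead hit the second branch via `annex_iii_use_case=True`.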

Concrete Examples for Manufacturing

Let’s start with the most severe category under the EU AI Act: banned use cases. Here is a (not comprehensive) list of such use cases to give you an idea:

Banned Uses – Must Avoid!

  • Any emotion recognition on the shop floor, e.g., a camera-based model that scores operators’ anger, fatigue, or engagement during shifts and feeds those scores into performance reviews or shift assignments.
  • Biometric categorization by sensitive traits, e.g., a face-analysis system on a production line that classifies employees by inferred religion, ethnicity, political opinions, trade-union membership, or sexual orientation to “optimize team cohesion.”
  • Untargeted scraping to build a facial database, e.g., a security team compiles an employee/contractor facial database by scraping images from LinkedIn, Facebook, and public CCTV feeds to “improve” badge checks at factory gates.
  • Predictive “risk of wrongdoing” profiling of staff, e.g., an ML model profiles workers’ “likelihood of theft or sabotage” based solely on behavioral metadata (break timing, past lateness, social interactions) and flags them for extra monitoring.

High-Risk Applications of AI on the Shop Floor

Use of AI for applications such as visual inspection or predictive maintenance can be high-risk in certain cases, which makes a thorough analysis by a trained individual necessary before going full steam ahead.

Here are three examples that show that the line between high and limited risk can be a narrow one.

Example 1: Visual Inspection for Defect Detection in Quality Control Only

If the model flags surface defects and routes parts for rework, without safety control over a machine’s motion, it may not be high-risk. General product-safety and transparency duties still apply, though. Document your reasoning: what is the intended purpose, what decisions are automated, and why it is not a safety component. If, however, the same solution interlocks with a press brake or robot to prevent hazardous movement, it likely crosses into high-risk as a safety component.

Example 2: Predictive Maintenance

A model that forecasts bearing failure and schedules a service window is usually not high-risk. If the AI instead performs a safety function (e.g., actively preventing dangerous states in a way tied to product-safety compliance), it may be high-risk. Again, classification depends on the intended purpose and the safety role.

Example 3: Worker-Management AI

Systems that profile operators, rank performance, or auto-assign shifts or training often fall into Annex III “employment and worker management,” which is high-risk. This will trigger tighter controls and documentation requirements.

What High-Risk Obligations Look Like in Practice

If a system is classified as high-risk, the manufacturer must build and maintain a full compliance stack before deploying it in the EU, including:

  • Risk-management system across the lifecycle, aligned to the intended purpose.
  • Data governance and quality – representative, relevant, and as error-free as possible for the context; bias monitoring where relevant.
  • Technical documentation and logging for traceability; instructions for use enabling safe operation.
  • Human oversight designed into the system; robustness, accuracy, and cybersecurity requirements.
  • Conformity assessment and CE marking, plus post-market monitoring and serious-incident reporting.
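To make the logging and traceability point concrete: in practice it means recording one auditable event per automated decision. Here is a minimal sketch under assumed names; the field set is illustrative, not a schema mandated by the Act.

```python
import datetime
import json

def log_inference_event(model_id: str, input_ref: str, decision: str,
                        confidence: float, operator_override: bool = False) -> str:
    """Build one traceability record per automated decision (illustrative schema)."""
    record = {
        # When the decision was made, in UTC for unambiguous audit trails
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_id": model_id,            # which model version made the decision
        "input_ref": input_ref,          # pointer to the inspected part/image
        "decision": decision,            # automated outcome, e.g. "reject"
        "confidence": confidence,        # model confidence for later review
        "operator_override": operator_override,  # human-oversight trail
    }
    line = json.dumps(record)
    # In production this line would go to tamper-evident, retention-managed storage.
    return line
```

The same record doubles as evidence for post-market monitoring and incident investigation.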

These requirements are detailed in the regulation and the Commission’s public materials.

Using Foundation Models in a Product

Many factory systems now embed large models, e.g., for documentation search, code generation for PLCs, or operator assistance. If those features are marketed in the EU, the manufacturer must ensure that the upstream model provider can meet GPAI obligations. The Commission’s GPAI Code of Practice and Q&A can be used to comply with the obligations of the EU AI Act. In addition, it’s important to flow these obligations into contracts and vendor assessments.

A Practical Action List for US Manufacturers and Vendors

  1. Map your AI features and classify. For each use case, record the intended purpose, whether it plays a safety role in a regulated product, and whether it touches Annex III areas (notably worker management). Keep a short internal memo with your classification decision and why.
  2. If high-risk, start the file now. Build your technical documentation, risk-management artifacts, data governance evidence, logging approach, oversight plan, and post-market monitoring process, so you are ready for the 2026/2027 application dates.
  3. GPAI dependencies. If you embed a foundation model, capture model cards/technical documentation, copyright-safeguard statements, and (for advanced models) evaluation results. The Commission’s materials outline what to request and how the Code of Practice can help.
  4. Mind the bans today. Check your UIs, HR analytics, and any biometric features against Article 5; remove emotion recognition in workplaces, manipulative interfaces that distort behavior, biometric categorization by sensitive traits, and any public-authority-style “social scoring.” These are already unlawful in the EU.
  5. Plan to the real dates. The Commission has publicly rejected calls to delay the AI Act. Align resourcing to 2025–2027 milestones.
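Step 1 of the list above, the internal classification memo, is easy to keep as structured data. Here is a sketch of one inventory record; the field names are assumptions for this post, not fields required by the regulation.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIUseCaseRecord:
    """One row of an internal AI inventory (illustrative fields, not mandated)."""
    name: str                                # the AI feature or system
    intended_purpose: str                    # what it is meant to do
    safety_role_in_regulated_product: bool   # route 1 to high-risk
    annex_iii_area: Optional[str]            # route 2, e.g. "worker management"
    classification: str                      # e.g. "high-risk", "limited", "minimal"
    rationale: str                           # the short internal memo from step 1

# Example entry for a quality-control inspection model
record = AIUseCaseRecord(
    name="Vision inspection, line 3",
    intended_purpose="Flag surface defects and route parts for rework",
    safety_role_in_regulated_product=False,
    annex_iii_area=None,
    classification="not high-risk (documented)",
    rationale="No safety control over machine motion; QC routing only.",
)
```

Keeping the rationale next to the classification makes the reasoning auditable when the 2026/2027 dates arrive or a regulator asks.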

And Finally: The Fines

The penalties for running afoul of the EU AI Act are significant. Fines can reach the higher of €35 million or 7% of global turnover for prohibited practices and up to €15 million or 3% for most other breaches. Member States enforce within these caps.

In Summary

The EU AI Act sets a new standard for responsible AI, directly affecting American businesses including manufacturers producing in the EU. For U.S. companies, now is the time to classify AI systems, update compliance protocols, and prepare to demonstrate that their AI is trustworthy and legally compliant in the European market.

Disclaimer

Just to be on the safe side, here is a disclaimer that pertains to this blog:

This post is for general information only and doesn’t constitute legal, regulatory, or compliance advice. The content may not reflect the most current developments, and no attorney–client or advisor relationship is created by reading it. Decisions about your specific situation should be based on your own research and analysis, and, where appropriate, input from qualified legal or compliance professionals. Neither the authors nor the publisher make any warranties about completeness or accuracy, and any reliance is at the reader’s own risk.