EU’s AI Act vs. U.S. Approach on Regulating AI

Key takeaways:

  • The European Council officially gave its final approval to the EU AI Act, which is set to be signed into law and take effect shortly thereafter.
  • The EU AI Act is more binding than the U.S. Executive Order regulating AI, which relies on voluntary commitments from industry.
  • The U.S. Order set its AI model threshold for regulation an “order of magnitude” higher than the EU, meaning no existing AI models fall under its purview.

This week, the European Council gave its final approval to the EU AI Act, the first robust set of regulations on artificial intelligence in the world. The Act, once it is signed into law, will take effect shortly thereafter.

The passage of the Act has the potential to exert the so-called ‘Brussels Effect,’ in which rules governing the EU will end up affecting organizations worldwide that deal with EU nations and citizens.

The EU AI Act takes a ‘risk-based’ approach in which rules get stricter for technologies that are potentially more harmful, but it carves out exceptions for systems used for defense and research, according to the European Council. In the U.S., President Biden signed an Executive Order last October that is the nation’s most comprehensive AI policy action thus far.

But there are stark differences between the EU’s approach and that of the U.S., which is the global AI leader.

The EU’s regulations are more binding and capture a greater number of AI models than the U.S. approach, which relies more on voluntary industry practices and also sets a much higher AI model size threshold, such that no existing foundation models fall under its purview, wrote Benjamin Cedric Larsen, AI and machine learning project lead at the World Economic Forum, and Sabrina Kuspert, policy officer at The European AI Office of the European Commission, in a Brookings Institution blog post.

EU vs. U.S.: What’s different?

Both the EU AI Act and the U.S. Executive Order share common goals of promoting responsible AI innovation while mitigating potential risks. However, they differ in their regulatory mechanisms and scope of application.

“The [U.S. Executive Order] primarily outlines guidelines for federal agencies to follow with a view to shape industry practices, but it does not impose regulation on private entities except for reporting requirements by invoking the Defense Production Act,” they wrote.

“The EU AI Act, on the other hand, directly applies to any provider of general-purpose AI models operating within the EU, which makes it more wide-ranging in terms of expected impact.”

“While the [U.S. Executive Order] can be modified or revoked, especially in light of electoral changes, the EU AI Act puts forward legally binding rules in a lasting governance structure.”

Another difference is that the EU AI Act captures more general-purpose models by setting its ‘systemic risk’ threshold at 10²⁵ FLOPs of training compute. The U.S. threshold is an “order of magnitude higher,” they said.

“Currently, no existing AI model is known to fall within the U.S. threshold,” the authors concluded. Under the EU, however, the AI models from OpenAI and Google could be regulated.
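The threshold comparison above can be sketched in code. The EU figure (10²⁵ FLOPs) is stated in the article; the U.S. figure of 10²⁶ FLOPs is an assumption here, inferred from the authors’ “order of magnitude higher” description rather than quoted from the Executive Order itself.

```python
# Sketch: checking a model's training compute against the two regulatory
# thresholds discussed above. The U.S. value is an assumption (one order
# of magnitude above the EU's 1e25 FLOPs), not a quoted legal figure.

EU_SYSTEMIC_RISK_FLOPS = 1e25   # EU AI Act presumption of systemic risk
US_REPORTING_FLOPS = 1e26       # assumed: one order of magnitude higher

def regulatory_scope(training_flops: float) -> list[str]:
    """Return which thresholds a model's training compute crosses."""
    scopes = []
    if training_flops > EU_SYSTEMIC_RISK_FLOPS:
        scopes.append("EU systemic-risk obligations")
    if training_flops > US_REPORTING_FLOPS:
        scopes.append("U.S. reporting requirements")
    return scopes

# A model trained with ~5e25 FLOPs crosses only the EU threshold,
# illustrating why more models fall under the EU's purview:
print(regulatory_scope(5e25))
```

Printed for 5e25 FLOPs: `['EU systemic-risk obligations']` — under these assumed numbers, such a model would trigger EU obligations but stay below the U.S. reporting line.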

Key features of the EU AI Act:

1. Threshold for systemic risks: Models trained using more than 10²⁵ FLOPs of computing power are presumed to carry systemic risks, triggering more stringent regulatory requirements.

2. Transparency obligations: Providers of general-purpose AI models are required to disclose relevant information to downstream users, enhancing transparency along the AI value chain.

3. Centralized governance: The establishment of a European AI Office will enforce rules on AI and serve as a central point of expertise for AI-related matters.

Key features of the U.S. Executive Order:

1. Focus on dual-use foundation models: The Order addresses the risks and potentials of dual-use foundation models, particularly in areas related to national security and public safety. (Dual use means the AI model can be used for good or ill.)

2. Reporting requirements: Companies developing potential dual-use foundation models are required to report information on their activities and cybersecurity measures to the federal government.

3. Establishment of AI Council: The Order mandates the establishment of a White House AI Council to coordinate AI-related initiatives across government agencies.

While the U.S. approach relies on voluntary commitments from industry stakeholders, it also introduces mandatory reporting requirements to enhance transparency and accountability in AI development and deployment.

At a glance: The EU vs. U.S. approaches

EU AI Act:

– Directly regulates general-purpose AI models.

– Imposes binding rules on providers operating within the EU.

– Emphasizes transparency, accountability, and safety.

U.S. Executive Order:

– Provides guidelines for federal agencies and industry stakeholders.

– Relies on voluntary commitments from industry.

– Focuses on dual-use foundation models and reporting requirements.
