Years in the making, the EU AI Act is the European Union's risk-based rulebook for artificial intelligence. Expect to hear a lot more about the regulation in the coming months (and years) as key compliance deadlines kick in. In the meantime, read on for an overview of the law and its aims.
So what is the EU trying to achieve? Dial back the clock to April 2021, when the Commission published the original proposal and lawmakers were framing it as a law to bolster the bloc's ability to innovate in AI by fostering trust among citizens. The framework would mean AI technologies remained "human-centered" while also giving businesses clear rules to work their machine learning magic, the EU suggested.
Increased adoption of automation across industry and society certainly has the potential to supercharge productivity in various domains. But where outputs are poor, and/or where AI intersects with individual rights and fails to respect them, there is a risk of harms scaling fast.
With the AI Act, the bloc therefore aims to drive take-up of AI and develop a local AI ecosystem by setting conditions intended to shrink the risk that things go horribly wrong. Lawmakers believe having guardrails in place will boost citizens' trust in, and uptake of, AI.
This ecosystem-fostering-through-trust idea was fairly uncontroversial when the law was being discussed and drafted in the early part of the decade. Objections were raised in some quarters, though, that it was simply too soon to start regulating AI, and that European innovation and competitiveness could suffer.
Few would now say it's too early, of course, given how the technology has exploded into mainstream consciousness thanks to the boom in generative AI tools. But objections remain that the law sandbags the prospects of homegrown AI entrepreneurs, even with support measures like regulatory sandboxes.
Either way, the debate among lawmakers today is largely about how to regulate AI, and with the AI Act the EU has set its course: the next few years are all about the bloc executing on that plan.
What does the AI Act require?
First, it's worth noting that most uses of AI fall outside the scope of the risk-based rules and are therefore not regulated at all under the AI Act. It's also worth noting that military uses of AI are entirely out of scope, as national security is a member-state rather than EU-level legal competence.
For in-scope AI, the Act's risk-based approach establishes a hierarchy in which only a handful of potential use cases (for example, "harmful subliminal, manipulative and deceptive techniques" or "unacceptable social scoring") are characterized as carrying "unacceptable risk" and are therefore outlawed. But even this scant list of prohibitions comes with plenty of exceptions and caveats.
For instance, the restriction on law enforcement using real-time remote biometric identification in publicly accessible spaces is not the blanket ban most parliamentarians and civil society groups had demanded: exceptions allow its use in connection with certain serious crimes.
The next rung down is "high-risk" use cases such as AI apps related to critical infrastructure, law enforcement, education and vocational training, healthcare, and so on, which require conformity assessments by app makers before market deployment as well as on a continuing basis (for example, when they make significant updates to models).
This means the developer has to be able to show that they meet the requirements of the law concerning, for example, data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity, and robustness. They have to set up quality and risk-management systems so they can prove it, should an enforcement authority come knocking to do an audit.
Public authorities that deploy high-risk systems also have to register them in a public EU database.
There is also a third, "medium-risk" tier, which applies transparency rules to AI systems such as chatbots or other tools able to produce synthetic media. Here, the concern is that such systems could be used to deceive people, so the law requires that users be told when they are interacting with, or viewing content produced by, AI.
All other uses of AI are automatically classed as low/minimal risk and are not regulated. That means, for example, using AI to sort and recommend social media content, or for ad targeting, carries no obligations under these rules, though the bloc encourages all AI developers to voluntarily follow best practices for boosting user trust.
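To make that hierarchy easier to parse, here is a minimal, purely illustrative sketch of the tiers described above. The tier names, example mappings, and code are explanatory shorthand, not a legal classification tool: the takeaway is simply that the further up the hierarchy a use case sits, the heavier the obligations.

```python
# Illustrative only: a simplified, non-authoritative sketch of the AI Act's
# risk tiers as described in the text. Example mappings are for explanation,
# not legal classification.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (with carve-outs)"
    HIGH = "conformity assessments and ongoing obligations"
    MEDIUM = "transparency obligations"
    MINIMAL = "unregulated; voluntary best practice encouraged"

# Hypothetical example mapping, loosely following the examples in the text.
EXAMPLES = {
    "unacceptable social scoring": RiskTier.UNACCEPTABLE,
    "AI used in education or healthcare": RiskTier.HIGH,
    "chatbot producing synthetic media": RiskTier.MEDIUM,
    "social media content recommendation": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```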
These tiered, risk-based rules make up the bulk of the AI Act. But there are also some dedicated requirements for the complex models behind generative AI technologies: the "general purpose AI" models, or GPAIs, as the AI Act refers to them.
This class of AI technologies is sometimes referred to as "foundation models," and it typically sits upstream of most applications making use of AI. Developers tap into GPAIs to bring those models' capabilities into their own software, often fine-tuning them for a specific use case to add value. All of which is to say GPAIs have quickly gained a powerful position in the market, with the capacity to influence AI outcomes at scale.
GenAI has entered the chat …
GenAI did more than add noise to the debate; it reshaped the contours of the EU's AI Act. The bloc's legislative timetable coincided with the hype around GenAI tools like ChatGPT, and members of the European parliament seized the opportunity to respond.
MEPs proposed adding further obligations on GPAIs, the underlying models that power GenAI tools. That, in turn, sharpened the tech sector's focus on what the EU was doing with the law, and led some to lobby fiercely for a carve-out for GPAIs.
The loudest voice was French AI company Mistral, which argued that rules on model makers would hold back Europe's ability to compete against AI giants from the U.S. and China. OpenAI's Sam Altman also chipped in, suggesting in a side remark to journalists that the company might pull its tech out of Europe if the law proved too onerous, before hurriedly falling back on traditional flesh-pressing (lobbying) of regional powerbrokers after the EU called him out on the clumsy threat.
Altman getting a crash course in European diplomacy has been one of the more visible side effects of the AI Act.
The upshot of all this noise was a white-knuckle ride to wrap up the legislative process. It took months, plus a marathon final negotiating session between the European parliament, Council, and Commission, to drive the file over the line last year. The political agreement was clinched in December 2023, paving the way for adoption of the final text in May 2024.
The EU has branded the AI Act a "global first." But being first in such a cutting-edge tech context means there is still plenty of detail to work out, such as settling exactly where the law applies and producing detailed compliance guidance (Codes of Practice), in order for the oversight and ecosystem-building regime the Act contemplates to function.
So, as far as measuring its success goes, the law is still a work in progress, and will be for a long time.
Back to GPAIs: the AI Act continues with the risk-based approach here, applying only lighter requirements to most of these models.
For commercial GPAIs, this translates into transparency rules, including technical documentation requirements and disclosures around the use of copyrighted material to train models. These provisions are intended to help downstream developers get their own AI Act compliance right.
Then there's a second tier, for the most powerful (and potentially riskiest) GPAIs, where the Act ratchets up obligations on model makers, requiring proactive risk assessment and risk mitigation for GPAIs with "systemic risk."
Here the EU is concerned about very powerful AI models that might pose risks to human life, for example, or even risks that tech makers lose control over continued development of self-improving AIs.
Lawmakers opted to use training compute as a default classifier for this tier of systemic risk: GPAIs fall into the category when the cumulative amount of compute used for their training, measured in floating point operations (FLOPs), exceeds 10^25.
No models are, at this point, thought to be in scope, but naturally that could change as GenAI continues to develop.
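As a rough illustration of how that default classifier works, here is a minimal sketch. The function name and the compute figures in the examples are hypothetical, and real-world classification would of course depend on more than a single number.

```python
# Illustrative sketch only: checks a training run's cumulative compute against
# the 10^25 FLOPs threshold the AI Act uses as a default classifier for
# "systemic risk" GPAIs. The inputs and function name are hypothetical.

SYSTEMIC_RISK_FLOPS_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the Act's default threshold."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOPS_THRESHOLD

# Hypothetical examples: a model trained with 3.2 x 10^24 FLOPs would not be
# presumed to carry systemic risk under this classifier; 2.1 x 10^25 would.
print(presumed_systemic_risk(3.2e24))  # False
print(presumed_systemic_risk(2.1e25))  # True
```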
There is also some scope for AI safety experts involved in AI Act oversight to raise concerns about systemic risks that may emerge elsewhere. (For more on the governance structure the bloc has devised for the AI Act — including the various roles of the AI Office — see our earlier report.)
Model makers' lobbying did succeed in watering down the rules on GPAIs: open source providers get lighter requirements (lucky Mistral!), and R&D is carved out, meaning GPAIs that have not yet been commercialized fall entirely out of the Act's scope, without even the transparency requirements applying.
A long march toward compliance
The AI Act officially entered into force across the EU on August 1, 2024. That date effectively fired a starting gun, as compliance deadlines for different elements of the law hit at staggered intervals, from early next year through mid-2027.
Some key compliance deadlines: six months in, the rules on prohibited use cases kick in; nine months in, Codes of Practice are supposed to start applying; 12 months in, transparency and governance requirements take effect; 24 months in, other AI requirements apply, including obligations for some high-risk systems; and 36 months in, obligations for other high-risk systems apply.
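For a rough sense of how those offsets map onto the calendar, here is a minimal sketch that simply counts the month offsets above from the August 1, 2024 entry-into-force date. The resulting dates are approximations for orientation only, not the official deadlines.

```python
# Rough markers only: projects approximate calendar dates from the staggered
# month offsets listed above, counted from the Act's entry into force on
# August 1, 2024. Official deadlines may differ; this is not a compliance calendar.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Month offsets as described in the text.
MILESTONES = {
    6: "rules on prohibited use cases",
    9: "Codes of Practice start applying",
    12: "transparency and governance requirements",
    24: "other requirements, incl. some high-risk systems",
    36: "remaining high-risk systems",
}

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, keeping the day of month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

for offset, item in MILESTONES.items():
    print(f"~{add_months(ENTRY_INTO_FORCE, offset).isoformat()}: {item}")
```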
Part of the reason for this staggered approach is to give companies enough time to get their operations in order. But beyond that, regulators clearly need time to work out what compliance looks like in this cutting-edge context.
As of writing, the bloc is busy formulating guidance for aspects of the law ahead of these deadlines, including Codes of Practice for makers of GPAIs. The EU is also consulting on the law's definition of "AI systems" (i.e., which software will be in scope or out) and on clarifications related to banned uses of AI.
So the full picture of what the AI Act will mean for in-scope companies is still only partially painted and being filled in. But key details are expected to be locked down in the coming months and into the first half of next year.
One other thing to keep in mind: given the pace at which AI technologies are developing, what's required to stay on the right side of the law will likely keep shifting as those technologies (and their associated risks) continue to evolve. So this is one rulebook that may well need to remain a living document.
AI rules enforcement
The rules for GPAIs are enforced at the EU level, with oversight led by the AI Office, and the Commission can sanction violators with penalties of up to 3% of model makers' global turnover.
Elsewhere, enforcement of the Act's rules for AI systems is devolved, so in practice it will be up to member state-level authorities (there may be more than one oversight body per country) to assess and investigate compliance problems for most AI apps. How workable this structure will prove remains to be seen.
On paper, fines can run to 7% of global turnover, or €35 million, whichever is higher, for violations of prohibited uses. Breaches of other AI obligations can attract fines of up to 3% of global turnover, and supplying incomplete or incorrect information to authorities can be fined at up to 1.5%. In short, there is a sliding scale of sanctions that enforcement authorities can reach for.
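Purely to illustrate that sliding scale, here is a minimal sketch using the percentages and the €35 million figure quoted above. The category names and function are hypothetical shorthand, and other floors and caps in the Act are omitted.

```python
# Illustrative sketch of the sliding scale of maximum fines described above.
# Categories and figures follow the text; other floors and caps in the Act
# are omitted, so this is shorthand, not a compliance tool.

EUR_35M = 35_000_000

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine for a violation category, per the scale above."""
    if violation == "prohibited_use":
        # 7% of global turnover or €35M, whichever is higher.
        return max(0.07 * global_turnover_eur, EUR_35M)
    if violation == "other_obligation":
        return 0.03 * global_turnover_eur   # up to 3% of global turnover
    if violation == "incorrect_information":
        return 0.015 * global_turnover_eur  # up to 1.5% of global turnover
    raise ValueError(f"unknown violation category: {violation}")

# Hypothetical example: a company with €1B global turnover.
print(max_fine("prohibited_use", 1_000_000_000))         # 70,000,000.0
print(max_fine("other_obligation", 1_000_000_000))       # 30,000,000.0
print(max_fine("incorrect_information", 1_000_000_000))  # 15,000,000.0
```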