Nathalie Fouet
22 June 2023

European Parliament Adopts AI Act: 7 Key Takeaways


On Wednesday, June 14, the European Parliament adopted its position on the AI regulation by a large majority (499 in favor, 28 against, 93 abstentions). What are the key takeaways?


Is this the Final Adoption of the Text? When will it Come into Force?


MEPs adopted the text in mid-June 2023, triggering a final stage: negotiations with member states in the Council to finalize the text by the end of the year.

The most optimistic projections indicate the final text will be adopted by the end of 2023 and fully applied in 2026.


A Risk-Based Approach


The text Parliament adopted maintains the initial approach the European Commission proposed: classification by level of risk.

  • “Very high-risk” AI systems, which are prohibited in principle. These remain rare (applications contrary to European values, such as the citizen-scoring or mass-surveillance systems used in China).
  • “High-risk” AI systems, which pose a risk of “significant harm to people’s health, safety, fundamental rights or the environment.” They are subject to certain requirements, including human oversight, technical documentation, implementation of a risk management system, an impact study, certification, disclosure to users that they are in contact with a machine, and disclosure to users that an image has been AI-generated or is fake.
    • Companies face high stakes in this category, which was the focus of discussions in Parliament.
    • This list was expanded to include AI systems that impact people’s health, safety, fundamental rights or the environment, AI systems used to influence voters and the outcome of elections, and AI used in recommendation systems operated by social media platforms with more than 45 million users.
  • “Limited risk” AI systems, which are subject only to a transparency obligation: disclosing to users that they are talking with a machine.
  • Low-risk AI systems, which face no additional obligations.


What is the Impact of Generative AI’s Emergence and its “Reveal” to the General Public via ChatGPT?


In April 2021, the European Commission proposed this ambitious project, which followed the traditional adoption path until the MEPs adopted their position in mid-June 2023. The timetable for adopting the project’s text was upended when generative AI systems were unveiled to the general public, particularly ChatGPT in November 2022.

MEPs added obligations to better mitigate the risks that generative AI systems like ChatGPT or Midjourney could pose: setting up a system to identify the content they generate (to distinguish between fake and real images), conducting an impact study if the system is considered high-risk, and putting in place safeguards against illegal content.

And beyond generative AI, they specified a series of human oversight and evaluation obligations for applications considered high-risk.


Restrictions on Biometric Identification Systems


As initially planned by the Commission, MEPs included a principle banning “intrusive and discriminatory uses” of artificial intelligence, particularly real-time biometric identification systems in public places. Parliament went even further by allowing no exceptions (the Commission had provided for exceptions in cases of kidnapping or terrorist threat).

Parliament also restricted the ex post (after-the-fact) use of these systems by police forces to “serious crimes,” and only after judicial authorization.

Parliament’s list of prohibited technologies includes the following:

  • biometric identification systems that use sensitive characteristics (such as gender, race, ethnicity, citizenship status, religion, etc.)
  • predictive policing systems (i.e., based on profiling, location or past criminal behavior)
  • emotion recognition systems in law enforcement, the workplace and educational establishments, and border management.

Biometric identification is likely to be a major point of contention between Parliament, the Commission and the Council in upcoming discussions.


What about Copyright and Intellectual Property?


Targeting generative AI systems, the text specifies an obligation to declare whether model training data (text, images, music) is protected by copyright, which could enable copyright holders to take legal action seeking compensation for content used without their consent. These models will have to be registered in a European database.


What are the Limits of Risk-based Regulation of AI Systems?


What is this text’s future and how will it evolve between now and the regulation’s application (in 2026 at the earliest)? The risk-based classification is already showing its limitations with the revision of the high-risk AI system category after the unanticipated issues posed by generative AI.

This category was originally proposed by the European Commission as an exhaustive list, but MEPs changed it to a non-exhaustive list and the text stipulates the publication of guidelines only six months before the regulation enters into force.


Balancing Regulation, Protecting our Freedoms and Democracy, and Supporting Innovation


On the one hand, U.S. companies are challenging the new rules. For example, OpenAI CEO Sam Altman held talks with EU regulators about the AI Act and later suggested that OpenAI could cease operating in the EU. And Google excluded the EU when launching its Bard AI system.

On the other hand, there is the much-talked-about March 2023 open letter calling for a pause in AI research, signed by some of those involved in developing these technologies, including Elon Musk.

Faced with criticism that the text would hamper innovation, MEPs have introduced a number of amendments, including a requirement for each EU country to set up at least one regulatory sandbox for AI development (to support R&D).

The Center for Research on Foundation Models at Stanford University conducted a study on the main model providers’ compliance with AI Act requirements. The BLOOM model, developed by the startup Hugging Face (headquartered in New York but founded by three French entrepreneurs), tops the list.


Rishi Bommasani, Kevin Klyman, Daniel Zhang and Percy Liang — Center for Research on Foundation Models (Stanford CRFM)


AI Act Vote: Key Takeaways


The vote by MEPs on June 14, 2023, is a further step towards adoption of the final AI regulation text, scheduled for the end of the year. It remains to be seen which of the amendments adopted by MEPs will survive negotiations within the Council. Stay tuned for the end of the year!
