March 28, 2024, 1:42

The Daily

Read the World Today

MEPs seal the deal on Artificial Intelligence Act


Following months of intense negotiations, members of the European Parliament (MEPs) have bridged their differences and reached a provisional political deal on the world’s first Artificial Intelligence rulebook.

The AI Act is a flagship legislative proposal to regulate Artificial Intelligence based on its potential to cause harm. The European Parliament is now inching toward formalising its position on the file, after EU lawmakers reached a political agreement on Thursday (27 April).

The text might still be subject to minor adjustments at the technical level ahead of a key committee vote scheduled on 11 May, but it is expected to go to a plenary vote in mid-June.

“We have a deal now in which all groups will have to support the compromise without the possibility of tabling alternative amendments,” a European Parliament official told EURACTIV.

Until the last moments, the EU lawmakers were horse-trading on some of the most controversial parts of the proposal.

AI Act: European Parliament headed for key committee vote at end of April

EU lawmakers in the leading European Parliament committees are voting on the political agreement on the AI Act on 26 April, with many questions being settled but a few critical issues still open.

General Purpose AI

How to deal with AI systems that do not have a specific purpose has been a hotly debated topic in the discussions. The MEPs confirmed previous proposals to put stricter obligations on foundation models, a sub-category of General Purpose AI that includes the likes of ChatGPT.

The only significant last-minute change was on generative AI models, which would have to be designed and developed in accordance with EU law and fundamental rights, including freedom of expression.

AI Act: MEPs close in on rules for general purpose AI, foundation models

The European Parliament is set to propose stricter rules for foundation models like ChatGPT and distinguish them from general purpose AI, according to an advanced compromise text seen by EURACTIV.

Prohibited practices

Another politically sensitive topic was which types of AI applications are to be banned because they are considered to pose an unacceptable risk.

Two weeks ago, the idea was floated to prohibit AI-powered tools for all general monitoring of interpersonal communications. The proposal was dropped following opposition from the conservative European People’s Party (EPP).

In exchange, centre-right lawmakers had to accept an extension of the ban on biometric identification software. Initially banned only for real-time use, this recognition software could be used ex-post only for serious crimes and with prior judicial approval.

The EPP, which has a strong law enforcement faction, is the only partial exception to the agreement not to table alternative amendments. The group accepted not to request ‘key’ votes that might threaten its support for the entire file, but it might still try to change the ex-post biometric ban.

The AI regulation also bans “purposeful” manipulation. The word ‘purposeful’ was subject to debate, as intentionality might be difficult to prove; still, it was maintained because the MEPs did not want to cast the net too wide.

The use of emotion recognition AI-powered software is banned in the areas of law enforcement, border management, the workplace, and education.

The EU lawmakers’ ban on predictive policing was extended from criminal offences to administrative ones, based on the Dutch child benefit scandal that saw thousands of families wrongly accused of fraud due to a flawed algorithm.

AI Act: EU Parliament walking fine line on banned practices

Members of the European Parliament closed several critical parts of the AI regulation at a political meeting on Thursday (13 April), but the prohibited uses of AI could potentially divide the house.

High-risk classification

The initial proposal automatically classified AI solutions falling under the critical areas and use cases listed in Annex III as high-risk, meaning providers would have to comply with a stricter regime including requirements on risk management, transparency and data governance.

MEPs introduced an extra layer, meaning that an AI model falling under Annex III’s categories would only be deemed high-risk if it posed a significant risk of harm to health, safety or fundamental rights.

Significant risk is defined as “a risk that is significant as a result of the combination of its severity, intensity, probability of occurrence, and duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons”.

At the request of the Greens, AI used to manage critical infrastructure like energy grids or water management systems would also be categorised as high-risk if it entails a severe environmental risk.

Moreover, centre-left lawmakers obtained the provision that the recommender systems of very large online platforms, as defined under the Digital Services Act (DSA), will be deemed high-risk.

AI Act: All the open political questions in the European Parliament

The European Parliament’s rapporteurs on the AI Act circulated on Monday (13 February) an agenda for a key political meeting which includes new compromises on AI definition, scope, prohibited practices, and high-risk categories.

Detecting biases

MEPs included extra safeguards for the process whereby the providers of high-risk AI models can process sensitive data such as sexual orientation or religious beliefs to detect negative biases.

In particular, such special categories of data may only be processed if the bias cannot be detected by processing synthetic, anonymised, pseudonymised or encrypted data.

Moreover, the assessment must happen in a controlled environment. The sensitive data cannot be transmitted to other parties and must be deleted following the bias assessment. The providers must also document why the data processing took place.

General principles

Ahead of Wednesday’s meeting, the office of the co-rapporteurs circulated a proposal for general principles that would apply to all AI models. The new article is not meant to create new obligations but will have to be incorporated into technical standards and guidance documents.

These principles include human agency and oversight, technical robustness and safety, privacy and data governance, transparency, social and environmental well-being, diversity, non-discrimination and fairness.

Sustainability of high-risk AI

High-risk AI systems will have to keep records of their environmental footprint, and foundation models will have to comply with European environmental standards.

More on the same topic...

EU Commission pitches rules for patents ‘essential’ for standards