June 23, 2024, 12:45

The Daily

Read the World Today

AI Act: European Parliament headed for key committee vote at end of April


EU lawmakers in the leading European Parliament committees are voting on the political agreement on the AI Act on 26 April, with many questions being settled but a few critical issues still open.

The AI Act is a landmark EU proposal to regulate Artificial Intelligence based on its potential to cause harm. The European Parliament is set to finalise its position on the file by May to quickly enter into negotiations with the EU Council and Commission in the so-called trilogues.

The discussions on the AI regulation have taken longer than expected due to political infighting in the Parliament, which resulted in the file being co-led by the Internal Market and Consumer Protection committee (IMCO) and the Civil Liberties committee (LIBE).

At last, a ‘package’ deal on the overall proposal is taking shape, although several significant points remain to be settled.

“It’s a delicate balance,” a European Parliament official told EURACTIV.

That balance might be called into question during the plenary vote in May when some alternative amendments might be tabled. However, some critical hurdles must be fully addressed before getting there.

General Purpose AI

Technological developments like the stellar rise of ChatGPT played a role in disrupting the AI Act’s discussions, as EU lawmakers scrambled to decide how to deal with a technology that is moving at breakneck speed and is not covered in the original proposal.

This is perhaps the most significant political issue still open. A potential compromise discussed last week was to put stricter obligations on foundation models like ChatGPT, notably with strict risk and quality management requirements and external audits.

However, the centre-right is pushing to better tailor these requirements to the specificity of this technology. Meanwhile, what some call ‘true’ General Purpose AI (GPAI), which can be adapted to multiple purposes, is headed for a somewhat lighter regime.

Those making substantial modifications to the models would become responsible for complying with the AI rulebook. Still, GPAI providers would have to support this compliance effort by providing non-commercially sensitive information.

Leading EU lawmakers propose obligations for General Purpose AI

The EU lawmakers spearheading the work on the AI Act pitched significant obligations for providers of large language models like ChatGPT and Stable Diffusion while seeking to clarify the responsibilities along the AI value chain.

AI definition

Defining Artificial Intelligence was a major political hurdle, as the definition determines the very scope of the legislation. In a significant concession to the centre-right European People’s Party (EPP), the definition was aligned with the OECD’s.

EU lawmakers set to settle on OECD definition for Artificial Intelligence

The European Parliament agreed to close a critical contentious point of the AI Act by adopting the definition used by the OECD. Most other definitions have also been agreed upon, with new measures like a right to explanation also on the table of EU lawmakers.

Prohibited practices

The AI Act bans some technological applications considered to pose an unacceptable risk, like subliminal techniques that exploit a person’s vulnerabilities, although an exception for therapeutic purposes was introduced in this case.

A hot topic was how to deal with biometric recognition systems. Progressive MEPs pushed for a total ban, whilst conservative lawmakers wanted to keep this technology available for exceptional circumstances like terrorist attacks and kidnapping.

At the request of left-to-centre lawmakers, the list of prohibited practices was significantly expanded to include biometric categorisation, predictive policing, and facial recognition databases based on indiscriminate scraping, as practised by the controversial company Clearview AI.

An open question is whether emotion recognition should also be banned with a medical carveout, a point the centre-left hopes to obtain as part of the package deal.

The ban on social scoring was expanded to include private companies.

AI Act: MEPs extend ban on social scoring, reduce AI Office role

The ban on social scoring has been extended to private companies, regulatory sandboxes could be used to demonstrate compliance, and the AI Office’s role has been downsized in a whopping new set of compromise amendments to the upcoming AI Act.

On …

High-risk categories

AI applications at significant risk of causing harm must comply with a strict regime. In the original proposal, all AI models that fall into specific application areas like law enforcement were automatically categorised as high-risk.

That automaticity has been removed: only AI models that pose a significant risk of harming people’s health, safety and fundamental rights now fall into the category. This change is strongly opposed by groups like the Greens and the Left, for whom some additional safeguards have been added.

AI Act: co-rapporteurs seek to close high-risk classification, sandboxes

The EU lawmakers spearheading the work on the AI Act have circulated new compromise amendments to finalise the classification of AI systems that pose significant risks and the measures to promote innovation.

At the same time, progressive MEPs obtained a broader list of high-risk applications, extending them to biometric categorisation and identification, deep fakes, and AI-generated text, except where there is editorial responsibility.

Use cases in the field of employment, education, migration control, and critical infrastructure have been expanded.

AI Act: EU Parliament’s crunch time on high-risk categorisation, prohibited practices

The European Parliament’s co-rapporteurs proposed compromise amendments to the list of high-risk AI applications, banned uses and concept definitions.

EU lawmakers Brando Benifei and Dragoș Tudorache are striving to close the negotiations on the Artificial Intelligence Act in the coming days. …

High-risk obligations

Significant amendments to the obligations for high-risk providers have been agreed upon at a political meeting on Wednesday. These include risk management and reporting obligations, transparency on the original purpose of the training datasets, stricter record-keeping, and transparency on the model’s energy consumption.

All users of high-risk systems will have to conduct an impact assessment considering the potential impact on the fundamental rights of the persons affected.

Moreover, public authorities and tech companies classified as gatekeepers under the Digital Markets Act that use high-risk AI will have to register in an EU-wide public database, alongside any other users who wish to register voluntarily.

AI Act: MEPs want fundamental rights assessments, obligations for high-risk users

The European Parliament’s co-rapporteurs circulated new compromise amendments to the Artificial Intelligence (AI) Act proposing how to carry out fundamental rights impact assessments and other obligations for users of high-risk systems.

The new compromise was circulated on Monday (9 January) to …

Enforcement and governance

The chapter on enforcement and governance is still open. The question is whether the AI Office, a new EU body, will have a purely coordinating role or some enforcement powers on cross-border cases, for which it is unclear where the resources will come from.

Leading MEPs push for European ‘office’ to enforce the EU’s AI rulebook

The lawmakers spearheading the work on the AI Act launched the idea of an AI Office to streamline enforcement and solve competency disputes on cross-border cases.