AI Act: EU Parliament’s crunch time on high-risk categorisation, prohibited practices


The European Parliament’s co-rapporteurs proposed compromise amendments to the list of high-risk AI applications, banned uses and concept definitions.

EU lawmakers Brando Benifei and Dragoș Tudorache are striving to close the negotiations on the Artificial Intelligence Act in the coming days. The Act is the world’s first attempt to regulate AI based on its potential to cause harm.

Among the pending issues the two lawmakers are trying to close are the list of AI uses that pose significant risks, the prohibited practices and the definitions of the key concepts used in the draft law, according to documents obtained by EURACTIV.

High-risk areas

The AI Act’s Annex III lists critical areas with specific use cases.

In a compromise version of Annex III circulated on Monday (6 February), the co-rapporteurs extended the notion of biometric identification and categorisation to biometric-based systems like Lensa, an app that can generate avatars based on a person’s face.

As the co-rapporteurs want live biometric identification in publicly accessible spaces to be banned altogether, the high-risk use case has been limited to ex-post identification. For privately accessible spaces, both live and ex-post identification have been added to the list.

Moreover, the use cases include remote biometric categorisation in publicly accessible spaces and emotion recognition systems.

Regarding critical infrastructure, any safety component for road, rail and air traffic has been included.

However, systems meant to ensure the safety of water supply, gas, heating, energy and electricity would only fall into this category if the system’s failure is highly likely to lead to an imminent threat to that supply.

In the educational area, the wording has been amended to include systems that allocate personalised learning tasks based on the students’ personal data.

In the field of employment, the high-risk category was expanded to include algorithms that make or assist decisions related to the initiation, establishment, implementation or termination of an employment relationship, notably for allocating personalised tasks or monitoring compliance with workplace rules.

Regarding access to public services, the text now specifies the type of public assistance the provision refers to, namely housing, electricity, heating and cooling, and internet. The exemption for systems designed by small providers to assess people’s creditworthiness has been removed.

AI models intended to assess eligibility for health and life insurance, and those used to classify emergency calls, for instance, for law enforcement or emergency healthcare patient triage, are also covered.

A new risk area was added for systems meant to be used by vulnerable groups, particularly AI systems that may seriously affect a child’s personal development. This vague wording might end up covering social media recommender systems.

Lawmakers expanded the wording in the areas of law enforcement, migration and border control management to prevent the high-risk classification from being circumvented through the use of a contractor.

Moreover, any AI application that could influence people’s voting decisions in local, national or European elections is considered high-risk, together with any system that supports democratic processes, such as counting votes.

A residual category was introduced to cover generative AI systems like ChatGPT and Stable Diffusion. Any AI-generated text that might be mistaken for human-generated content is considered high-risk unless it undergoes human review and a person or organisation is legally liable for it.

For AI-generated deep fakes, audio-visual content representing a person doing or saying something that never happened, the high-risk category applies unless the content is an obvious artistic work.

AI-powered subliminal techniques used for scientific research and therapeutic purposes are also considered high-risk.

Prohibited practices

Another important part of the AI rulebook concerns the types of applications that are banned outright.

According to another compromise seen by EURACTIV, AI models using subliminal techniques beyond a person’s consciousness are to be banned, except where their use is approved for therapeutic purposes and with the explicit consent of the individuals exposed to them.

Also prohibited are AI applications that are intentionally manipulative or designed to exploit a person’s vulnerabilities, such as their mental health or economic situation, to materially distort their behaviour in a way that can cause significant physical or psychological harm.

The co-rapporteurs are proposing to expand the ban on social scoring to cover not only individuals but also groups, based on inferred personal characteristics that could lead to preferential treatment.

The ban on AI-powered predictive policing models was maintained.

Definitions

The definition of AI is a sensitive topic as it determines the whole scope of the regulation. This part has been ‘parked’ for the moment, according to a comment in the margin of a third document.

The concept of risk was defined as “the combination of the probability of an occurrence of a hazard causing harm and the degree of severity of that harm.”

More definitions have been added concerning data, profiling, deep fakes, biometric identification and categorisation, subliminal techniques and sensitive data, bringing more clarity to these concepts and aligning them with the EU’s General Data Protection Regulation.