June 20, 2024, 1:09

The Daily

Read the World Today

EU lawmakers set to settle on OECD definition for Artificial Intelligence

The European Parliament agreed to close a critical and contentious point of the AI Act by adopting the definition used by the Organisation for Economic Co-operation and Development (OECD). Most other definitions have also been agreed upon, with new measures like a right to explanation also on EU lawmakers' table.

Last Friday (3 March), representatives of the European Parliament’s political groups working on the AI Act reached a political agreement on one of the most politically sensitive parts of the file, the very definition of Artificial Intelligence, according to two European Parliament officials.

The AI Act is a flagship legislative proposal to regulate this emerging technology based on its capacity to cause harm. What is defined as Artificial Intelligence will be highly consequential as it will also define the scope of the EU’s AI rulebook.

“‘Artificial intelligence system’ (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments,” reads the text, seen by EURACTIV, which was discussed on Friday.

AI Act: All the open political questions in the European Parliament

The European Parliament’s rapporteurs on the AI Act circulated on Monday (13 February) an agenda for a key political meeting which includes new compromises on AI definition, scope, prohibited practices, and high-risk categories.

AI definition

According to one EU official present at the meeting, the agreement was to remove the notion of a ‘machine-based’ system from the wording. A revised text is now expected from the office of the co-rapporteurs, Brando Benifei and Dragoș Tudorache.

The definition largely overlaps with that of the OECD, an international organisation often regarded as a club for rich countries.

Similarly, an addition to the text’s preamble calls for the AI definition to be “closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonisation and wide acceptance.”

International recognition is the line pushed by the conservative European People's Party, which also wished to narrow the definition to systems based on machine learning, whilst centre-left lawmakers asked for a broader approach targeting automated decision-making.

In addition, wording has been included stating that the reference to predictions includes content, a measure intended to ensure that generative AI models like ChatGPT do not fall through the cracks of the regulation.

“But then the next question is, of course, whether they fall under high risk and chapter 2 [related obligations] or can count on a big GPAI [General Purpose Artificial Intelligence] exemption,” said a parliamentary official, highlighting a divisive question MEPs have barely started to address.

The accompanying text also specifies that AI should be distinguished from simpler software systems or programming approaches, and that the set objectives could differ from the intended purpose of the AI system in a specific context.

Notably, the compromise states that where an AI model is integrated into a larger system that depends entirely on the AI component, the whole system is considered part of a single AI solution.

Artificial Intelligence definition, governance on MEPs’ menu

The definition of Artificial Intelligence and how the new EU’s rulebook for this emerging technology will be implemented will be the focus of a political meeting on Wednesday (9 November).

Other definitions

On Monday (6 March), an agreement was reached at a technical meeting on most other definitions of the AI regulation. In this case, the most significant addition to the compromise amendments seen by EURACTIV relates to the definitions of significant risk, biometric authentication and identification.

“‘Significant risk’ means a risk that is significant in terms of its severity, intensity, probability of occurrence, duration of its effects, and its ability to affect an individual, a plurality of persons or to affect a particular group of persons,” the document specifies.

Remote biometric verification systems were defined as AI systems used to verify the identity of persons by comparing their biometric data against a reference database with their prior consent. That is distinguished from an authentication system, where the persons themselves ask to be authenticated.

On biometric categorisation, a practice recently added to the list of prohibited use cases, a reference was added to inferring personal characteristics and attributes like gender or health.

A reference was introduced to the Law Enforcement Directive for profiling by law enforcement authorities.

AI Act: EU Parliament’s crunch time on high-risk categorisation, prohibited practices

The European Parliament’s co-rapporteurs proposed compromise amendments to the list of high-risk AI applications, banned uses and concept definitions.

EU lawmakers Brando Benifei and Dragoș Tudorache are striving to close the negotiations on the Artificial Intelligence Act in the coming days. …

Stand-alone articles

On Thursday, another technical meeting is set to discuss so-called stand-alone articles, provisions that are not necessarily linked to the rest of the draft law.

A new article was introduced establishing a right to an explanation of individual decision-making, which applies where an AI system informs a decision that produces legal or similarly significant effects on someone.

A meaningful explanation should include the role played by the AI solution in the decision-making, the logic and main parameters used, and the input data. To make this provision effective, law enforcement and judicial authorities are prevented from using AI systems that are covered by proprietary rights.

Other new measures include accessibility requirements for AI providers and users, the right not to be subject to non-compliant AI models, and an obligation for high-risk AI systems to be designed and developed so as to minimise environmental impact.

Articles on general principles applicable to all AI systems and on AI literacy were largely maintained as initially proposed at a political meeting in mid-February, where there was no time to discuss them.