March 4, 2024, 7:01

The Daily

Read the World Today

Leading EU lawmakers propose obligations for General Purpose AI


The EU lawmakers spearheading the work on the AI Act pitched significant obligations for providers of general-purpose AI models like ChatGPT and Stable Diffusion while seeking to clarify responsibilities along the AI value chain.

The AI Act is a flagship piece of EU legislation to regulate Artificial Intelligence based on its capacity to cause harm. A big question mark in the negotiations on the legislative proposal is how to deal with General Purpose AI (GPAI): models, such as large language models, that can be adapted to perform various tasks.

The offices of the European Parliament’s co-rapporteurs, Dragoș Tudorache and Brando Benifei, shared a first draft on this sensitive topic on Tuesday (14 March), proposing obligations for the providers of this type of AI model and responsibilities for the different economic actors involved.

The draft defines General Purpose AI as an “AI system that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of tasks”.

The document, seen by EURACTIV, specifies that AI systems developed for a limited set of applications that cannot be adapted to a wide range of tasks, such as components, modules, or simple multi-purpose AI systems, should not be considered General Purpose AI systems.

Obligations for GPAI providers

The co-rapporteurs want GPAI providers to comply with some of the requirements initially meant for the AI solutions most likely to cause significant harm, regardless of the distribution channel and whether the system is provided as a standalone product or embedded in a larger system.

The design, testing, and analysis of GPAI solutions would have to be aligned with the risk management requirements of the regulation to protect people’s safety, fundamental rights, and EU values, including by documenting non-mitigable risks.

The datasets feeding these models should follow appropriate data governance measures, such as assessing their relevance, suitability, and potential biases, and identifying possible shortcomings and related mitigation measures.

Moreover, throughout their lifecycle, ChatGPT and the like would have to undergo external audits testing their performance, predictability, interpretability, corrigibility, safety, and cybersecurity in line with the AI Act’s strictest requirements.

In this regard, the lawmakers introduced a new article mandating that European authorities and the AI Office jointly develop, together with international partners, cost-effective guidance and capabilities to measure and benchmark the compliance aspects of AI systems, particularly GPAI.

AI models that generate text based on human prompts that could be mistaken for authentic human-made content should be subject to the same data governance and transparency obligations as high-risk systems unless someone is legally responsible for the text.

In addition, the providers of a GPAI model would have to register it in the EU database. Similarly, they would have to comply with the same quality management and technical documentation requirements as high-risk AI providers and follow the same conformity assessment procedure.

Responsibilities along the value chain

EU lawmakers have reworked a paragraph from the text’s preamble on the responsibilities of economic actors along the AI value chain and included it in the binding part of the text.

In particular, the compromise states that any third party, such as a distributor, importer or deployer of an AI system, would be considered a provider of a high-risk system (with the related obligations) if it substantially modifies an AI system, including a General Purpose AI one.

In these cases, the provider of the original GPAI system would have to assist the new provider, notably by providing the necessary technical documentation, relevant capabilities, and technical access to comply with the AI regulation without compromising commercially sensitive information.

In this regard, a new annexe was introduced listing examples of the information that GPAI providers should supply to downstream operators concerning specific obligations of the AI Act, such as risk management, data governance, transparency, human oversight, quality management, accuracy, robustness, and cybersecurity.

For instance, to help the downstream economic operator comply with the risk management requirements of the AI rulebook, the annexe asks the GPAI provider to share information on the system’s capabilities and limitations, instructions for use, performance testing results, and risk mitigation measures.

Unfair contractual terms

The leading lawmakers also proposed introducing a new article preventing all providers from unilaterally imposing unfair contractual terms on SMEs for the use or integration of tools in a high-risk system. Otherwise, such contracts would be deemed null and void.

“A contractual term is unfair if it is of such a nature that its use grossly deviates from good commercial practice in the supply of tools, services, components or processes that are used or integrated in a high-risk AI system, contrary to good faith and fair dealing,” the text reads.

Another practice considered unfair is a contractual term that shifts the penalties or litigation costs resulting from a breach of the regulation.

AI Office

The list of tasks of the AI Office has been extended to include issuing guidance on how the AI regulation would apply to fast-changing AI value chains and the related implications for accountability.

Furthermore, the EU body would be asked to monitor GPAI models, how they are used, and best practices in self-governance. The AI Office would also have to organise exchanges between GPAI providers and national authorities, experts, and auditors.