
MEPs advance on AI conformity assessment for high-risk uses


The EU lawmakers leading the work on the AI Act have circulated revised compromise amendments on how providers of AI systems that could pose significant risks will have to comply with the regulation’s requirements.

The AI Act is a flagship legislative proposal to regulate Artificial Intelligence (AI) based on its potential to cause harm. A fundamental part of the draft law is the category of high-risk applications, which will have to comply with stricter requirements.

How the providers of these systems will have to comply with the rules set out in the AI Act was the focus of a technical meeting that took place on Monday (6 February). Ahead of the meeting, the co-rapporteurs Brando Benifei and Dragoș Tudorache shared compromise documents, obtained by EURACTIV, with the other lawmakers.

Conformity assessment procedure

The AI Act allows providers of high-risk systems to demonstrate their compliance via two alternative procedures: a third-party assessment of their quality management system and technical documentation, or an assessment based on internal control.

Under the lawmakers’ compromise, AI developers will only be able to use internal control if they apply harmonised standards in full. If the standards do not exist, are limited in scope, or the AI developers decide not to use them, then an external audit is required.

AI developers may also request a third-party assessment if they consider it necessary, regardless of the system’s level of risk.

SME representatives like the European DIGITAL SME Alliance have warned that these auditing firms will be incentivised to inflate the compliance costs, which would put smaller actors at a disadvantage.

To mitigate these concerns, Members of the European Parliament (MEPs) have introduced an article requiring third-party bodies to consider small AI providers’ specific interests and needs when calculating their fees, “reducing those fees proportionately to their size and market share.”

The European Commission will be able to amend the provisions of the conformity assessment procedures. According to the compromise, the EU executive will be able to do so following a consultation with the AI Office and the affected stakeholders and only after providing ‘substantial evidence’.

The concept of substantial evidence is not defined.


Conformity assessment derogation

Initially, the co-rapporteurs proposed deleting the entire article that allowed a market surveillance authority to derogate from the conformity assessment procedure and put a high-risk AI system into service within its national territory for exceptional reasons, such as the protection of people.

The whole article has been reinstated, but now the market surveillance authority would need authorisation from a judicial authority for the derogation. Both the requests and authorisations would have to be notified to the other member states and the Commission.

Importantly, public security has been removed from the list of exceptional needs for which a derogation might be requested.

The technical and political discussions are currently ongoing in the European Parliament, with a view to reaching a common position in the coming weeks.
