April 25, 2024, 5:12

The Daily

Read the World Today

Trustworthy AI made in the EU: how common values can be a competitive edge


As the ethics of AI seem to have become an afterthought, superseded by time pressure and concerns about anti-competitive effects, it falls to the European Parliament to put the trustworthiness of all AI systems developed in the EU back at the centre of the AI Act negotiations.

Ever since the European Parliament began the conversation around AI in 2016, an underlying narrative has run across all EU institutions: regulating this key technology means taking a distinctively European approach, that is, a balanced regulatory intervention that effectively protects fundamental rights and promotes citizens’ trust in the uptake of AI, while creating legal certainty for businesses and giving them enough leeway to innovate.

Fast forward to April 2023 and the ongoing negotiations of the AI Act, the piece of legislation that will shape AI innovation in the EU for years to come. Despite the common goal of establishing a European approach to AI, the concrete way to materialise it remains a point of dispute.

Neither the European Commission’s proposal nor the Council’s general approach offered a clear concept in this regard: the ‘Trustworthy AI’ reference in both texts remains rather vague about how exactly this goal will be achieved. From the beginning, the European Parliament has been the driving force, pushing for the inclusion of a central provision in the AI Act that outlines a set of common principles to be respected by all AI systems in the EU, irrespective of which risk category they fall into.

While it is understandable that not all AI systems should be subject to the same risk-based requirements, since applying them uniformly would inevitably prove counter-productive to the goal of boosting EU competitiveness, it should not be forgotten that AI systems falling outside a given risk categorisation are not necessarily devoid of risks. And while it is true that such systems are already covered by other data and consumer protection obligations within the EU legal acquis, these still do not address all the relevant issues at stake.

So what has the Legal Affairs Committee (JURI) proposed? Similarly to what has been done in other EU legislation, most notably Article 5 of the General Data Protection Regulation (GDPR), six common principles would set the scene for how all AI systems in the EU should be developed and deployed.

Stemming, inter alia, from the work of the High-Level Expert Group on Artificial Intelligence set up by the Commission in 2018 and from the European Parliament’s 2020 legislative own-initiative report on the ethics of AI, they would become the guiding reference point that captures the spirit of the AI Act and serves as a basis of interpretation for future jurisprudence.

Firstly, human agency and oversight: a reminder that human well-being and dignity should be at the forefront of technological development. Secondly, technical robustness and safety: a by-default approach to minimising harm and preventing misuse. Thirdly, respect for privacy and data governance: an underlining of applicable legislation whose proper application is highly dependent on governance choices. Fourthly, transparency: an essential requirement for trust, redress and the overall accountability of AI systems towards their subjects. Fifthly, diversity, non-discrimination and fairness: three imperatives in the fight against the dissemination of existing, or the creation of new, biases and socio-economic gaps.

Finally, social and environmental well-being: a goal common to the ongoing twin transition whose importance cannot be overstated, given the impact technology can have in achieving or undermining it.

These common principles are already translated by the AI Act into obligations for the providers and deployers of high-risk AI systems in Articles 16 and 29. For all other AI systems, a voluntary application based on harmonised standards, technical specifications and codes of conduct is proposed. Even the business sector has expressed sympathy for such an approach, and the European Standardisation Organisations have repeatedly made clear that it would help them better understand the direction in which policy-makers want AI standards to be drafted.

Including common principles based on our shared values would create an EU trademark of trustworthy AI, known worldwide for its high quality and safety, giving ‘AI made in the EU’ a much-needed competitive edge.

Ultimately, the balance struck in the opinion on the AI Act adopted by JURI, the committee with exclusive competences in matters such as transparency, human oversight and codes of conduct, should serve as an undisputed basis for compromise between the EU co-legislators.

Doing so does not amount to choosing to write poetry, as some might argue. It amounts, in reality, to putting forward an approach that privileges what many have said should be the EU’s added value in regulating AI.

As the house of European democracy, the European Parliament should prove it continues to be the co-legislator that puts the EU’s core values at the centre of AI development, as it knows that AI that is not trusted can never become competitive.