
The Daily

Read the World Today

The long and winding road to implement the AI Act


Yesterday (Wednesday, 13 March), the European Parliament passed its first comprehensive regulation on artificial intelligence (AI), but major questions remain over how the law will be implemented.

“There’s clarity around what we might have today, but there’s a lack of clarity as to what might come” by way of guidance, codes of conduct and delegated acts, said Kirsten Rulf, partner and associate director at Boston Consulting Group.

Globally, the law is the first set of detailed rules for the development and deployment of AI technology, which has been progressing at breakneck speed.

Illustrating this rapid development at the perfect moment, US startup Cognition AI launched an automated tool that can take on entire software development tasks, something previously unseen, on the eve of the vote on the AI Act in Parliament.

Given this speed of development, the legislation is something of a “liquid document,” said Rulf, who co-negotiated the AI Act in her previous role as policy advisor for the German Chancellery.

The act establishes a risk-based approach to how different uses of AI are regulated, with some banned outright and others subject to more stringent rules.

Key aspects of the legislation, including technical implementation standards and guidelines, are yet to be agreed. The law will also undergo reviews in the coming months and years to determine whether its provisions remain relevant.

Within six months, certain uses of AI technology, including some uses of biometric identification, will be prohibited, though the list of banned practices will itself be reviewed after that period. Other categories, including those labelled high-risk, will also be reviewed.

Human rights groups have raised concerns that the law does not go far enough in protecting individuals, particularly regarding the use of biometrics and of AI in an immigration context, such as identity checks.

Companies face fines of up to €35 million or 7% of annual turnover for non-compliance.

Full implementation is set for 2026, but AI systems already on the market have longer compliance deadlines, with some granted until 2030.

To a large extent, companies are expected to self-evaluate whether their systems are high-risk, but civil society groups fear this could lead to undue exemptions.

Standards setting

“The meat of the regulation” will be in how standards that translate the legal text into technical specifications are set and harmonised, Sandra Wachter, Professor of Technology and Regulation at the Oxford Internet Institute, told Euractiv.

Much of that work will be done by existing standard-setting bodies, which are mostly composed of industry representatives, she said. This is “unfortunate” because voices from civil society will be largely absent, she added.

When drafting the Act, lawmakers had to choose between having standards bodies do this work or leaving it to the Commission, said Dragoş Tudorache, co-rapporteur for the file and Romanian Renew MEP.

They chose the former option because they thought it essential for the standards “to take into account the reality on the ground,” which the people developing AI are more in touch with than the policymaking body, said the MEP.

Tudorache recognises the risk of industry influencing these standards but said that this is “not necessarily a bad thing. On the contrary, you need standards that are workable and respond to the reality.”

He said the Commission, particularly the newly set-up AI Office, can write complementary standards and guidelines.

The new AI Office and standards bodies have already started this work, said Tudorache. Codes are expected in the coming months.

Regulatory spaghetti bowl

The Commission needs to determine how the AI Act intersects with the numerous regulations on digital products: the General Data Protection Regulation (GDPR), the Digital Services Act and the Digital Markets Act. It also overlaps with copyright, product safety and other existing laws.

“It’s like a regulatory spaghetti bowl and a lot to digest – the next Commission will have to focus on untangling it,” said Cecilia Bonefeld-Dahl, Director-General of industry group DigitalEurope.

Many of the deployers of AI systems already have responsibilities as data controllers and processors under the GDPR, Gabriela Zanfir-Fortuna, VP for global privacy at the think tank Future of Privacy Forum, told Euractiv.

“Ensuring consistency in how the rules will be applied is going to be crucial for the law to actually be effective,” she said.

Some data protection supervisors in member states may soon find the implementation of the AI Act in their inbox. Each EU country will separately determine which national authority is in charge of implementing the law, and data protection authorities are among the candidates.

What is more, the language of the Act needs to be nailed down.

In a LinkedIn post, Kai Zenner, assistant to German MEP Axel Voss, wrote that the “Legal Services of all three EU Institutions” found “the quality of the legal drafting” below EU standards.

“It’s going to be massively challenged in court on several levels over the next couple of years because weak drafting makes it easy for a well-paid lawyer” to do so, said one consultant for tech companies who spoke to Euractiv on condition of anonymity.

Read more with Euractiv

US House passes bill to force ByteDance to divest TikTok or face ban

The US House of Representatives overwhelmingly passed a bill on Wednesday (13 March) that would give TikTok’s Chinese owner ByteDance about six months to divest the US assets of the short-video app, or face a ban, in the greatest threat to the app since the Trump administration.