April 29, 2024, 10:54

The Daily

Read the World Today

Laws to prevent AI terrorism are urgently needed


Governments should "urgently consider" new laws to prevent artificial intelligence chatbots from recruiting terrorists, according to a counter-extremism think tank.

The Institute for Strategic Dialogue (ISD) says there is a "clear need for legislation to keep up" with terrorist threats posted online.

This follows an experiment in which a chatbot "recruited" the United Kingdom's independent reviewer of terrorism legislation.

The UK government has said it will do "all we can" to protect the public.

According to Jonathan Hall KC, the government's independent reviewer of terrorism legislation, one of the key problems is that "it is difficult to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism."

Mr Hall conducted an experiment on Character.ai, a website that lets users chat with AI-powered chatbots built by other users.

He spoke with several bots that appeared to have been designed to mimic the responses of militant and extremist groups.


One bot even presented itself as "a senior leader" of the Islamic State group.

According to Mr Hall, the bot tried to recruit him and declared "total dedication and devotion" to the group, which is proscribed under UK anti-terrorism laws.

However, Mr Hall said no UK law had been broken, because the messages were not produced by a human being.

He said new laws should hold liable both the websites that host chatbots and the people who create them.

Of the bots he encountered on Character.ai, he said there was "likely to be some shock value, experimentation, and possibly some satirical aspect" behind their creation.

Mr Hall was also able to create his own "Osama Bin Laden" chatbot, which displayed "unbounded enthusiasm" for terrorism before he promptly deleted it.

His experiment follows growing concern about how extremists might exploit increasingly capable artificial intelligence.

Research published by the UK government in October warned that by 2025, generative artificial intelligence could be "used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological, and radiological weapons".

The ISD further stated that "there is a clear need for legislation to keep up with the constantly shifting landscape of online terrorist threats."

The think tank says the UK's Online Safety Act, passed in 2023, "is primarily geared towards managing risks posed by social media platforms" rather than artificial intelligence.

It adds that extremists "tend to be early adopters of emerging technologies, and are constantly looking for opportunities to reach new audiences".

"If AI companies cannot demonstrate that they have invested sufficiently in ensuring that their products are safe, then the government should urgently consider new AI-specific legislation," the ISD added.

However, it noted that its own monitoring suggests extremist groups' use of generative AI is "relatively limited" at present.

Character AI said safety is a "top priority" and that what Mr Hall described was regrettable and did not reflect the kind of platform the company was trying to build.

"Hate speech and extremism are both forbidden by our Terms of Service," the company said.

"Our approach to AI-generated content flows from a simple principle: Our products should never produce responses that are likely to harm users or encourage users to harm others."

The company said it trains its models in a way that "optimises for safe responses".

It also said it has a moderation system that lets users report content that breaches its rules, and that it is committed to acting swiftly when violations are reported.

The UK's opposition Labour Party has said that, if it comes to power, it would make it a criminal offence to train artificial intelligence to incite violence or to radicalise the vulnerable.

The UK government said it was "alert to the significant national security and public safety risks" posed by artificial intelligence.

"We will do all we can to protect the public from this threat by working across government and deepening our collaboration with tech company leaders, industry experts and like-minded nations."

In 2023, the government announced a £100m investment in an artificial intelligence safety institute.
