
AI should be regulated like medicine and nuclear power: UK minister


Developers working in AI should be licensed and regulated in a similar way to the pharmaceutical, medical and nuclear industries, according to a representative of Britain’s opposition party.

Lucy Powell, a politician and digital spokesperson for the UK Labour Party, told The Guardian on June 5 that companies such as OpenAI or Google that create AI models should have to “get a license in order to build them,” adding:

“My real point of concern is the lack of any regulation of the large language models that can then be applied across a suite of AI tools, whether that is controlling how they are built, how they are managed, or how they are controlled.”

Powell argued that regulating the development of certain technologies is a better option than banning them outright, as the European Union has done with facial recognition tools.

She added that AI “could have a lot of unintended consequences,” but if developers were required to be open about their training models and datasets, the government could mitigate some of the risks.

“This technology is moving so quickly that it needs an active and interventionist government approach, rather than a laissez-faire policy,” she said.

Powell also believes that such advanced technology could have a significant impact on the UK economy, and Labour is reportedly finalizing its own policies on artificial intelligence and related technologies.

Next week, Labour leader Keir Starmer plans to hold a meeting with the party’s shadow cabinet at Google’s UK offices so he can speak with executives focused on artificial intelligence.

Related: EU officials want to label all content generated by artificial intelligence

Meanwhile, Matt Clifford, head of the Advanced Research and Invention Agency — the government research agency founded last February — appeared on TalkTV on June 5 to warn that AI could threaten humans in less than two years.

“If we don’t start thinking now about how to regulate and think about safety, in a couple of years we will find that we have very powerful systems in place already,” he said. Clifford noted, however, that the two-year timeline is at the “upper end of the spectrum.”

Clifford highlighted that today’s AI tools can already be used to help “launch cyberattacks on a massive scale.” OpenAI has committed $1 million to support AI-driven cybersecurity technology aimed at thwarting such uses.

“I think there are a lot of different scenarios to worry about,” he said. “I certainly think it’s right that it should be at the top of the agenda for policymakers.”
