Developers working in AI should be licensed and regulated similarly to the pharmaceutical, medical and nuclear industries, according to a representative of Britain’s opposition political party.
Lucy Powell, a politician and digital spokesperson for the UK Labour Party, told The Guardian on June 5 that companies such as OpenAI or Google that create AI models should have to “get a license in order to build them,” adding:
“My real point of concern is the lack of any regulation of the large language models that can then be applied across a suite of AI tools, whether that is controlling how they are built, how they are managed, or how they are controlled.”
Powell argued that regulating the development of certain technologies is a better option than banning them along the lines of the European Union’s ban on facial recognition tools.
She added that AI “could have a lot of unintended consequences” but if developers had to open up about their AI training models and datasets, the government could mitigate some of the risks.
“This technology is moving so quickly that it needs an active and interventionist government approach, rather than a laissez-faire policy,” she said.
Before speaking at the techUK conference tomorrow, I spoke to The Guardian about Labour’s approach to digital technology and artificial intelligence. https://t.co/qzypKE5uJU
– Lucy Powell MP (@LucyMPowell) June 5, 2023
Powell also believes that such advanced technology could have a significant impact on the UK economy, and Labour is reportedly finalizing its own policies on artificial intelligence and related technologies.
Next week, Labour leader Keir Starmer plans to hold a meeting of the party’s shadow cabinet at Google’s UK offices so he can speak with executives focused on artificial intelligence.
Related: EU officials want to label all content generated by artificial intelligence
Meanwhile, Matt Clifford, head of the Advanced Research and Invention Agency — the government research agency founded last February — appeared on TalkTV on June 5 to warn that AI could threaten humans in as little as two years.
EXCLUSIVE: Matt Clifford, advisor to the Prime Minister’s AI Task Force, says the world may have only two years left to tame AI before computers become too powerful for humans to control.
– TalkTV (@TalkTV) June 5, 2023
“If we don’t start thinking now about how to regulate and think about safety, in a couple of years we will find that we have very powerful systems in place already,” he said. Clifford noted, however, that the two-year timeline is at “the upward end of the spectrum” of estimates.
Clifford highlighted that today’s AI tools can already be used to help “launch cyberattacks on a massive scale.” OpenAI has committed $1 million to support AI-assisted cybersecurity efforts to thwart such uses.
“I think there are a lot of different scenarios to worry about,” he said. “I certainly think it’s right that it should be at the top of the agenda for policymakers.”