LONDON (AP) — Authorities around the world are racing to rein in artificial intelligence, including in the European Union, where groundbreaking legislation is set to clear a major hurdle on Wednesday.
Lawmakers in the European Parliament are set to vote on the proposal – including controversial amendments on the use of facial recognition – as it moves toward passage.
Brussels’ years-long effort to draw up rules for AI is becoming more urgent as rapid developments in chatbots such as ChatGPT show the benefits the emerging technology can bring – and the new risks it poses.
Here’s a look at the EU’s artificial intelligence law:
How do the rules work?
The measure, first proposed in 2021, would govern any product or service that uses an AI system. The law will classify AI systems according to four levels of risk, from minimal to unacceptable.
Riskier applications, such as those used in hiring or technology targeted at children, will face stricter requirements, including being more transparent and using accurate data.
Violations will result in fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which in the case of tech companies like Google and Microsoft could run into the billions.
The 27 EU member states will have to apply the rules.
What are the risks?
One of the EU’s main goals is to guard against AI threats to health and safety and to protect fundamental rights and values.
This means that some uses of AI are considered strictly off-limits, such as “social scoring” systems that judge people based on their behaviour.
Also prohibited is the use of AI that exploits vulnerable people, including children, or uses subliminal manipulation that could lead to harm – for example, an interactive talking toy that encourages dangerous behaviour.
Predictive policing tools, which crunch data to forecast who will commit crimes, are also banned.
Lawmakers reinforced the original proposal from the European Commission, the European Union’s executive branch, by expanding bans on remote facial recognition and biometric identification in public places. The technology scans passers-by and uses artificial intelligence to match their faces or other physical features to a database.
But that ban faces a last-minute challenge after a center-right party added an amendment allowing exceptions for law enforcement, such as finding missing children, identifying suspects involved in serious crimes or preventing terrorist threats.
“We don’t want mass surveillance, we don’t want social scoring, we don’t want predictive policing in the EU, full stop. That’s what China is doing, not us,” Dragos Tudorache, the Romanian lawmaker who co-leads the European Parliament’s work on the AI law, said on Tuesday.
AI systems used in categories such as employment and education, which would affect the course of a person’s life, face tough requirements such as being transparent with users and taking steps to assess and reduce the risk of bias from algorithms.
The EU says most AI systems, such as video games or spam filters, fall into the low- or no-risk category.
What about ChatGPT?
The original measure barely mentioned chatbots, mainly requiring that they be labelled so users know they are interacting with a machine. Negotiators later added provisions to cover general-purpose AI such as ChatGPT after it exploded in popularity, subjecting that technology to some of the same requirements as high-risk systems.
A key addition is the requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video, and music that resemble human work.
This would let content creators see whether their blog posts, digital books, scholarly articles, or songs have been used to train the algorithms behind systems like ChatGPT. They could then decide whether their work has been copied and seek redress.
Why are EU rules so important?
The European Union is not a big player in the development of cutting-edge AI. That role belongs to the United States and China. But Brussels often sets the trend with regulations that tend to become de facto global standards.
Experts say the sheer size of the EU’s single market, with 450 million consumers, makes it easier for companies to comply than to develop different products for different regions.
But it is not just a crackdown. By establishing common rules for artificial intelligence, Brussels is also trying to develop the market by instilling trust among users.
“Other countries may want to modify and copy” the EU rules, said Kris Shrishak, a technologist and senior fellow with the Irish Council for Civil Liberties.
Others are playing catch-up. Britain, which left the European Union in 2020, is vying for a leadership position in artificial intelligence. Prime Minister Rishi Sunak plans to host a global summit on AI safety this fall.
“I want to make the UK not only the intellectual home, but the geographic home of global AI safety regulation,” Sunak said at a tech conference this week.
He said the UK summit would bring together people from “academia, business and governments from all over the world” to work on a “multilateral framework”.
What’s next?
It may be years before the rules take full effect. The vote will be followed by three-way negotiations involving the member states, the Parliament and the European Commission, and the measure may face further changes as negotiators try to agree on the wording.
Final approval is expected by the end of this year, followed by a grace period for companies and organizations to adjust, often around two years.
To fill the gap before the legislation goes into effect, Europe and the United States are working on a voluntary code of conduct that officials promised at the end of May would be drafted in a matter of weeks and could be expanded to “like-minded countries”.