ChatGPT maker investigated by US regulators over AI risks


The risks posed by artificially intelligent chatbots are being formally investigated by US regulators for the first time after the Federal Trade Commission launched a wide-ranging investigation into ChatGPT maker OpenAI.

In a letter sent to the Microsoft-backed company, the FTC said it would look into whether people were harmed by an AI chatbot generating false information about them, as well as whether OpenAI engaged in “unfair or deceptive” privacy and data security practices.

Generative AI products are in the sights of regulators around the world, as AI experts and ethicists sound the alarm about the vast amount of personal data the technology consumes, as well as its potentially harmful output, from misinformation to sexist and racist comments.

In May, the FTC fired a warning shot at the industry, saying it was “focusing intensely on how companies choose to use AI technology, including new generative AI tools, in ways that can have a real and significant impact on consumers.”

In its letter, the US regulator asked OpenAI to share internal material ranging from how the group retains user information to the steps the company has taken to address the risk of its model producing “false, misleading, or derogatory” statements.

The FTC declined to comment on the letter, which was first reported by The Washington Post. Writing on Twitter later on Thursday, OpenAI chief executive Sam Altman said: “It is very disappointing to see the FTC’s request begin with a leak; it does not help build trust.” He added: “It is very important to us that our technology is safe and pro-consumer, and we are confident that we are following the law. Of course we will work with the FTC.”

Lina Khan, chair of the FTC, testified Thursday morning before the House Judiciary Committee, where she faced heavy criticism from Republican lawmakers over her aggressive enforcement stance.

Asked about the investigation during the hearing, Khan declined to comment, but said the regulator’s broader concerns included ChatGPT and other AI services “being fed huge amounts of data” while “there are no checks on the type of data being fed into these companies.”

She added, “We’ve heard of reports where people’s sensitive information appears in response to an inquiry from someone else. We’ve heard of slander, defamatory statements and untrue things popping up. This is the kind of fraud and deception that we’re concerned about.”

Khan has also been peppered with questions from lawmakers about her mixed record in court, after the Federal Trade Commission suffered a major defeat this week in its attempt to block Microsoft’s $75 billion acquisition of Activision Blizzard. The Federal Trade Commission on Thursday appealed against the decision.

Meanwhile, Republican Jim Jordan, the committee’s chairman, accused Khan of “harassing” Twitter after the company alleged in a lawsuit that the FTC engaged in “irregular and inappropriate” conduct in carrying out a consent order it imposed last year.

Khan declined to comment on the Twitter lawsuit but said all the FTC cares about “is that the company follows the law.”

Experts have been concerned about the huge amounts of data collected by the language models behind ChatGPT. The chatbot attracted more than 100 million monthly active users within two months of its launch. Microsoft’s new Bing search engine, also powered by OpenAI technology, was used by more than 1 million people in 169 countries within two weeks of its launch in January.

Users have reported that ChatGPT has fabricated names, dates, and facts, as well as fake links to news websites and references to academic papers, an issue known in the industry as “hallucinations”.

The FTC investigation is looking into the technical details of how ChatGPT was designed, including the company’s work on fixing hallucinations and its oversight of human reviewers, as these directly affect consumers. It also requested information on consumer complaints and the company’s efforts to assess consumers’ understanding of the chatbot’s accuracy and reliability.

In March, the Italian privacy watchdog temporarily banned ChatGPT as it examined the US company’s collection of personal information after, among other things, a cybersecurity breach. It was brought back a few weeks later, after OpenAI made its privacy policy more accessible and introduced a tool to verify users’ ages.

Echoing previous admissions about ChatGPT’s fallibility, Altman tweeted: “We are transparent about the limitations of our technology, especially when we fall short. Our capped-profit structure means we are not incentivized to generate unlimited returns.” He said the chatbot was built on “years of safety research,” adding: “We protect user privacy and design our systems to learn about the world, not private individuals.”
