
‘Hallucination problems’ plague A.I. chatbots, Google CEO says


Google’s new chatbot, Bard, is part of a wave of generative artificial intelligence (AI) tools that can quickly produce anything from an essay about William Shakespeare to rap lyrics in the style of DMX. But Bard and its chatbot peers still have at least one serious problem: they sometimes make things up.

The latest evidence of this unwanted tendency came during CBS’ 60 Minutes on Sunday. The Inflation Wars: A Recent History by Peter Temin “presents a history of inflation in the United States” and discusses the policies used to control it, Bard declared during the report. The problem is that the book does not exist.

It’s a convincing lie from Bard, because it sounds as if it could be true. Temin is an accomplished MIT economist who studies inflation and has written more than a dozen books on economics, but he never wrote one called The Inflation Wars: A Recent History. Bard “hallucinated” the title, along with the names and summaries of a whole list of other economics books, in response to a question about inflation.

This isn’t the first high-profile chatbot mistake, either. When Bard launched in March to take on OpenAI’s rival ChatGPT, it claimed in a public demonstration that the James Webb Space Telescope was the first to take an image of an exoplanet, in 2005; in fact, the aptly named Very Large Telescope in Chile had already done the job a year earlier.

Chatbots like Bard and ChatGPT are built on large language models, or LLMs, which draw on billions of data points to predict the next word in a string of text. This so-called generative AI approach tends to produce hallucinations, in which a model generates text that looks believable but is not factual. But with all the work being done on LLMs, are these types of hallucinations still common?
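For illustration, here is a minimal Python sketch of the next-word sampling step that underlies this approach. The candidate words, scores, and temperature value are invented for the example; the point is that the model ranks continuations by statistical likelihood, not by truth, which is why fluent but false text can emerge.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_word(candidates, logits, temperature=1.0):
    """Pick the next word by sampling from the model's distribution.

    Higher temperatures flatten the distribution, making less likely
    (and potentially hallucinated) continuations more probable.
    """
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    return random.choices(candidates, weights=probs, k=1)[0]

# Hypothetical scores a model might assign to continuations of
# "Peter Temin wrote ..." -- the model has no notion of truth,
# only of which word is statistically likely to come next.
candidates = ["The", "a", "extensively", "nothing"]
logits = [2.1, 1.3, 0.4, -1.0]
print(sample_next_word(candidates, logits))
```

Because every word is chosen this way, a plausible-sounding book title can be assembled one likely word at a time without any underlying fact to back it up.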

“Yes,” Google CEO Sundar Pichai admitted in his 60 Minutes interview on Sunday, adding that they are “expected.” “No one in the field has solved hallucination problems yet. All models have this as an issue.”

When asked if the issue of hallucinations will be resolved in the future, Pichai noted “it’s a matter of intense debate,” but said he believes his team will eventually “make progress.”

This progress can be difficult to achieve, some AI experts have noted, due to the complex nature of AI systems. There are still parts of AI technology that its engineers “don’t fully understand,” Pichai said.

“There is an aspect of this that we — all of us in the field — call the ‘black box,’” he said. “And you can’t quite tell why it said this, or why it got it wrong.”

Pichai said his engineers “have some ideas” about how their chatbots work, and that their ability to understand the models is improving. “But this is where the state of the art is,” he noted. That answer may not be good enough for critics who warn of the potential unintended consequences of complex AI systems.

For example, Microsoft co-founder Bill Gates argued in March that the development of AI technology could exacerbate wealth inequality globally. “Market forces will not naturally produce AI products and services that help the poorest,” the billionaire wrote in a blog post. “The opposite is more likely.”

Elon Musk has been sounding the alarm about the dangers of AI for months, arguing that the technology will hit the economy “like an asteroid.” The Tesla and Twitter CEO was part of a group of more than 1,100 AI executives, technologists and researchers who last month called for a six-month pause on developing new AI tools, even as he has reportedly been busy building a competing AI startup of his own behind the scenes.

AI systems can also accelerate the flow of disinformation through deepfakes (deceptive images of events or people created by AI) and can even harm the environment, according to researchers surveyed for the annual report on the technology released last week by Stanford University’s Institute for Human-Centered Artificial Intelligence, which warned the threat could amount to a “potential nuclear catastrophe.”

On Sunday, Google’s Pichai revealed that he shares some of the researchers’ concerns, arguing that AI “can be extremely harmful” if deployed incorrectly. “We don’t have all the answers out there yet — and technology moves quickly. So does this keep me up at night? Absolutely,” he said.

Pichai added that the development of AI systems should involve “not just engineers, but social scientists, ethicists, philosophers, etc.” to ensure that the results benefit everyone.

“I think those are all things that the community needs to figure out as we move forward. It’s not for the company to decide,” he said.
