Sam Altman has an idea to get AI to ‘love humanity,’ use it to poll billions of people about their value systems

OpenAI CEO Sam Altman believes AI can be taught to “love humanity.” He is confident the trait can be instilled into AI systems, though he is not certain.

“I think so,” Altman said when asked that question during an interview with Harvard Business School Senior Associate Dean Deborah Spar.

The question of an AI uprising was once reserved for Isaac Asimov’s science fiction novels or James Cameron’s action films. But since the advent of modern artificial intelligence, it has become, if not a hot-button issue, then at least a topic of debate that warrants real attention. What might once have been dismissed as crank musings is now a genuine regulatory question.

Altman said OpenAI’s relationship with the government has been “fairly constructive.” He added that a project as far-reaching and large-scale as the development of artificial intelligence should have been a government project.

“In a well-functioning society, this would be a government project,” Altman said. “Given that’s not happening, I think it’s better that it happens this way as an American project.”

The federal government has yet to make significant progress on AI safety legislation. There were efforts in California to pass a law that would have made AI developers liable for catastrophic harms, such as the technology being used to develop weapons of mass destruction or to attack critical infrastructure. The bill passed the legislature but was vetoed by California Governor Gavin Newsom.

Some of the most prominent figures in artificial intelligence have warned that ensuring it is fully aligned with the good of humanity is a critical matter. Nobel laureate Geoffrey Hinton, known as the godfather of AI, has said he can “see no path to safety.” Tesla CEO Elon Musk has regularly warned that artificial intelligence could lead to humanity’s extinction. Musk was instrumental in founding OpenAI, providing the nonprofit with significant early funding, funding for which Altman remains “thankful” even though Musk is suing him.

Several organizations, such as the nonprofit Alignment Research Center and the startup Safe Superintelligence, founded by OpenAI’s former chief scientist, have emerged in recent years dedicated to just this question.

OpenAI did not respond to a request for comment.

AI as currently designed is well suited to alignment, Altman said. For this reason, he believes that it will be easier than it may seem to ensure that artificial intelligence does not harm humanity.

“One of the things that has worked surprisingly well is the ability to align an AI system to behave in a particular way,” he said. “So if we can articulate what that means in a bunch of different cases, then yes, I think we can get the system to behave that way.”

Altman also has a novel idea for how OpenAI and other developers can “express” the principles and ideals needed to ensure AI stays on our side: use AI itself to poll the public at large. He suggested asking users of AI chatbots about their values and then using those answers to determine how to align AI to protect humanity.

“I’m interested in a thought experiment (in which) an AI chats with you for a few hours about your value system,” he said. “It does that with me, and with everybody else. And then it says, ‘Well, I can’t make everybody happy all the time.’”

Altman hopes that by connecting with and understanding billions of people “at a deep level,” AI can identify the challenges facing society more broadly. From there, AI could reach a consensus about what it would need to do to bring about the general public’s overall well-being.

OpenAI previously had an in-house team dedicated to superalignment, charged with ensuring that future digital superintelligences don’t go rogue and cause untold damage. In December 2023, the group released an early paper showing it was working on a process by which one large language model would oversee another. This spring, the team’s leaders, Ilya Sutskever and Jan Leike, left OpenAI, and their team was disbanded, according to reporting from CNBC at the time.

Leike said he left over growing disagreements with OpenAI’s leadership about its commitment to safety as the company worked toward artificial general intelligence, a term for AI that is as smart as a human.

“Building machines that are smarter than humans is an inherently dangerous endeavor,” Leike wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

When Leike left, Altman wrote on X that he was “highly appreciative” of his “contributions to OpenAI’s alignment research and safety culture.”
