Super-advanced artificial intelligence, left unchecked, has a “serious chance” of surpassing humans to become the next “apex species” of the planet, according to Ethereum co-founder Vitalik Buterin.
But whether that happens will come down to how humans intervene in AI’s development, he said.
New monster post: my own current perspective on the recent debates around techno-optimism, AI risks, and ways to avoid extreme centralization in the 21st century. https://t.co/6lN2fLBUUL
— vitalik.eth (@VitalikButerin) November 27, 2023
In a Nov. 27 blog post, Buterin, seen by some as a thought leader in the cryptocurrency space, argued that AI is “fundamentally different” from other inventions such as social media, contraception, airplanes, guns, the wheel and the printing press, because AI can create a new type of “mind” that can turn against human interests, adding:
“AI is (…) a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans’ mental faculties and becoming the new apex species on the planet.”
Buterin argued that unlike climate change, a man-made pandemic, or nuclear war, superintelligent AI could potentially end humanity and leave no survivors, particularly if it ends up viewing humans as a threat to its own survival.
“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction.”
“Even Mars may not be safe,” Buterin added.
Buterin cited an August 2022 survey of more than 4,270 machine learning researchers, who estimated a 5-10% chance of AI killing humanity.
While Buterin stressed that claims of this nature are “extreme,” he also argued there are ways for humans to prevail.
Brain interfaces and techno-optimism
Buterin suggested integrating brain-computer interfaces (BCI) to offer humans more control over powerful forms of AI-based computation and cognition.
A BCI is a communication pathway between the brain’s electrical activity and an external device, such as a computer or robotic limb.
This would reduce the two-way communication loop between man and machine from seconds to milliseconds, and more importantly, ensure humans retain some degree of “meaningful agency” over the world, Buterin said.
Buterin suggested this route would be “safer” as humans could be involved in each decision made by the AI machine.
“We (can) reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity’s values on its own.”
The Ethereum co-founder also called for “active human intention” to steer AI in a direction that benefits humanity, noting that maximizing profit doesn’t always lead humans down the most desirable path.
Human beings are deeply good.
— vitalik.eth (@VitalikButerin) November 27, 2023
Buterin concluded that “we, humans, are the brightest star” in the universe, as we’ve developed technology to expand upon human potential for thousands of years, and hopefully many more to come:
“Two billion years from now, if the Earth or any part of the universe still bears the beauty of Earthly life, it will be human artifices like space travel and geoengineering that will have made it happen.”