
Anthropic CEO on A.I. risks: short-, medium-, and long-term


Concern about the dangers of AI is a huge issue in 2023, fueled by the rapid adoption of tools like text-to-image generators and life-like chatbots.

The good news, for those prone to anxiety, is that you can organize your discomfort into three neat groups: short-term AI risks, medium-term risks, and long-term risks. That’s how Dario Amodei, co-founder and CEO of Anthropic, does it.

Amodei should know. In 2020, he left OpenAI, maker of ChatGPT, to co-found Anthropic on the principle that large language models have the potential to become exponentially more capable as more computing power is poured into them—and that, as a result, those models must be designed with safety in mind. In May, the company raised $450 million in financing.

Speaking at the Fortune Brainstorm Tech Conference in Deer Valley, Utah on Monday, Amodei laid out his three tiers of concern in response to a question from Fortune’s Jeremy Kahn about the existential risks posed by AI. Here’s how Amodei worries about AI:

  • Short-term risks: the kind of problems we face today, “about things like bias and misinformation.”
  • Medium-term risks: “I think in a couple of years when models are getting better at things like science and engineering and biology, you can just do really bad things with models that you couldn’t have done without.”
  • Long-term risks: “As we move to models that have the main property of agency — meaning they don’t just output text, but they can do things, whether that’s with a bot or on the Internet — then I think we have to worry about them becoming too independent, and it’s hard for them to stop what they do or control it. And I think the extreme end of that is concerns about existential risk.”

Large language models are incredibly versatile. They can be applied across a wide range of uses and scenarios — “Most of them are good. But there are some bad ones lurking out there and we have to find them and stop them,” Amodei said.

He advised that we should not “freak out” about the long-term existential risk scenario. “They’re not going to happen tomorrow, but as we continue along the AI exponential, we have to understand that these risks are at the end of that exponential.”

But when asked by Kahn whether he’s ultimately optimistic or pessimistic about artificial intelligence, the CEO of Anthropic gave an ambivalent response that will either be comforting or intimidating, depending on whether you’re a glass-half-full or glass-half-empty type of person: “I think it’s going to go really well. But there’s a risk, maybe 10% or 20%, that it’s going to go wrong, and we have to make sure that doesn’t happen.”
