OpenAI has launched its latest AI model, the o1 series, which the company claims has human-like thinking capabilities.
In a recent blog post, the creator of ChatGPT explained that the new model spends more time thinking before responding to queries, enabling it to tackle complex tasks and solve tougher problems in fields like science, coding, and mathematics.
The o1 series is designed to simulate a more deliberate thought process, honing its strategies and recognizing mistakes just like a human. Mira Murati, OpenAI’s chief technology officer, called the new model a major leap forward in AI capabilities, and predicted it would fundamentally change how people interact with these systems. “We’re going to see a deeper form of collaboration with technology, similar to the back-and-forth conversation that helps us think,” Murati said.
While current AI models are known for their fast, intuitive responses, the o1 series offers a slower, more thoughtful approach to reasoning, resembling human cognitive processes. Murati expects the model to drive progress in fields like science, healthcare, and education, where it could help navigate complex ethical and philosophical dilemmas, as well as abstract reasoning.
Mark Chen, OpenAI’s vice president of research, noted that early tests by programmers, economists, hospital researchers and quantum physicists have shown that the o1 series performs better at solving problems than previous AI models. According to Chen, one economics professor noted that the model could solve a doctoral exam question “probably better than any of the students.”
However, the new model has some limitations: its knowledge base only extends until October 2023, and it currently lacks the ability to browse the web or upload files and images.
The launch comes amid reports that OpenAI is in talks to raise $6.5 billion at a staggering $150 billion valuation, with potential backing from major players like Apple, Nvidia and Microsoft, according to Bloomberg News. That valuation would put OpenAI ahead of its rivals, including Anthropic, which was recently valued at $18 billion, and Elon Musk’s xAI, which is valued at $24 billion.
The rapid development of advanced generative AI has raised safety concerns among governments and technologists about the broader societal implications. OpenAI itself has faced internal criticism for prioritizing commercial interests over its original mission to develop AI for the benefit of humanity. Last year, the board temporarily ousted CEO Sam Altman over concerns that the company was drifting away from its founding goals, an event referred to internally as a “tipping point.”
Furthermore, several safety executives, including Jan Leike, have left the company, citing a shift in focus from safety to marketing. Leike has warned that “building machines that are smarter than humans is an inherently dangerous endeavor,” and expressed concern that OpenAI’s safety culture has been marginalized.
In response to these criticisms, OpenAI announced a new approach to safety training for the o1 series, leveraging its improved reasoning capabilities to ensure adherence to safety guidelines and alignment. The company also entered into formal agreements with the AI Safety Institutes in the US and UK, granting them early access to research versions of the model to support collaborative efforts on safe AI development.
As OpenAI rolls out its latest innovations, the company aims to balance the pursuit of technological advancement with a renewed commitment to safety and ethical considerations in the deployment of AI.