The writer is Head of Global Affairs at Meta
Underlying much of the excitement – and fear – about advances in generative AI is a fundamental question: who will control these technologies? The big tech companies with the massive computing power and data needed to build new AI models, or society at large?
This goes to the heart of the political debate over whether companies should keep their AI models in-house or make them openly available. As that debate has played out, the case for openness has grown. This is partly practical – it is not sustainable to keep foundational technology in the hands of a few large corporations – and partly a matter of open source's track record.
It is important to distinguish between today's AI models and potential future ones. The most dystopian warnings about AI are really about a technological leap – or several leaps. There is a world of difference between the chatbot-style applications of today's large language models and the supersized frontier models theoretically capable of sci-fi-style superintelligence. But we are still in the foothills debating the perils we might find at the mountaintop. If and when these advances become more plausible, they may necessitate a different response. But there is time for both the technology and the guardrails to develop.
Like all foundational technologies – from radio transmitters to internet operating systems – AI models will have many uses, some predictable and some not. And like every technology, AI will be used for both good and bad ends by good and bad people. The answer to this uncertainty cannot simply rest on the hope that AI models can be kept secret. That horse has already bolted. Many large language models have already been open sourced, such as Falcon-40B and MPT-30B, and dozens before them. And open innovation is not something to fear. The infrastructure of the internet runs on open-source code, as do web browsers and many of the apps we use every day.
While we cannot eliminate the risks associated with AI, we can mitigate them. Here are four steps I believe tech companies should take.
First, they need to be transparent about how their systems work. At Meta, we recently released 22 "system cards" for Facebook and Instagram, which give people insight into the AI behind how content is ranked and recommended, in a way that does not require deep technical knowledge.
Second, this openness must be accompanied by collaboration across industry, government, academia and civil society. Meta is a founding member of the Partnership on AI, alongside Amazon, Google, DeepMind, Microsoft and IBM. We are participating in its framework for collective action on synthetic media, an important step towards ensuring guardrails are established around AI-generated content.
Third, AI systems should be stress-tested. Before releasing the next generation of Llama, our large language model, Meta is "red teaming" it. Common in cybersecurity, this process involves teams taking on the role of adversaries to hunt for flaws and unintended consequences. Meta will make our latest Llama models available at the DEF CON conference in Las Vegas next month, where experts can further analyse and stress-test their capabilities.
It is a mistaken assumption that releasing source code or model weights makes systems more vulnerable. On the contrary, external developers and researchers can identify problems that would take teams holed up inside company silos much longer to find. Researchers who tested Meta's large language model BlenderBot 2 found it could be tricked into remembering misinformation. As a result, BlenderBot 3 was made more resistant to it.
Finally, companies should share details of their work as it develops, whether through academic papers and public announcements, open discussion of the benefits and risks or, where appropriate, making the technology itself available for research and product development.
Opening up is not altruism – Meta believes it is in its own interest. It leads to better products, faster innovation and a flourishing market, which benefits us as it does many others. Nor does it mean that every model can or should be open sourced. There is a role for both proprietary and open AI models.
But ultimately, openness is the best antidote to the fears surrounding AI. It allows for collaboration, scrutiny and iteration. And it gives businesses, start-ups and researchers access to tools they could never build themselves, backed by computing power they could not otherwise access, opening up a world of social and economic opportunity.