AI companies could be more transparent to help users make informed choices: Meta VP

TORONTO – Ask the head of AI research at Meta Platforms Inc. how to make the technology safer, and she takes inspiration from an unexpected place: the grocery store.

Supermarkets are full of products that provide key information at a glance, says Joelle Pineau.

“This list of ingredients allows people to make informed choices about whether or not they want to eat that food,” explains Pineau, who is scheduled to speak at the Elevate technology conference in Toronto this week.

“But right now, in AI, we seem to have a somewhat paternalistic approach to (transparency), which is that we decide what regulation is or what everyone should or shouldn’t do, rather than having something that empowers people to make choices.”

Pineau’s thoughts on the state of artificial intelligence come as the world is awash in conversations about the future of the technology and whether it will bring unemployment, bias, discrimination and even existential risks to humanity.

Governments are assessing many of these issues as they move toward AI legislation, which won’t come into effect in Canada until at least next year.

Technology companies are keen to be involved in shaping AI guardrails, arguing that regulation can help protect their users and keep competitors on a level playing field. However, they worry that rules could limit the pace of progress and the plans they have built around AI.

Whatever form AI guardrails take, Pineau wants transparency to be a priority, and she already has an idea about how to achieve that.

She says legislation could require creators to document the information they used to build and develop AI models, their capabilities, and perhaps some findings from their risk assessments.

“I don’t yet have a very prescriptive view on what should or shouldn’t be documented, but I think this is kind of the first step,” she says.

She adds that many companies in the field of artificial intelligence are already doing this work, but “they are not transparent about it.”

Research suggests there is much room for improvement.

Stanford University’s Institute for Human-Centered AI analyzed the transparency of prominent AI models in May using 100 indicators, including whether companies disclosed their use of personal information, the licences they held for their data, and the steps they took to remove copyrighted material.

The researchers found that many models fell well short of a perfect score. Meta’s Llama 2 got 60 percent, Anthropic’s Claude 3 got 51 percent, OpenAI’s GPT-4 got 49 percent, and Google’s Gemini 1.0 Ultra got 47 percent.

Pineau, who is also a computer science professor at McGill University in Montreal, has found that “the culture of transparency is very different from company to company.”

Meta, which owns Facebook, Instagram, and WhatsApp, has committed to open-source AI models, which typically allow anyone to access, use, modify, and distribute them.

However, Meta has also rolled out an AI-powered search and assistant tool on Facebook and Instagram that users cannot opt out of.

In contrast, some companies allow users to opt out of such products or have embraced additional transparency features, but many others have taken a laxer approach or resisted efforts to get them to open-source their models.

A more standardized and transparent approach used by all companies would have two main benefits, Pineau said.

It would build trust and force companies to “do the right thing” because they know their actions will be subject to scrutiny.

“It’s very clear that this work is being done and it has to be good, so there’s a strong incentive to do high-quality work,” she said.

“The other thing is that if we’re that transparent and we get something wrong — and we do — we’ll learn very quickly and oftentimes … before it gets to (the product), so it’s also a much faster cycle in terms of figuring out where we need to do a better job.”

While the average person might not be eager to pore over whatever data companies disclose, Pineau said transparency would be beneficial for governments, companies, and startups trying to use AI.

“These people are going to take responsibility for how they use AI, and they should have that transparency as they introduce it into their workforce,” she said.

This report by The Canadian Press was first published Sept. 30, 2024.
