Meta’s Oversight Board says deepfake policies need updating and response to explicit image fell short

LONDON (Reuters) – Meta’s policies on non-consensual deepfake images need updating, including wording that is “insufficiently clear”, its Oversight Board said on Thursday in a ruling on cases involving explicit, AI-generated depictions of two famous women.

In one case, the semi-independent watchdog said the social media giant failed to remove a fake intimate image of a famous Indian woman, whom it did not identify, until the board itself intervened.

Deepfake nude images of women, including celebrities such as Taylor Swift, have gone viral on social media as the technology used to create them has become more accessible and easier to use. Online platforms are under growing pressure to address the issue.

The panel, created by Meta in 2020 to act as an arbiter of content on its platforms including Facebook and Instagram, spent months reviewing the two cases involving AI-generated images of two famous women, one Indian and one American. The panel did not identify either woman, describing each only as a “female public figure.”

Meta said it welcomes the board’s recommendations and is reviewing them.

One case involved an “AI-manipulated image” posted on Instagram that depicted a naked Indian woman, shown from behind with her face visible, resembling a “female public figure.” The board said a user reported the image as pornographic, but the report was not reviewed within 48 hours and was automatically closed. The user then appealed to Meta, but that appeal was also automatically closed.

It was not until the user appealed to the Oversight Board that Meta concluded its original decision to leave the post up was wrong.

Meta also disabled the account that posted the image and added it to a database used to automatically detect and remove images that violate its rules.

In the second case, an AI-generated image depicting a naked American woman being harassed was posted to a Facebook group. It was automatically removed because it was already in the database. A user appealed the removal to the board, which upheld Meta’s decision to take the image down.

The board said the two images violated Meta’s ban on “derogatory sexualized photoshop” under its bullying and harassment policy.

But it added that the policy’s wording was not clear to users and recommended replacing the word “derogatory” with a different term such as “non-consensual”, and specifying that the rule covers a broad range of editing and media-manipulation techniques that go beyond Photoshop.

The board added that fake nude images should also fall under community standards on “adult sexual exploitation” rather than “bullying and harassment.”

When the board asked Meta why the image of the Indian woman was not already in its database, it said it was dismayed by the company’s response that it had relied on media reports.

“This is of concern because many victims of fake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depiction or seek out and report every instance,” the board said.

The board also said it was concerned about Meta’s automatic closure of appeals involving image-based sexual abuse after 48 hours, saying it “could have a significant impact on human rights”.

Meta, then known as Facebook, launched the Oversight Board in 2020 in response to criticism that it had not moved quickly enough to remove misinformation, hate speech and influence campaigns from its platforms. The 21-member board is a multinational group of legal scholars, human rights experts and journalists.
