Meta’s own Oversight Board has criticized the company’s rules on sexually explicit images generated using artificial intelligence.
The rules need to state more clearly that such images of real people are banned, and Meta should make changes to stop them from spreading across its platforms, the board said.
Meta established the Oversight Board to review its contentious content decisions. The board is funded by Meta but operates independently, and the company can choose whether to adopt its recommendations.
The latest ruling came after the board reviewed two pornographic fakes of famous women created using artificial intelligence and posted on Meta’s Facebook and Instagram.
Meta said it would review the board’s recommendations and provide an update on any changes adopted.
In its report, the board identified the two women only as female public figures from India and the United States, citing privacy concerns.
The board found both images violated Meta’s rule barring “derogatory sexualized photoshop,” which the company classifies as a form of bullying and harassment, and said Meta should have removed them promptly.
In the case involving the Indian woman, Meta failed to review a user’s report of the image within 48 hours, so the ticket was closed automatically with no action taken.
The user appealed, but the company again declined to act, reversing course only after the board took up the case, the board said.
In the American celebrity’s case, Meta’s systems automatically removed the image.
“Restrictions on this content are legitimate,” the board said. “Given the severity of harms, removing the content is the only effective way to protect the people impacted.”
The board recommended Meta update its rule to clarify its scope, saying, for example, that use of the word “photoshop” is “too narrow” and the prohibition should cover a broad range of editing techniques, including generative AI.
The board also faulted Meta for declining to add the Indian woman’s image to a database that enables automatic removals like the one in the American woman’s case.
According to the report, Meta told the board it relies on media coverage to determine when to add images to the database, a practice the board called “worrying.”
“Many victims of deepfake intimate images are not in the public eye and are forced to either accept the spread of their non-consensual depictions or search for and report every instance,” the board said.
Additional reporting by agencies