On Thursday, Meta’s Oversight Board criticized the company for unclear policies on sexually explicit AI-generated images of real people and called for stricter measures to keep such content off its platforms.
The ruling followed the board’s review of cases involving pornographic AI-generated images of well-known women posted on Facebook and Instagram. Meta said it will review the board’s recommendations and update its policies as needed.
In its report, the board withheld the women’s names for privacy reasons, identifying them only as public figures from India and the United States. It found that both images violated Meta’s policy against “derogatory sexualized photoshop,” which the company treats as a form of bullying and harassment, and said Meta should have removed the images more swiftly.
In the Indian woman’s case, Meta failed to address a user report within 48 hours, so the ticket was closed automatically and no action was taken. The user appealed, but Meta still did not act until the board took up the case. By contrast, Meta’s systems removed the image of the American celebrity automatically.
The board supported restrictions on such content, asserting that removal is the only effective way to protect individuals from harm. It recommended Meta revise its rules to clearly cover a wider range of editing techniques, including those involving generative AI, rather than just “photoshop.”
The board also criticized Meta for not adding the Indian woman’s image to a database used for automatic removals, a decision the company defended. The board expressed concern that this approach leaves victims of deepfake content who are not public figures to bear the burden of reporting every instance themselves.
Overall, the board emphasized the need for clearer and more proactive measures to prevent the spread of non-consensual explicit imagery.