Meta has taken down numerous fake, sexualized images of female celebrities after a CBS News investigation found AI-manipulated deepfakes circulating widely on Facebook.
The investigation found dozens of fake, highly sexualized images of actors Miranda Cosgrove, Jennette McCurdy, Ariana Grande, and Scarlett Johansson, as well as former tennis star Maria Sharapova, being shared across multiple Facebook accounts. The images had accumulated hundreds of thousands of likes and numerous reshares.
“We’ve removed these images for violating our policies and will continue monitoring for other violating posts,” Meta spokesperson Erin Logan stated. “This is an industry-wide challenge, and we’re continually working to improve our detection and enforcement technology.”
Reality Defender Confirms AI-Generated Celebrity Pornography on Facebook
Reality Defender, a platform that specializes in detecting AI-generated media, analyzed more than a dozen of the images and confirmed that many were deepfakes: AI-generated bodies had replaced the celebrities’ real bodies in otherwise authentic photographs. Others were likely created with conventional, non-AI image-stitching tools.
“Almost all deepfake pornography does not have the consent of the subject being deepfaked,” explained Ben Colman, co-founder and CEO of Reality Defender. “Such content is growing at a dizzying rate, especially as existing measures to stop such content are seldom implemented.”

Despite Meta’s actions, CBS News found numerous AI-generated, sexualized images of Cosgrove and McCurdy still publicly available on Facebook after the content had been flagged to the company. One deepfake of Cosgrove remained visible over the weekend, shared by an account with 2.8 million followers.
According to CBS News’ analysis, the former “iCarly” stars appear to be the public figures most frequently targeted with deepfake content. The show is owned by Paramount Global, CBS News’ parent company.
Meta Faces Criticism Over Deepfake Policy
Meta’s Oversight Board, which provides recommendations for content moderation, has criticized the company’s current regulations around sexualized deepfake content as insufficient. The board has urged Meta to update its prohibition against “derogatory sexualized photoshop” to specifically include “non-consensual” content and cover other manipulation techniques like AI.
Additionally, the board recommended that Meta incorporate its ban on “derogatory sexualized photoshop” into the company’s Adult Sexual Exploitation regulations for stricter enforcement.
Meta responded that it is assessing the feasibility of several of the recommendations, but it has so far ruled out adding “non-consensual” to the policy’s language. The company is also unlikely to move its “derogatory sexualized photoshop” rule into its Adult Sexual Exploitation regulations.
“The Oversight Board has made clear that non-consensual deepfake intimate images are a serious violation of privacy and personal dignity, disproportionately harming women and girls,” said Michael McConnell, an Oversight Board co-chair. “These images are not just a misuse of technology — they are a form of abuse that can have lasting consequences.”
Meta isn’t alone in confronting this issue. Last year, X (formerly Twitter) temporarily blocked Taylor Swift-related searches after AI-generated fake pornographic images of the singer gained millions of views on the platform.
The problem is growing rapidly: a recent UK government study projected that approximately 8 million deepfakes will be shared this year, up from roughly 500,000 in 2023.
As AI image generation technology becomes more accessible and sophisticated, social media platforms face mounting pressure to develop more effective detection methods and stricter enforcement policies to combat this form of digital exploitation.