Instagram is failing to remove accounts that attract hundreds of sexualized comments for posting photographs of children in swimwear or partial clothing, even after being notified through its in-app reporting tool.
Meta, Instagram’s parent company, claims to have a zero-tolerance policy on child exploitation. Yet accounts flagged as suspicious through the in-app reporting tool have been deemed acceptable by the app’s automated moderation technology and remain active.
A researcher used the in-app reporting option to flag an account that was sharing images of minors in sexualized poses. Instagram responded the same day, saying it had been unable to view the report due to “heavy volume,” but that its “technology has found that this account probably does not go against our community guidelines.” The researcher was advised to report the account again, or to block or unfollow it. The account was still active on Saturday, with more than 33,000 followers.
On Twitter, similar profiles known as “tribute pages” were also discovered.
One account, which posted pictures of a man performing sexual acts on images of a 14-year-old TikTok influencer, was judged not to violate Twitter’s rules after being reported through the in-app tools, despite tweets in which the account holder said he was looking to connect with people to share unlawful material. One of his tweets stated, “Looking to trade some younger stuff.” The account was taken down only after the campaign group Collective Shout called public attention to it.
The findings raise questions about the platforms’ in-app reporting systems, with critics arguing that content linked to suspected unlawful activity was allowed to remain online because it did not itself meet a criminal threshold.
The accounts, according to Andy Burrows, the NSPCC’s head of online safety policy, act as a “shop window” for pedophiles. He urged MPs to close “loopholes” in the planned online safety bill, which will be considered in parliament on April 19 and aims to regulate social media companies. He argues that firms should be required to address not only illegal content but also material that is clearly harmful yet falls below the criminal threshold.
Collective Shout, an Australian NGO that monitors exploitative content worldwide, said the platforms were effectively relying on third-party organizations to moderate their content.
Meta said it has strict policies against content that sexually exploits or endangers children, and that it removes such material as soon as it is discovered.