Meta has fixed an issue that caused some Instagram users to encounter violent and graphic content in their Reels feed. The company acknowledged the mistake and apologized after the incident drew widespread complaints on social media.
“We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake,” a Meta spokesperson stated.
Users Alarmed by Graphic Content Despite Filters
Many Instagram users were surprised to see violent images and videos appearing in their feeds, even though they had enabled the platform’s strictest “Sensitive Content Control” settings. Normally, Meta’s policies prohibit disturbing content such as dismembered bodies, extreme violence, and images depicting suffering with sadistic commentary.
However, the company does allow certain graphic posts if they aim to raise awareness of global issues like human rights violations, war, or terrorism. In such cases, Instagram adds warning labels to caution viewers before they see the content.
Violent Imagery Surfaced in Instagram Reels Despite Labels
Despite Meta’s content moderation efforts, CNBC reported that on Wednesday night its team in the U.S. found multiple Instagram Reels displaying violent scenes, including dead bodies and injuries. These posts carried “Sensitive Content” labels but still appeared in user recommendations.
Meta relies on a combination of artificial intelligence and human moderators to filter inappropriate content. With over 15,000 moderators, the company claims its systems can remove most harmful posts before users even report them. However, the recent glitch suggests that the platform’s content controls failed to work as intended.
Meta Adjusting Content Moderation Policies
This issue surfaced just as Meta announced plans to refine its content moderation approach. In a statement on January 7, the company revealed changes aimed at reducing wrongful censorship.
Under the new policy, Meta’s automated moderation will now focus on the most severe violations—such as terrorism, child exploitation, fraud, and drug-related content—rather than flagging every possible rule violation. For less serious offenses, the company will rely more on user reports before taking action.
Meta also admitted that its system had been too aggressive in demoting content based on predictions that it “might” break platform rules. The company is now reversing many of these demotions to allow more posts to reach users.
Zuckerberg Shifts Toward More Political Content
As part of its broader strategy, Meta is also revising its approach to fact-checking. CEO Mark Zuckerberg announced that the company would transition to a “Community Notes” system, similar to what Elon Musk’s platform X (formerly Twitter) uses.
Additionally, Meta is loosening restrictions on political content, a move widely seen as an effort to rebuild ties with U.S. President Donald Trump, who has been critical of Meta’s moderation policies in the past.
A Meta spokesperson confirmed that Zuckerberg visited the White House recently to discuss how the company could support U.S. technology leadership on the global stage.
Layoffs Weaken Content Moderation Efforts
Meta’s ability to moderate content has been impacted by massive job cuts over the past two years. In 2022 and 2023, the company laid off 21,000 employees—about a quarter of its workforce—including staff from its trust and safety teams.
This Instagram Reels mishap raises concerns about whether Meta’s reduced moderation staff can effectively manage inappropriate content, especially as the company shifts toward a more open approach to online speech.
Although the issue has been resolved, it highlights the ongoing challenge of balancing free expression with user safety on Meta’s platforms.