Study Exposes Instagram’s Role in Promoting Self-Harm Content

by Harikrishnan A
December 1, 2024

A recent study has raised serious concerns about Instagram’s failure to adequately moderate self-harm content, potentially enabling the growth of dangerous online networks. Conducted by Danish researchers and supported by the digital advocacy group Digitalt Ansvar, the study critiques Meta’s content moderation efforts, despite its claims of using advanced artificial intelligence (AI) to combat harmful material.

Test Reveals Gaps in Content Moderation

The Danish team created a private network on Instagram, featuring fabricated profiles of users as young as 13 years old. Over the course of a month, they shared 85 pieces of self-harm-related content, which included disturbing images of blood and razors, as well as harmful messages encouraging self-injury. To their shock, Instagram failed to remove any of the posts.

Meanwhile, Digitalt Ansvar ran the same posts through its own AI tool, which identified 38% of the self-harm images and 88% of the most graphic ones. The finding suggests that the technology to detect such content exists; Instagram has simply not deployed it effectively.

The researchers aimed to test Meta’s claim that it removes 99% of harmful content before users report it. However, the results exposed a significant discrepancy between the company’s assertions and reality, raising doubts about Instagram’s commitment to user safety.

The study uncovered another troubling issue: Instagram's recommendation algorithm appeared to actively expand self-harm networks. Once a fabricated 13-year-old profile connected with one member of the self-harm group, the platform suggested connections to other members. Rather than curbing the spread of harmful content, Instagram's automated systems helped the network grow, putting vulnerable users at even greater risk.

Violating EU Regulations?

Digitalt Ansvar argues that Instagram’s lack of adequate moderation violates the Digital Services Act (DSA), a European Union regulation that requires digital platforms to identify and mitigate risks to users’ mental and physical well-being.

Ask Hesby Holm, CEO of Digitalt Ansvar, emphasized the gravity of the issue, noting, “Self-harm content is often closely tied to suicide. Without immediate intervention, these networks can go unnoticed, leading to severe consequences.”

The Broader Impact on Youth Mental Health

This failure to remove harmful content is not an isolated issue but part of a larger problem affecting the mental health of young users. A survey by youth mental health charity stem4 found that nearly half of children and teenagers had experienced negative effects on their mental well-being due to online bullying and trolling. These effects included withdrawal, excessive exercise, isolation, and self-harming behaviors.

The study highlights the urgent need for platforms like Instagram to better protect vulnerable users from harmful content that can significantly impact their mental health.

Meta’s Response and New Initiatives

In response to the study, Meta reiterated its commitment to removing harmful content. A spokesperson stated, “Content encouraging self-injury is against our policies, and we remove it when detected.” They also claimed to have removed over 12 million pieces of suicide and self-injury-related content on Instagram in the first half of 2024, with 99% of it removed proactively.

Meta also pointed to its launch of Instagram Teen Accounts, which offer stricter content controls for young users, automatically applying the most sensitive settings to protect them from harmful material. However, despite these measures, the Danish study suggests that more needs to be done to prevent self-harm content from circulating on the platform.

Experts Call for Better Moderation and Accountability

While Meta has made some efforts to address harmful content, experts remain skeptical. Holm speculated that Instagram may be prioritizing engagement and traffic over moderation, especially in smaller private groups where self-harm content often thrives. “It’s unclear if larger groups are effectively moderated, but smaller self-harm networks tend to operate in these private spaces,” he said.

The study’s findings also reveal a significant gap in AI intervention, with researchers expressing surprise that Instagram’s systems didn’t flag increasingly severe content as it was shared. “We thought their tools would catch these images as they escalated, but they didn’t,” Holm said.



