TechStory
Content Moderators of AI chatbots like ChatGPT Reveal Trauma from Reviewing Graphic Content: ‘It has destroyed me completely’

by Sneha Singh
August 4, 2023
in Tech

In a recent report by The Guardian, Kenyan content moderators who reviewed content for OpenAI’s ChatGPT described their distressing experiences. The moderators were employed by Sama, a California-based data-annotation company that held a contract with OpenAI and has also provided data-labelling services to tech giants such as Google and Microsoft. Sama ended its collaboration with OpenAI in February 2022, citing concerns about handling potentially illegal content for AI training.


One of the moderators, Mophat Okinyi, spoke out about the challenges he faced while reviewing content for OpenAI. He said he had to read up to 700 text passages every day, a staggering workload for anyone. Many of these passages revolved around explicit and graphic themes, particularly sexual violence. The constant exposure to such disturbing content took a toll on Mophat’s mental well-being, leaving him feeling paranoid and anxious around those close to him.

The distressing nature of the work also deeply affected his personal life. Mophat disclosed that his relationship with his family suffered as a result of the emotional burden he carried from his job. The graphic and disturbing content he encountered on a daily basis haunted him even outside of working hours, making it challenging for him to find solace and peace at home.

Addressing the Human Cost: Prioritizing Moderator Well-being in AI Content Moderation

The story of Mophat and other Kenyan moderators highlights the importance of ensuring proper support and care for content reviewers who are exposed to such distressing material. The impact of this work on their mental health should not be underestimated, and it is crucial for companies like OpenAI to provide comprehensive assistance and resources to help these moderators cope with the emotional toll of their responsibilities.

The report sheds light on the potential risks and ethical concerns involved in training AI models using sensitive and explicit content. As technology continues to advance, it is essential for companies to consider the well-being of their workforce and take appropriate measures to protect their mental health.


The revelations made by these moderators have raised important questions about the content moderation industry’s practices and the responsibility of companies like OpenAI and Sama in safeguarding their workers’ mental and emotional well-being. It serves as a reminder that while AI development is vital for progress, the human toll behind the scenes should not be ignored or overlooked.

Alex Kairu, another former moderator, told the outlet that the experiences he encountered on the job “destroyed me completely.” He said the role made him more introverted and strained the physical relationship with his wife. Representatives from OpenAI and Sama did not immediately respond to requests for comment, and The Guardian said it received no statement from OpenAI.

Working Conditions and Compensation Controversy

According to The Guardian, moderators raised concerns about disturbing content they had to review, including violence, child abuse, bestiality, murder, and sexual abuse. They claimed that during the contract between OpenAI and Sama, they were paid meager wages ranging from $1.46 to $3.74 per hour. Additionally, Time previously reported that data labelers were paid less than $2 an hour to review content for OpenAI.

Now, four of the moderators are urging the Kenyan government to investigate the working conditions during the contract period. They expressed that they didn’t receive sufficient support for their work. Sama disagreed with this and informed The Guardian that workers had access to therapists around the clock and received other medical benefits.

“We are in agreement with those who call for fair and just employment, as it aligns with our mission,” a Sama spokesperson told the news outlet, adding: “[We] believe that we would already be compliant with any legislation or requirements that may be enacted in this space.”

Tags: AI, ChatGPT, content moderators, OpenAI, sexual abuse

Sneha Singh

Sneha is a skilled writer with a passion for uncovering the latest stories and breaking news. She has written for a variety of publications, covering topics ranging from politics and business to entertainment and sports.



© 2024 Techstory.in
