According to a Time investigation published on Wednesday, OpenAI, the maker of ChatGPT, paid Kenyan workers less than $2 per hour to sift through tens of thousands of snippets of text in order to make its chatbot safer.
According to Time, the workers were required to read graphic descriptions of content such as child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest so that harmful data could be classified and filtered out of ChatGPT’s training dataset.
ChatGPT, the machine-learning-powered chatbot OpenAI introduced in late November, has grown increasingly popular since its launch. Its sophisticated writing abilities have impressed millions of users, who have turned to it for everything from composing songs to drafting news stories. The bot was not always so well behaved, however.
Because its predecessor, GPT-3, was trained on a dataset scraped from billions of web pages, the model frequently generated sexist, violent, and racist content. OpenAI needed a way to strip that offensive material out of its dataset before launching ChatGPT.
To build a ChatGPT filtering tool, OpenAI partnered with Sama
To identify and classify harmful content that could serve as training data for a ChatGPT filtering tool, OpenAI partnered with Sama, a San Francisco-based data labeling firm that promotes “ethical” and “dignified digital work.” To make the chatbot safe for public use, Sama hired data labelers in Kenya to work on OpenAI’s behalf.
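Time’s report does not describe how the filter itself was built, but the general pattern behind such tools is well established: the labels applied by human workers become training data for a text classifier that flags harmful content. Below is a minimal illustrative sketch of that pattern; the examples, labels, and model choice are hypothetical stand-ins, not OpenAI’s actual system.

```python
# Illustrative sketch only: a toy text-safety classifier trained on
# human-labeled examples. The data and model here are hypothetical;
# OpenAI's real filtering system has not been made public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples, standing in for the kind of
# annotations human labelers produce: (text, is_harmful).
examples = [
    ("a recipe for banana bread", 0),
    ("a review of a new laptop", 0),
    ("graphic description of violent harm", 1),
    ("detailed account of abuse", 1),
]
texts, labels = zip(*examples)

# TF-IDF features plus logistic regression: a deliberately simple
# baseline, not a claim about any production model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# The trained filter can then score new text before it reaches users.
print(classifier.predict(["a story about a quiet afternoon"]))  # e.g. [0]
```

However simple or sophisticated the classifier, the labels it learns from have to come from somewhere, which is exactly the work the Kenyan labelers were hired to do.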
Despite playing a crucial part in creating ChatGPT, the workers endured harsh working conditions and low pay. One Kenyan employee who read and labeled text for OpenAI told Time that he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child.
Depending on experience and performance, the workers were paid between $1.32 and $2 per hour.
In December, Motherboard published an article on the trend of underpaid overseas labor driving AI innovation. Tech corporations frequently recruit tens of thousands of gig workers to maintain the appearance that their AI products are fully functional and autonomous, when in fact those products still require substantial human moderation and refinement. According to AI ethics researchers, the Global South’s role in the AI pipeline perpetuates a history of colonial exploitation and inequality between the Global North and South.
Trauma among Sama’s content moderators for Meta
Sama ended its work for OpenAI in February 2022, eight months before the agreed-upon end of the contract. The exit was driven in part by the traumatic nature of the work and in part by Time’s February 14 investigation into Sama’s work for Meta. That article reported that content moderators at Sama, paid $1.50 per hour on Meta projects, experienced trauma after viewing images and videos of rape, child abuse, and executions.
AI ethics experts want to make visible the human labor that underpins machine learning systems, shifting the focus from innovation alone toward involving people in the process ethically. That means acknowledging power disparities, being more transparent about the humans in the loop, improving working conditions, and offering workers opportunities beyond data labeling and moderation.
It also forces us to reconsider how much we should celebrate ChatGPT’s ingenuity, since the investigation is a reminder of how far removed the tool is from magic and glamour.