OpenAI has raised $17.9 billion in total funding across nine rounds, the first of which closed on Dec 01, 2015. Its monthly revenue hit $300 million in August, up 1,700 per cent since the beginning of 2023, and the company expects about $3.7 billion in annual sales this year, according to financial documents reviewed by The New York Times.
Despite these funding successes, 60 Minutes, an American television news program on CBS, revealed in a recent investigation how ChatGPT’s creator overworked Kenyan data labellers to make its systems safe for public use.
The data labellers, employed by Sama—a San Francisco-based IT services and consulting firm—on behalf of OpenAI, were paid a take-home wage of between $1.32 and $2 per hour, depending on seniority and performance.
In the exposé, Naftali Wambalo, a father of two with a degree in Mathematics who was interviewed by CBS, said that he and his colleagues were paid less than $2 an hour before taxes by OpenAI’s outsourcing partner in Kenya. Their work largely entailed feeding the AI with labelled examples of violence, hate speech, and sexual abuse so it could learn to detect these forms of toxicity and filter them out before they ever reached users.
“I looked at people being slaughtered, people engaging in sexual activity with animals, and people abusing children physically and sexually. I also saw people committing suicide,” he said.
Another employee, Fasica, revealed she was misled during the hiring process. Initially promised a translation job, she instead faced a very different role. Over the following months, she reviewed explicit content describing child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest in graphic detail.
“I was basically reviewing content that was very graphic, very disturbing. I saw dismembered bodies and drone attack victims. You name it. Whenever I talk about this, I still have flashbacks,” she said.
She added that the low wage left her unable to save money.
“I was living paycheck-to-paycheck. I have saved nothing because it is not enough.”
The employees acknowledged that classifying and filtering harmful content is necessary to minimize violent and sexual material in training data and to build tools that detect harmful content in ChatGPT. However, they expressed deep frustration over poor working conditions, long hours, and low pay. By late 2021, OpenAI had signed three contracts with Sama worth about $200,000 in total, agreeing to pay the firm an hourly rate of $12.50. Yet, the company failed to address employees’ mental health concerns adequately.
“They know we’re damaged, but they don’t care. We’re humans. Just because we’re Black or vulnerable, that doesn’t give them the right to exploit us like this,” Fasica added.
“After constantly seeing those sexual activities and pornography on the job, I hate sex,” Wambalo disclosed.
According to a TIME report, an OpenAI spokesperson stated that Sama managed employee payments and mental health support. They claimed wellness programs and counselling were available, and workers could opt out of assignments without penalty. Sensitive content was supposed to be handled by trained individuals.
Nerima Wako-Ojiwa, a Kenyan civil rights activist, compared the situation to modern-day slavery. She is now supporting former Sama employees in urging the government to help them seek compensation for the damages they suffered while working on AI training for OpenAI’s ChatGPT.