Including the Marginalized Through Exploitation

Although many good things can come from AI, there are equally troubling things that go on behind the scenes at the companies that create AI programs.
In an effort to at least mitigate the harmful content spewed by generative AI programs like ChatGPT, OpenAI hired Kenyan workers to find and filter that content, according to a Time article. OpenAI outsourced the work through the company Sama, which the article notes presents itself as a company that helps those in poverty and aims to create ethical AI.
Similarly, Timnit Gebru, the founder of DAIR, was hired by Google to co-lead the company's Ethical AI research team. According to her bio on the Carnegie website, she was fired by Google for speaking up against toxic language found in AI software. Gebru is also from Africa, specifically Ethiopia.
It seems these companies work to eliminate harmful content only to please the public, without understanding why that content exists in the first place. Paying workers below what the job is worth, as OpenAI did, and firing minorities for doing exactly what they were hired to do is toxic behavior that only adds to the toxic content AI programs surface.