Exploited African Labor Is Behind The Brilliance Of ChatGPT
For most people, using ChatGPT feels like waving a magic wand. You type in a prompt and, voila, the tool gives you what you want. ChatGPT even seems magical in its restraint: feed it inappropriate prompts involving violence, racism, or sexual content, and it will rightly tell you it cannot process them. In reality, there is nothing magical about ChatGPT. Behind GPT-3.5, the "engine" of the conversational AI tool, lie hours of human labor spent ensuring that ChatGPT can detect and flag inappropriate prompts, including graphic descriptions of murder, child sexual abuse, bestiality, and incest. According to a report by TIME magazine, some of the people responsible for this task, which OpenAI outsourced to a firm located in Kenya, earned as little as under $2 an hour, and some lacked access to one-on-one therapy sessions. As part of the work, the staffers were assigned tens of thousands of snippets of text to detect fo