Imagine you’ve just accepted a promising new remote job as a “writing analyst” for a major tech contractor. You’re excited to use your skills to help shape the future of technology. That appealing pitch draws thousands into the world of AI training. But the job you’ll actually be doing is one the recruiters would never dare put in the advertisement.
On your first day, you discover the “analysis” involves sifting through the worst of what an AI can produce. Without any warning, you are tasked with moderating violent, sexually explicit, and hateful content. You weren’t asked to consent to this, and there are no mental health resources available. This is your new reality: you are a human filter for digital toxicity.
Your performance is not measured by the quality of your insights but by the speed of your clicks. A stopwatch looms over every task, and if you take too long trying to get something right, especially a sensitive topic like medical advice, a supervisor will question your productivity. You are told to focus on quantity, not the quality of what you’re putting out into the world.
After months of this, you’re likely suffering from anxiety, disillusioned with the tech you once admired, and worried about being laid off. So, would you still take the job? And more importantly, would you ever trust a product that was built under these conditions? The people living this reality don’t.
So, You Want to Be an AI Trainer? Here’s the Job They Don’t Advertise