Victor's instinct here is correct — this piece deserves to break through the paywall noise. It's a practitioner story that shows what most AI adoption discussions carefully avoid: the human supply chain behind the models. Katya (pseudonym) is a displaced content marketer now training AI for $45/hour through Mercor, with zero visibility into which model she's improving or how long the work will last. Her project evaporated in two days. The irony is structural, not accidental: white-collar workers automated out of stable roles are now the precarious labor force teaching models to do 'the worst version' of what they used to do well.
This matters for GTM leaders because it surfaces the labor dynamics you're inheriting when you deploy AI tools. Someone trained that model — probably someone who used to do the job you're now automating. The opacity is deliberate: workers don't know 'the client,' can't assess what they're building, and can't opt out of training their own replacements. It's gig-economy mechanics applied to knowledge work, with none of the transparency.
The consulting relevance is immediate: if you're running AI pilots in content, copywriting, or customer support, you're participating in this system whether you know it or not. The ethical questions aren't abstract; they're embedded in your vendor contracts and your team's job security. This article is a mirror, not a warning. You're already here.