Human-AI Intersection | Practitioner Story | The Verge AI

You Could Be Next


Why I picked this

Victor's instinct here is correct — this piece deserves to break through the paywall noise. It's a practitioner story that shows what most AI adoption discussions carefully avoid: the human supply chain behind the models. Katya (pseudonym) is a displaced content marketer now training AI for $45/hour through Mercor, with zero visibility into which model she's improving or how long the work will last. Her project evaporated in two days. The irony is structural, not accidental: white-collar workers automated out of stable roles are now the precarious labor force teaching models to do 'the worst version' of what they used to do well.

This matters for GTM leaders because it surfaces the labor dynamics you're inheriting when you deploy AI tools. Someone trained that model. Probably someone who used to do the job you're now automating. The opacity is deliberate — workers don't know 'the client,' can't assess what they're building, can't opt out of training their own replacements. It's gig economy mechanics applied to knowledge work, with none of the transparency.

The consulting relevance is immediate: if you're running AI pilots in content, copywriting, or customer support, you're participating in this system whether you know it or not. The ethical questions aren't abstract. They're embedded in your vendor contracts and your team's job security. This article is a mirror, not a warning. You're already here.

ai-training-labor · white-collar-displacement · gig-economy-ai · data-labeling-workforce

Three lenses

Builder

The opacity is a feature, not a bug — if workers knew they were training competitors to their own roles, the labor pool would collapse. But that fragility is your risk surface. Build assuming your training data pipeline has a half-life measured in months, not years.

Revenue Leader

If I'm deploying AI content tools across my org, I need to know: who trained this model, under what conditions, and what happens when that labor supply dries up? The $45/hour gig worker today is my vendor's existential risk tomorrow. Show me the sustainability model or I'm not buying.

Contrarian

Everyone celebrates AI efficiency gains. Nobody's pricing in the cost of burning through the knowledge worker class that makes training possible. When Katya and hundreds like her stop taking these jobs — and they will — your model quality degrades and you don't even know why. I've seen cost-cutting destroy vendor reliability before. This is that, but hidden three layers deep in the supply chain.

My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable

Key takeaways

  • White-collar workers displaced by AI (content marketing, copywriting) are being recruited to train the very models that replaced them, creating a cruel economic feedback loop
  • AI training labor operates as precarious gig work with no job security: projects are canceled with zero notice even when workers have planned their finances around the income ($45/hr that can disappear in two days)
  • The AI training supply chain is deliberately opaque: workers don't know which AI they're training ('the client'), what it's for, or how their work fits into the larger system, preventing informed consent about contributing to further automation

People mentioned

  • Katya (pseudonym), freelance journalist/content marketer @ Mercor (contractor, previously unemployed)
  • Melvin, AI interviewer @ Mercor

Companies

Mercor · Crossing Hurdles

Key metrics

  • $45 per hour
  • hundreds of people
  • two days
  • several hours per task

Why this matters for operators: GTM leaders deploying AI content/copywriting tools need to understand the precarious labor dynamics and ethical implications embedded in their vendor supply chains; opacity in training pipelines creates sustainability and quality risks

I cover AI×GTM intelligence like this every Wednesday.

Get STEEPWORKS Weekly

More picks

Enterprise AI | MIT Technology Review AI

Rebuilding the data stack for AI

  • Enterprise AI adoption is bottlenecked by fragmented, ungoverned data infrastructure rather than AI model capabilities
  • Competitive differentiation comes from proprietary data combined with third-party enrichment, not just AI tools
  • Evolution from 'system of engagement' to 'system of action' represents shift toward autonomous AI agents managing workflows
data-infrastructure · enterprise-ai-readiness · ai-governance
Enterprise AI | Demand Gen Report

Gartner: Explainable AI Will Drive LLM Observability Investments

  • LLM observability adoption will jump from 15% to 50% of GenAI deployments by 2028, driven by explainability requirements for scaling beyond low-risk use cases
  • Traditional IT observability (latency, cost) is insufficient: new metrics are needed, including hallucination detection, factual accuracy, logical correctness, and sycophancy measurement
  • Gartner recommends XAI tracing for high-impact use cases, multidimensional observability platforms, and continuous evaluation frameworks with human-in-the-loop validation
ai-policy · regulatory-impact · market-consolidation
AI Development | Lenny's Newsletter

From a $6.90 newsletter to $3M API: How a non-coder built Memelord | Jason Levin

  • A non-technical founder scaled from a $6.90/month newsletter to $100K ARR using Bubble (no-code), then raised $3M to build an API-first product, validating no-code as a legitimate path to venture scale
  • Mandatory 'vibe-coding' rule for the marketing team: employees must build their own AI tools and automations, representing a shift from using AI to building with AI as a core marketing skill
  • Free AI tools as lead gen replacing traditional content: 'free tools are the new PDF downloads' generated hundreds of thousands of emails, signaling an evolution in the PLG motion
ai-coding-tools · automation-stacks · plg-to-sales

This analysis was produced using the STEEPWORKS system — the same agents, skills, and knowledge architecture available in the GrowthOS package.