The money moved. The jobs didn't follow.
Issue #4


AI acceleration and the hiring freeze are two sides of the same coin.

By Victor Sowers — 15 years scaling B2B SaaS GTM

Labor Market · AI Acceleration · GTM Hiring · Concentration Dynamics · Subtraction · 2 deep dives · ~6 min read

The Signal

  • Claude Mythos: Anthropic's leaked next model, described internally as "a step change" in coding, reasoning, and cybersecurity; ARR went from $9B to $19B in 90 days (Fortune)
  • Claude Channels: Push-based AI that surfaces insight before you ask; the activation energy problem for AI adoption just got a solution (GTM AI Podcast)
  • AWS agents in production: Not a pilot — production agents replacing production workflows, post-headcount cuts (The Information)
  • Mangomint 7.2x: Hit 7.2x growth by stripping complexity, not adding features — the salon software market had a complexity tax; they removed it (SaaStr)
  • Trust is the moat: Peer validation and community close deals; now there's CFO-ready data to prove it (Demand Gen Report)

The Shift

The New York Times called it the worst spring for young degree holders since the depths of the pandemic. Unemployment for recent grads hit 5.6%. More than 40% are underemployed. Fortune says entry-level is the worst it's been in 37 years. BDR positions are getting 500 applications. Experienced AEs are taking SDR roles just to stay employed.

Meanwhile, Claude Mythos leaked internally as "a step change" in coding, reasoning, and cybersecurity. Anthropic's ARR went from $9B to $19B in 90 days. AWS is running agents in production after cutting the people who ran those workflows. Tunguz called the shift: we've moved from Jevons territory (cheaper tokens, more usage) into Veblen territory, where the best models command premium prices because the ROI keeps exceeding the cost. And Veblen economics concentrate. The companies that can afford Mythos-class capability will be operating in a different league than the ones running last year's stack, and that gap widens with every release.

As model capability compounds, the automation economics shift with it. Tasks that required human judgment last year don't this year. Companies don't announce this. They just stop backfilling. Entry-level roles disappear, absorbed into workflows, before anyone has a number for what happened. The youth hiring freeze isn't a lagging indicator of macroeconomic softness. It's a leading indicator of what displacement looks like at the beginning, before it reaches the middle.

Karpathy is writing about self-tuning agents that calibrate from what you act on and dismiss, without needing instruction. Anthropic announces Claude Channels, and agents become ambient: always on, event-triggered, learning what earns your attention. Models training on model outputs. The gap between "AI as tool" and "AI as colleague" is closing faster than the retraining window that every optimistic jobs forecast assumes.

Anthropic more than doubled ARR in 90 days. AWS replaced production workflows with agents and cut the humans who ran them. The money is moving toward AI capacity, and the jobs being created (infrastructure, orchestration, signal quality) require a profile that the 500-application BDR candidate does not have yet. If you're planning headcount for 2027, plan around the automation economics your competitors are already running, not the ones you're comfortable with.

1

Mangomint's 7.2x — What Subtraction Actually Costs

Based on: SaaStr

Key takeaway: Find a ghost gate — an approval step that exists for a person who no longer works here, a tool you deprecated, or a policy that changed. Kill one this week.

Mangomint's VP of Sales says the company grew 7.2x in a market that Mindbody owned. By building less, not more.

Mangomint sells salon and spa software. They came into a market where incumbents had spent years adding integrations, admin panels, and feature checkboxes nobody asked for. The complexity wasn't accidental. It was the product of every reasonable short-term decision: add the requested feature, win the deal, keep the customer from churning to a competitor with a longer list. That logic compounds into software that requires training to use and support tickets to survive.

Mangomint consolidated. Scheduling, payments, client management, and marketing on one surface instead of four tabs across three tools. Every feature you add creates a new decision point for the buyer: "do I use this?" Most of those decisions, after an enthusiastic onboarding week, resolve to no. That's complexity tax on your customer's attention. It shows up in churn, not in support tickets.

Here's where I keep getting stuck with this story. The lesson sounds clean in retrospect. In practice, building less is one of the hardest organizational bets you can make. Boards want roadmap momentum. Sales teams want feature charts. Investors want TAM stories that imply more surface area. Every legitimate pressure inside a growing company pushes toward addition. Staying deliberately smaller than you could be doesn't get easier when growth is working. It gets harder, because now you have more to protect.

The 7.2x didn't come from a product insight. It came from subtraction discipline: the organizational muscle to keep saying no when yes was easier, compounded over years. That's how you survive a concentrating market. You don't match every feature the well-capitalized competitor ships. You're the product that doesn't require a training program to use.

2

Claude Channels — Read the Direction, Not the Feature

Based on: GTM AI Podcast

Key takeaway: Don't start with Channels. Start with one workflow where your team consistently forgets to use an AI tool that would actually help them. Build the push layer for that single case first.

Claude Channels is going to get read as "pull is dead." Wrong read.

Pull isn't dead. What Channels signals is a direction: toward push-based, always-on, memory-rich agents that act without waiting to be asked. Most teams are still debating whether to adopt AI tools at all, while the architecture is moving toward "which agent told you first."

I've watched this play out in every AI tool rollout I've been part of. You deploy something useful. Train the team. Week one, enthusiastic. Week four, compliant. Week eight, everyone's back to the old workflow because it's frictionless and the AI tool requires someone to remember to open it. The activation energy problem kills more AI adoption than any technical failure. Channels inverts it. The tool shows up when the event fires (a calendar trigger, a pipeline flag, a deal going dark) instead of waiting to be summoned.
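What "build the push layer for one workflow" can mean in practice is small. A minimal sketch, assuming nothing about any vendor's API — every name here (the staleness threshold, the `Deal` shape, the `send` channel) is hypothetical, chosen to illustrate a "deal going dark" trigger that fires on a schedule instead of waiting for someone to open a dashboard:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=7)  # hypothetical threshold for "deal going dark"

@dataclass
class Deal:
    name: str
    last_activity: datetime

def stale_deals(deals, now):
    """Event check: deals with no activity past the threshold."""
    return [d for d in deals if now - d.last_activity > STALE_AFTER]

def push_digest(deals, now, send):
    """Push layer: fire only when the event condition holds.

    `send` is whatever delivery channel the team already lives in
    (Slack webhook, email) — the point is that nobody has to remember
    to open anything.
    """
    stale = stale_deals(deals, now)
    if stale:
        send(f"{len(stale)} deal(s) have gone dark: "
             + ", ".join(d.name for d in stale))
    return stale

# Run on a schedule (cron, task scheduler), not behind a login.
now = datetime(2026, 2, 1)
deals = [
    Deal("Acme", datetime(2026, 1, 30)),    # active 2 days ago: fine
    Deal("Globex", datetime(2026, 1, 10)),  # silent 22 days: flagged
]
flagged = push_digest(deals, now, send=print)
```

One trigger, one delivery channel, zero activation energy — that's the single-case version of what Channels productizes.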

The deeper signal is Karpathy's self-tuning thesis. Right now, push means "we configured what to surface." Next: agents that calibrate from what you act on and ignore, getting sharper without explicit training. Channels is an early manifestation. The arrow, not the destination.

Here's where it gets hard. A push system that isn't calibrated becomes noise. Your team learns to ignore the channel, and you've built an expensive alert they've trained themselves to dismiss. The failure mode is signal quality: you've built the delivery mechanism before you've earned the interrupt.
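Guarding that interrupt budget can start as bookkeeping. A sketch under assumed numbers (the 20% floor and 10-observation minimum are illustrative, not a prescription): track what the team acts on versus dismisses per signal type, and stop surfacing a type once its act-rate falls below the floor:

```python
from collections import defaultdict

ACT_RATE_FLOOR = 0.2  # hypothetical: mute signals acted on < 20% of the time
MIN_SAMPLE = 10       # don't judge a signal type on too few observations

class SignalQuality:
    """Tallies act/dismiss outcomes per signal type and gates surfacing."""

    def __init__(self):
        self.acted = defaultdict(int)
        self.dismissed = defaultdict(int)

    def record(self, signal_type, acted_on):
        if acted_on:
            self.acted[signal_type] += 1
        else:
            self.dismissed[signal_type] += 1

    def should_surface(self, signal_type):
        total = self.acted[signal_type] + self.dismissed[signal_type]
        if total < MIN_SAMPLE:
            return True  # not enough data yet: keep surfacing
        return self.acted[signal_type] / total >= ACT_RATE_FLOOR

q = SignalQuality()
for _ in range(9):
    q.record("deal_went_dark", acted_on=False)
q.record("deal_went_dark", acted_on=True)  # 1 act in 10 = 10%, below floor
```

The mechanism is crude on purpose: it's the feedback loop that matters, because a channel that never mutes itself becomes the alert the team trains itself to dismiss.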

Where this is heading: ambient agents that self-tune don't just solve the activation energy problem. Over time, they remove the human decision point. The loop tightens (agent surfaces signal, agent acts on signal, agent learns from outcome) until the human's role is ratification, not judgment. The question for operators is whether you're building the signal quality layer now, while you still control what the agent surfaces and acts on, or whether you're going to inherit whatever defaults ship with the platform.

Reading Corner

  • AWS Accelerates Internal AI Agents Following Staff Cuts AWS automated customer-facing support workflows first, not internal ops, not back-office, because that's where the ROI was legible and the headcount savings were immediate. If you're mapping your own automation sequence, that prioritization logic matters more than the headline ratio.
  • AI Is Destroying Open Source, and It's Not Even Good Yet AI-generated PR volume has outpaced maintainer review capacity to the point where legitimate contributions get buried. The management implication: any workflow where AI generates and humans validate has this problem. Generation scales. Review doesn't.
  • The Last 4 Jobs in Tech The map of which roles survive. More useful if you disagree with it than if you agree.
  • The Events and Community Playbook When speed commoditizes, IRL becomes the moat. The mechanism: trust that forms in person doesn't transfer to competitors who stay digital. Steal the event formats, not just the philosophy.
  • How to Turn Claude Code into Your Personal Life OS Hilary Gridley's proof of concept. Personal-scale before enterprise-scale is usually the right order.

One Thing I'm Thinking About

I keep coming back to the ladder.

The old path from SDR to AE to enterprise closer assumed years of accumulating pattern recognition through repetition. Cold calls taught you to read tone. Data entry forced you to learn the CRM. Basic qualification built your instinct for what a real deal looks like. The busywork wasn't just busywork. It was the apprenticeship. You developed judgment by doing the things that were beneath you.

If AI handles all of that, how does the next Bill Binch get built? He has 116 quarters on quota. Twenty-nine years of learning what closes and what doesn't, accumulated through thousands of small repetitions that no longer exist in the same form. The new grad entering in 2026 has better tools than any SDR in history and a narrower path to the judgment that makes those tools worth using.

Same story as the opening. The acceleration compresses the ladder from both ends: the top gets more productive with fewer people, the bottom loses the on-ramp that built the top. The training ground that produced the operators who can run these AI systems is being automated away before the next generation gets through it. And we haven't figured out what replaces the apprenticeship.

Get the verdict every Wednesday.

The AI x GTM briefing for operators. Free forever.

One email per week. Unsubscribe anytime. No spam, ever.