Everybody's building. What aren't they asking?
Issue #7


The AI wave is exciting and unsettling at the same time. Unemployment for recent grads hit 5.6% — worst in 37 years. Meanwhile Anthropic went from $9B to $19B ARR in 90 days. The build momentum is real. So are the costs nobody modeled: maintenance economics, token subsidies with expiration dates, and the apprenticeship disappearing underneath us.

By Victor Sowers — 15 years scaling B2B SaaS GTM

AI Strategy · Build Economics · Human Premium · Labor Market · Token Costs · Newsletter Launch · 3 deep dives · ~8 min read

The Signal

  • Youth unemployment: Recent grad unemployment hit 5.6% — worst in 37 years. BDR positions getting 500 applications. (Fortune)
  • Anthropic ARR: Anthropic went from $9B to $19B ARR in 90 days. Non-mega AI startups saw $18.2B in VC funding last quarter, up 57% YoY. (Multiple)
  • 95% AI pilot failure: MIT research found gen AI pilots have a 95% failure rate inside companies. BCG found 42% of AI initiatives abandoned. (MIT / BCG)
  • Token subsidy math: OpenAI lost ~$5B on $3.7B revenue. Some power users generated $35K in compute costs on $200/month plans — a 175x subsidy. (Ben Thompson / Stratechery)

The Shift

The AI wave is exciting and unsettling at the same time.

Recently, unemployment for recent grads hit 5.6%, the worst in 37 years. This worst-in-a-generation stat doesn't feel like it's going anywhere, and I predict the trend will accelerate, especially in GTM. That's hardly a controversial claim. Across many of the companies I interact with, new hiring is flat. BDR positions get 500 applications. Those in jobs are hesitant to move. Those without are taking months to find something, if they can. Those of us who know recent college grads suspect 5.6% is a drastic undercount.

This trend is not limited to GTM, and the pattern brings both short-term pain and a long-term question: how do we train the next generation if they can't get at-bats? The senior/junior AI benefit curve isn't evenly distributed.

Meanwhile, AI is transforming work in exciting ways, and that is showing up in the growth runs of some of the giants.

  • Anthropic went from $9B to $19B ARR in 90 days
  • Non-mega AI startups saw $18.2B in VC funding last quarter, up 57% year over year
  • Opus 4.5 to 4.7 represents the beginning of a parabolic capability curve (or really 4.5 and 4.6 since Opus 4.7 kind of sucks)
  • OpenAI's new 5.5 model is nothing to sneeze at either

The money is moving in ways that are fast, concentrated, and almost entirely toward compute and the companies that control it.

The combination of unemployment and unequal distribution makes this more than a technology story. It's a labor story, a capital concentration story, and an equity story happening on the same timeline.

So AI is here, and it's obviously making a difference. But then there's figuring out what's actually real inside work, and what is still hype or happening in places without much to displace yet. That's the story behind the now-infamous MIT study, which found that gen AI pilots have a 95% failure rate inside companies (in a separate study, BCG found that 42% of AI initiatives have been abandoned).

AI deployments have yet to fully grapple with the realities of navigating human organizations, with their different incentives, structures, and entrenched modes of working. But they also haven't fully crossed the technological chasm. There are still real constraints on what we can hand to AI and what we cannot trust it with. That question will remain for a long time: the more powerful models are reliable more often, but when they are wrong, they can be categorically wrong.

This newsletter covers those macro trends as they relate to weekly news. A few of the trends from past weeks are below.

1

Build AND Govern — The Costs Nobody Modeled

Key takeaway: The maintenance line item that replaces the SaaS line item is going to surprise a lot of finance teams who haven't modeled it yet.

The build wave is real and it's moving fast:

  • A VP of Sales built his own Claude Code tooling and moved win rate 8% without buying a vendor.
  • Al Chen at Galileo fed Claude Code his entire production codebase and customers noticed the quality lift.
  • A weekend-coded CRM has three months of inbound interest nobody asked for.
  • A practitioner open-sourced an AI job search system that scored 740+ offers and landed him a job.

I'm part of the build momentum. Every conference I've attended this year has someone declaring SaaS dead because of this trend. Databricks reports that only 19% of organizations have deployed AI agents — but those 19% are already creating 97% of new databases. Is some of it slop? Undoubtedly. But not all of it.

Here are a few key learnings from building:

First, your builds are only as good as the context you bring: deal history, ICP definitions, competitive positioning, how your buyers actually buy, and so on. Tomasz Tunguz at Redpoint is one of many voices highlighting that model parity means the differentiation is your first-party data. Well, that plus "taste," plus deciding what to work on and what not to, plus still stitching it all together.

Second, people are underweighting maintenance. Who maintains the context layer once it's built? Across all business domains? And what if you do have that person but they leave? Zach Vidibor at GTM Engineer School calls this the "strategy compression problem." And Jason Lemkin's post-mortem on Agents #001 is the most honest assessment I've read of vibe-coded apps needing daily maintenance.

Third, the economics are shifting underneath the build wave. OpenAI lost roughly $5 billion on $3.7B in revenue last year. Some power users generated $35K in compute costs on $200/month plans — a 175x subsidy. Ben Thompson's compute economics piece calls out the inescapable fact that reasoning models reintroduce real marginal costs into a stack everyone assumed would trend toward zero. And the contracts already reflect it — sub-1-year SaaS contracts have tripled from 4% to 13% since 2023. Buyers are hedging as builders accelerate. Anthropic literally rewrites Claude Code's harness from scratch every few weeks. The models and primitives shift constantly.
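The 175x figure is worth seeing as plain arithmetic. A minimal sketch, using the reported numbers above and following the source's framing of compute cost against the monthly plan price (the time window for the $35K isn't specified):

```python
# Back-of-the-envelope subsidy math using the reported figures:
# a power user generating $35K in compute on a $200/month plan.
monthly_price = 200      # $/month, reported plan price
compute_cost = 35_000    # $ of compute attributed to one power user

subsidy_multiple = compute_cost / monthly_price
print(f"{subsidy_multiple:.0f}x subsidy")  # 175x subsidy
```

Even annualizing the subscription to $2,400 only brings the multiple down to roughly 14.6x — still deeply underwater on that user.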

I don't think SaaS dies. But I think the maintenance line item that replaces the SaaS line item is going to surprise a lot of finance teams who haven't modeled it yet.

2

The Human Premium — What Compounds When Execution Is Free

Key takeaway: The busywork was the apprenticeship. What happens when it's automated away before the next generation gets through it?

Stated preference vs. revealed behavior. Gartner says 67% of B2B buyers prefer a rep-free buying experience (646 buyers surveyed Aug–Sep 2025 — treat as directional, not gospel). That stat will show up in every Q3 budget deck as justification for headcount cuts, and most of them will get it wrong. The gap between "I prefer rep-free" and "I bought without a rep" is enormous. I've been a power buyer of enterprise software for over a decade, and I'd check the "prefer rep-free" box in a heartbeat. I'd also tell you that every complex deal I've closed involved a human at the table when it mattered. The transactional mid-market may genuinely go rep-free. But for complex enterprise deals, the human isn't the bottleneck — they're the trust layer the buyer needs before they can say yes internally. Or, less charitably, the person they can sue if things go sideways.

The field advantage compounds. Brett Queener's data makes the point from the supply side: 75%+ of early-stage pipeline comes through events and field moments (his Bonfire Ventures portfolio, not universal — but the pattern holds across the lean GTM teams I've worked with). Not digital-first pipeline. Human-surface pipeline. When execution is commoditized — when everyone can automate outbound and scale content at roughly similar quality — the field tilts toward whoever built the network AI can't replicate. Pick your hill carefully and show up there, physically, repeatedly, until you own the terrain.

The apprenticeship is disappearing. Bill Binch has carried a quota for 116 consecutive quarters. Twenty-nine years of reading rooms, navigating seven-figure deals, knowing when to push and when to listen. That's judgment, and judgment doesn't automate. But if AI handles the commodity work, you need fewer total reps. Which raises the question of how the next generation gets to where Bill is. Cold calls taught them to read tone. Data entry forced them to learn the CRM. Basic qualification built instinct for what a real deal looks like. The busywork was part of an apprenticeship. What happens now?

3

Building Is Fragile Too

Key takeaway: The ones who built the thickest automation layers were, ironically, the most exposed when the model changed overnight.

On April 16, Anthropic shipped Opus 4.7 and deliberately broke backwards compatibility. Existing code threw errors. The new tokenizer consumed up to 1.35x more tokens. The model pushed back on instructions it used to follow.

My long-running agents stopped following their plans overnight. The teams that adapted quickly had three things: version-controlled prompts, eval suites, and a human who could tell them whether the output actually changed. Most teams have none of these. The ones who built the thickest automation layers were, ironically, the most exposed.
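For teams wondering what "version-controlled prompts plus eval suites" looks like in practice, here's a minimal sketch. The case names, the stub model, and the check functions are all illustrative assumptions, not any vendor's API — the point is the shape: prompts live in a repo, and a small regression suite runs whenever the prompt or the underlying model changes.

```python
# Minimal prompt-regression harness: each version-controlled prompt gets a
# suite of input -> property checks, so a model swap that changes behavior
# shows up as named failures instead of silent drift.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt_input: str
    check: Callable[[str], bool]  # property the model output must satisfy

def run_suite(call_model: Callable[[str], str], cases: list[EvalCase]) -> dict:
    """Run every case against the model and report which ones failed."""
    failures = [c.name for c in cases if not c.check(call_model(c.prompt_input))]
    return {"total": len(cases), "failed": failures}

# Example with a stub "model"; real usage would wrap your actual API client.
cases = [
    EvalCase("non_empty", "Draft outreach for an ICP account",
             lambda out: out.strip() != ""),
    EvalCase("echoes_input", "Summarize the ICP definition",
             lambda out: "ICP" in out),
]
report = run_suite(lambda p: f"stub response to: {p}", cases)
print(report)  # {'total': 2, 'failed': []}
```

The third leg — a human who can tell whether the output actually changed — is the part no harness replaces; the suite just tells them where to look first.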

It gets worse, and more meta, if we extrapolate this out a few years. For example, a software engineer with eleven years of experience posted that he'd caught himself unable to debug without AI — something he called the scariest thing he'd seen in the industry. The muscle atrophy is already happening. I'm not saying I panic when Claude goes down, but I'm not saying I don't either.

Reading Corner

  • AEO: How to Make AI Recommend Your Product — 94% of B2B buyers using AI during purchasing (survey-inflated, but the directional signal is real). What to steal: "The new homepage is a ChatGPT prompt." The AEO framework is worth understanding now.
  • Claude Cowork 101 — JJ Englert in Lenny's Newsletter. What to steal: the brain file pattern (a persistent preferences doc Claude reads every session) and the sub-advisory-board technique. Both cost nothing to implement this week.
  • The Events and Community Playbook — When speed commoditizes, IRL becomes the moat. The mechanism: trust that forms in person doesn't transfer to competitors who stay digital. Steal the event formats, not just the philosophy.
  • AI Native Growth Team — Blueprint for structuring growth around AI-native workflows. What to steal: the org chart that puts AI at the center of the growth function instead of bolting it on after.

Tool Watch

  • Intercom — Outcome-Based Pricing. Switched to charging per resolution instead of per seat. When your AI agent handles half the support conversations, per-seat pricing punishes your best customers for adopting the product. First credible template for AI-era pricing. Watch this model migrate to other categories — but note the hard part: outcome-based pricing assumes the outcome is legible. "Did the customer's problem go away?" has a verifiable answer. "Better strategy" doesn't.
  • Mangomint — 7.2x Growth via Subtraction. Hit 7.2x growth by stripping complexity, not adding features. Consolidated scheduling, payments, client management, and marketing onto one surface in a market where incumbents had years of feature bloat. The complexity tax shows up in churn, not in support tickets. They killed features their competitors were still adding.
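The per-seat vs. per-resolution tension is easiest to see with toy numbers. These figures are illustrative, not Intercom's actual pricing — the point is the direction of each curve as AI absorbs work:

```python
# Why per-seat revenue falls as an AI agent absorbs conversations,
# while per-resolution revenue stays tied to work actually done.
# All numbers below are hypothetical.

def per_seat_revenue(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def per_resolution_revenue(resolutions: int, price_each: float) -> float:
    return resolutions * price_each

conversations = 10_000
seats_before_ai = 20   # human seats needed pre-AI
seats_after_ai = 10    # AI resolves half the conversations, so half the seats

print(per_seat_revenue(seats_before_ai, 99))   # vendor revenue before AI
print(per_seat_revenue(seats_after_ai, 99))    # halves as the customer adopts AI
print(per_resolution_revenue(conversations, 0.99))  # tracks resolutions, not seats
```

Under per-seat pricing, the vendor's best customer — the one who adopts the AI agent hardest — is the one whose bill shrinks fastest.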

One Thing I'm Thinking About

This newsletter is a build-in-public experiment: using AI to build data aggregation, classification, and filtering layers around a specific space — AI for operators, in this case — and then running that AI layer through a human subject-matter expert to create an ongoing editorial perspective on what's happening in AI as it relates to business, GTM, and shifting ways of working.

Functionally, here's how it works: I have scrapers and RSS feeds covering over 300 sources, plus additional ingest layers for Reddit, LinkedIn, and X, all running daily. Claude classifies everything and gives me a filtered daily digest of relevant articles that I thumbs-up, thumbs-down, comment on, or ignore. At the end of the week, seven AI agent personas evaluate, debate, and score the feed to figure out what's important and interesting from different perspectives (CRO, VP RevOps, AI-pilled builder, skeptic CMO, and so on). In this phase my daily picks matter, but they don't dominate the discussion.
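The shape of that pipeline can be sketched in a few lines. Everything here is a stand-in — the relevance rule, the persona scoring, and the sample feed are toys, not the actual implementation — but the ingest → classify → multi-persona score → rank structure is the one described above:

```python
# Toy version of the daily-ingest -> Claude-classify -> persona-scoring
# pipeline. classify() and persona_score() are illustrative stubs standing
# in for model calls; the structure is the point.
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    source: str
    scores: dict = field(default_factory=dict)  # persona -> score

PERSONAS = ["CRO", "VP RevOps", "AI-pilled builder", "skeptic CMO"]

def classify(article: Article) -> bool:
    """Stand-in for the daily relevance filter (a model call in reality)."""
    return "AI" in article.title  # illustrative rule only

def persona_score(article: Article, persona: str) -> int:
    """Stand-in for one persona agent's relevance vote."""
    return 5 if "GTM" in article.title else 3

def weekly_digest(feed: list[Article]) -> list[Article]:
    relevant = [a for a in feed if classify(a)]
    for a in relevant:
        a.scores = {p: persona_score(a, p) for p in PERSONAS}
    # Rank by total persona score, highest first
    return sorted(relevant, key=lambda a: sum(a.scores.values()), reverse=True)

feed = [
    Article("AI agents in GTM", "rss"),
    Article("Quarterly earnings recap", "reddit"),
    Article("New AI pricing models", "x"),
]
for a in weekly_digest(feed):
    print(a.title, sum(a.scores.values()))
```

In the real system the filter and the personas are model calls with actual context, and a human review pass sits on top — the code above only shows where each piece slots in.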

When the AI is done, I review, curate, debate, edit, rewrite, and try to land on something interesting.

Get the verdict every Wednesday.

The AI x GTM briefing for operators. Free forever.

One email per week. Unsubscribe anytime. No spam, ever.