
Your $25 Leads Are 10x Cheaper. Are They 10x More Useless?
The GTM stack is splitting into two speeds — operational wins you ship today, architectural shifts that matter in six months. Teams that sequence both pull away. Everyone else picks a lane and wonders why they stalled.
By Victor Sowers — 15 years scaling B2B SaaS GTM
The Signal
- Pricing inflection — Intercom's outcome-based model is the first credible template for charging when AI does the work (Hello Operator)
- Context > model — Tomasz Tunguz argues the richest context wins, regardless of which LLM you pick (Redpoint)
- 29 years on quota — Bill Binch's 116 quarters selling is the correction "automate everything" needed (The GTM Newsletter)
- Budget gravity shift — Content marketing budgets are moving away from SEO-first as AI search fragments discovery (Demand Gen Report)
The Shift
Something quiet is happening in budget conversations, and it isn't about adding AI line items. It's about subtracting old ones.
A Clutch report found content marketers are actively reallocating spend away from SEO-first strategies. Not because SEO is dead — it isn't — but because discovery is fragmenting faster than most teams' measurement can track. Rand Fishkin's latest research confirms it: search happens everywhere now. TikTok, Reddit, ChatGPT, Perplexity, Discord. Your blog post ranking #3 for a competitive keyword matters less when the buyer never typed that query into Google.
This is the contour of a bigger pattern. The GTM stack is splitting into two speeds. Speed one: operational improvements you can ship this afternoon — cheaper leads, AI-augmented reps, tighter outreach sequences. Speed two: architectural decisions that compound over quarters — how you price, how you structure context for AI systems, which human skills you double down on. Most teams can only see one speed. They're either heads-down shipping tactical wins or staring at the horizon debating strategy. The ones who sequence both — shipping today's wins while laying pipe for tomorrow's structural edge — are the ones pulling away.
What Do You Charge When the AI Does the Work?
Based on: Hello Operator
Intercom just answered a question most SaaS companies are still ducking. Their outcome-based pricing model charges per resolution, not per seat. When your AI agent handles half the support conversations, per-seat pricing punishes your best customers for adopting the product. Intercom reaccelerated growth after making the switch.
The concept is clean. The execution is harder. You need to define "resolved" in a way that's auditable, defensible, and not gameable. A chatbot that marks every ticket "resolved" after auto-responding hits the metric while delivering nothing. Any buyer with a pulse will notice.
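What would an auditable, non-gameable resolution look like in practice? A minimal sketch, assuming hypothetical ticket fields (Intercom's actual billing logic isn't public): a resolution is only billable if the customer explicitly confirmed the fix, or a grace window passed with no reopen. An auto-responder that closes every ticket instantly satisfies neither condition.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

REOPEN_WINDOW = timedelta(days=7)  # grace period before a silent close becomes billable

@dataclass
class Ticket:
    closed_at: datetime
    reopened_at: Optional[datetime]  # customer came back on the same issue
    customer_confirmed: bool         # explicit "that solved it" signal

def is_billable_resolution(ticket: Ticket, now: datetime) -> bool:
    """Billable only on explicit confirmation, or after the reopen
    window elapses with no reopen. Closing a ticket is never enough
    by itself, which is what makes the metric hard to game."""
    if ticket.reopened_at is not None:
        return False
    if ticket.customer_confirmed:
        return True
    return now - ticket.closed_at >= REOPEN_WINDOW
```

The grace window is the anti-gaming lever: the vendor only gets paid for a silent close once the customer has had a real chance to come back and say the problem isn't solved.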
Here's where I keep getting stuck, though. Intercom's model works because support resolutions are measurable. "Did the customer's problem go away?" has a verifiable answer. But what about the rest of us? In consulting, the resolution metric doesn't exist. "Better strategy" isn't a countable outcome. "Improved pipeline quality" takes quarters to prove. Outcome-based pricing assumes the outcome is legible, and for most knowledge work, it isn't.
That's the real question this model surfaces. Every company selling AI-augmented products needs to figure out what their "resolved ticket" equivalent is. Most don't have one yet, which is why they're still charging per seat for work an agent handles. Intercom built the template. The hard part for everyone else is finding their own outcome metric that's honest enough to charge against.
Key takeaway: Outcome-based pricing only works when the outcome is measurable. Find your "resolved ticket" equivalent before switching models.
Context Engineering Is the Moat. But How Thick Should the Walls Be?
Based on: Redpoint
Teams are still debating Claude vs. GPT like it's a meaningful variable. Tomasz Tunguz's latest piece argues they're tuning the wrong dial. His thesis: the richest context wins, regardless of which model runs underneath.
I've been building context engineering systems for the past year. This newsletter is one of them. Specialized agents scan hundreds of sources, structured context flows through debate rounds, and the output is shaped by accumulated editorial memory. The prompt matters far less than the architecture feeding it. When you see "paste a prospect's LinkedIn into ChatGPT and ask for a personalized opener," that's prompting. Context engineering means building pipelines that assemble firmographic data, engagement history, and intent signals before the model ever sees the request.
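The prompting-vs-pipeline distinction above can be sketched in a few lines. Everything here is illustrative: the fetchers are stubs standing in for a CRM, an enrichment vendor, and an intent provider, and all field names are hypothetical. The point is the shape: structured facts get assembled before the model ever sees the request.

```python
def fetch_firmographics(domain: str) -> dict:
    # Stub for an enrichment API call (e.g. company size, industry, stage)
    return {"industry": "fintech", "headcount": 420, "stage": "Series C"}

def fetch_engagement(domain: str) -> list:
    # Stub for CRM activity history
    return ["opened pricing page twice", "attended Nov webinar"]

def fetch_intent_signals(domain: str) -> list:
    # Stub for a third-party intent feed
    return ["researching support automation"]

def assemble_context(domain: str) -> str:
    """Layer the assembled evidence into one context block, so the
    model receives structured facts rather than a bare 'write me a
    personalized opener' ask."""
    firmo = fetch_firmographics(domain)
    sections = [
        f"Account: {domain}",
        f"Firmographics: {firmo['industry']}, {firmo['headcount']} employees, {firmo['stage']}",
        "Engagement: " + "; ".join(fetch_engagement(domain)),
        "Intent: " + "; ".join(fetch_intent_signals(domain)),
        "Task: draft a three-sentence opener grounded only in the facts above.",
    ]
    return "\n".join(sections)
```

Swap any model into the final call and the pipeline still works, which is exactly Tunguz's point: the assembly layer, not the model choice, is where the differentiation lives.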
Tunguz frames proprietary data as the moat. Your structured context becomes the thing no model upgrade can replicate. GPT-5 ships? Doesn't matter if your competitor's pipeline is five layers deeper.
But there's a competing argument worth holding alongside this one. Swyx's Latent Space analysis surfaces a genuine tension: Anthropic rewrites Claude Code's harness from scratch every three to four weeks. Their bet is that the model itself improves fast enough to make thick wrappers disposable. The "Big Model" camp says keep the harness thin, because anything you build around the model will be obsolete by next quarter. The "Big Harness" camp says the orchestration layer is where real value lives.
Both are partially right, and the answer depends on your time horizon. If you're building for this quarter, invest in context pipelines. The model is a commodity and your data assembly is the edge. If you're building for next year, hold the architecture loosely. The rep with twenty years of industry knowledge is running their own context engine. It's biological, it doesn't scale, and it walks out when they leave. But the models are getting better at replicating that judgment faster than most of us expected.
Key takeaway: Build context pipelines for this quarter's edge, but hold architecture loosely — models are catching up faster than expected.
116 Quarters on Quota and the Ladder That's Disappearing
Based on: The GTM Newsletter
Bill Binch has been carrying a quota for 116 consecutive quarters. Twenty-nine years across market cycles, tech waves, and multiple "this changes everything" moments. His career is a geological record of what compounds in B2B sales.
The obvious read: AI replaces reps. The sharper question Binch's career answers: which reps were doing replaceable work? The ones doing outreach, data entry, and basic qualification are already being automated. Binch's work compounds in a different direction: reading a room, navigating a seven-figure deal through twelve stakeholders who each need a different version of the story, knowing when to push and when to listen. That's judgment. Judgment doesn't automate.
But there's an implication the "humans win" crowd glosses over. If AI handles the commodity work, you need fewer total reps. Not different work for the same number of people. Fewer people doing higher-value work. The math is straightforward: automate research, CRM hygiene, scheduling, and first-draft emails, and you've eliminated the majority of a junior rep's day.
Which raises the harder question. Junior reps historically developed taste and judgment by doing the grunt work. Cold calls taught them to read tone. Data entry forced them to learn the CRM. Basic qualification built their instinct for what a real deal looks like. If AI handles all of that, how do you build the next Bill Binch? The traditional ladder from SDR to AE to enterprise closer assumed you'd spend years accumulating pattern recognition through repetition. That ladder is compressing.
If you already have judgment, this is a great time to be alive. AI clears the busywork and lets you spend more time on the work that closes deals. If you haven't developed that judgment yet, the path to developing it just got less obvious. Sales leaders need to think about this now: how do you build reps with taste when the apprenticeship work is being automated away?
Key takeaway: AI eliminates the apprenticeship work that built sales judgment. Leaders need new paths to develop reps with taste.
The Stack
Reading Corner
- AI Made Our Team Irrelevant — Cautionary tale of a team that automated itself out of relevance without building the next thing.
- AI Native Growth Team — Blueprint for structuring growth around AI-native workflows. Read alongside the above for the do/don't contrast.
- Search Happens Everywhere — Fishkin's data on discovery fragmenting beyond Google — directly relevant to budget conversations.
- OpenClaw and Claude Code in Sales — Practitioners sharing real Claude Code workflows for sales prep and outreach.
- Cold Email Volume Cut 68% — A rep cut outreach volume by two-thirds and improved results. Precision over spray-and-pray.
Tool Watch
- Claude Code Remote Control — Simon Willison documents programmatic control of Claude Code sessions, effectively turning it into an orchestratable agent. If you're building multi-step workflows that need AI in the loop, this is the plumbing layer worth understanding. (source)
- Claude Excel Plugin — An AI plugin for Excel that meets operators where they actually live — spreadsheets. One practitioner reported cutting five days of financial modeling to one. (source)
One Thing I'm Thinking About
The Intercom pricing story and the Bill Binch story seem like they're about different things — pricing models and sales craft. But they're answering the same question from different angles: what's the durable unit of value when AI handles the commodity work?
For Intercom, the answer is outcomes, not seats. For Binch, the answer is judgment, not activity. Both are saying: the thing you can charge for — whether you're a vendor or a rep — is the thing that requires discernment. Everything else is getting distilled down to infrastructure.
I keep coming back to a line I can't shake: the teams that sit with this question now, before the market forces it on them, will have already built their pricing and their org charts around the answer. Everyone else will be scrambling to retrofit.
That's the two-speed stack in a sentence. Ship today's wins. But don't mistake speed for strategy.
Get the verdict every Wednesday.
The AI x GTM briefing for operators. Free forever.
One email per week. Unsubscribe anytime. No spam, ever.
