A VP Sales Built His Own AI Tools. Your RevOps Team Stopped Doing RevOps. Both Are True.
Issue #5

Operators are shipping their own tools faster than vendors can sell them. RevOps teams are abandoning pipeline hygiene to chase AI pilots. Only 19% of orgs have deployed agents. The gap between building and governing is where things break.

By Victor Sowers — 15 years scaling B2B SaaS GTM

DIY Operators · Claude Code · RevOps · AI Adoption · Shadow AI · Horizontal GTM · 4 deep dives · ~5 min read

The Signal

    The Shift

    DIY tool creation is surging while enterprise adoption remains low.

    Operators are shipping their own tools faster than vendors can sell them. Al Chen at Galileo fed Claude Code his production codebase and customers noticed the quality lift. A VP of Sales built his own Claude Code tooling and moved win rate 8 percent without buying a vendor. A weekend-coded CRM for B2B SaaS has three months of inbound interest the founder did not ask for. A practitioner open-sourced an AI job search system that scored 740+ offers and landed him a job. Simon Willison, talking to Lenny, says the inflection point is behind us. He is not usually one for hype.

    Meanwhile Anthropic just went from $1B to $19B in ARR, the fastest-growing AI product in history. And that's not to mention Mythos, the new model reportedly so capable it is being withheld over cybersecurity concerns.

    The second trend contradicts the build scenario precisely enough to be interesting. Databricks reports that only 19 percent of organizations have deployed AI agents, and those 19 percent are already creating 97 percent of new databases. The adoption story is not "everyone is doing this." It is "a small number of teams are doing it, and even most of those don't have a governance plan."

    For peak cynicism, there is a hilarious Reddit thread on McKinsey's AI productivity numbers that highlights the gap between hype and reality, and why enterprises will be in scramble mode, investing aggressively, for quite some time.

    1

    The RevOps Dispatch Nobody Wanted to Write

    Based on: r/revops

    A Reddit thread titled "some revops teams have stopped doing revops" makes a provocative claim: people are chasing the AI train instead of focusing on the foundations of the day job.

    The consensus is that the people who built the GTM tooling stack are abandoning pipeline hygiene to build AI tools, without thinking about downstream workflows or maintenance costs. It raises an uncomfortable question: how much of the AI push is simply a lot of us pouring time and energy in so we are not left behind?

    2

    Al Chen, the Codebase, and What "Your Full Context" Actually Means

    Based on: Lenny's Newsletter

    Al Chen at Galileo fed Claude Code his entire production codebase and customers noticed the quality lift. My first read was "so risky, what about all your proprietary data," immediately followed by "but the grounding and ROI make sense." That hedge is the correct frame. Full context beats scoped prompts, and the quality lift is real.

    But here is what the article is actually about: Galileo had the engineering discipline to feed a structured codebase to an AI model, manage the output risk, and ship customer-visible improvements. The lesson transfers. The execution prerequisites do not.

    The part nobody is writing about yet is the shadow AI angle. Al Chen had structure, governance, and engineering oversight. Most operators feeding production data to Claude Code at two in the morning have none of those things. When employees feed customer data, deal threads, and internal communications to AI tools without a governance layer, you get a data exposure surface your security team cannot see and your procurement team has not scoped. Yash Tekriwal at Clay built a custom Slack inbox the same way and reports it was easier than expected. That is the point. The barrier to feeding production data to an AI tool is now low enough that it is happening whether your org has a policy or not.

    3

    How Databricks Sells to Everyone Without Being Vertical

    Based on: SaaStr

    Every growth-stage B2B company this quarter is having the same argument: vertical product variants, or horizontal product with verticalized GTM? SaaStr's breakdown of how Databricks sells to dozens of industries without a single vertical product is the best piece I have read on why the reflexive "build vertical" answer has more nuance than you would think.

    When a buyer asks "do you know our industry," they are asking a risk question: what blows up in my face if I buy this and your team does not understand my regulatory environment, my data flows, my customer pattern? Databricks answers that risk question directly. They demonstrate that the platform handles the buyer's risk surface through reference architectures and composability of the underlying data layer. The reframe from "let me show you our healthcare logos" to "here is how the platform addresses your specific risk" is teachable at any scale.

    Databricks' horizontal GTM works because their data architecture is genuinely composable, a precondition that is far from universal.

    4

    Contrarian Corner: Skill Decay Is Already Here

    Based on: r/artificial

    An eleven-year software engineer caught himself unable to debug without AI and called it the scariest thing he had seen in the industry. A separate thread surfaced practitioners feeling anxious about deviating from what AI tells them to do.

    Collectively we're going to need to figure out how to set aside time to keep doing "the work", even as what "the work" is undergoes dramatic transformation.

    At the same time, and again showing the all-in versus definitely-not split, eighty percent of white-collar workers are refusing their employers' AI adoption mandates. On the surface that sounds crazy, but it has real merit given the skill decay patterns above.

    Reading Corner

    • Leaked Claude Code source — community patched the token drain. A community member treated a closed AI tool like an open-source debugging problem: read the leaked source, found the root cause, shipped a patch. The governance read is more interesting. A paid enterprise tool had a cost-bleeding defect users could not see, the vendor did not surface it, and it was only diagnosed through unauthorized code access. Your AI stack is now a supply chain, and procurement has not caught up.
    • AI security is being figured out in production right now. The thread normalizes using paying customers as the de facto security QA environment for AI tools. For any operator whose AI stack touches customer data, this is a liability disclosure, not a frontier story. Pairs with the Claude Code leak: your AI tool vendors are now attack surfaces, and nobody on your procurement team is running that assessment yet.

    Tool Watch

    • Cloudflare shipped a WordPress for AI agents. A managed hosting layer purpose-built for AI agent deployment, with the implied pitch that your agents should live on Cloudflare's edge. If you are building on Workers already, this is a natural extension. If you are not, watch whether the developer tooling matures before committing. The infrastructure layer for AI agents is consolidating. (source)
    • Claude Code now submits apps to App Store Connect. The scope of "coding assistant" expanded again. Code generation was step one. Build, test, and deploy pipelines are step two. Submitting to app stores and walking through review is step three. AI dev tools are moving from writing code to owning the full shipping workflow. For GTM teams evaluating build-vs-buy, the "build" side just got cheaper again. (source)

    Get the verdict every Wednesday.

    The AI x GTM briefing for operators. Free forever.

    One email per week. Unsubscribe anytime. No spam, ever.