Human-AI Intersection · Thought Leadership · AI Weekly

AI News Weekly - 100 years from now: The Case for Artificial Stupidity - Mar 23rd 2026


Why I picked this

Victor flags this because it inverts the entire AI capability race. While everyone's optimizing for speed and autonomy, this piece asks: what if we're building the wrong thing? The 'artificial stupidity' frame isn't cute contrarianism — it's a serious design question about intentional friction. The author's exploring what happens when we optimize for human agency preservation instead of task completion velocity. This matters now because we're hardcoding automation assumptions into systems that will compound for decades. The philosophical framing ('100 years from now') gives permission to question premises we're treating as axioms in 2025. Worth reading not for predictions but for the design principles it surfaces: when should AI deliberately slow down, ask dumb questions, or force human decision points? That's the kind of systems thinking that separates builders from feature shippers.

human-in-the-loop design · automation philosophy · intentional friction · AI capability constraints · long-term systems thinking

Three lenses

Builder

The 'worse on purpose' constraint is actually a product spec — I'd prototype an AI assistant that requires human confirmation on every third action, measure task completion vs. error rate, and see if intentional friction creates better outcomes than full automation. Deployable this quarter.
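The "confirmation every third action" experiment above can be sketched as a small wrapper. This is a minimal illustrative prototype, not an existing API — `FrictionGate`, `execute_fn`, and `confirm_fn` are all hypothetical names for the idea of inserting a forced human decision point into an otherwise autonomous loop, while logging outcomes so friction vs. full automation can be measured.

```python
# Hypothetical sketch of the "human confirmation every third action"
# friction pattern. All names here are illustrative assumptions.

class FrictionGate:
    """Wraps an action executor and forces a human decision point
    every `interval` actions instead of running fully autonomously."""

    def __init__(self, execute_fn, confirm_fn, interval=3):
        self.execute_fn = execute_fn  # performs the AI-proposed action
        self.confirm_fn = confirm_fn  # asks the human: proceed? (returns bool)
        self.interval = interval
        self.count = 0
        self.log = []                 # outcome record for the completion/error comparison

    def run(self, action):
        self.count += 1
        # Every `interval`-th action requires explicit human confirmation.
        if self.count % self.interval == 0 and not self.confirm_fn(action):
            self.log.append((action, "skipped"))
            return None
        result = self.execute_fn(action)
        self.log.append((action, "executed"))
        return result


# Example run: the human declines at every checkpoint, so actions
# 3 and 6 are skipped while the rest execute automatically.
gate = FrictionGate(
    execute_fn=lambda a: f"done:{a}",
    confirm_fn=lambda a: False,
)
results = [gate.run(a) for a in ["a", "b", "c", "d", "e", "f"]]
# results == ["done:a", "done:b", None, "done:d", "done:e", None]
```

The point of the log is the measurement the Builder lens calls for: compare task completion and error rates between `interval=3` and an ungated baseline to see whether the friction pays for itself.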

Revenue Leader

Philosophically interesting, operationally vague. Show me the pilot where 'artificial stupidity' improved win rates or reduced churn, then we'll talk about rolling it out. Until then, this is a dinner party conversation, not a deployment strategy.

Contrarian

Everyone will nod along to this and then immediately go back to automating everything because 'intentional friction' doesn't show up in velocity metrics. The real test: name one company that's actually shipping AI that's deliberately less capable. You can't, because the incentives don't support it.


Why this matters for operators: Surfaces the design question operators aren't asking: when should AI deliberately not automate? Relevant for teams building internal tools where error cost exceeds speed benefit.

I cover AI×GTM intelligence like this every Wednesday.

Get STEEPWORKS Weekly

More picks

Enterprise AI · n8n Blog

n8n Partners with SAP to bring Visual AI Workflow Orchestration to Enterprise

  • n8n will be embedded as a fully managed environment within SAP's Joule Studio on the Business AI Platform
  • Integration provides visual AI workflow orchestration for SAP developers with built-in identity, access control, and compliance
  • Partnership positions n8n within SAP ecosystem alongside SAP Build and Integration Suite for agentic workflow capabilities
automation-stacks · ai-workflow-orchestration · enterprise-ai-adoption
AI×GTM · Hello Operator · Victor's pick

SaaSletter - Maybe AI NRR Actually Will Be Great?

Cool thesis, with lots of great links.

  • Makes the contrarian case that AI could positively impact NRR, against fears that AI will reduce expansion revenue
  • References ServiceNow 2026 data and State of Martech 2026 report as potential evidence sources
  • Includes podcast interview with Tim Sanders from G2, likely discussing market trends and vendor landscape
ai-nrr-impact · revenue-platform-consolidation · martech-landscape

This analysis was produced using the STEEPWORKS system — the same agents, skills, and knowledge architecture available in the GrowthOS package.