Human-AI Intersection · Thought Leadership · AI Weekly

AI News Weekly - 100 Years From Now: The Case for Artificial Stupidity - Mar 23rd 2026


Why I picked this

Victor flags this because it inverts the entire AI capability race. While everyone's optimizing for speed and autonomy, this piece asks: what if we're building the wrong thing? The 'artificial stupidity' frame isn't cute contrarianism — it's a serious design question about intentional friction. The author explores what happens when we optimize for preserving human agency instead of task-completion velocity. This matters now because we're hardcoding automation assumptions into systems that will compound for decades. The philosophical framing ('100 years from now') gives permission to question premises we're treating as axioms in 2025. Worth reading not for predictions but for the design principles it surfaces: when should AI deliberately slow down, ask dumb questions, or force human decision points? That's the kind of systems thinking that separates builders from feature shippers.

human-in-the-loop design · automation philosophy · intentional friction · AI capability constraints · long-term systems thinking

Three lenses

Builder

The 'worse on purpose' constraint is actually a product spec — I'd prototype an AI assistant that requires human confirmation on every third action, measure task completion vs. error rate, and see if intentional friction creates better outcomes than full automation. Deployable this quarter.
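The Builder experiment above can be sketched in a few lines. This is a minimal, hypothetical prototype (all names here, like `FrictionGate`, are my own, not from the article): a wrapper that forces a human confirmation on every Nth action and tracks completion vs. rejection counts, so you can compare outcomes against a fully automated baseline.

```python
class FrictionGate:
    """Wraps an AI action executor, forcing human confirmation on every Nth action.

    A sketch of the 'intentional friction' experiment: gate a fraction of
    actions through a human decision point and record the outcomes.
    """

    def __init__(self, execute_action, confirm, every_n=3):
        self.execute_action = execute_action  # callable: action -> result (the AI step)
        self.confirm = confirm                # callable: action -> bool (the human decision)
        self.every_n = every_n                # force confirmation on every Nth action
        self.count = 0
        self.stats = {"completed": 0, "rejected": 0}

    def run(self, action):
        self.count += 1
        # Every Nth action requires explicit human approval before executing.
        if self.count % self.every_n == 0 and not self.confirm(action):
            self.stats["rejected"] += 1
            return None
        self.stats["completed"] += 1
        return self.execute_action(action)


# Example: a human reviewer who rejects everything they're asked to confirm.
gate = FrictionGate(execute_action=str.upper, confirm=lambda a: False, every_n=3)
results = [gate.run(x) for x in ["a", "b", "c", "d", "e", "f"]]
# Actions 3 and 6 hit the confirmation gate and are rejected; the rest run.
```

Swapping in a real `confirm` (a Slack prompt, a CLI `input()`, a review queue) and logging error rates alongside `stats` would give the task-completion vs. error-rate comparison the experiment calls for.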

Revenue Leader

Philosophically interesting, operationally vague. Show me the pilot where 'artificial stupidity' improved win rates or reduced churn, then we'll talk about rolling it out. Until then, this is a dinner party conversation, not a deployment strategy.

Contrarian

Everyone will nod along to this and then immediately go back to automating everything because 'intentional friction' doesn't show up in velocity metrics. The real test: name one company that's actually shipping AI that's deliberately less capable. You can't, because the incentives don't support it.


Why this matters for operators: Surfaces the design question operators aren't asking: when should AI deliberately not automate? Relevant for teams building internal tools where error cost exceeds speed benefit.

I cover AI×GTM intelligence like this every Wednesday.

Get STEEPWORKS Weekly

More picks

Human-AI Intersection · r/artificial

Why Hasn’t AI Made Work Easier?

  • Large-scale study (164K workers, 180-day tracking) shows AI adoption doubled time spent on email/messaging/chat and increased business software use by 94%, but reduced focused work time by 9%
  • This represents a 'productivity paradox'—AI accelerates shallow, context-switching work while cannibalizing the deep work that drives actual value creation
  • Pattern repeats historical technology adoption cycles (email, mobile, video-conferencing) where efficiency tools paradoxically increased busyness without proportional output gains
ai-productivity-paradox · shallow-work-trap · deep-work-decline
Personal Productivity & AI-Augmented Work · TechCrunch AI

Cursor admits its new coding model was built on top of Moonshot AI’s Kimi

  • Cursor's new coding model is built on Chinese AI company Moonshot AI's Kimi foundation model
  • This represents a significant supply chain transparency issue in a widely-adopted developer tool
  • Geopolitical tensions around Chinese AI models create regulatory and compliance risk for enterprises using Cursor
ai-coding-tools · cursor-vs-copilot · regulatory-impact
Enterprise AI · The Verge AI

Confronting the CEO of the AI company that impersonated me

  • Grammarly/Superhuman shipped 'Expert Review' feature that cloned real journalists and experts as AI advisors without permission, triggering class action lawsuit
  • Company response evolved from an email opt-out to killing the feature entirely after backlash, demonstrating a reactive rather than proactive approach to AI ethics
  • Case illustrates emerging regulatory/legal risk category: AI companies using real people's professional identities and expertise as training data or product features without consent
ai-policy · regulatory-impact · ai-ethics-backlash

This analysis was produced using the STEEPWORKS system — the same agents, skills, and knowledge architecture available in the GrowthOS package.