Victor flags this because it inverts the entire AI capability race. While everyone is optimizing for speed and autonomy, this piece asks: what if we're building the wrong thing? The "artificial stupidity" frame isn't cute contrarianism; it's a serious design question about intentional friction. The author explores what happens when we optimize for preserving human agency instead of task-completion velocity. This matters now because we're hardcoding automation assumptions into systems that will compound for decades. The philosophical framing ("100 years from now") gives permission to question premises we're treating as axioms in 2025. Worth reading not for its predictions but for the design principles it surfaces: when should AI deliberately slow down, ask dumb questions, or force human decision points? That's the kind of systems thinking that separates builders from feature shippers.