Human-AI Intersection · r/artificial

Why Hasn’t AI Made Work Easier?

ai-productivity-paradox · shallow-work-trap · deep-work-decline · ai-adoption-consequences

AI users spent 100%+ more time on email/messaging and 9% less time on focused work — we're working faster on the wrong things.

Key takeaways

  • Large-scale study (164K workers, 180-day tracking) shows AI adoption doubled time spent on email/messaging/chat and increased business software use by 94%, but reduced focused work time by 9%
  • This represents a 'productivity paradox'—AI accelerates shallow, context-switching work while cannibalizing the deep work that drives actual value creation
  • Pattern repeats historical technology adoption cycles (email, mobile, video-conferencing) where efficiency tools paradoxically increased busyness without proportional output gains
  • The methodology is particularly strong: individual tracking before/after AI adoption with a control-group comparison, which helps control for (though cannot fully eliminate) confounding variables
  • Represents an emerging contrarian narrative against uncritical AI adoption — organizations need intentional frameworks to prevent AI from becoming another busyness multiplier

Why this matters for operators: Critical for companies implementing AI tools — they need frameworks to prevent shallow work from proliferating.

I cover AI×GTM intelligence like this every Wednesday.

Get STEEPWORKS Weekly

More picks

Personal Productivity & AI-Augmented Work · TechCrunch AI

Cursor admits its new coding model was built on top of Moonshot AI’s Kimi

  • Cursor's new coding model is built on Chinese AI company Moonshot AI's Kimi foundation model
  • This represents a significant supply-chain transparency issue in a widely adopted developer tool
  • Geopolitical tensions around Chinese AI models create regulatory and compliance risk for enterprises using Cursor
ai-coding-tools · cursor-vs-copilot · regulatory-impact
Enterprise AI · The Verge AI

Confronting the CEO of the AI company that impersonated me

  • Grammarly/Superhuman shipped an 'Expert Review' feature that cloned real journalists and experts as AI advisors without permission, triggering a class action lawsuit
  • The company's response evolved from an email opt-out to killing the feature entirely after backlash, demonstrating a reactive rather than proactive approach to AI ethics
  • The case illustrates an emerging regulatory/legal risk category: AI companies using real people's professional identities and expertise as training data or product features without consent
ai-policy · regulatory-impact · ai-ethics-backlash

This analysis was produced using the STEEPWORKS system — the same agents, skills, and knowledge architecture available in the GrowthOS package.