The AI productivity lie
> 96% of executives expect AI to boost productivity. 77% of workers say it’s making things worse. Something doesn’t add up.
I’ve been thinking about a strange experience I had recently.
I was working with three terminal windows open, running AI agents in parallel. Task one running. Task two in progress. Task three queued up. No waiting, no idle time. Just pure output. I built more in a single afternoon than I used to build in a week.
And by evening, I was completely wiped out.
Not tired in the normal “long day” sense. Something deeper. The kind of exhaustion that makes you question whether the speed was worth it.
I thought I was the only one feeling this way. Turns out, I’m not even close.
The Numbers That Should Worry You
A recent Upwork study surveyed 2,500 workers across the US, UK, Australia, and Canada. The findings are striking:
- 96% of C-suite executives expect AI tools to increase their company’s productivity.
- 77% of employees say AI has actually *decreased* their productivity.
That’s not a gap. That’s a chasm.
And here’s the part that really got my attention: workers who use AI frequently are 30% more likely to report burnout than those who never use it. 45% of frequent AI users are burned out, compared to 35% of non-users.
The tools that were supposed to give us our time back are somehow taking more of it.
What’s Really Happening
The core insight isn’t that AI doesn’t work. It’s that we’ve been deploying it without addressing what actually matters.
Nearly half of employees using AI don’t know how to achieve the productivity gains their employers expect. We handed people powerful tools and assumed they’d figure it out. They didn’t. Not because they’re incapable, but because nobody taught them how.
This is a skills gap masquerading as a technology problem.
Microsoft Research found something even more revealing: the more confident workers are in AI, the less critical thinking they apply. They trust the output, skip the verification, and end up with what researchers are now calling “workslop”: AI-generated work that looks productive but lacks the substance to actually move things forward.
41% of workers have encountered workslop. Think: AI-generated reports that sound impressive but say nothing actionable. Each instance costs about two hours to fix.
So much for the productivity gains.
The Efficiency Trap
Wharton researchers identified a pattern they call “the efficiency trap.”
Here’s how it works: Every productivity improvement becomes the new baseline. You complete a project in three days instead of five? Great. Now three days is the expectation. The time you saved doesn’t become free time. It becomes time for more work.
Workers report feeling:
“Simultaneously more productive and more overwhelmed.”
I recognize this feeling. That afternoon with three terminals running? I got an enormous amount done. And the next day, my brain expected that pace to continue. Anything less felt like slacking.
This isn’t a bug. It’s how we’ve designed our relationship with productivity tools. We optimize for output and assume the humans will adapt.
They are adapting. They’re burning out.
We’ve Seen This Before
In 1987, Nobel Prize-winning economist Robert Solow made an observation that became famous: “You can see the computer age everywhere but in the productivity statistics.”
Companies were spending 25% more on technology year over year. Productivity growth, meanwhile, was stalling.
Sound familiar?
We’re living through the same pattern with AI. Massive investment. Underwhelming results. Not because the technology doesn’t work, but because meaningful gains require more than just tools.
The NBER puts it clearly: “Meaningful productivity gains from transformative technologies require extensive complementary investments in new processes, business models, and human capital.”
Human capital. That’s the part we keep skipping.
The Burden Shift Nobody Talks About
There’s another dynamic I keep seeing in my work with companies.
AI makes it easy to generate output. Documents, code, proposals, analysis. The person using the AI saves time. But someone still has to review that output. Verify it. Fix the errors. Edit out the verbose parts.
Research on developer teams found exactly this: “While authors may save time by pasting in AI outputs, reviewers inherit the burden of checking quality, correcting errors, and editing down verbosity.”
The work didn’t disappear. It shifted. And it shifted to the people whose time is often already stretched thin.
If you’re a leader rolling out AI tools, ask yourself: who’s doing the quality control? Because if no one is, you’re not getting productivity gains. You’re getting “workslop” at scale.
What This Means for Leaders
This isn’t a technology problem. It’s a leadership problem.
The companies I work with that are actually seeing results from AI aren’t the ones with the best tools. They’re the ones that took time to answer three questions:
1. Who needs to learn what?
Not everyone needs the same skills. Some people need to learn how to prompt effectively. Others need to learn how to evaluate AI output critically. Some need both. Generic “AI training” doesn’t cut it.
2. What are the new expectations, really?
If AI makes certain tasks faster, does that mean more tasks? Or does it mean different work? Your team is guessing at the answer. Make it explicit.
3. Who owns quality control?
When output gets easier to produce, the bottleneck shifts to review. If you haven’t adjusted for this, you’ve just moved the burnout from one person to another.
These aren’t easy questions to answer alone. And they’re exactly the kind of questions I work through with executives who are navigating this transition.
The Takeaway
The AI productivity paradox isn’t about the technology being overhyped.
It’s about the gap between what we expect AI to do for us and what we’ve actually prepared people to do with it.
96% of executives believe AI will boost productivity. But belief isn’t a strategy.
The leaders who will actually capture those gains are the ones who recognize that tools alone don’t change outcomes. Skills do. Expectations do. The way you communicate why this matters and what it means for each person on your team does.
That’s the work that makes the difference. And right now, most companies aren’t doing it.
If you’re navigating this right now and want a thought partner, I’m always happy to talk.
Sources
- Upwork Research Institute: [From Burnout to Balance (2024)](https://www.upwork.com/research/ai-enhanced-work-models)
- Wharton: [The AI Efficiency Trap](https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/)
- Microsoft Research: [AI and Critical Thinking (CHI 2025)](https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/)
- Harvard Business Review: [AI-Generated Workslop (2025)](https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity)
- NBER: [AI and the Modern Productivity Paradox](https://www.nber.org/papers/w24001)
