
Why AI Is Causing Fatigue at Work

AI boosts output but increases mental load; teams need to redesign how work gets done.

AI brain fry is real, it's measurable, and it's not your fault. Here's what the emerging research says - and what individuals and organisations can actually do about it.

Yesterday I had a splitting headache by 3pm.

But I'd been bouncing between agents since 8am: a strategy document with one, research synthesis with another, workflow refinement with a third. Nothing felt quite finished. The temptation to set another plate spinning while I waited on other tasks to complete kept calling. It took me until mid-afternoon to realise I'd skipped breaks, skipped moving my body, and now my head was thumping.

That feeling has a name now. And it turns out it's not just me.

What the research is telling us

Boston Consulting Group published a study in March 2026 - 1,488 workers, across industries and seniority levels (nb: US-based workers only, and it's self-reported data, so the NZ context will undeniably differ). They called it "AI brain fry": mental fatigue caused not by using AI, but by watching it. Monitoring it. Correcting it. Making hundreds of small quality judgements, back to back, all day, on outputs from systems that look confident but aren't always reliable.

Fourteen percent of AI-using workers in the study experienced it. In marketing, it was one in four. The business costs are measurable: 33% more decision fatigue, 39% more major errors, and a meaningful spike in intention to quit. The people most affected are often the ones using AI most seriously - the early adopters, the high performers, the people leaning hardest into new ways of working. That's not a coincidence, and I think we need to take it seriously as an early warning.

A companion study from UC Berkeley tracked a 200-person tech company for eight months and found something grimly predictable: AI saved time on individual tasks. Hooray. But organisations filled that time with more tasks. Nobody got their afternoons back. The time AI freed was immediately claimed by the next thing on the list. Those researchers called it "workload creep."

"AI reduces the cost of production but increases the cost of coordination, review, and decision-making. Those costs fall entirely on the human."

There's also a subtler shift in the nature of the work itself. Before AI, making something required thinking through it. The friction of creation - the slow drafting, the walking around the problem - was also where a lot of our understanding formed. When AI removes that friction, it can remove some of the thinking too. You consistently produce more, but the cost appears to be understanding less.

One engineer who builds AI agent infrastructure professionally described a pattern many of us will recognise: before AI, he might spend a full day on one design problem - sketching, thinking slowly, coming back with clarity. Now he might touch six problems in a day. Each one "only takes an hour with AI." But context-switching between six problems is brutally expensive for the human brain. The AI doesn't get tired between problems. He does.

And on some level, this aligns with what happened to me. I used to think and work primarily in lockstep with key human collaborators. Their break times, competing demands and our mutual schedules helped shape the pace of work. And that pace was inherently human. While it's hugely exciting to have a perpetually "on call" collaborator, boundaries are essential.

The paradox we should have expected

AI genuinely makes (select) individual tasks faster. That part is real. What used to take three hours might now take forty minutes. The productivity maths looks great on paper.

What the maths misses: when each task takes less time, you don't do fewer tasks. People tend to default to more. Capacity appears to expand, so work expands to fill it. And the work that expands isn't the easy stuff, because that's the bit I have most confidence in my AI handling. What expands is the oversight. The review. The judgment calls. The checking. That lands on the human, sequentially, all day.

I think there is also a shift in the kind of work you're mostly doing. Creating is energising: it produces flow states, builds understanding, gives you something to show for the effort. Reviewing is much more draining. Evaluative work - is this right? Is this safe? Does this match what we actually meant? - involves hundreds of small decisions, none of which feel significant, all of which cost something.

The cruel irony is that AI-generated output often requires more careful review than human-generated output. When a colleague produces something, you know their patterns, their strengths, their blind spots. You can skim what you trust. 

With AI (for me anyway) every output is a bit suspect in a different way. It always looks so confident, it reads pretty cleanly, but it might be subtly wrong in ways that only surface later. So you read everything. And reading work you didn't make, generated by a system that doesn't know your context or your history, is tiring in a way that's much harder to name. 

This is a design problem, not a discipline problem

The first instinct, when you feel this kind of fatigue, is to look inward. Am I not doing this right? Am I not disciplined enough? Should I be taking better breaks? And, umm, yes - I do need to pick up the personal discipline here, for sure.

That instinct is understandable - and I think both necessary and healthy. Individual coping strategies are useful, and taking individual responsibility is important. But that shouldn't become a band-aid for a more structural problem. If the work architecture is designed to exceed human cognitive capacity, no amount of boundary-setting prevents the overload.

So, I think we will have to redesign the architecture.

The BCG study found that organisational signals matter as much as individual behaviour. Employees whose companies expected more output because of AI reported greater fatigue. Employees who felt their organisations valued work-life balance reported less strain. Guidance on how AI fits into daily work reduced cognitive pressure across teams - even just having clarity about what AI is for made a measurable difference. 

When organisations celebrate productivity gains without clarifying what those gains mean for workload, people tend to interpret that ambiguity as intensification. The message lands as: do more. The absence of a conversation is itself a message. As leaders and as organisations, we need to take this on board and be conscious about how we architect and communicate work.


What we're learning - and what seems to be helping others

I want to be clear about where we are with this: it's genuinely new territory. There are no settled best practices. The tools are months old, the patterns are still forming, and anyone claiming to have cracked sustainable AI use is probably either working at lower intensity than they're letting on, or isn't being straight with you.

What we can do - and what we're trying to do - is pay attention. Read the research as it emerges, share what we're noticing, trial things, and report back honestly. Here's what's coming through as useful, at both the individual and organisational level.

For individuals

STREAM CONSTRAINTS

Be clear about how many AI workstreams you can realistically monitor simultaneously. The BCG data is specific: productivity and mental clarity peak at two to three tools in use at once, then measurably drop. Three is the ceiling, not the floor. More agents isn't more power - beyond three, it's mostly just more to watch. I'd add that this depends enormously on where the individual is in their AI adoption journey, and on the nature and complexity of the tasks they're overseeing.
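
If you drive agents programmatically, the ceiling can even be enforced in code rather than left to willpower. A minimal sketch, assuming a hypothetical run_agent() stand-in for whatever launches a run in your setup:

```python
import asyncio

# Hard cap on simultaneous AI workstreams. The BCG numbers suggest
# clarity peaks at two to three, so we enforce three as the ceiling.
MAX_STREAMS = 3
stream_cap = asyncio.Semaphore(MAX_STREAMS)

async def run_agent(task: str) -> str:
    """Hypothetical stand-in for whatever launches an agent run."""
    await asyncio.sleep(1)  # placeholder for the real call
    return f"draft for: {task}"

async def supervised_run(task: str) -> str:
    # Wait here if three streams are already open, instead of
    # setting a fourth plate spinning.
    async with stream_cap:
        return await run_agent(task)

async def main() -> None:
    tasks = ["strategy doc", "research synthesis", "workflow refinement",
             "meeting summary", "release notes"]
    print(await asyncio.gather(*(supervised_run(t) for t in tasks)))

asyncio.run(main())
```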

PROTECT YOUR MORNING

Reserve your highest-cognitive-effort hours for work that doesn't require AI oversight. Sketch the problem by hand. Reason through the structure slowly. Only then reach for AI to execute. It'll quite possibly feel inefficient. The return is that your AI-assisted work is better - because you're clear on what good looks like before the output starts arriving. One pattern that seems to be gaining traction: mornings for thinking, afternoons for AI-assisted execution. Again, my gut tells me that individuals will know their own (and their human colleagues') routines and rhythms best of all. Whether it's mornings, afternoons or AI-free days, teams should be openly discussing what works best for them.

BATCH YOUR REVIEWS

Instead of verifying AI output continuously - checking every few minutes as things generate - create scheduled review blocks. Twenty to twenty-five minutes of focused evaluation, followed by a genuine break. Treat it like a quality checkpoint rather than active babysitting. This restructures the oversight load from sustained vigilance (the most draining kind) into discrete, bounded bursts.
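
If your outputs land somewhere queueable, a bounded block can be as simple as this sketch - the queue contents and timings are just placeholders for whatever your real workflow produces:

```python
import time

REVIEW_MINUTES = 25  # one focused evaluation block
BREAK_MINUTES = 5    # then a genuine break, away from the screen

def review_block(queue: list[str]) -> None:
    """Work through queued AI outputs for one bounded block, then stop."""
    deadline = time.monotonic() + REVIEW_MINUTES * 60
    while queue and time.monotonic() < deadline:
        item = queue.pop(0)
        input(f"Reviewing {item!r} - press Enter when checked... ")
    print(f"Block over. Step away for {BREAK_MINUTES} minutes.")

review_block(["strategy doc draft", "research summary", "workflow notes"])
```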

ACCEPT 70%, MOVE ON

Chasing perfect output from AI is one of the biggest drivers of the fatigue spiral. The prompt-check-refine-check loop is where a lot of the headaches live. Setting an internal bar - say, 70% usable is good enough to work with - before you make a cup of tea and crack into active human finishing can be both faster overall and less draining. It also keeps you in the creative loop rather than stuck on the review assembly line.
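
As a sketch of what that bar looks like expressed as a loop - generate() and usability() here are stand-ins for your AI call and your own quick gut-score, not real APIs:

```python
import random

USABLE_ENOUGH = 0.7  # the "70% bar": good enough to finish by hand
MAX_ROUNDS = 2       # hard stop on prompt-check-refine passes

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to your AI tool of choice."""
    return f"draft ({len(prompt)} chars of prompt)"

def usability(draft: str) -> float:
    """Your own quick gut-score, 0.0-1.0. Stubbed with a random value."""
    return random.uniform(0.5, 0.9)

def draft_with_ai(prompt: str) -> str:
    draft = generate(prompt)
    for _ in range(MAX_ROUNDS):
        if usability(draft) >= USABLE_ENOUGH:
            break  # good enough - make the tea, then finish it yourself
        draft = generate(prompt + "\n\nImprove this draft:\n" + draft)
    return draft

print(draft_with_ai("Summarise the Q3 delivery risks"))
```

The MAX_ROUNDS cap matters as much as the threshold: it's what stops the refine loop running all afternoon.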

RECLAIM PRODUCTIVE STRUGGLE

Don't outsource thinking tasks where AI adds more verification load than value. Writing or reasoning from scratch keeps your judgment sharp and reduces the need to fix plausible-sounding errors later. When you do use AI for complex work, try using it as a sparring partner rather than an answer machine - ask it to challenge your assumptions, find flaws in your argument, generate alternatives. That keeps human judgment active rather than passive.
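
One way to bake the sparring framing in is a fixed critique prompt, so you don't drift back into "write it for me". A sketch, with ask_model() as a hypothetical stand-in for your tool's API:

```python
# A fixed critique prompt keeps the model in reviewer mode rather
# than co-author mode.
SPARRING_TEMPLATE = """You are a critical reviewer, not a co-author.
Do not rewrite or improve the text below. Instead:
1. List the three weakest assumptions it makes.
2. Point out any reasoning that does not follow.
3. Offer one genuinely different alternative approach.

Text:
{draft}
"""

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for your AI tool's API call."""
    return "(model critique goes here)"

def spar(draft: str) -> str:
    return ask_model(SPARRING_TEMPLATE.format(draft=draft))

print(spar("We should migrate everything to microservices next quarter."))
```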

PHYSICAL DISCONNECT, NOT SCREEN-SWITCHING

When you take breaks, don't switch to another screen. Walk away from the desk. Go outside. Movement and distance allow the brain to consolidate rather than re-load immediately. "Other" screens don't count as rest; the research on cognitive recovery is consistent on this.

LOG WHERE AI HELPS AND WHERE IT COSTS

This is something I'm going to start: keeping a simple record for a couple of weeks - task, used AI or not, time spent, quality of result. The data tends to be revealing, and the insights are highly individual: they show where AI reliably saves you, specifically, time. Once you intimately know your own efficiency patterns, you can consciously stop reaching for AI tooling instinctively - and start using it strategically.
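
The log itself can be a CSV appended from a couple of lines of Python - the filename and columns here are just one possible shape:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("ai_worklog.csv")  # filename is an arbitrary choice
FIELDS = ["date", "task", "used_ai", "minutes", "quality_1_to_5"]

def log_task(task: str, used_ai: bool, minutes: int, quality: int) -> None:
    """Append one row to the fortnight's experiment log."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(FIELDS)  # header on first write only
        writer.writerow([date.today().isoformat(), task, used_ai,
                         minutes, quality])

log_task("draft sprint update", used_ai=True, minutes=25, quality=4)
log_task("architecture sketch", used_ai=False, minutes=90, quality=5)
```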

For teams and organisations

NAME THE NORMS EXPLICITLY

Teams with organised, shared AI integration experienced significantly less strain than those where individuals figured it out alone. The difference wasn't the tools - it was whether the team had actually talked about how they were using them. How many tools are we running at once? Who reviews what? What does good output look like? What are we not using AI for? These conversations don't need to be formal. They just need to happen.

ROTATE OVERSIGHT RESPONSIBILITY

If one person carries all the AI review load on a project, they carry all the cognitive cost. Distributing oversight duties across team members spreads the fatigue and creates collective literacy about what AI is actually producing - including errors that single-reviewer fatigue tends to miss.
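
The rotation itself can be entirely mechanical - a simple round-robin over the team is enough to stop the load pooling on one person. A toy sketch, with made-up names and outputs:

```python
from itertools import cycle

# Round-robin review duty so oversight never pools on one person.
team = ["Alex", "Bri", "Casey", "Dev"]
reviewer = cycle(team)

ai_outputs = ["agent PR draft", "research summary", "client proposal"]
for output in ai_outputs:
    print(f"{output} -> reviewed by {next(reviewer)}")
```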

MAKE AI EXPECTATIONS EXPLICIT, NOT AMBIENT

When organisations don't communicate clearly about AI's role, fatigue scores go up. When managers answer questions and provide genuine support, fatigue scores go down.  

MEASURE OUTCOMES, NOT ACTIVITY

Organisations that treat token consumption, output volume, or AI usage frequency as performance metrics are accidentally designing brain fry in. If you measure activity, you get exhausted people producing a lot. If you measure impact, you get space for judgment, recovery, and the kind of thinking that produces work worth doing. Yes, it's harder, but the human cost of lazy defaults does not feel justifiable.

DON'T IMMEDIATELY BACKFILL AUTOMATED TIME

When someone finds a more efficient way to do something with AI, the instinct is to fill that time with more work. 

In our context there is always, always, more work on the backlog. One of our consistent pain points is finding sufficient time for learning and growth activities, DevEx improvements and the like. It's so incredibly important to resist the instinct to immediately grab the next task off the list, and to consciously consider what other high-value, high-impact work we could undertake with the space AI potentially offers us.

The BCG researchers were direct: rushing to backfill automated capacity is punitive and signals to people that efficiency gains will only ever result in more load, not more breathing room. That kills the appetite for innovation fast.

The thing we keep coming back to

There's a version of AI integration that amplifies human capability - where people are sharper, more effective, better able to bring their judgment to the work that actually matters. And there's a version that just increases throughput until something gives.

The difference between those two versions isn't the tools. It's the intent behind how work is designed around them. Whether organisations ask "how do we get more output via AI?" or "how do we help our people do their best work with AI?" matters - those are different questions, and they will produce very different workplaces over time.

We're starting to explore how we can make changes to our own practices and figuring out what will stick for us. We're reading the research as it comes out - and this is a fast-moving space, so that means staying curious rather than treating anything as settled.

What we're confident about is that paying attention early matters. The people and organisations who navigate this well won't be the ones who used AI the most. 

They'll be the ones who thought hardest about how they used it - and who built the kind of culture where that question was safe to ask out loud.

Those headaches are signals, folks - worth listening to.

 

Further reading

"When Using AI Leads to Brain Fry" - Boston Consulting Group / Harvard Business Review, March 2026

"AI Doesn't Reduce Work — It Intensifies It" - UC Berkeley Haas / Harvard Business Review, February 2026

"AI Fatigue is Real and Nobody Talks About It" - independent engineering essay, February 2026
