Translated by AI
How to Deal with AI Fatigue for Engineers

This article is written for engineers who have ever felt, "I thought AI would make things easier, but for some reason, I'm more tired than before."
Throwing prompts at AI over and over, constantly fixing the generated code, and before you know it, the day is over.
"AI is supposed to be a useful tool, so why am I so exhausted... (crying)"

Do you ever have the sensation that "I'm using AI, but for some reason, I'm more tired than before"?
I do.
In fact, since I started using AI seriously, the sense of exhaustion has become stronger.
I began to wonder, "Why is it so painful even though we are in an era where AI can improve efficiency?"
Engineering work is originally meant to be about building systems to make things easier.
So, it feels like something is wrong when the more we use the latest technology, the more tired we get...
I use AI in almost every aspect of my work, including design consultation, code generation, review assistance, and documentation.
In the beginning, it was so convenient that I was overjoyed, feeling like "I've gained another brilliant new hire dedicated to me who works 24/7! I've got this!"
However, the joy was short-lived, and from a certain point, I started feeling more intense fatigue than before.
Work hours are shorter, but for some reason, my head feels heavy... Wait, writing code isn't really fun anymore...??
I was confused by the contradiction between my state of being exhausted right now and the fact that AI should be making my tasks easier.
In this article, while incorporating my own failures, detours, and trial and error, I would like to verbalize "AI Fatigue" that engineers are prone to feel and how to live with it.
What is AI Fatigue?
AI fatigue is a state where you become mentally and cognitively exhausted even though your work efficiency has improved through the use of AI.
Information overload from the non-stop output of AI places an excessive burden on the brain, leading to a sense of deep fatigue.
When first starting to use generative AI, many people are probably moved by how easily code and text can be written, thinking, "This is amazing!"
For the first few months, I also tried new ways to use it every day, excited that "this is revolutionary."
However, eventually, I reached a state where "work finishes early, but I'm incredibly tired..." and "Ugh... I can't think straight during work."
There was even a time when I didn't know why, but I hated looking at or writing code. I even felt fear toward the code that AI produced.
At first, I thought it was due to my age or busyness, but looking back, the cause was in how I interacted with AI.
AI takes over tasks for us, but in exchange, isn't it creating a new kind of burden on the human side?
The Increase in Tasks of "Reading," "Thinking," and "Judging"
I began to be strongly aware of AI fatigue after I started using generative AI seriously for work.
Previously, when thinking about design, I would struggle alone while facing a whiteboard or a notebook.
However, since I started using AI, the workflow changed to asking AI first, having it present proposals, and then evaluating them.
At first glance, this seems rational, but in reality, the tasks of "reading," "thinking," and "judging" had increased many times over.
For example, for the design of a new feature, I would repeat the process of having AI produce three proposals, comparing their pros and cons, and thinking of revisions.
This is far more brain-draining than thinking through a single proposal carefully on your own.
Before I knew it, I was in a state where I was doing nothing but reviewing and judging all day long.
Main Factors Creating AI Fatigue

There are several typical causes for AI fatigue.
Increase in Cognitive Load
The first one is the increase in cognitive load. The work of evaluating the text or code produced by AI and judging whether it is "correct," "safe," or "practical" uses more brainpower than one might imagine. Personally, I've had times when my head would feel fuzzy after reviewing AI output for just about an hour.
Increase in Multitasking
The second is the increase in multitasking. I found myself unconsciously repeating the cycle of opening another task while waiting for an AI response, coming back to check it, and then writing the next instruction. As a result, you can't concentrate on each individual task, and your head ends up in a constantly scattered state. The true identity of the feeling that "I was busy all day, but I don't feel much sense of achievement" lies right here.
The Probabilistic Behavior of AI
The third is the fact that AI operates probabilistically. Until now, many engineers have worked based on the premise of deterministic code. It was a given that for the same input, the same result would be returned no matter how many times it was executed. However, generative AI generates output based on probability. I have also experienced times where "a prompt that was perfect yesterday somehow yields mediocre results today." Normally, you can trace the cause by following logs, but that is difficult with AI. The state of "trial and error without knowing why it failed" becomes a major source of stress.
Comparison with Others and Increased Invisible Pressure
The fourth is unconscious pressure. You find yourself thinking, "I should be able to master this more" or "Am I still being too naive?" and you end up pushing yourself too hard. Every time I saw cases of others utilizing AI on social media, I felt a sense of impatience. Watching those "sparkling" posts on social media, my self-esteem would drop, thinking, "I'm a useless person who lacks the ability to master AI."
The Obsession with Being an Early Adopter
Every time a new AI tool or feature comes out, I sometimes feel anxious, thinking, "If I don't try this right now, I'll fall behind." I used to be that type of person as well.
I mainly use Claude Code, and every time I launch it, the version has gone up. New features are being added to the release notes at a rate that's hard to grasp.
I was desperate to keep up with Anthropic's movements.
When it comes to Claude-related topics, things like MCP, Skills, Subagent, Claude Agent SDK, Claude Cowork, Claude × Excel integration, and more are appearing endlessly.
I would touch every service that became a hot topic on social media, read through blogs and slide decks, and spend entire weekends on verification. Before I knew it, I was exhausting my energy on things unrelated to my main job.
What's even more troublesome is that the rise and fall of tools and best practices around AI is abnormally fast. It's not uncommon for a method that was called "the correct answer" six months ago to be hardly mentioned now.
I'm sure everyone has had the experience where the unique prompt designs, agent operation flows, or tool best practices that you worked so hard to master—thinking, "If I learn this way, I can compete for a while"—became outdated just a few months later. Each time, I felt a sense of emptiness, wondering, "What was all that time for?" I am stunned by how quickly information becomes obsolete.
Some time ago, "Bet everything on Cline" was a hot topic, but I feel that betting everything on any specific AI tool is quite high-risk. On the other hand, I'm still in a state of not really knowing what I should bet on. I don't know the right answer. Right now, I'm betting on Claude Code, but I don't know what will happen to it in a few months.
For a time, I was chasing new tool introductions, know-how articles, and slides every week, searching for the "latest correct answer." However, it was like a marathon without an end, and the more I ran, the more tired I became.
However, what is truly useful in the field are the tools and features that have been used by many people, improved, and stabilized. It is safer to use them after the first group of people has jumped on them, stepped on landmines, and identified the problems.
At some point, I started thinking, "I am not the aggressive type who starts using new tools immediately, but rather the type of engineer who reliably produces results in practice using mature tools and knowledge." Since then, I have intentionally delayed my adoption by one step.
As a result, the time spent dealing with troubles in new tools and features has decreased, and I've been able to concentrate on actual development. A position like the late majority was just right for me.
Working with AI is Like Working with an Overly Brilliant New Hire
I feel that collaborating with AI is similar to the sensation of working with a very fast new hire.
The new hire finishes what you asked for quickly and in large quantities.
However, you must check everything in detail.
Moreover, because deliverables arrive one after another, you are constantly forced to make judgments.
It's helpful at first, but after a while, you find yourself in a state where "there's no time to rest."
This is because, instead of doing the work yourself, you are burdened with new tasks: checking the outputs of the new hire and controlling or monitoring their actions.
Controlling others is far more difficult than controlling yourself.
It is like trying to fly a plane through verbal instructions alone, without ever being able to hold the control stick yourself.
Working with AI is exactly like this.
Mastering AI Means Fighting Probability

Engineering work has been associated with "determinism" for a long time.
If you run the same code under the same conditions, you get the same result. That was a given. That's why we can track bugs and pinpoint causes.
That crumbled with AI.
Even if you send the same prompt, you aren't guaranteed to get the same result.
An instruction that worked perfectly yesterday might not work today.
Even though no settings were changed, the behavior changes for some reason. I was bewildered by that instability.
When I delegated API code to AI, I got nearly perfect code one day, but the next day the quality clearly dropped. Even when I wondered "Why?", I couldn't find the reason. AI outputs have no logs or stack traces. It just... came out that way.
At that moment, I realized, "AI is something that cannot be debugged." You can trace code, but the AI's thought process is a black box. You can't logically isolate the issue when a problem occurs.
When an AI's output isn't right, is there an issue with my prompt? Or is it a problem with the LLM's model capability? Or was it just bad luck "by chance"? There is no information to judge that.
This uncertainty subtly grinds down your spirit.
For a while, I tried to strictly version-control prompts and desperately reproduce success patterns. However, I still couldn't achieve complete reproducibility.
(I wonder how everyone does prompt version control...)
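One lightweight approach that has worked for me is not version control in the git sense, but simply logging every prompt run together with the model name and a hash of the prompt text, so that "same prompt, different result" episodes become visible instead of just frustrating. The sketch below is my own assumption of how such a log might look; the file name `prompt_log.jsonl` and the field names are made up for the example, not part of any particular tool.

```python
import datetime
import hashlib
import json
import pathlib

LOG = pathlib.Path("prompt_log.jsonl")

def log_prompt(prompt: str, model: str, output: str, note: str = "") -> str:
    """Append one prompt run to a JSONL log, keyed by a hash of the prompt text."""
    # Identical prompt text always yields the same id, so runs are comparable.
    prompt_id = hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12]
    entry = {
        "ts": datetime.datetime.now().isoformat(timespec="seconds"),
        "prompt_id": prompt_id,
        "model": model,
        "prompt": prompt,
        "output": output,
        "note": note,  # e.g. "great result" / "ignored the constraints"
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return prompt_id
```

Filtering the log for entries that share a `prompt_id` but differ in output at least turns the non-determinism into data you can look at, rather than a vague feeling that "yesterday this worked."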
So I changed my mindset. I decided to accept that AI is not an "accurate calculator" but a "consultant that returns a probable-looking answer based on probability."
Standing on this premise makes things much easier. Don't design assuming AI output as a given. A human always makes the final judgment. Consider it within expectations even if it's off. By interacting that way, unnecessary stress decreased.
I now believe that mastering AI means reconciling with this uncertainty while enjoying its convenience.
AI Brings Great Power, but the Price is High

Since I started using AI seriously, my productivity has undoubtedly increased. Creating documents, research, drafting designs, making code templates—tasks that used to take several hours are now often finished in 30 minutes.
To be honest, there are moments when I feel "I can't go back to life without AI." It's that powerful a tool.
However, on the other hand, there are definitely things I've lost. Those are "time to think slowly" and the "room for trial and error."
Before, I would dive into specifications, draw diagrams on whiteboards or paper, and assemble designs in my head over and over. Through that process, my understanding deepened and was accumulated as experience.
But since I started using AI, the habit of "just throwing it in" increased. Asking before thinking. Generating before pondering. As a result, what remained inside me began to decrease.
Once, I participated in a face-to-face design review without using AI for the first time in a while. I couldn't keep up with the discussion on the spot, and I was shocked, thinking, "Wait, has my brain always been this sluggish?"
Also, by over-relying on AI, I developed a habit of postponing judgments. "Let's decide after asking AI" had become a habit.
In exchange for convenience, I was gradually chipping away at my ability to think and my self-confidence. When I realized that fact, I felt a chill down my spine.
That's why now, I intentionally create time when I don't use AI. Writing on paper during the early stages of design. Thinking for myself first. Only after solidifying it to some extent do I consult AI.
By doing this, the sense of controlling my own work returns. It's important to grip the steering wheel directly with your own hands.
AI is undoubtedly a powerful weapon. However, if a weapon is used incorrectly, it can hurt you and make you weak. I try not to forget that.
Engineers' Perfectionism and AI are Incompatible

Many engineers unconsciously work with the ideal of a "perfectly functioning state." I was the same way. I'm the type who can only feel at ease when all tests pass, Lint shows zero warnings, and everything works exactly according to the specifications.
I once developed a tool for my work while using AI as an assistant. I had AI generate API wrapper code and test code, and it was quite convenient at first. However, after a while, "code that was 90% correct but 10% wrong" began to accumulate in large quantities.
Even if things start off well, once the scale of the codebase or the number of files increases, it starts to pressure the context window. Subtle mistakes or instances where the AI suddenly seems to have "amnesia" become more prominent.
It looks like it would work at a glance, but it fails in edge cases. The types are slightly different. Exception handling is missing. There's a gap in the context of the prerequisites. These "close but not quite" mistakes continued to pile up.
For a perfectionist engineer, this is quite stressful. This is because the state of "just a bit more fixing and it'll be perfect" lasts forever. Moreover, that correction work is neither creative nor fun. It's just a matter of cleaning up someone else's mess.
At one point, I was spending 2 to 3 hours every day checking and correcting AI-generated code line by line. As a result, I began to think, "Wouldn't it be faster to just write this myself from the beginning?" and fell into a contradictory state where I was exhausted despite using AI. I was tired of cleaning up after the AI.
What makes it even more troublesome is that AI calmly produces "70-point deliverables." AI doesn't care about a level of incompleteness that a human would be too embarrassed to show. This discrepancy in values hits perfectionists particularly hard mentally.
At first, I also thought, "I won't be satisfied unless I get it close to 100 points." However, when my schedule became tight, I made the decision to allow myself to "release at 70 points." I started releasing things thinking, "Well, the automated tests are mostly passing, so it should be fine." (Since this was for personal development, it should be... acceptable, I hope).
When I did that, strangely enough, it became much easier for me mentally.
When dealing with AI, you need the "courage not to aim for perfection." AI is ultimately there to create a draft, and the final quality is determined by humans. If you expect 100 points from the start, you will get quite tired.
Nowadays, I mentally label AI deliverables as "this is a draft" or "this is a memo." By doing so, I have been able to avoid getting unnecessarily frustrated.
Spec-Driven Development (SDD) is Extremely Difficult to Implement in Actual Work
While struggling with how to interact with AI, I also experimented with the "SDD (Spec-Driven Development)" approach. This is a method where you first document the specifications in detail and then have the AI implement them based on that document. Logically, it's very rational, and I had high expectations, thinking, "With this, I should be able to build things without any deviation."
I was overjoyed, thinking, "I've finally found the silver bullet for working with AI."
I immediately prepared detailed documentation covering screen specifications, input conditions, error cases, and even boundary values. To be honest, it took me a whole day just to write the specs. Then I fed it to the AI, but for some reason, the returned code ignored some of the prerequisites or was interpreted in a way that was convenient for the AI.
I stood stunned in front of my screen, thinking, "I wrote all this, and you still missed that...?" In the end, I fell into a loop of "writing specs -> feeding them to AI -> things deviating -> correcting specs -> a different deviation occurring," and the burden had actually increased. I was now in a state where I was required to have both the ability to write specifications and the ability to control the AI.
From this experience, I finally let go of the illusion that "doing things the right way will make it easier." I gave up on creating perfect specification documents and decided that it's okay to do some "vibe coding" based on my intuition. Surprisingly, having this "vibe" instantly reduced the burden on my mind. It's okay to be haphazard. Let's enjoy that process. Humans need the "room for trial and error."
If you are tired of Spec-Driven Development, I recommend re-evaluating whether "the vibe is there," as it will make your heart feel lighter.
I don't want you to misunderstand; SDD is not a bad method. In fact, I feel it is the correct way to work with AI. However, in actual work, aiming for perfect specification documents is incredibly exhausting. First of all, the act of "creating perfect specifications" itself is quite difficult, and even if you manage to do so, you will be plagued by the probabilistic behavior of AI. As a result, I've temporarily stopped trying to do perfect SDD at work. I now develop using a "loose" version of SDD. By loose SDD, I mean a style where I create specifications loosely to get about 70% of the way there, and then finish the rest through vibe coding. I feel that this kind of relaxed approach is just right for me.
Techniques for Dealing with AI Fatigue
Since realizing I had AI fatigue, I've started consciously changing how I use it.
The first thing I worked on was clearly separating the situations in which I use AI.
Previously, I would throw everything at AI—research, design, implementation, and even writing—but now I assign roles.
I leave research and drafting to the AI. I make sure I always take final responsibility and make the last judgment. I've been strict about this.
Next, I started setting aside dedicated time for reviews.
Previously, I would check the AI's output as soon as it arrived, but I stopped doing that.
By deciding on blocks of time—30 minutes or an hour—to review everything at once, my focus improved significantly.
Also, I gave up on having a human review every single AI output.
The more efficient AI becomes, the more the human review process becomes the bottleneck.
I've narrowed down human reviews to only the most critical areas, and for ensuring code safety, I've decided to rely on extensive automated testing, static analysis, and automated AI reviews.
I've decided to overlook minor roughness in the code.
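The split described above, automated checks for most code and human eyes only on the critical areas, can be sketched as a simple triage rule. This is purely a hypothetical illustration of the idea; the path prefixes and the function name are invented for the example.

```python
# Hypothetical triage rule: only changes under "critical" paths are routed
# to a human reviewer; everything else relies on automated tests and linting.
CRITICAL_PREFIXES = ("src/auth/", "src/billing/", "migrations/")

def needs_human_review(changed_files):
    """Return the subset of changed files a human should still look at."""
    # str.startswith accepts a tuple of prefixes, so one pass is enough.
    return [f for f in changed_files if f.startswith(CRITICAL_PREFIXES)]
```

Even a crude rule like this makes the decision explicit, so you stop re-litigating "should I read all of this?" for every AI-generated diff.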
It was also important not to try to make the prompts or specifications given to the AI perfect.
Previously, I would rewrite instructions over and over, thinking, "If I write a better instruction, I'll get a better answer."
However, that becomes an endless task.
Now, I accept that "if 70% is usable, that's enough," and I supplement the rest myself.
Furthermore, I've started intentionally creating time when I don't use AI.
For example, I ban AI for one hour in the morning and think about design using only my own brain.
During that time, I don't even use a PC; I scribble designs on paper or a whiteboard. It's okay if it's a mess.
At first, it felt inefficient, but as a result, my thinking power returned, and my work efficiency in the afternoon improved.
Finally, I've started keeping simple work logs.
Just writing down which tasks I used AI for and where I got stuck helps me see my own patterns.
This has been useful for objectively understanding the causes of my fatigue.
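A work log like this needs no special tooling; even a tiny script can surface the patterns. A minimal sketch, with entries and field names made up for illustration:

```python
# Each entry: (task, whether AI was used, where I got stuck). Sample data only.
work_log = [
    ("design review", True,  "re-reading three AI proposals"),
    ("bugfix",        False, ""),
    ("API wrapper",   True,  "AI output failed edge cases"),
]

def summarize(log):
    """Count how often AI was involved and collect the stuck points."""
    ai_count = sum(1 for _, used_ai, _ in log if used_ai)
    stuck = [s for _, used_ai, s in log if used_ai and s]
    return {"tasks": len(log), "with_ai": ai_count, "stuck_points": stuck}
```

Reviewing a week of `stuck_points` at once is what reveals the recurring pattern, for example that most of the fatigue comes from re-reading proposals rather than from writing prompts.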
Perspective on Maintaining Mind-Body Balance
I think the most important countermeasure for AI fatigue is not forgetting the premise that "you are not a machine."
AI can work 24 hours a day, but humans cannot.
Despite this, I was unconsciously trying to match the speed of AI.
In the past, I used to set schedules based on AI standards, thinking, "I should be able to get this far today."
As a result, I began to feel intense stress when things didn't go according to plan.
Now, I try to set schedules based on human standards.
By incorporating buffer time from the start, I have become much more relaxed mentally.
Also, physical care cannot be ignored.
Working with AI often involves staring at a screen for long periods, which puts a strain on your eyes and shoulders.
I consciously stand up once an hour and move my body lightly.
This alone has significantly changed how fatigued I feel in the evening.
Furthermore, I've started intentionally creating "time to do nothing."
Previously, I would look at AI articles or social media even while commuting or taking a break.
Now, I make time to deliberately think of nothing.
During that time, I empty my mind and meditate.
This time is extremely helpful for resetting my thoughts.
Efficiency and productivity are important, but sustainability is more critical for continuing to work for a long time.
In dealing with AI, I now believe that we should use "whether I can work healthily tomorrow" as a criterion rather than "how much progress I made today."
Summary
AI will be an indispensable part of engineering from here on.
However, mastering it and overusing it are two different things.
When you feel AI fatigue, please try to stop for a moment and review how you are using it.
I hope this article serves as an opportunity for you to rethink your way of working.
Addendum: In the End, There's No Choice But to Keep Working with AI
There have been times when I felt "I'm done with AI" and hated even looking at it.
It doesn't work as expected. I don't know the cause. Even if I fix it, it deviates again.
Each time, my concentration and morale were chipped away.
While staring at the interaction with AI on the screen late at night, I sometimes felt empty, wondering "What am I doing?" I thought AI was supposed to make things easier, but for some reason, I'm more tired than before. For a while, I couldn't face that reality.
However, even so, I still use AI now. And I think I will probably continue to use it.
The reason is simple. It's not that "AI is amazing," and it's not that "AI can do anything."
It's just that it's an entity that expands my limits just a little bit.
If you leave everything to AI, everything will break down immediately. It breaks if you expect too much. But if you use it with an appropriate distance, there are moments when it will surely help you. Because of those moments, I'm still living with AI.
AI fatigue happens more to people who work seriously. The more you try to produce proper results, the more you get exhausted. That is by no means a weakness.
If you are tired of AI right now, I think it's okay to distance yourself for a while. You can stop aiming for perfection. It's okay to have days when you don't use AI.
An engineer's work only has meaning if you can keep doing it for the long haul.
If you burn out, no technology will be of any use.
I think I will continue to get along with AI while getting lost and complaining.
I still occasionally remember that sense of wrongness I felt late that night. But now, I've become able to get along well with that feeling.
I think that kind of loose relationship is probably just right.
Discussion
I've had such similar experiences that I had to laugh. I enjoyed reading this.
Even the things promoted as "the right way" are all over the place and don't work on my machine. Three months later, it's a different world. Trying to keep up with AI is exhausting.
It's convenient, but it's really not easy, is it.
Thank you for the comment!
I'm glad to hear you relate; it's a relief to know I wasn't the only one struggling with this.
Keeping up with AI's speed really is tough, isn't it. I'd like to work with a bit more breathing room.
I'm not an engineer, but since I started using AI daily, I've had the sense that my workload has grown (in fact, what I can do has grown), and I haven't been able to keep up with it.
This was a great read, and it really clicked for me.