The Hidden Dangers of Numb Coding in the AI Era
The coding landscape is changing fast. As AI tools become more powerful, a new trend has emerged: vibe coding. It's a compelling concept, and it may extend the benefits of technology to a much wider audience. Alongside it, though, I've noticed a concerning pattern among practicing software engineers that I've started calling "numb coding". I think it poses a risk to us as individual engineers and to our industry as a whole.
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper…
— Andrej Karpathy (@karpathy) February 2, 2025
Tools Are Great – It's How We Use Them
Let's be clear: this isn't about rejecting helpful tools. Engineers have always relied on tools, libraries, frameworks, and sites like Stack Overflow to do their jobs effectively. Now AI assistants have joined that toolkit, and that's not inherently problematic.
The issue isn't using tools. It's how we use them. There's a crucial difference between leveraging tools to enhance your capabilities and becoming completely dependent on them to the point where your skills atrophy. A calculator helps you work faster, but you still understand the math. Stack Overflow provides solutions, but you evaluate and adapt them to your needs.
AI coding tools are no different in principle, but their extraordinary capabilities make it easier to slip into problematic usage patterns.
When Engineers Go on Autopilot
Numb coding happens when engineers essentially outsource their thinking to AI. The pattern is simple: receive a task, prompt the AI, implement whatever code it generates, review it shallowly, move on. No deep understanding, minimal critical evaluation, just an endless cycle of prompt-and-paste.
It's seductively easy. Why struggle with a complex algorithm when an AI can write it for you? Why learn a new framework's intricacies when you can just ask for working code? The immediate productivity boost is undeniable, but this convenience comes at a steep cost.
The Erosion of Core Skills
The most immediate danger of numb coding is the gradual atrophy of fundamental programming abilities. Engineers who consistently outsource their thinking stop exercising crucial mental muscles:
- Problem-solving skills deteriorate when you're not regularly working through challenges
- Algorithmic thinking weakens when you're not designing solutions from scratch
- Debugging capabilities suffer when you don't fully understand the code you're running
- System design knowledge stagnates when you're focused on isolated snippets
Unfortunately, this erosion of skills happens so gradually you might not notice until it's too late.
The Review Fatigue Problem
One of the most insidious aspects is the mental burden of constant review. When you write code yourself, understanding is built in. When AI generates it, you need to carefully review every line to ensure correctness. We've moved from a world where we mostly wrote our own code to one where we constantly review "somebody" else's, and the latter is surprisingly more exhausting than the former.
This review fatigue, much like the fatigue of long traditional code reviews, leads to predictable behaviour:
- Initially, you carefully scrutinise all of the AI's output
- Over time, as your trust in the tool builds and mental energy depletes, your review becomes more cursory
- Eventually, you start glossing over details, assuming correctness
- Critical errors slip through because they look plausible at a glance
This creates a troubling scenario: engineers feel like they're being careful and gain a false sense of security, while subtle bugs, inefficiencies, and even security vulnerabilities quietly slip into their production code.
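To make this concrete, here's a hypothetical sketch (the function and field names are invented for illustration, not taken from any real AI output) of the kind of plausible-looking code that sails through a cursory review:

```python
# A hypothetical validation helper of the kind an assistant might generate.
# Each check reads correctly at a glance, but it hides a classic Python
# pitfall: a mutable default argument.

def collect_errors(record, errors=[]):  # the default list is created once and shared across calls
    if "id" not in record:
        errors.append("missing id")
    if record.get("amount", 0) < 0:
        errors.append("negative amount")
    return errors

print(collect_errors({"amount": -5}))           # ['missing id', 'negative amount']
print(collect_errors({"id": 2, "amount": 10}))  # ['missing id', 'negative amount'] - stale errors leak through
```

A reviewer skimming for logic errors will confirm that each individual check is right and move on; only a reader paying genuine attention notices that the default list persists between calls and reports errors for records that are perfectly valid.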
The Dependency Trap
Numb coders quickly develop an unhealthy reliance on specific AI tools. When those tools are unavailable, incapable, or change their functionality, these engineers find themselves paralysed. Their productivity isn't enhanced by AI, it's completely dependent on it.
Finding the Right Balance
The key is intentionality. Use AI and other tools strategically:
- For tasks where you already understand the underlying concepts
- To accelerate implementation of familiar patterns
- To explore new approaches that you then study and understand
- As a learning resource rather than a replacement for learning
- In manageable chunks that you can thoroughly review without fatigue
Consider setting personal guidelines about when you'll use AI and when you'll code manually. Some engineers reserve AI for boilerplate code while tackling core logic themselves. Others use AI for exploration but then rewrite critical sections manually to ensure complete understanding.
The most valuable engineers aren't those who avoid tools. They're those who use tools thoughtfully while maintaining their core competencies. They understand when to rely on assistance and when to work through problems themselves to build deeper understanding.
The difference might seem subtle in the moment, but the long-term impact on your capabilities, your code quality, and your career could be profound. Tools should amplify your skills, not replace them.
Does Any of This Actually Matter?
It's fair to ask: So what if these skills fade away? If AI can handle the work, aren't we just standing in the way of progress? Our industry has disrupted countless others. Perhaps it's simply our turn to face the music.
Maybe we're clinging to these skills because they're the source of our professional identity and market value. Maybe they'll soon be as relevant as knowing how to operate a switchboard or set movable type.
I believe it does matter, and here's why:
Until we achieve true Artificial General Intelligence (AGI), which LLMs, for all their impressive capabilities, are not, these higher-order engineering skills remain essential. The current generation of AI tools is powerful but fundamentally limited: these tools don't truly understand code; they predict patterns based on training data.
This creates a paradoxical situation where the more we rely on AI, the more valuable deep engineering knowledge becomes. Someone needs to:
- Architect systems that AI isn't equipped to design
- Recognise when AI-generated solutions are elegant versus inefficient (see the sketch after this list)
- Debug complex issues that span multiple components
- Understand security implications that aren't obvious in isolated snippets
- Make critical technical decisions that require business context and engineering judgment
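On that second point, consider a small, hypothetical illustration: two snippets an assistant could plausibly produce for the same task, deduplicating a list while preserving order. Both are correct; only one scales.

```python
# Both functions return identical results; the difference only shows up
# under load, which is exactly the judgment call a reviewer must make.

def dedupe_quadratic(items):
    # O(n^2): 'result' is a list, so the 'in' check rescans it on every iteration
    result = []
    for item in items:
        if item not in result:
            result.append(item)
    return result

def dedupe_linear(items):
    # O(n): 'seen' is a set, so membership checks are constant time on average
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```

Unit tests won't distinguish the two. Recognising that the first will quietly degrade on a million-item input is precisely the kind of judgment that keeps human engineers in the loop.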
Look at other industries where automation has made significant inroads. Accounting software has been sophisticated for decades, yet accountants remain essential. Their role has evolved to focus on interpretation, strategy, and judgment rather than calculation.
The same pattern will likely hold for software development. The tasks will shift, but the need for human expertise won't disappear. It will concentrate in areas where judgment, creativity, and deep understanding matter most.
The better question is: how will the next generation of software engineers learn these skills?