AI Isn’t an Equalizer. It’s a Lever.
I keep hearing that AI will “level the playing field” for developers. That it will turn juniors into mid-levels and mid-levels into seniors, because now you just need to know how to ask for things. I don’t buy it. And I’m also allergic to doomsday takes that frame this as the end of programming. Reality feels simpler—and more interesting: AI is neither savior nor executioner. It’s a tool. And in practice, it behaves like a lever.
When code becomes cheap, everything around it gets expensive. Problem framing. Keeping the system in your head. Navigating trade-offs. Spotting a bad solution even when it looks confident and polished. That’s why I think AI doesn’t reduce differences—at least not at the levels that matter long-term. It can amplify them fast.
Equalizer vs. Lever
An equalizer is a technology that compresses outcomes. It helps most people in roughly the same way. It standardizes the path, reduces friction, and usually has a natural ceiling. You can look at an equalizer and say: “The average moved up, the spread narrowed.”
A lever works differently. It doesn’t hand everyone the same boost. It gives you force. And once force is available, results depend much more on who’s holding the tool. Two people can use the same AI, but one ends up with a system that scales and survives change, while the other ends up with a quickly assembled pile that collapses the moment requirements shift.
Where AI Does Equalize
To be fair, AI can be an equalizer—especially at the implementation layer. Boilerplate, routine CRUD, configuration glue, quick prototypes, small migrations, navigating an unfamiliar codebase: AI is genuinely great at all of that.
That’s also why the catastrophe narratives annoy me. In day-to-day work, the most obvious impact is that a lot of annoying, low-leverage work got cheaper. That’s good. But this advantage lives “down low” in implementation. Once you’re building something that needs to last, AI starts behaving more like a lever than an equalizer.
Where AI Amplifies Differences
The first place it shows up is specification. A senior engineer typically doesn’t start by asking AI to “generate a solution.” They start by turning a fuzzy problem into a concrete contract. They cut scope, name boundaries, write down what must be true—and what must never be true. They give the problem a shape.
A junior often hands over a vague description and expects AI to fill in the blanks. And AI will fill them—in the most statistically plausible way, not in the way reality demands. The output can be long, smooth, convincing, and still slightly off. That’s the lever effect: AI can multiply your productivity, but it can just as easily multiply your confusion.
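To make "giving the problem a shape" concrete: a contract can be written down as executable checks before any code is generated. Here is a minimal sketch in Python—the names (`Order`, `apply_discount`) and the 50% cap are hypothetical, invented purely to illustrate the "must be true / must never be true" framing for a fuzzy request like "add discounts":

```python
from dataclasses import dataclass

@dataclass
class Order:
    subtotal_cents: int  # money as integer cents, never floats

def apply_discount(order: Order, percent: int) -> int:
    """Return the discounted total in cents.

    Minimal placeholder implementation; the point is the contract below.
    """
    if not 0 <= percent <= 50:  # hypothetical business rule: cap at 50%
        raise ValueError("discount must be between 0 and 50 percent")
    return order.subtotal_cents * (100 - percent) // 100

# What must be true:
assert apply_discount(Order(10_000), 10) == 9_000
assert apply_discount(Order(10_000), 0) == 10_000

# What must never be true: discounts above the cap silently applied.
try:
    apply_discount(Order(10_000), 90)
except ValueError:
    pass
else:
    raise AssertionError("discounts above 50% must be rejected")
```

A vague prompt leaves the cap, the rounding rule, and the rejection behavior for the model to guess; the contract pins all three down before generation starts.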
The second layer is architecture. AI can be excellent at locally correct solutions: a component, an endpoint, a hook, a utility. But systems aren’t the sum of locally correct pieces. Systems survive through coherence over time: consistent data flow, invariants, clear layer responsibilities, and decisions that repeat across the codebase. Without that coherence, AI tends to produce many variations of the same pattern, multiple competing styles, and endless exceptions. The codebase grows quickly—but it grows like a bush, not a tree.
The third layer is verification. Generating code is no longer the hard part. The hard part is validating that what you shipped is correct in edge cases, secure, performant, and maintainable. This is where the developer role shifts from “writing” to “directing and auditing.” If you’re good at reviews, you’ll extract huge value from AI. If you’re not, AI will happily generate problems at the same pace it generates code.
The Biggest Trap: Fake Seniority
AI has a dangerous property: it can manufacture the appearance of seniority. Everything runs. Tests pass. The code looks clean. But the architecture is fragile, and a real change request triggers a cascade.
This isn’t fear-mongering. It’s an observation. Most code looks great—until it has to change.
How I Use AI Without Letting It Poison the Codebase
What works best for me is simple: put hard guardrails around AI output. Ideally with TDD, or at least with explicit acceptance criteria. First, I produce a failing test (or a precise behavioral definition). Then I let AI implement the minimum needed to satisfy it. Only then do I refactor.
This does two things: it shrinks scope and puts control back in my hands. AI stops being the author and becomes a tool. It doesn’t set direction. It helps execute.
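The workflow can be sketched end to end. The function and its behavior here are hypothetical, chosen only to show the three steps—test first, minimal implementation second, refactor third:

```python
import unittest

def slugify(title: str) -> str:
    # Step 2: the minimum implementation needed to pass the tests below.
    # This is the part I would let AI generate—against the tests, not a prompt.
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Step 1: failing tests written first, by me. They define the contract.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_repeated_spaces(self):
        self.assertEqual(slugify("A  B"), "a-b")

# Step 3: refactor freely; the tests keep the behavior pinned in place.
if __name__ == "__main__":
    unittest.main()
```

The tests, not the prompt, are the source of truth: AI output that fails them is rejected mechanically, without any debate about whether it "looks right."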
The Real Differentiator
The difference isn’t who writes “better prompts.” The difference is judgment: the ability to make trade-offs, reduce complexity, keep boundaries intact, and say “we’re not building that.”
AI helps you build. But it rarely tells you what not to build, where the hidden risk is, or when the system is quietly starting to crack.
That’s also why I dislike doomsday articles. Most of them argue about the wrong question. The question isn’t whether AI will replace developers. The question is who will use AI in a way that produces systems that survive reality.
Today vs. A Few Years From Now
It’s possible the gap between a prompt and a large system will shrink over time. AI will get better at planning, consistency, and long context. But even if that happens, one thing will still hold: intelligence may become cheap. Judgment won’t.
And when implementation becomes a commodity, judgment is what separates systems that truly work from systems that only look good until the first incident.
Conclusion
AI isn’t an equalizer. It’s a lever. The question isn’t whether we use it. The question is who’s holding it—and whether they know where they’re pushing.
Further reading / inspiration
- Addy Osmani – Factory model: https://addyosmani.com/blog/factory-model/
- Jesse Duffield – Are AI Agents Cognitive Ozempic?: https://jesseduffield.com/Are-AI-Agents-Cognitive-Ozempic/