
AI Is Changing Software Engineering, Not How You Think

AI is changing how software gets built, but not what makes it good.

We initially expected AI might raise every engineer's floor. The pattern we're seeing is more complicated than that.

AI might just be the most effective engineering skills audit the industry has ever accidentally run. It didn't set out to sort engineers into those who can think clearly upstream and those who can't. But it feels like it’s making the distinction visible in a way nothing previously has.

Initially, the predictions were that AI tools would compress the skill curve. Junior engineers would get faster. The floor would rise. But what we're seeing is more complicated than that, and we think it's worth talking about before the “confident takes” harden into received wisdom.

This isn't a rebuttal. It's an honest reflection from our teams, informed by conversations with practitioners in the wider industry who are examining similar things. The picture doesn't match the prediction, and we think the mismatch might matter a whole lot.

AI is not levelling engineering skills. It is amplifying the difference between engineers who think clearly and those who do not.

What did we expect AI to do for engineers?

The prediction made sense. AI tools reduce the friction between idea and working code. They catch syntax errors, suggest implementations, write tests, explain unfamiliar codebases faster than any documentation could. If you're an engineer who slows down at moments of uncertainty, these tools remove a lot of that uncertainty. The case for "AI raises the floor across the board" initially felt coherent, and some of it certainly is real.

Engineers who wouldn't have touched a new framework are willing to try one now. People are building in languages they'd never have picked up. In certain respects, AI tools have opened doors that were previously closed, and that's genuinely useful.  

What are teams actually seeing?

Our conversations and experiences point to a more complicated picture. The people who've become dramatically more productive with AI tools are, by and large, the ones who were already good. They're not just faster; they're qualitatively different in how they're working. They're compressing the time between idea and shipped feature in a way that looks genuinely new.

The thing is, the engineers who were struggling before AI tools arrived are … mostly still struggling. Some are now struggling in new ways. They can generate code, but they can't always evaluate whether it's right. They can reach a working implementation faster than before, but they can't always reason about whether it's the right implementation, or what it costs the system they're building. They can push the solution for review, but not always take their colleagues on the journey.

The gap between these two groups, from what we're seeing anyway, doesn't seem to be narrowing.

We want to be careful here. This is what we're hearing and observing, early in an incredibly dynamic time - it’s certainly not a controlled study. It could just be early signals during a particularly steep adoption curve. The sample size is small and hyper-local - it’s just honest chats about real teams, not research data. It's possible we're seeing a pattern that isn't representative, or one that will shift as the tools mature. But it's consistent enough across different teams and contexts that we think it's worth taking seriously.


Why is AI exposing skill gaps?

If this observation holds up, one way to understand it is this: AI tools reward engineers who can think clearly about what they're trying to build. They're very good at turning a well-formed intention into working code. They're less useful when the intention itself isn't well-formed.

A good engineer working with AI can stay upstream. They describe the problem precisely. They evaluate the output critically. They notice when the generated code is technically correct but wrong for the system. The tools might give them speed, but the judgment is still theirs.

An engineer who was already uncertain about those upstream questions finds themselves in a different position. The tool provides something to work with, which can feel like progress. But "something to work with" and "the right thing to build" can be a long way apart. The uncertainty doesn't disappear. It shifts, and you now arrive at the “not quite right” destination very, very quickly.

This might be the key thing: AI tools are good at reducing mechanical friction. They don't reduce the need for clear thinking about the problem. And it's possible they've made that distinction much more visible, and much more important (not less).

What does this mean for engineering teams?

We don't know what this means for how engineering teams of the future should be structured. That feels important to say honestly. But we are absolutely exploring how this looks, and how previously held “truths” about team structures and ways of working are softening and changing.  

The questions we think are worth sitting with: what does developing junior engineers look like when AI tools are present from the start? Is some of the learning that used to happen through struggle now being bypassed? How will that show up in their future capability set? How do we deliberately build and hone upstream thinking skills if the space where that learning happened is now filled by generated output?

We don't have clean answers. But we've noticed something about the teams that seem to be navigating this well. It's not that they have the most considered AI usage policy, or that they've found the right framework. It's that they've stayed clear about what good engineering practice looks like in their context. They haven't outsourced that question to the tools; they're ruthless about the non-negotiables, and they talk about their experiences and their expectations of each other week to week.

That's the thing worth holding onto: the tools change the speed of execution. They don't answer the question of what's worth executing. And if that distinction wasn't sharp enough before, it's very sharp now.

Where does this leave us?

AI hasn’t changed what good engineering looks like. It’s made it harder to fake. It's made good engineering judgment more valuable, not less. It's also made it harder to hide when it's absent. The teams that will navigate this well aren't necessarily the ones with the best tooling or the best skills with those particular tools. They're the ones with strong foundations - the ones who stay curious about the work itself.

 

Frequently asked questions

Has AI made software engineers redundant?

Not from anything we're seeing. The more consistent pattern is that AI tools have made capable engineers significantly more productive, while the skills that distinguish good engineers - reasoning clearly about systems and trade-offs - are potentially even more important than they were before.

Does AI help junior engineers become more capable faster?

Possibly in some respects, but the picture is mixed. Junior engineers can reach a working implementation faster with AI tools. What's less clear is whether the underlying skills - knowing how to evaluate that implementation and how to reason about the system it sits in - are developing at the same pace. That's an open question, and one worth paying attention to.

What skills matter most for engineers working with AI tools?

From what we're observing, the skills that matter most are the upstream ones: being able to think clearly about what you're trying to build before you start building it, and being able to evaluate generated output critically rather than accepting it at face value. These aren't new skills. They're the same ones that distinguished good engineers before AI tools existed.

Should teams change how they structure engineering roles because of AI?

We think it's too early to make confident structural calls, but we are actively experimenting in this space as our own work changes. What seems worth examining is how junior engineers develop, and whether onboarding and mentorship need to adapt to an environment where AI tools are present from day one. The teams asking those questions now are probably better placed than the ones waiting for the answer to become obvious.
