Speed is easy. Direction is the hard part.
Project Management · AI · Engineering Leadership · Software Development
AI Amplifies Everything, Including Bad Decisions

Six months ago, our team barely used AI. Maybe autocomplete, maybe the occasional snippet generation. Nothing that changed how we worked. Today we have a complete workflow that runs from planning through implementation to reviews and testing, with AI involved at every stage. The shift happened fast, and it changed what my job is.

I'm a software engineer at Resonancy, and my current day-to-day involves leading and mentoring a team of developers. For most of my career, that meant being the person who could solve the hardest implementation problems. Knowing the codebase better than anyone. Setting standards through my own code. That version of the role is fading, and we need to think about what is taking its place.

The speed trap

AI adoption has been rapid. The GitHub Octoverse 2025 report found 80% of new developers on GitHub used Copilot within their first week. Faros AI's telemetry across more than 10,000 developers showed 21% more tasks completed and 98% more pull requests merged. On the surface, that looks like a revolution in productivity.

But the 2025 DORA Report found that AI adoption still has a negative relationship with delivery stability. Teams ship faster but break more things. The METR study found experienced developers using AI actually took 19% longer to complete tasks, while believing they were 20% faster.

If everyone is coding faster but delivery isn't improving at the same rate, the value must be leaking somewhere. From what I've experienced, the leak sits in the gap between generating code and making good decisions about what that code should do. That gap is the lead dev's new territory.

What happens when you skip the planning

I saw this play out on a recent project. The product owner and I met with the client a couple of times and held one or two planning sessions between us to get a broad overview. We split the work into two mostly uncoupled workstreams and assigned two teams of two developers each. The idea was to give them full autonomy. Let them handle the planning, design, and implementation themselves, with AI doing much of the heavy lifting.

The job got done quickly. But there was unnecessary friction throughout. The two teams ended up with misaligned data schemas and implementation plans. They had to sort it out amongst themselves, and while they figured it out, it cost time and caused confusion that didn't need to happen.

Looking back, a few small things would have made a big difference. We didn't create GitHub issues to track progress initially. Once we started doing that, things improved. Regular check-ins between the teams would have caught the schema misalignment early. So would making it clearer that questions about shared interfaces and data structures should be raised sooner rather than figured out in isolation.

The project shipped. But it reinforced something I keep coming back to. When AI makes implementation fast, the cost of misalignment goes up, not down. Two teams can each build something that works perfectly in isolation and still produce a system that doesn't fit together. That's not an AI problem. That's a planning problem. And planning is the lead dev's job.

The job used to be about code

The traditional lead developer role leaned heavily on implementation ability. You didn't get there without strong coding skills, and a big part of the job was solving the harder problems, setting patterns through your own code, and reviewing what mattered most. Soft skills like communication, mentoring, and weighing trade-offs always counted, but deep technical ability was the foundation most of the role rested on.

AI has changed that. When code can be generated in minutes, implementation is no longer the bottleneck. Direction is.

GitHub's research found developers now describe their role less as "code producer" and more as "creative director of code." The core skill has shifted from implementation to orchestration and verification. This aligns with what I've experienced firsthand.

Our team's developers are closer to intermediate than junior at this point. They haven't needed much mentoring on writing code for a while. But the conversations have changed completely. Where I used to help someone write a MongoDB aggregation correctly or explain the proper use of Lowdefy operators, now I'm talking about what level to prompt the AI at. How to verify that AI-generated code is actually correct and not just superficially clean. How to make good architectural decisions before the AI starts generating code in a chosen direction.

The mentoring shifted from code to judgment, all in about three months.

What the job actually looks like now

Upfront decisions carry more weight

An Ox Security study of 300 open-source projects found AI-generated code is "highly functional but systematically lacking in architectural judgment." GitClear's 2025 data showed an eightfold increase in duplicated code blocks while moved and refactored code approached zero. AI adds code. It doesn't organise it.

This means the organising has to happen before the code gets written. Framework choices, data models, service boundaries. A wrong architectural decision used to cost a sprint of rework. Now it costs a week of AI-generated code that's internally consistent and pointed in the wrong direction.

My project experience confirmed this. The minimum viable planning wasn't zero. It was enough to align on shared interfaces and data schemas so that autonomous teams could move fast without stepping on each other. Not so much planning that it killed the speed advantage. Just enough to avoid the friction.

Reviews are a systems problem

A lot has been written about how code review is changing in the AI era, and for good reason. Faros AI found PR volume up 98% with review time up 91%. Senior engineers spend 4.3 minutes reviewing AI-generated code versus 1.2 minutes for human-written code. The bottleneck has clearly moved from writing code to reviewing it.

From a lead dev's perspective, this is a systems design problem. One person reviewing all the code doesn't work anymore. I've been figuring out my own process in real time. AI does an initial review pass, generates a guide highlighting the important parts, and I check those myself while spot-checking the rest. It's not a solved problem. But the lead dev's job is to design a review process that catches what matters, not to personally catch everything.
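The spot-checking can be made systematic rather than ad hoc. As a minimal sketch of the idea, a small script can rank changed files by a risk score, so the lead reads the top of the list line by line and samples the rest. The weights and path patterns below are illustrative assumptions, not our actual rules:

```javascript
// Rank changed files by a rough risk score. The lead reviews the
// highest-risk files personally and spot-checks the remainder.
// Weights and path patterns here are illustrative, not a real policy.

const RISKY = ["auth", "payment", "migration", "schema"];

function riskScore(path, linesChanged) {
  let score = linesChanged;
  // Security- and data-critical paths get extra scrutiny.
  if (RISKY.some((p) => path.includes(p))) score *= 3;
  // Test files are lower risk than production code.
  if (/\.(test|spec)\.[jt]s$/.test(path)) score *= 0.5;
  return score;
}

function triage(changes, quota = 2) {
  // changes: map of file path -> lines changed in the PR.
  const ranked = Object.keys(changes).sort(
    (a, b) => riskScore(b, changes[b]) - riskScore(a, changes[a])
  );
  return { deep: ranked.slice(0, quota), spot: ranked.slice(quota) };
}

const { deep, spot } = triage({
  "src/auth/login.js": 40,
  "src/ui/button.js": 120,
  "src/auth/login.test.js": 60,
  "docs/readme.md": 10,
});
console.log(deep); // the files the lead reads line by line
```

The scoring function is the part worth arguing about as a team; the point is that the sampling policy is explicit and shared, not whatever the reviewer had energy for that day.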

Standards still need a human

We've set up general rules for the AI to follow. Coding standards. Best practices. The app build catches code that breaks, but there's a layer of quality that automated checks don't cover. Adding database indexes. Writing efficient queries. Updating enums as you go. Consistent variable casing conventions.

These are the things that separate code that works from code that works well in production over time. AI doesn't think about them unless you tell it to, and even then it's inconsistent. Someone has to own those standards. That someone is the lead.
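Some of these standards can still be backed by cheap automated guards, even though a human owns the standard itself. As a hedged sketch, with hypothetical enum values and a camelCase rule of our own choosing, two of the checks above might look like this:

```javascript
// Small automated guards for two of the standards mentioned above:
// string values must come from the shared enum, and object keys must
// be camelCase. The enum values and the casing rule are illustrative.

const ORDER_STATUSES = new Set(["pending", "shipped", "delivered"]);

function isKnownStatus(value) {
  return ORDER_STATUSES.has(value);
}

const camelCase = /^[a-z][a-zA-Z0-9]*$/;

function checkKeys(obj) {
  // Return the keys that break the casing convention.
  return Object.keys(obj).filter((k) => !camelCase.test(k));
}

console.log(isKnownStatus("shipped"));   // true
console.log(isKnownStatus("cancelled")); // false: enum not updated yet
console.log(checkKeys({ userId: 1, user_name: "a" })); // ["user_name"]
```

Checks like these catch the mechanical half of the problem. The judgment half, such as whether a query actually needs an index or whether the enum should grow at all, still needs the lead.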

Developing engineers still matters

A 2025 LeadDev survey found 54% of engineering leaders believe AI adoption will reduce junior hiring over the longer term. 38% reported that AI tools have reduced the amount of direct mentoring junior engineers receive from seniors. Employment for developers aged 22-25 has declined nearly 20% from its 2022 peak.

I think this is short-sighted. To use AI effectively, you need to understand what's going on underneath. The only way to gain that insight is through experience. You need juniors to eventually become seniors, and you need seniors to ensure projects stay on track. My feeling is this will even out over time, but only if the industry doesn't cut off the pipeline in the meantime.

The lead dev's role in mentoring hasn't disappeared. It's shifted. The conversations are different now, but someone still needs to help less experienced engineers develop the judgment that makes AI useful rather than dangerous. That requires deliberate effort. It won't happen on its own.

The real job

The 2025 DORA Report put it clearly: AI's primary role is as an amplifier, magnifying an organisation's existing strengths and weaknesses. Strong teams get stronger. Struggling teams get worse faster.

The lead dev's job is to make sure the right things are in place so that amplification works in the team's favour. Correct procedures. Enough planning to have confidence, but not so much that it slows the project down. Helping each developer use their new autonomy effectively.

At the end of the day, the project shipping is the lead dev's responsibility. That means taking ownership of everything the developers under you produced, whether a human wrote it or an AI did. The tools changed. The accountability didn't.

Johann Möller
Software Engineer at Resonancy & Lowdefy

As a software engineer at Resonancy and Lowdefy, Johann focuses on what happens around the code — the planning, standards, and team alignment that make AI-assisted development actually work. He builds scalable systems and helps developers use new tools effectively.