
Good engineering leaders care deeply about developer productivity, but they’re often unsure about how best to measure it. They worry that any attempt to understand productivity will turn into the kind of individual performance tracking that makes engineers miserable and delivers poor (if any) business value.
Maybe it’s in the name. Honestly, “developer productivity” might be the worst possible name for one of the most important challenges facing engineering organizations today.
The moment you say it, engineers picture stack rankings, story point velocity charts, and commits-per-day dashboards. They imagine being reduced to a number, their work boiled down to metrics that miss everything important about building software.
But we’re actually talking about something else entirely: how effectively can your engineering organization turn ideas into working software that delivers value? The focus should be on the system — the whole complex, interconnected system of people, processes, and tools that determines whether work flows smoothly from conception to production or not.
So when I talk about developer productivity in this article, I’m not talking about measuring individual output or optimizing for a metric. I’m talking about the hard, important work of building engineering systems that let talented people do their best work, and creating the conditions where teams can reliably ship high-quality software without burning out or drowning in technical debt.
The most persistent myth about developer productivity is that it’s about individual output.
This leads to all the types of measurement we’ve learned to hate: counting lines of code, tracking commits per day, measuring story points per sprint. These metrics not only fail to capture what matters, they actively make things worse by creating incentives to game the system rather than deliver value.
The reality is that individual productivity is largely fixed. An exceptional engineer isn’t typing 10x faster than an average one or writing 10x more code. What makes them more effective is their ability to solve the right problems, make good technical decisions, and work well within a team. None of that shows up in individual output metrics.
The unit of measurement that actually matters is the team. Software development is inherently collaborative — code gets reviewed, designs get discussed, features get integrated. When we optimize for individual metrics, we break the very collaboration that makes teams effective.
What we should be measuring instead is how effectively teams can deliver working software: how often they ship, how long work takes to get from idea to production, and how much of that time is spent waiting rather than working.
When you start looking at productivity this way — as a property of systems, not individuals — everything changes. The questions shift from “how do we make developers work harder?” to “what’s preventing our teams from being effective?” and the solutions shift from individual performance management to fixing systemic bottlenecks.
📖 More reading: Identify and eliminate common productivity bottlenecks for engineering teams
Truly productive teams share some unmistakable characteristics. They’re not necessarily working longer hours or moving faster — in fact, they often appear less frantic or ‘busy’ than their less-effective counterparts. What sets them apart is how work flows through their systems: they ship frequently with high quality, deliver predictably, finish what they start, and connect their work to value.
Teams of all sizes can and do achieve these characteristics. What they require isn’t superhuman developers or perfect processes, but thoughtful attention to how work flows through the system and systematic removal of the obstacles that prevent smooth delivery.
They also require competent leadership that provides the environment where all of this can happen. When leadership makes unreasonable demands or fails to support the team, even the most talented engineers can’t maintain these practices. Teams need the authority to say “sorry, too much WIP” when they’re overloaded, without fear of repercussions. The inability to do so usually points to deeper cultural and leadership issues that no amount of process improvement can fix.
How you actually go from where you are today to whatever “peak” productivity looks like for you will vary dramatically based on your organization’s size, age, and culture. But understanding what good looks like — teams that ship frequently with high quality, deliver predictably, finish what they start, and connect their work to value — should give you a north star to navigate by.
📖 More reading: Well researched advice on software team productivity
The productivity challenges facing a 10-person startup look nothing like those at a 500-engineer enterprise. As your org grows, bottlenecks are likely to migrate from human processes to technical systems, and from team coordination to organizational complexity. Understanding where you are on this spectrum helps you focus on the right problems.
📖 More reading: Improving developer productivity: Different levers at different levels
These antipatterns show up everywhere. They persist because they often started as reasonable solutions that outlived their usefulness, and because change is hard. What works at 20 engineers breaks at 200, and changing entrenched patterns requires patience and a willingness to accept temporary disruption.
📖 More reading: Tackling the issue with cycle time
Identifying where your organization struggles with productivity requires more than gut feelings or anecdotal complaints. You need a systematic approach that combines quantitative data with qualitative insights, giving you both the “what” and the “why” of your challenges.
BRAINS is a simple roadmap that helps organizations move from vague concerns about productivity to specific, actionable improvements.
For most companies, DORA metrics provide a valuable starting point, but they’re diagnostic tools, not solutions. They tell you whether you have a problem but not how to fix it. Deployment frequency might be low, but is that because of slow review processes, lack of automated testing, or fear of breaking production?
More useful are metrics that help you understand where work gets stuck. Cycle time and flow efficiency show you how long work takes versus how long it’s actually being worked on. If a feature takes two weeks to deliver but only has two days of active work, you know to look at your handoffs and waiting periods.
For system-level issues, measure time waiting on machines: CI build times, test suite duration, deployment pipeline length. These become increasingly important as you scale.
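To make this concrete, here’s a minimal sketch of how cycle time and flow efficiency might be computed from work item data. The `work_items` records and their field names are hypothetical stand-ins; in practice you’d pull the equivalent timestamps from your issue tracker’s API or export, or lean on a tool that does this for you.

```python
from datetime import datetime

# Hypothetical export of completed work items; in practice this would come
# from your issue tracker via its API or a CSV export.
work_items = [
    {"id": "PAY-101", "started": "2024-05-01", "finished": "2024-05-15", "active_days": 2.0},
    {"id": "PAY-102", "started": "2024-05-03", "finished": "2024-05-10", "active_days": 3.5},
    {"id": "PAY-107", "started": "2024-05-06", "finished": "2024-05-20", "active_days": 4.0},
]

def cycle_time_days(item):
    """Calendar days from work starting to work finishing."""
    started = datetime.fromisoformat(item["started"])
    finished = datetime.fromisoformat(item["finished"])
    return (finished - started).days

def flow_efficiency(item):
    """Share of the cycle time spent actively working rather than waiting."""
    return item["active_days"] / cycle_time_days(item)

for item in work_items:
    print(
        f"{item['id']}: cycle time {cycle_time_days(item)}d, "
        f"flow efficiency {flow_efficiency(item):.0%}"
    )
```

The same idea extends to the machine-time measurements above: export CI and deployment pipeline run durations and watch how their typical and worst-case values trend over time.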
Benchmarks are most valuable as a reference point rather than a target. Your context — tech stack, industry, company age — shapes what’s achievable.
The most important thing about measurement is using it to start conversations, not end them. When you see high cycle times, that’s the beginning of the investigation, not a problem to be solved by mandating lower cycle times.
The metrics should help you ask better questions.
📖 More reading: Practical guide to DORA metrics
The fastest way to demoralize a team is to set productivity goals based on what other companies are doing. I’ve watched teams currently deploying monthly get handed a mandate to achieve daily deployments because “that’s what high-performing teams do” — and I bet you can imagine how that turned out.
If your team ships every 30 days, your next goal should be 15 days. This might seem unambitious if you’re reading about companies deploying hundreds of times per day, but it’s exactly the right approach. Halving your cycle time requires real changes — better testing, smaller batch sizes, improved coordination. When the team achieves it, they’ll have learned what works in your specific context, built confidence in their ability to improve, and created appetite for the next goal.
This incremental approach works because each improvement teaches you something. Moving from 30-day to 15-day cycles might reveal that your code review process is the bottleneck. Fix that, and suddenly 7-day cycles seem achievable. Fix the next bottleneck, and 3-day cycles come into view. Each step builds on the last.
Improving productivity often means slowing down first. Engineering time is finite: every hour spent writing tests, automating deployments, or refactoring code is an hour not spent on features. But it’s also an hour spent on work that will pay future dividends.
If product leadership has separate goals focused on feature delivery, if sales is promising specific functionality by specific dates, if the CEO is asking why velocity has dropped — your productivity improvements will fail. Not because they’re technically flawed, but because the organization isn’t aligned on the investment required.
This is why productivity goals must be organizational goals, shared by product, sales, and executive leadership rather than owned by engineering alone.
The math is compelling once everyone understands it. Three engineers spending part of their time for one quarter on productivity improvements can effectively create a free engineer forever. But that math only works if those three engineers are actually allowed to do the work.
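To sketch that arithmetic with deliberately made-up numbers (the team size, time split, and 2% saving below are illustrative assumptions, not benchmarks):

```python
# Illustrative, made-up numbers: adjust for your own organization.
team_size = 50               # engineers in the organization
improvers = 3                # engineers doing the productivity work
time_fraction = 0.5          # half their time...
duration_years = 0.25        # ...for one quarter
saving_per_engineer = 0.02   # improvements save everyone 2% of their time, ongoing

upfront_cost = improvers * time_fraction * duration_years   # engineer-years invested once
ongoing_gain = team_size * saving_per_engineer               # engineer-equivalents gained per year

print(f"Up-front cost: {upfront_cost:.2f} engineer-years")                  # 0.38
print(f"Ongoing gain:  {ongoing_gain:.1f} engineer-equivalents per year")   # 1.0
print(f"Payback after about {upfront_cost / ongoing_gain * 12:.1f} months") # 4.5
```

Swap in your own numbers; the point is that the cost is paid once while the gain recurs every year afterwards.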
Setting effective goals requires being honest about these tradeoffs and having difficult conversations about short-term costs for long-term gains. It means treating productivity improvement as what it really is: one of the highest-leverage investments a software company can make.
Developer productivity is deeply connected to developer experience and business outcomes in ways that create reinforcing feedback loops (positive or negative, depending on what you choose to do). Understanding these connections helps explain why productivity improvements often lead to benefits far beyond faster delivery.
Productive developers are happier developers. When they can ship meaningful work without struggling with broken tools or excessive process, they experience the satisfaction of accomplishment, reach flow states more often, and take pride in their work — and stay at your company longer.
Most developers who leave cite inability to be effective as a primary reason. They’re frustrated by broken deployments, slow reviews, and endless meetings. These are fixable problems. Unlike cultural issues that can take years to address, productivity problems often respond to focused effort. Quick wins show the organization cares about developer experience, creating momentum for bigger changes.
When teams can deliver quickly and predictably, everything else in the business just gets easier. Fast delivery means faster customer feedback, allowing you to adjust course before investing too heavily in the wrong direction. And being first to market with new capabilities can make the difference between leading and following.
Predictable delivery transforms how the entire company operates: sales can confidently promise features to close deals, product can plan roadmaps it trusts, and eventually the perception of engineering shifts from “black box that maybe delivers” to “reliable partner that delivers what it promises.”
Even if you understand your productivity problems pretty well, fixing them can still feel overwhelming. Here’s how to start making progress based on where your organization stands today.
AI is a tool, like automated testing or continuous integration before it.
The goalposts remain the same whether AI is involved or not. Are cycle times improving? Is deployment frequency increasing? Is code quality maintained? If AI helps achieve these outcomes, great.
Some teams find AI transformative for specific problems — large-scale migrations, identifying security vulnerabilities, or generating comprehensive test suites. Others see minimal impact because their bottlenecks are organizational, not technical. Both experiences are valid. The tool matters less than understanding your constraints and measuring whether you’re addressing them.
Especially if you’re just starting out with developer productivity improvements, focus on your existing productivity indicators. If they improve after adopting AI tools, you’ll know the investment paid off. Think about the AI impact metrics later.
Developer productivity isn’t a project you complete, but an ongoing practice that’s never really “done.” Your solutions today might be tomorrow’s bottlenecks, and that’s okay.
This is why building a culture of continuous improvement matters more than any one metric of productivity. When retrospectives lead to real changes, when engineers feel empowered to fix what’s broken, when metrics guide conversations rather than judge performance — that’s when you create lasting effectiveness.
Your competitors can copy your tools and processes, but they can’t replicate your capacity to adapt. And when the only constant is change, that capacity makes all the difference.