
AI tools for engineering teams are changing fast — really fast. New capabilities drop weekly, and everyone from ICs to engineering leaders to board members is scrambling to figure out what it means for productivity, measurement, and value.
As we move from simple chat interfaces toward orchestrated agent-based systems, you need to understand both the opportunities and the significant challenges ahead.
I’m bullish on AI’s potential. But I’m deeply skeptical of how most people think they’ll get value from it. We’re in a moment comparable to the mid-1990s, when Tim Berners-Lee had just invented the World Wide Web. Success was measured by visitor counters on GeoCities homepages, and some dude was just trying to sell books.
Tell me, what did you measure then?
Here’s a thought experiment: if you could write code for your entire backlog in the next 24 hours, at reasonably high quality, would you? Of course you would.
But would your organization suddenly ship dramatically more valuable features? Probably not.
That newly written code would join the pile of other code waiting to be reviewed, deployed, documented, and actually shipped to customers who may or may not want it.
Turns out, code generation was never actually your bottleneck.
And truth is, AI can only amplify what you already have. Strong foundations get stronger, and weak foundations crumble.
Now that we’ve established code isn’t the bottleneck, let’s talk about what actually matters: results.
Your teams will get better results when they use AI to generate inspectable, testable code rather than seeking direct answers to complex questions.
Think of it this way: when AI produces code, your teams can review it, modify it, and validate it through existing QA processes. This lets AI help where it’s reliable while keeping errors in check.
But — and this is a big but — this only works if you already have robust development guardrails.
I’m talking about:
- Automated testing that actually exercises your critical paths
- Real code review, done by people who can push back
- Deployment pipelines with safeguards that keep problematic code out of production
- Documentation that keeps the codebase understandable and maintainable
As AI systems become more autonomous, these guardrails become even more important. AI agents will fail catastrophically without testing frameworks to catch errors or deployment safeguards to prevent problematic code from reaching production.
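To make this concrete, here’s a minimal sketch of what “inspectable, testable” looks like in practice. The helper function and its tests are hypothetical, and the pytest-style suite stands in for whatever QA process you already run; the point is that a generated function that mishandled empty input or dropped the remainder would fail here long before it reached production.

```python
import pytest

# Hypothetical AI-generated helper: small, inspectable, easy to reason about.
def chunk_list(items, size):
    """Split `items` into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The guardrail: ordinary unit tests your existing CI already knows how to run.
def test_even_split():
    assert chunk_list([1, 2, 3, 4], 2) == [[1, 2], [3, 4]]

def test_uneven_split_keeps_remainder():
    assert chunk_list([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]

def test_empty_input():
    assert chunk_list([], 3) == []

def test_rejects_nonpositive_size():
    with pytest.raises(ValueError):
        chunk_list([1, 2], 0)
```

None of this is exotic. That’s the point: the same boring discipline that protects you from human mistakes is what makes AI-generated code safe to accept.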
If you’ve got discipline around these practices, great! You can safely benefit from current AI tools and future agent orchestration. You can experiment because your existing systems will catch errors, validate functionality, and prevent serious failures.
But if you’ve consistently deprioritized these “overhead” tasks in pursuit of velocity?
You’re in for a rough ride. And honestly, that’s the part that keeps me up at night.
Let’s be real about what we’re dealing with: sophisticated pattern matching and text generation systems.
Yes, vendors claim these models “think” now. But “think” is doing a whole lot of heavy lifting in that sentence.
Current AI coding tools:
- Pattern-match against their training data rather than reason about your system the way an engineer would
- Produce confident, plausible-looking code whether or not it’s correct
- Know nothing about your architecture, dependencies, or constraints beyond the context you hand them
Your experienced engineers are right to worry. They’re thinking about bugs that get missed, mysterious dependencies, and code that becomes unmaintainable over time.
If you’re in a regulated industry? You’ve got a whole other set of compliance and data privacy challenges to consider.
Despite AI’s emergence — or maybe because of it — the fundamentals of engineering effectiveness remain unchanged.
AI doesn’t require new productivity frameworks or a resurgence of per-developer productivity metrics. It reinforces why team outcomes matter more than individual activities.
Do your teams deliver value more effectively? Are cycle times improving? Are quality standards maintained?
If AI tools provide measurable benefit, these fundamental metrics should improve. If they don’t improve, the tools aren’t delivering meaningful value — regardless of how impressive the generated code looks or how much developers enjoy using them.
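If you want to make that check concrete, one lightweight option is to compute the same team-level metric before and after a tool rollout. Here’s a rough sketch in Python; the PR timestamps and their shape are made up for illustration, and you’d pull the real data from whatever tooling you already use.

```python
from datetime import datetime
from statistics import median

# Hypothetical export: (opened, merged) timestamps for recent pull requests.
pull_requests = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 2, 15, 30)),
    (datetime(2024, 5, 3, 11, 0), datetime(2024, 5, 3, 17, 45)),
    (datetime(2024, 5, 6, 10, 0), datetime(2024, 5, 9, 12, 0)),
]

def median_cycle_time_hours(prs):
    """Median open-to-merge time in hours: one coarse, team-level signal."""
    durations = [(merged - opened).total_seconds() / 3600 for opened, merged in prs]
    return median(durations)

# Compare the same metric across periods (say, the quarter before and after
# an AI tool rollout) instead of counting lines of AI-generated code.
print(f"median cycle time: {median_cycle_time_hours(pull_requests):.1f}h")
```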
The conversation around AI investment has evolved recently, too. It’s no longer “Does this provide value?” but “How do we manage the costs of this thing?” as vendor pricing models evolve toward usage-based systems that are already creating significant and unpredictable expenses (ask me how I know).
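To see why usage-based pricing gets unpredictable, here’s a back-of-the-envelope sketch. Every number in it (the seat price, the per-request cost, the usage split) is hypothetical; substitute whatever your vendor actually charges.

```python
# Back-of-the-envelope monthly spend under a seat + usage pricing model.
# All figures are hypothetical; plug in your vendor's actual rates.
SEAT_PRICE = 19.00    # flat monthly fee per developer
PER_REQUEST = 0.04    # marginal cost of each metered request

def monthly_cost(requests_per_dev: list[int]) -> float:
    """Total monthly bill for a team, given each developer's request count."""
    seats = len(requests_per_dev) * SEAT_PRICE
    usage = sum(requests_per_dev) * PER_REQUEST
    return seats + usage

# The same 10-person team: a quiet month vs. a heavy agent-usage month.
quiet = [200] * 10               # ~200 requests per developer
heavy = [200] * 7 + [5000] * 3   # three developers lean hard on agents

print(f"quiet month: ${monthly_cost(quiet):,.2f}")   # $270.00
print(f"heavy month: ${monthly_cost(heavy):,.2f}")   # $846.00
```

The seat cost is flat and budgetable; the usage cost swings with how aggressively a handful of people adopt agent workflows, which is exactly what makes forecasting hard.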
These tools provide enough ergonomic benefit that your teams won’t abandon them once integrated. The developer experience improvements — faster code completion, quicker debugging, automated documentation — create baseline expectations. Try taking them away and watch the pitchforks come out.
You’ll find yourself committed to ongoing AI spending regardless of whether you can demonstrate productivity improvements. It’s like giving developers dual monitors in 2010 — technically optional, but practically mandatory.
So how do you then justify the cost?
Enterprise AI tools can cost hundreds or thousands per developer annually. Your engineers love them — and why wouldn’t they? AI helps with repetitive tasks, context switching, navigating unfamiliar codebases. The subjective value is immediate and clear. Demonstrating ROI through traditional metrics, however? Good luck with that.
But it’s important to remember that it’s not just about the work that happens in the code. When justifying the total return on your AI investment, there are important factors beyond productivity:
- Developer experience and satisfaction: the subjective value is immediate and clear
- Meeting what developers now treat as baseline tooling expectations
- Time won back from repetitive tasks, context switching, and navigating unfamiliar codebases

And the equally important strategic risks:
- Usage-based pricing that makes spend significant and hard to predict
- An ongoing commitment to tools you can’t realistically take away
- Compliance and data privacy exposure, especially in regulated industries
- Amplification of weak engineering practices if the guardrails aren’t in place
Treat your AI investments as a portfolio: some aimed at immediate developer experience, others targeting specific bottlenecks, and others exploring future capabilities. Some of those bets are riskier than others, and your portfolio strategy should account for that.
Given these realities (AI amplifies what you already have, you’ll pay for ergonomics regardless of productivity gains, and focusing on the wrong metrics will lead you astray), what should you actually do?
Enable your teams to experiment and figure out where these tools genuinely help. But teams can’t do this alone. It’s not their job to negotiate enterprise contracts, establish data governance, or implement security controls. That’s where you come in.
Organizational enablement means:
- Negotiating enterprise contracts on behalf of your teams
- Establishing data governance, so it’s clear what can and can’t be shared with these tools
- Implementing security controls and compliance reviews, especially in regulated industries
- Giving teams room to experiment safely within those boundaries
But remember: enablement only works with proper development practices already established. If you’ve been postponing them, you’re in a tough spot: you need AI tools to remain competitive but lack the infrastructure to use them safely.
And there’s no shortcut here. Sorry.
Engineering roles will transform significantly over the next 1-2 years, and I suspect we’ll see major changes within 6-9 months.
However, nobody knows the full scope of what’s coming. Anyone who claims otherwise is selling something.
And here you are, trying to make strategic decisions while the ground shifts beneath your feet. It’s uncomfortable. It’s uncertain. It’s also where we all are right now.
So what should we do?
First, shake off some of the hype and remind yourself that none of this changes the fundamentals. To build an effective engineering organization, you still need to cover your three pillars, focus on team delivery capabilities, and maintain disciplined development practices.
Remember our thought experiment? It shows us that while AI is powerful, it’s not magic — and it amplifies what you already have, good or bad.
So keep experimenting, learning, and dare I say it — having fun with it.
And maybe, just maybe, you’ll figure out how to code faster and ship more.