For engineering leaders, there’s perhaps never been more clarity and consensus around the foremost challenges and opportunities facing our profession. Of course, I’m talking about AI.
Putting aside the larger (and very important) societal questions around AI’s rapid evolution and integration into our lives and work, it’s safe to say that the software industry is experiencing a whole extra layer of pressing concerns. The rapid progress of AI coding assistants has made it clear that the field of software development will be transformed, and that transformation is coming fast and furious!
It’s clear that AI coding will transform the day-to-day work of software engineers, software development processes and best practices, and cost assumptions around shipping and supporting software. Engineering leaders are acutely aware of this, but it’s still too early to know what it all will look like when things stabilize.
As just one representative data point of this massive shift, let’s look at this year’s Accelerate State of DevOps Report from DORA. In the 2023 DORA report, AI got a single page of coverage (page 26). But this year, AI is the sole topic covered in the first 46 pages of a 120-page report! That’s a pretty big change, but I’d say it accurately reflects the mental space AI coding occupies for the average Engineering leader, as well as how that has grown over the last year.
We are in a maelstrom of change, but we also have businesses to run every day, and decisions that need to be made about how to proceed.
How can we best navigate the storm?
State of the Discourse
Parsing the larger discussion around AI coding, there are many distinct parts of the conversation. Clearly there’s lots of attention going towards assessing models and their relative capabilities. There’s discussion of AI-powered development tools like Cursor and Windsurf, as well as supporting architectures such as MCP (Model Context Protocol). And of course there’s lots of discussion about approaches for working with AI – which types of problems and use cases work best, and how to approach these to most effectively leverage AI. The tech community is actively sorting out what AI coding practice will look like even as the models and tools are changing rapidly in real time!
Beyond these practical questions, there’s of course the “Big Question” of what the future of software engineering will look like more broadly. For a practicing software engineer, what will my job look like in the future? What will my team look like and how will it operate? Will most code be written by AI? And if so, what’s the role of human engineers?
Opinions on this from smart, credible people differ widely. Just last week, Dario Amodei, CEO of Anthropic, predicted that in around a year’s time practically all code will be written by AI. On the other hand, many have jumped in to offer the opposing view – that AI is fundamentally more limited, and that experienced human engineers are going to be critical, perhaps even more important, in the foreseeable future.
For Engineering leaders, these questions level up to the larger topic of what an engineering org will look like and how it will operate in the (not too distant!) future. I regularly talk to CEOs and other non-technical leaders who have wild expectations about what AI will mean for productivity and cost in the engineering function. They’ll point to a PM vibecoding a prototype in a matter of hours that delivers something like what a whole team is able to do in a month. They’re justifiably curious about what they can expect around future velocity, productivity, and cost gains. Maybe unsurprisingly, they’re also a little overly optimistic about what might be possible (or maybe not! Again, we don’t really know for sure). Will they see 10x more output from the same team? Will engineering costs and timelines be a small fraction of what they are today? For Engineering leaders these are difficult and somewhat stressful questions.
Given the acute focus on this topic, there’s a ton of prognostication. But the reality is that the best practices on so many fronts are absolutely a moving target. Many engineers are sharing incredibly valuable real-time insights on tools, approaches, use cases, etc., but it’s clear that these haven’t been boiled down into durable guidance, nor could they possibly be with the tech evolving so quickly. The dust has not yet settled.
Charting a Course
So what’s an Engineering leader supposed to do?
Even the most skeptical grant that AI is helpful and will enhance productivity. And even a moderate view of the likely impact indicates that it will be revolutionary for how software engineers work. There’s little question that people developing software in the future will be working hand-in-hand with AI.
Last year, adoption seemed to be the key question: How do we make sure AI coding assistants are not just available to our teams but also getting used? That question now seems largely in the rearview mirror, and the questions have become “how do we make sure we’re doing it right?” and “how do we make sure we’re getting meaningful benefits?”
In my conversations with other Engineering leaders, I’ve arrived at two durable recommendations in this highly volatile time:
Actively encourage and support adoption.
Make sure you can measure the impact of AI.
Actively encourage and support adoption.
Various analytical studies have shown very promising gains from coding with AI. The 2024 DORA report showed boosts in self-reported performance across a variety of aspects of software development, from code complexity, to quality, to testing, documentation, and more. A study performed by Jellyfish looked at how system signals from Jira and GitHub were influenced by Copilot adoption, and found that it drove a 15% increase in output. [disclosure: I currently serve as an Advisor at Jellyfish]
But the truth is that these analyses are early, and it’s safe to say that if you buy your team a bunch of Copilot licenses you may or may not magically see a 15% speedup. It depends on use cases and best practices for working in your environment. Given your specific code base and workload, which tools are best? What are the best ways for interacting with AI to make sure it sees the necessary code context and doesn’t generate code that is poor quality or out of line with your standards? Some of what we see in the DORA report is that AI coding isn’t purely and simply beneficial – it’s a mix of benefits and trade-offs.
Last year, the first steps were clearly about removing barriers to adoption, such as ensuring that emerging tools like Copilot were licensed and available to the team, had been run through Infosec vendor approval, etc. But this year, we need to move beyond adoption and start building the muscle to capture and promote AI coding best practices for our orgs.
One key part of this is designating ownership for that topic to specific people or roles in the org (or perhaps a Spotify-style Guild). Identify who will take on the extra effort to figure out the best tools and workflows for the org, capture and communicate these, and serve as a resource for the rest of the team. It doesn’t preclude others on the team from experimenting and contributing. But it solves the problem of ownership. And it also ensures you’re being realistic about focusing on this question as a true first-class responsibility.
Hoping that enthusiasts on the team will magically figure out the best ways of working with AI and share them back with the team might work. But this is an important enough topic that I’d suggest making it a first-class investment for named individuals. This helps you be realistic about the fact that AI isn’t a magic “power up” you can flip on with a switch; it’s a new way of working that needs to be integrated thoughtfully. Formalizing the investment also signals to both the team and your stakeholders that you are taking the opportunity seriously and spending real team cycles to stay on the cutting edge.
Make sure you can measure the impact of AI.
Whether or not you agree with my suggestion about putting formal investment into capturing and communicating AI best practices for your org, it’s inevitable that your team will be spending time and money on AI-assisted coding. We’re looking ahead at a highly dynamic period of rapidly advancing tools, emerging insights into best practices, and shifting expectations about the art of what’s possible. High-functioning engineering teams tend to put some reasonable investment into looking at relevant emerging technologies, but AI will add a whole new layer. Not to mention, we’re going to be spending money on buying and implementing these tools.
We’re making these investments in the name of efficiency and productivity for the company. So far, with the space being so new and important, justifying some spend has been pretty straightforward, trivial even. After all, we’re just getting going, and the topic is new and promising. But down the road, especially in this post-ZIRP environment, we’re going to need to show the ROI. Are we getting more output, more quality, more velocity? Are we seeing results that compare favorably to comparable companies?
These are, of course, the very same questions we’ve been answering for years about a variety of investments such as dev tooling and automation, establishing platform teams or SRE functions, investing in architecture patterns like microservices, etc. We do all of these things to drive better efficiency and output for our company, and the introduction of AI coding is really no different.
Except it is different.
It’s way more visible. It’s a way broader discussion. AI is the topic in tech right now, and coding with AI is at the center of it. It’s front of mind in a way that feels even larger than the previous revolutions I’ve personally experienced (web, SaaS, mobile, etc.). We should expect to be highly focused on this topic, including ensuring a strong metrics practice to measure its outcomes.
The good news is that measuring outcomes for AI isn’t all that different from measuring the Engineering function generally. We’re interested in all of the same metrics: cycle time, allocation of investment to innovation, measures of quality such as change failure rate, and so on, all of which are already part of a sound Software Engineering Intelligence (SEI) strategy. Perhaps the primary new requirement is understanding AI usage within the engineering workflow, so that you can spot metric differences and changes in conjunction with the use of AI. Leading engineering metrics / SEI platforms are addressing this need with new product features, such as Jellyfish’s AI Impact or GetDX’s genAI impact dashboard, and I think we can expect these capabilities to evolve with the space.
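To make the idea concrete, here’s a minimal sketch of segmenting one such metric, cycle time, by AI usage. The data, field names, and the notion of an `ai_assisted` flag are all hypothetical; in practice a signal like this might come from assistant telemetry or a PR label, and an SEI platform would handle this for you.

```python
from datetime import datetime
from statistics import median

# Hypothetical PR records; in reality these would come from your
# version-control system. The "ai_assisted" flag is an assumption --
# some marker of whether AI tooling was used on the change.
prs = [
    {"opened": datetime(2025, 3, 1), "merged": datetime(2025, 3, 2), "ai_assisted": True},
    {"opened": datetime(2025, 3, 1), "merged": datetime(2025, 3, 4), "ai_assisted": False},
    {"opened": datetime(2025, 3, 2), "merged": datetime(2025, 3, 3), "ai_assisted": True},
    {"opened": datetime(2025, 3, 2), "merged": datetime(2025, 3, 6), "ai_assisted": False},
]

def median_cycle_time_days(prs, ai_assisted):
    """Median open-to-merge time, in days, for one cohort of PRs."""
    durations = [
        (pr["merged"] - pr["opened"]).total_seconds() / 86400
        for pr in prs
        if pr["ai_assisted"] == ai_assisted
    ]
    return median(durations)

print(f"AI-assisted PRs: {median_cycle_time_days(prs, True):.1f} days")
print(f"Other PRs:       {median_cycle_time_days(prs, False):.1f} days")
```

The point isn’t the arithmetic, which is trivial; it’s that the comparison only becomes possible once AI usage is captured alongside the metrics you already track.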
Enjoy the Maelstrom!
I find that two common-sense moves:
(1) investing intentionally to foster AI coding adoption and capture best practices, and
(2) ensuring you have a good strategy for measuring impact,
give engineering leaders durable actions they can take right now in the midst of change. It’s a stressful time for software leaders, with so much change and uncertainty and so many great expectations. It can feel disconcerting to watch your industry transform in real time around you!
But I’d encourage you to find the joy in the Maelstrom. If you haven’t been doing some hands-on coding with the various AI tools out there, I recommend taking them for a spin. They’re impressive, eye-opening, and at the same time immensely useful. They’ve allowed me to knock out automation tasks with a bit of code generated in 10 minutes instead of a couple of hours.
Stepping back, it’s kind of cool knowing that our industry is making history. You don’t have to be an AI researcher creating new models to contribute. You can be an old hand at building software or leading teams and helping to navigate this crazy time of transformation. We’re not just living through history, we’re making it. That’s a rare honor, and I’m trying to appreciate the moment.