The Other Side of the AI Velocity Equation
AI just changed how fast you can build software. It didn’t change what it costs to own it.
There’s a conversation happening in engineering teams right now that sounds like this: “We can ship so much faster. Why aren’t we?”
It’s a fair question. If you’ve spent any time with Claude Code or similar tools in the last year, you’ve seen what they can do. Opus-class models are producing pull requests that would pass review at most companies — not just functional code, but well-structured, well-documented, appropriately tested code. The capability is real. The velocity gains are real.
I’ve been in software and infrastructure for 25 years. I’ve watched a lot of “this changes everything” moments come and go. Some of them were right. This one is right.
But there’s a side of this equation that’s not getting enough attention, and I want to talk about it — not to rain on the parade, but because I think getting this right is what separates the teams that sustain high velocity from the ones that eventually choke on their own output.
All Code Is Debt
Ward Cunningham’s technical debt metaphor has been so thoroughly absorbed into engineering culture that it’s almost lost its edge. We invoke it to mean “messy code” or “shortcuts we took.” That’s not what he meant.
Debt means carrying cost. It means that even good, clean, well-tested code requires ongoing investment just to remain viable. Dependencies drift. Security vulnerabilities are discovered. Infrastructure has to be maintained. APIs have to stay running. Tests have to keep passing. None of that is free, and none of it generates new value — it just preserves the value you already have.
This was manageable when the cost of creating software was high. When a new service required weeks of engineering time to build, organizations naturally applied scrutiny before building it. The cost of creation was a natural forcing function for intentionality.
AI just removed that forcing function.
The New Asymmetry
Here’s what’s actually new: the cost of generating software has collapsed. The cost of owning software has not.
Compute, security, compliance, infrastructure, on-call rotations, dependency management — none of those are on an AI improvement curve. Your AWS bill doesn’t get cheaper because Opus 4.6 wrote the service that’s running on it. Your Dependabot queue doesn’t shrink because the code it’s flagging was generated in ten minutes instead of two weeks.
At my company, we have decades of code across hundreds of repositories. We receive several hundred Dependabot pull requests a day. Merging them is nearly a full-time job, so we’re automating it — which means more code, more test runs, more build minutes, more deploy pipelines. Each solution creates its own small obligation. The automation doesn’t eliminate the carrying cost; it shifts where the labor goes while adding infrastructure of its own.
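The decision at the heart of that automation can be sketched as a small eligibility check. Everything below is illustrative: the field names are a simplified stand-in rather than the real GitHub API schema, and the thresholds are examples, not our actual policy.

```python
from datetime import datetime, timedelta, timezone

def auto_merge_eligible(pr: dict, min_age_hours: int = 24) -> bool:
    """Decide whether a dependency-bump PR can merge without human review.

    `pr` is a simplified record; the field names here are illustrative,
    not the GitHub API schema.
    """
    if pr["author"] != "dependabot[bot]":
        return False
    if pr["ci_status"] != "success":
        return False
    # Only patch and minor bumps merge unattended; majors get a human.
    if pr["update_type"] == "major":
        return False
    # Let a release age briefly so obviously bad versions get yanked first.
    age = datetime.now(timezone.utc) - pr["opened_at"]
    return age >= timedelta(hours=min_age_hours)
```

Note what this buys and what it costs: the gate itself is ten lines, but it only works if CI, PR metadata, and update-type labeling are all reliable — which is exactly the “each solution creates its own small obligation” pattern.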
This is not a complaint about AI. This is a description of the environment that AI acceleration creates more of, faster.
The Manufacturing Gap
Software engineering borrowed a lot from manufacturing. Continuous integration, kanban boards, sprint cadences, deployment pipelines — these are all adaptations of production and logistics thinking. They made us dramatically better at building things.
What we didn’t borrow was the disposal side.
Lean manufacturing has a sophisticated framework around inventory carrying cost. Unsold or unused inventory isn’t neutral — it occupies space, requires tracking, can spoil or become obsolete, and represents capital that’s tied up rather than deployed. The discipline of lean isn’t just about making things efficiently; it’s about not making things you don’t need to own.
The principle is sometimes called “the best part is no part.” The cheapest component in a system is the one you engineered out of existence. The cheapest operation is the one you eliminated. Not optimized — eliminated.
Software engineering has YAGNI (“You Aren’t Gonna Need It”) as a philosophical nod to this idea, but we’ve never built the institutional practices around intentional retirement that manufacturing takes for granted. We have CI/CD. We don’t have the equivalent of a product lifecycle management discipline applied to internal software.
That gap was acceptable when building was expensive. It’s becoming a liability now that building is cheap.
The “Can We Turn This Off?” Problem
The hardest category of software debt isn’t the code that’s clearly broken. It’s the code that’s working fine and might still be needed.
Observability tooling is my favorite example. Plenty of engineering organizations spend as much on their observability stack as on the production infrastructure it watches. Logging, tracing, metrics, alerting, dashboards — these systems are valuable, but they’re also extraordinarily difficult to decommission. Why? Because it’s very hard to prove that something you’ve been measuring is now safe to stop measuring. The absence of a signal isn’t evidence that the signal was never useful; it might mean the system is healthy because you were watching it.
So observability tools stay on. They run continuously. Someone has to keep upgrading them, because they’re connected to everything else, and “everything else” keeps changing. They become permanent fixtures not because anyone decided they should be permanent, but because no one ever built a process to evaluate whether they still need to be.
Multiply this pattern across every internal tool, every small service, every automation script that got deployed because it solved a real problem in Q3 of some year you barely remember, and you have the average enterprise software portfolio. A graveyard of things that still twitch.
AI is going to generate a lot more of those things, much faster. That’s not inherently bad. But it means the organizations that build a discipline around software retirement now are the ones that will be able to sustain the velocity AI promises — instead of watching it plateau as the carrying cost of all that accumulated code catches up with them.
Toward an Intentional Retirement Practice
So what does this actually look like in practice? Here’s a starting framework, drawn from lean principles and adapted for software.
1. Build for observable retirement from day one.
Every new service or internal tool should be instrumented not just for uptime and performance, but for usage. Who is calling this? How often? What happens if the call count drops to zero — is that nominal or an incident? This is different from standard observability. It’s asking: how will we know when this thing no longer needs to exist?
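As a sketch of what usage-level instrumentation might look like, here is a minimal in-memory tracker. The class and method names are invented for illustration; a real deployment would feed the same per-caller counts into whatever metrics pipeline already exists rather than holding them in process memory.

```python
import time
from collections import defaultdict

class RetirementMonitor:
    """Track who calls a service and when, so the question 'can we
    turn this off?' has data behind it. In-memory sketch only."""

    def __init__(self) -> None:
        self.call_counts = defaultdict(int)  # caller -> total calls seen
        self.last_seen = {}                  # caller -> unix timestamp

    def record(self, caller: str) -> None:
        """Call from the request path, tagged with the caller's identity."""
        self.call_counts[caller] += 1
        self.last_seen[caller] = time.time()

    def idle_callers(self, idle_days: float = 90) -> list[str]:
        """Callers that once depended on this service but have gone quiet --
        the people to ask before assuming they no longer need it."""
        cutoff = time.time() - idle_days * 86400
        return [c for c, ts in self.last_seen.items() if ts < cutoff]

    def is_retirement_candidate(self, idle_days: float = 90) -> bool:
        """True when every known caller has been idle past the window."""
        return bool(self.last_seen) and set(self.idle_callers(idle_days)) == set(self.last_seen)
```

The design choice worth noting: tracking *who* calls, not just *how much*, is what turns a flat traffic graph into an actionable retirement conversation with named dependents.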
2. Assign ownership with explicit review cadences.
In manufacturing, every component in a product has a responsible engineer and a review cycle. Software should too. Not just “who’s on call if this breaks” but “who is accountable for deciding whether this continues to exist.” That person should be required to reaffirm that ownership on a regular schedule — annually, at minimum.
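One lightweight way to make that cadence enforceable is a registry with a last-affirmed date per entry, and a query for what’s due. The registry entries, field names, and cadence below are all hypothetical; in practice the data would live in a service catalog or an owners file, not hard-coded in Python.

```python
from datetime import date

# Illustrative registry; real entries would come from a service catalog.
REGISTRY = [
    {"service": "invoice-export", "owner": "fred", "last_affirmed": date(2024, 1, 15)},
    {"service": "legacy-webhook-relay", "owner": "unassigned", "last_affirmed": date(2021, 6, 1)},
]

def overdue_ownership_reviews(registry: list[dict], today: date,
                              cadence_days: int = 365) -> list[dict]:
    """Entries whose owner has not reaffirmed 'yes, this should still
    exist' within the cadence -- the review queue for the next cycle.
    Unowned entries are always overdue by definition."""
    return [
        entry for entry in registry
        if entry["owner"] == "unassigned"
        or (today - entry["last_affirmed"]).days > cadence_days
    ]
```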
3. Create a retirement readiness checklist.
Before decommissioning anything, you need to be able to answer: What does this do? Who depends on it? What would break if it stopped? Is there a replacement, or is the function itself no longer needed? This sounds obvious, but most organizations have no formal process for answering these questions. They find out the answers the hard way.
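Those four questions can be made explicit gates rather than tribal knowledge. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class RetirementChecklist:
    """The four checklist questions as explicit gates. Every field
    defaults to False: 'unanswered' is the starting state, and the
    burden of proof is on answering, not on assuming."""
    purpose_documented: bool = False     # what does this do?
    dependents_enumerated: bool = False  # who depends on it?
    blast_radius_assessed: bool = False  # what would break if it stopped?
    replacement_confirmed: bool = False  # replacement exists, or function is obsolete

    def ready_to_decommission(self) -> bool:
        return all(vars(self).values())
```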
4. Treat retirement as a first-class engineering activity.
Decommissioning a service cleanly — migrating its dependencies, archiving its data, removing its infrastructure, documenting what it did and why it was retired — takes real engineering effort. It should be estimated, planned, and celebrated the same way a new feature is. “We retired four internal services this quarter and reduced our monthly infrastructure cost by 15%” should be a headline in your engineering all-hands. Right now, it usually isn’t even tracked.
5. Apply the inventory lens to AI-generated code specifically.
When Claude Code generates a new tool, service, or automation, the question isn’t just “does this work?” It’s “are we prepared to own this?” That’s a fast conversation, not a blocker — but it needs to happen. If the answer is “it’s a throwaway script,” treat it as one: don’t deploy it to production infrastructure, don’t wire it into other systems, and document that it’s ephemeral. If the answer is “this is going into production,” then it needs the same ownership, observability, and retirement instrumentation as anything else.
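For the throwaway case, one possible convention is an expiry header that automation can check, so a script’s ephemeral status lives in the script itself. The header format and function here are an invented example, not an established standard.

```python
import re
from datetime import date

# Convention sketch: one-off scripts carry a '# EXPIRES: YYYY-MM-DD'
# header near the top, and a scheduled job flags anything past its date.
EXPIRES = re.compile(r"^#\s*EXPIRES:\s*(\d{4})-(\d{2})-(\d{2})", re.MULTILINE)

def script_expired(source: str, today: date) -> bool:
    """True if the script's EXPIRES header has passed -- or if it never
    declared one. A script nobody marked as permanent should not
    quietly become so."""
    m = EXPIRES.search(source)
    if m is None:
        return True
    year, month, day = map(int, m.groups())
    return today > date(year, month, day)
```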
The Opportunity
I want to be clear about what I’m not saying. I’m not saying AI-assisted development is a trap. I’m not saying the velocity gains aren’t real or aren’t worth pursuing. They are.
What I’m saying is that the organizations that will actually sustain high velocity are the ones that pair their AI-accelerated creation practices with equally deliberate ownership practices. The constraint on software development is shifting from “how fast can we build?” to “how much can we responsibly own?” The teams that recognize this early and build the discipline to manage it will have a genuine competitive advantage — not just in cost, but in the clarity and focus that comes from a codebase where everything in production is there on purpose.
The best part is no part. The best service is the one you don’t have to run. The best code is the code that solved the problem so well you were able to delete it.
That’s not doom. That’s good engineering.
Fred Smith is a TechOps and AI Engineering Lead with 25 years of experience in infrastructure and software development. He writes about sustainable engineering practices at the intersection of AI and operational reality.