Vibe coding is in the rear view mirror
Why “vibe coding” more accurately describes the last 20 years of engineering
Seems like the collective pragmatic CTO mindshare hit the AI tipping point as we all had our holiday break to close out 2025. AI maximalists wandering the software tundra shouting like John the Baptist in a Patagonia vest on a Claude Max bender had been foretelling it, sure, but many leaders like myself contextualized it as greenfield novelty: toys for side projects while the adults ran the business. At best it was a helpful pairing partner for senior devs; at worst, a slop injector into our delivery pipelines, tearing down product quality and pace and pissing off the senior devs who know better.
But then I had the chance to play and see how much progress had been made in the models, agentic workflows and AI native IDEs since my last real check-in. I was genuinely mind-blown by the quality and speed improvements. This was the tipping point. I knew that if teams didn't make the shift from side-project dabbling to full engagement, they would be lapped. I've been heads down in it every day since and came to the realization that the phrase "vibe coding" is being used to describe AI-driven development as if it were reckless improvisation.
Ironically, I think “vibe coding” better describes the last few decades of engineering than what’s emerging now.
There are plenty of others writing about the hands-on techniques, skills, latest apps and journeys to become a Principal Markdown Engineer, but I wanted to touch on the paradigm shift and what I've noticed is an increasingly obvious misnomer in "vibe coding". Let's compare paradigms between AI native and the prior most modern engineering practice, "Cloud Native".
Best Practice Alignment
Cloud Native
Open up your multi-front war around continuous delivery, TDD, IaC, local dev vs integrated environments, strict vs passive conventions, who tests what, branch management, observability, blue-green or canary, or God help us release trains, documentation, explainability, refactor vs rewrite, micro vs macro services vs monolith, to monorepo or not …
These are alignment meetings, brown bags, team all hands and then more realignment meetings. They are expressed at best as onboarding wikis and READMEs that you beg teams to align to, get frustrated when they don’t and watch the tech debt pile up if you actually want to ship software. You’re managing a constant flow of pragmatic compromise to keep the business moving forward and you do your best to manage the balance.
AI Native
Encode your engineering biases as executable context. Not slides. Not tribal norms. Not onboarding decks. Enforced context. Turn talk tracks into markdown, dead wikis into living enforced context.
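To make that concrete: the "executable context" I mean is typically a plain instruction file the agent reads on every task, such as an `AGENTS.md` or `CLAUDE.md` at the repo root (those filenames are conventions popular agent tools look for; the specific rules below are purely illustrative, not prescriptive):

```markdown
# Engineering conventions (enforced context)

- Every change to an endpoint ships with an integration test; run the full
  suite before committing and never commit on a failing build.
- Infrastructure changes go through IaC in `infra/`; never mutate cloud
  resources by hand.
- Prefer small, reviewable changes; if a diff sprawls, split it.
- New services emit structured logs and expose a health check endpoint.
```

The point isn't the particular rules, it's that these sentences are now read and acted on at the start of every task rather than buried in a wiki nobody opens.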
This feels like therapy to a weary tech leader’s soul. “Best practice” finally has meaningful and sustainable weight, metaphorically and technically speaking, in your engineering system.
Tech Debt
Cloud Native
Tech debt is in the eye of the beholder; it's felt mostly in implicit ways. Time to deliver changes to prod keeps rising, path-to-production pain drives angst, but the team says they can always fix that by taking an axe to the best practice alignment. The finger pointing starts and then ultimately the business is put into a hostage crisis: "if you really want this done right then I need to rebuild all of this".
The cost to "fix", or more accurately refinance, tech debt is significant. It's a running aggregation of each day's compromise, where any one in isolation may have been the "right" call, but now you are backed into a corner and you either double down, pay down or invest in the purist payoff plan.
But make no mistake, you have few scalable defenses to keep this dynamic from repeating over and over.
AI Native
Being able to bring explicitness to what was implicit and tribal should certainly be a welcome change in technique, but it's a superpower because it scales. It scales because it's not dependent on human capacity and non-deterministic application. In my article on the transformation triad I talk about how you must make the right thing the easy thing, then make the easy thing the habitual thing. Despite those points having "easy" in the phrasing, making that happen is anything but easy, and that's because humans are inherently complex and highly constrained.
Agents are only as difficult and complex as you define them, and only theoretically constrained by the compute you feed them. That's an operating primitive whose scale fundamentally changes what's possible, over what time horizon and with what resources. We've never had a truly effective vehicle for solving coordination and motivation costs; we do now.
The CTO instinct of "what can I do with a team of a dozen?" has turned into "what could I accomplish with a team of two or three?"
Explainability
Cloud Native
The more shared understanding you want, the more human effort you must spend creating and consuming non-user-facing artifacts, effort diverted from building the product itself. So you inevitably make daily tradeoffs between the two. Bias towards execution and you lean on those who already have the understanding, settling for imbalanced utilization and frustration from those on the sidelines watching or working on low-value things. Bias towards shared understanding and you get more balanced utilization but sacrifice near-term velocity and perhaps quality.
These are poor choices.
AI Native
Explainability is not limited by human cognitive load or tribal effort, instead it’s democratized through a durable and evolving instruction set in the codebase/filesystem itself. Humans are no longer the bottleneck of contribution or consumption. Use and interrogate the system to understand and ultimately contribute to the system.
In TDD we used to talk about the value of self-documenting code, and that tests were the best form of it. We're now in a TDD renaissance.
What used to take a miraculous spiritual revival to get strong TDD adoption is now a few lines in a markdown file. Entire test pyramid alignment, a few more. Durable. Always enforced, always executed, results honored.
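For instance, those "few lines" might look something like this in an agent instruction file (an illustrative sketch, not lifted from any particular team's setup):

```markdown
## Testing

- Practice TDD: write or update a failing test before writing
  implementation code.
- Follow the test pyramid: default to unit tests, add integration tests
  at service boundaries, reserve end-to-end tests for critical user flows.
- Run the full suite before declaring a task done; never skip, disable
  or weaken a failing test to make it pass.
```

A paragraph like that, read and obeyed on every single task, is the "miraculous spiritual revival" compressed into configuration.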
My therapy bill is nose-diving as I type.
The reward was the vibes we made along the way
Let’s not kid ourselves, we’ve been swimming in vibes for decades.
Vibes about code quality, testing discipline, architectural purity, what "good" was, what "done" was, who works well with whom, the codebase no dev wants to touch, etc. I think in large part our defensiveness about those vibes is the root of the skepticism about AI native workflows, because they directly confront the vibe reality.
Now to be fair, there’s a lot we’re discovering and learning about the AI native workflow, what works and what doesn’t, what’s fun, painful, scales or tanks. But the compounding pace of innovation and quality in this space is undeniable.
The right question every business should be asking itself is not "what are all the correct tooling and implementation choices to make today?"; it's "how do we urgently orient ourselves around a new engineering paradigm that puts us on the right curve?" A curve where we can quickly pivot, ideate and learn, scale much more efficiently, and massively shorten feedback loops and time to market.
The efficiency, velocity and scale gains to be had here for most businesses in many contexts are measured in orders of magnitude, not increments. If you aren't re-evaluating aged assumptions through this new paradigm, you're doubling down on legacy thinking, and while that may feel like maturity, I think it's more fundamentally a misplaced trust in, and defense of, the "vibes" you've previously known.