Why context, not code, is the moat in the AI era

Every engineer can now write code at superhuman speed. The advantage shifts to whoever actually understands the problem they're trying to solve.

April 30, 2026 · 11 min read

A few months ago I sat with a CEO who had just shipped a feature his engineering team had been arguing about for four months. The argument was real and the engineers were senior. The disagreement was about which AI architecture to use for a new search experience — vector vs. hybrid, agent vs. pipeline, build vs. buy.

He told me his junior product manager had ended the debate by spending one Saturday afternoon with Cursor and shipping a working prototype of all three approaches. Not pretty, not production-ready, but functional enough to demo. Monday morning the team picked the right one in twenty minutes.

The CEO told me this story expecting me to be impressed by the prototype velocity. What surprised me was where his attention actually was.

“The thing I keep thinking about,” he said, “is that we wasted four months because the engineers were arguing about implementation details and the only person who knew which approach would actually fit our customers was the PM. We didn’t have a code problem. We had a context problem.”

That conversation has been on my mind. I think it might be the most important thing I’ve heard about AI all year, and it has very little to do with AI itself.

What the AI shift actually changed

The conventional framing is that AI dramatically lowers the cost of writing code. That’s true but undersells what’s happening. The cost of code is not just lower. It’s becoming approximately zero in a way that changes the structure of the work.

When code was expensive, you had to be careful about what you built because building was the bottleneck. The constraint forced a particular kind of discipline: think hard, scope well, choose carefully, ship slowly. Most engineering culture, most engineering leadership, and most of the tooling we built up over twenty years assume that constraint.

That constraint is dissolving. I don’t think we’re at the end state yet, but the trajectory is clear. A senior engineer with current AI tooling can produce in a day what used to take two weeks. A motivated PM with Cursor can ship a prototype that previously required an engineering ticket. The bottleneck has moved.

The new bottleneck isn’t writing code. It’s knowing what to write.

The context I’m talking about

I want to be precise about what I mean by context, because the word does a lot of work and I’ve seen it abused.

I don’t mean documentation. I don’t mean RAG over your codebase. I don’t mean prompt engineering. Those are all useful, but they’re surface manifestations of something deeper.

Context is the accumulated, often unwritten understanding of:

  • Why the system was built the way it was, including the constraints that no longer apply but shaped the architecture
  • Which customers care about which features, and which features they say they care about but actually don’t
  • Which past decisions were experiments that worked, which were experiments that failed, and which were just defaults that nobody had time to revisit
  • Which team members are the actual decision-makers on which questions, regardless of org chart
  • Which integration points are fragile in ways the test suite doesn’t capture
  • Which kinds of failures are acceptable to which stakeholders

This is the kind of knowledge that lives in the heads of senior people, gets transmitted in 1:1s, surfaces in retros, and never makes it into a Notion doc. It’s the thing a new senior engineer takes nine months to acquire even with good onboarding. It’s also the thing that makes the difference between a feature that ships and works and a feature that ships and gets quietly walked back two months later.

Why this is the moat

For most of the last twenty years, the durable competitive advantage in software was being able to hire and retain engineers who could write good code. The companies that did that well outcompeted the ones that didn’t. The cost of code was high enough that this difference compounded into product velocity, which compounded into market position.

That advantage is eroding. Not gone — code quality still matters and system design still matters — but the gap between a great engineer and a competent one with AI assistance is narrower than it was. The advantage that compounds the most isn’t being eroded by AI, though. It’s being concentrated by it.

The engineers who win in this environment are the ones who know what’s worth building. The teams that win are the ones whose engineers have enough context to make those calls quickly. The companies that win are the ones who have invested, over years, in building the institutional context that lets their teams make good decisions fast.

This is not a new idea. Tom DeMarco was writing about it in the 1980s. What’s new is that the constraint has flipped. Code used to be the scarce resource. Context is now.

What this means for how I run a team

I’ve been running engineering teams for a decade, and over the last eighteen months the rules I used to apply have shifted under me. A few things I’m trying:

Tenure as a leading indicator, not a lagging one. I used to think of long engineer tenure as a result of having done other things right — culture, comp, work that mattered. I now think it’s also a cause of the things I want. The engineer who’s been on the codebase for three years can move ten times faster than the equivalent engineer hired last quarter. With AI tooling, that ratio gets bigger, not smaller, because the experienced engineer can deploy AI usefully in places where the new engineer doesn’t even know there’s a decision to be made.

I’m now willing to pay more to keep an engineer for a fourth year than I would have been to hire a stronger engineer at year zero. That’s a different math problem than I was solving five years ago.

Documentation as a forcing function for legibility. Not for the AI’s benefit, though that’s a side effect. For the team’s. Writing down why a decision was made, who it affects, and what the tradeoffs were once felt like overhead. It now feels like the thing I’m investing in. When the documented context is good, both the humans and the AI tooling can work from it. When it’s bad, only the humans can compensate, and only the senior ones, and only at the cost of their attention.
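In practice, the three questions in that paragraph can be captured in a lightweight decision-record template. This is a hypothetical sketch, not a prescribed format; the headings are illustrative:

```
# Decision: <short title>
Date / Owner: when the call was made, and by whom

Why: the problem, and the constraints that forced a choice
Who it affects: the teams, customers, and systems touched
Tradeoffs: what we gave up, and what would make us revisit this
```

The specific headings matter less than the habit: the questions get answered in writing while the context is still fresh, so both the humans and the tooling can work from it later.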

Hiring for taste over throughput. A junior engineer with good taste who uses AI well will outproduce a senior engineer with mediocre taste who’s still writing everything by hand. The throughput gap that used to favor seniority is closing. The taste gap is the one that matters now. I look for people who can articulate why they made a particular technical choice, not just whether they made it. The articulation is the proxy for taste.

Embedded models over project models. I’ve been making this argument for ten years for unrelated reasons. The new reason: an embedded engineer is a context-acquisition machine. Every standup, every PR review, every casual Slack thread is an opportunity to build the model of how the system works and why. A project engineer doesn’t get that. They ship what they were scoped to ship and leave.

In a world where context is the moat, an embedded engineer is acquiring something durable that outlasts any specific feature they ship. A project engineer is producing output that is becoming commoditized.

What this doesn’t mean

I want to be careful here, because there’s a version of this argument that becomes a weak excuse for not investing in technical excellence. That’s not what I’m saying.

Code quality still matters. System design still matters. The ability to actually ship working software at scale still matters. AI tooling makes some of these things easier, but it doesn’t make them irrelevant, and in some cases it makes them more important — a system that’s been over-built for ease of AI iteration may be harder to operate or extend than one designed by an engineer who understood the constraints.

What I’m arguing is narrower: the relative value of writing the code is dropping, and the relative value of knowing what code to write is rising. Both still matter. The mix is shifting.

The CEO I started with

A few weeks after that conversation I asked the CEO what he was doing differently. He told me he’d reorganized his engineering meetings. Engineering used to spend an hour a week on architecture review. Now they spend an hour a week on customer review — specific customer use cases, specific feedback, specific decisions made because of that feedback. Architecture review still happens, but it’s been compressed to fifteen minutes and is mostly process.

“The architecture is the easy part now,” he told me. “I have eight engineers who can produce good architecture. I have two who can tell you which architecture our customers will actually use, and they’re so much more valuable than they were a year ago that I can’t quite believe it.”

That’s the shift. The work didn’t disappear. The leverage moved.


Federico Ramallo is the founder of Density Labs and the author of The Invisible Distance. He hosts the PreVetted Podcast and writes about cross-border engineering, founder operations, and team design.