
Stop Letting AI Agents Guess Where Code Goes
A practical guide to structuring your codebase so AI agents stop creating architectural debt.

Published Feb 3, 2026
Part 2 of 5: Organizational Structures for AI-Native Development
Part 1 established that "epic-sized" work units create variance explosions. The standard response from engineering leaders is: "We need better specifications upfront. AI will execute them, and ambiguity will vanish."
This is the new gospel of spec-driven development. It's also wrong: not because AI can't follow instructions, but because it rests on a fundamental misunderstanding of the new human-AI bottleneck.
In a recent two-day hackathon, we found that code generation is no longer our constraint.
"The code is written in minutes—usually 15—but the rest of the process takes days."
Code that takes 15 minutes to generate still requires review, architectural alignment, and deployment. Features completed by one person end-to-end were delivered significantly faster than those split across teams. One of our solo developers built a complete product in a month—front end, back end, and 70% test coverage. He didn't succeed because he was faster at typing; he succeeded because he had zero coordination overhead.
In the AI-native era, the "human-to-human" tax is now the most expensive part of the lifecycle.
Old-school AI needed a perfect spec because it was a literalist. Agents in 2026 are different: they are proactive. If you ask for a "dashboard," a modern agent doesn't just build a blank page; it asks you about data refresh rates, permissions models, and edge cases.
The bottleneck has shifted from writing the spec to having the answer.
When an AI agent asks, "Should this dashboard prioritize real-time latency or historical accuracy for this specific user segment?" most product people stumble. We used to rely on developers to "just figure it out" over weeks of slow iteration. Now, the AI asks that question in seconds.
If you don't have a clear Strategic Intent, you end up in a "Clarification Loop" where the AI is ready to build, but the human is still "vibing."
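To make the loop concrete, here is a minimal sketch of what gating generation on resolved intent can look like in an agent harness. Everything here is hypothetical: `askClarifyingQuestions` and `generateCode` are canned stand-ins for whatever your framework actually provides, and the dashboard questions echo the example above. The point is structural: the agent blocks on unanswered questions instead of guessing.

```typescript
interface Clarification {
  question: string; // the agent's proactive question
  answer?: string;  // filled in by a human with Strategic Intent
}

interface BuildRequest {
  goal: string;
  clarifications: Clarification[];
}

// Stub: in a real harness this is a model call. Canned so the sketch runs.
async function askClarifyingQuestions(goal: string): Promise<Clarification[]> {
  return [
    { question: `For "${goal}": prioritize real-time latency or historical accuracy?` },
    { question: `Who can see this data? Describe the permissions model.` },
  ];
}

// Stub: in a real harness this is the code-generation call.
async function generateCode(req: BuildRequest): Promise<string> {
  return `// generated against ${req.clarifications.length} resolved decisions`;
}

// Generation is gated on resolved intent, not on a longer spec.
async function buildFeature(
  goal: string,
  human: (question: string) => Promise<string | undefined>
): Promise<string> {
  const clarifications = await askClarifyingQuestions(goal); // takes seconds
  for (const c of clarifications) {
    c.answer = await human(c.question); // the actual bottleneck
  }
  const unanswered = clarifications.filter((c) => !c.answer);
  if (unanswered.length > 0) {
    // No Strategic Intent yet: block instead of letting the agent guess.
    throw new Error(`Blocked on intent: "${unanswered[0].question}"`);
  }
  return generateCode({ goal, clarifications });
}

// Usage: a PM who already has the answers keeps the loop at seconds, not days.
buildFeature("usage dashboard", async (q) =>
  q.includes("latency") ? "Historical accuracy; the data is billing-grade." : "Workspace admins only."
).then(console.log);
```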
Even with smart agents, "Local Inference" kills "Global Coherence."
One team experienced this when the backend agent used "push/pull sources" while the UI agent used "webhooks and imports."
Both were "smart" enough to infer their own naming conventions, but they didn't match.
When concepts have different names across the stack, the AI's "context window" fragments. This isn't a failure of AI intelligence; it's a failure of organizational nomenclature. Without a human-defined "Source of Truth," the AI's inferences will diverge, creating a "Franken-code" monster that is technically functional but architecturally incoherent.
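One lightweight fix is to make the nomenclature machine-readable and point every agent at the same file. Below is a sketch assuming a TypeScript codebase; the names (`DataSource`, `DeliveryMode`, `GLOSSARY`) are invented for illustration, and the push/pull versus webhooks/imports split is the one from the story above.

```typescript
// domain.ts: the single Source of Truth for cross-stack nomenclature.
// Both the backend agent and the UI agent import from here instead of
// inferring their own terms for the same concept.

// Canonical concept: a data source with a delivery mode. Without this
// file, one agent said "push/pull sources" and the other said
// "webhooks and imports" for the same idea.
export type DeliveryMode = "push" | "pull";

export interface DataSource {
  id: string;
  name: string;
  mode: DeliveryMode; // never "webhook" or "import"; those synonyms are banned
}

// A glossary agents can be prompted with ("use only these terms"):
export const GLOSSARY = {
  DataSource: "Any external system we ingest data from.",
  push: "The source calls us (what the UI agent called a 'webhook').",
  pull: "We poll the source (what the UI agent called an 'import').",
} as const;
```

Both agents get this file in their context, and a CI check that greps for the banned synonyms can keep future inferences from drifting apart again.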
The traditional ratio of one PM to ten engineers worked when implementation was slow. You wrote a spec, waited three weeks, and validated it at a demo.
In AI-native development, code production is abundant, but clarity is scarce. The organization cannot "absorb" the speed of AI output. PMs now have to:
Iterate with AI to resolve the "Clarification Loop" before the developer starts.
Continuously Validate daily, because a week’s worth of "wrong direction" can now result in thousands of lines of code.
Orchestrate Absorption: Sales, Marketing, and Legal are now the "slow" parts of the company.
This is why PM ratios are shifting toward 1:3 or 1:5. You need more "intent-definers" to keep up with the "code-producers."
Some features ship in three days; some take three weeks. The mistake is trying to eliminate this variance with more upfront documentation.
Straightforward features: Heavy specs are "process theater." Let the AI and the developer align and ship.
Exploratory features: Specs are waste. The "correct" approach only emerges once the AI starts building and revealing the hidden technical debt or data mess.
Stop writing specs to eliminate variance. Start using AI to reach the "Clarity Threshold" faster.
Coming in Part 3: When code is abundant, where does value concentrate? The economic equation is shifting. Developers aren't paid to "write code" anymore—they are paid to be Architectural Orchestrators.
