Published Jan 27, 2026
Part 1 of 5: Organizational Structures for AI-Native Development
Here's an uncomfortable statistic: 95% of AI initiatives fail to deliver enterprise impact.
The usual suspects get blamed—immature technology, lack of data quality, insufficient compute resources. Engineering leaders nod along. CTOs commission another study. The pattern repeats.
But there's a more fundamental problem hiding in plain sight: process debt.
Process debt is the organizational equivalent of technical debt. It's the accumulation of workflows, ceremonies, and coordination mechanisms designed for a world that no longer exists.
And just like technical debt, it compounds.
Every sprint planning session optimized for human typing speed. Every story point estimation that assumes code scarcity. Every daily standup built around task-level status updates.
These aren't just inefficient in the AI era. They actively prevent AI from delivering value.
Consider the timing: When your constraint was how fast developers could type, optimizing around two-week batch cycles made perfect sense. You needed those sprint boundaries to coordinate who was building what. You needed story points to calibrate effort across a team. You needed daily standups to surface blockers before they cost you days of progress.
When AI can generate a complete feature in an afternoon, those same ceremonies create artificial wait states. The task sits in "Ready for Development" for five days waiting for sprint planning. The implementation takes three hours. The deployment waits another four days for the next release window.
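The arithmetic behind those wait states is worth making explicit. A rough sketch, using the hypothetical numbers from the example above:

```python
# Flow efficiency: fraction of elapsed time spent on value-adding work.
# Numbers are the hypothetical ones from the example above: 5 days in queue,
# 3 hours of implementation, 4 days waiting for a release window.
HOURS_PER_DAY = 24  # calendar hours, since the work item waits around the clock

queue_wait = 5 * HOURS_PER_DAY      # "Ready for Development" before sprint planning
work = 3                            # AI-assisted implementation
release_wait = 4 * HOURS_PER_DAY    # waiting for the next release window

total = queue_wait + work + release_wait
flow_efficiency = work / total
print(f"Flow efficiency: {flow_efficiency:.1%}")  # ~1.4%
```

At roughly 1.4% flow efficiency, the process rather than the implementation is what determines delivery time.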
The constraint shifted. The process didn't.
Before you close this tab thinking "great, another hot take on killing all meetings," wait a second.
Not all meetings are harmful. Status meetings often are. Working sessions are essential.
The distinction matters more than most organizations realize.
Daily standups where developers report what they did yesterday? That's status overhead optimized for a two-week batch cycle. When work flowed in predictable sprints and humans were the bottleneck, those 15-minute syncs prevented coordination failures that could cost days.
With AI acceleration, that same standup becomes theater. "Yesterday I wrote a spec. Today I'll let AI implement it. No blockers." What did we actually coordinate? Nothing that couldn't be handled asynchronously in five seconds.
That's the meeting to kill.
Now consider this: Three senior developers spend four hours in a room together, sharing a screen, architecting a token management system with AI. They debate whether the approach makes sense. They course-correct in real-time. They make architectural decisions that affect multiple teams.
No one reports what they did yesterday. Code gets written. Decisions get made. Understanding emerges.
That's a working session. That's high-bandwidth collaboration. That's the meeting to aggressively invest in.
The ceremonies of Scrum as seen in many companies (the Daily Stand-up, the Sprint Planning meeting, Story Point estimation) are status coordination mechanisms. They're predicated on human-scale timelines and the scarcity of code production.
This isn't about Agile being "wrong." It's about specific Scrum rituals being optimized for constraints that no longer bind.
The philosophy—respond to change, working software over documentation, individuals over process—remains more relevant than ever. The rituals designed to implement that philosophy need fundamental rethinking.
Your team sits down for sprint planning. You've got a templating system to build—traditionally a month-long project involving database schema design, API endpoints, React components, test coverage, and documentation.
Your Scrum Master starts: "Let's break this down into tasks."
And that's where the friction begins.
In the pre-AI world, this decomposition was essential. You needed to coordinate who builds the database layer, who writes the API, who implements the frontend. Task-level granularity enabled parallel work and clear ownership.
But when AI agents can implement the entire stack coherently in hours, the decomposition becomes overhead.
You're not parallelizing human effort anymore—you're introducing coordination latency.
What's emerging across organizations: They're keeping Agile's coordination layer while dramatically decreasing work granularity. That month-long templating system? It's now a single work unit. The token management process? Also a single unit. What used to be called "features" or "epics" now function as atomic deliverables.
This isn't only about working faster—it's about working at the right level of abstraction.
Some of the ceremonies remain because coordination still matters. But the atoms changed because the means of production changed.
This creates an immediate problem: variance explosion.
Traditional sprint planning assumed relatively predictable effort profiles. A task might be 2 points or 8 points, but rarely 2 versus 80. Teams calibrated around this predictability.
Epic-sized or feature-level work units introduce compounding variance:
One templating system might take three days because initial clarity is high and iteration converges quickly. Another stretches to three weeks because requirements emerge through building, requiring continuous refinement of both spec and implementation.
This isn't a bug—it's the nature of software development. But it breaks the fixed-timebox model that Agile relies on.
You can't plan a two-week sprint when you don't know if the work will take three days or three weeks.
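A toy simulation makes the planning problem concrete. The distributions below are illustrative assumptions, not measured data: task-level work clustering around one to three days, epic-sized units spanning three days to three weeks as described above.

```python
import random

random.seed(42)

def simulate(duration_fn, n=10_000):
    """Sample n work items and return (mean, standard deviation) in days."""
    samples = [duration_fn() for _ in range(n)]
    mean = sum(samples) / n
    sd = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    return mean, sd

# Illustrative assumptions: tasks take 1-3 days, epics take 3-21 days.
task_mean, task_sd = simulate(lambda: random.uniform(1, 3))
epic_mean, epic_sd = simulate(lambda: random.uniform(3, 21))

print(f"task-sized: mean {task_mean:.1f}d, sd {task_sd:.1f}d")
print(f"epic-sized: mean {epic_mean:.1f}d, sd {epic_sd:.1f}d")
# The range of a single epic-sized unit (3-21 days) is wider than an
# entire two-week sprint, so no fixed timebox can reliably contain it.
```

The point isn't the specific numbers; it's that when the spread of one work unit exceeds the length of the timebox, the timebox stops being a useful planning instrument.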
The response from most organizations? Try to force the work back into predictable chunks. Write more detailed specs. Add more planning meetings. Decompose further.
This is fighting the constraint rather than adapting to it.
With AI, iteration never stops—it just shifts location based on initial clarity. High-clarity work iterates rapidly: spec refines with AI, then implementation iterates against that spec, back and forth until convergence. Low-clarity work iterates everywhere simultaneously: spec, implementation, and integration all evolve together because you learn what you need by building it.
Both are continuous iteration. Both are valuable. The difference is variance: high-clarity work converges predictably in days. Low-clarity work explores unpredictably over weeks.
The variance question isn't "which mode of work" but "how much do we know before we start, and how much will we learn by building?"
The pattern emerging across successful AI-native teams:
Stop organizing around two-week sprints. Work flows continuously from intake through refinement to development to verification. Work-in-progress limits replace sprint boundaries. Upon completing one work item, the next highest-priority item pulls from the backlog immediately—no artificial delays.
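The pull mechanics described above can be sketched in a few lines. This is a minimal illustration with hypothetical names and limits, not a real tool's API:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class FlowBoard:
    """WIP-limited pull system: the limit replaces the sprint boundary."""
    wip_limit: int
    backlog: deque = field(default_factory=deque)  # priority-ordered intake
    in_progress: list = field(default_factory=list)

    def add(self, item: str) -> None:
        self.backlog.append(item)
        self._pull()

    def complete(self, item: str) -> None:
        self.in_progress.remove(item)
        self._pull()  # next highest-priority item pulls immediately

    def _pull(self) -> None:
        # Pull work whenever capacity frees up—no waiting for a planning ceremony.
        while self.backlog and len(self.in_progress) < self.wip_limit:
            self.in_progress.append(self.backlog.popleft())

board = FlowBoard(wip_limit=2)
for item in ["token management", "templating system", "audit log"]:
    board.add(item)
print(board.in_progress)  # ['token management', 'templating system']
board.complete("token management")
print(board.in_progress)  # ['templating system', 'audit log']
```

Completing an item is what triggers the next pull; there is no planning event standing between "done" and "started."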
Preserve integration cadence. Weekly syncs become sense-making sessions rather than planning meetings: "What was learned? What surprised us? Do these pieces compose coherently?" This separates work intake (now continuous at varying clarity levels) from team synchronization (now regular but lightweight).
Distinguish coordination needs by clarity level. High-clarity work gets solo iteration—one developer working with AI, tight feedback loops, rapid convergence. Low-clarity work gets mob sessions—multiple developers iterating together because coordination prevents divergence.
The rhythm shifts from synchronized batch release to continuous flow with quality gates. Some features flow through in days when clarity is high. Others require weeks as clarity builds through iteration. Both are normal.
Process debt isn't a minor optimization problem. It's the primary reason AI initiatives fail to deliver impact.
You're trying to run AI-accelerated development through coordination mechanisms designed for human-speed development. It's like trying to use traffic flags designed for horse-drawn carriages to manage highway traffic.
The constraint was typing speed. Now it's clarity of intent and quality of architectural boundaries.
Your processes need to optimize for the new constraint, not the old one.
This requires more than swapping tools. It requires rethinking what meetings are for, what work granularity makes sense, and how variance gets managed.
This is Part 1 of a 5-part series on building AI-native engineering organizations.
Coming in Part 2: Epic-sized work creates variance explosions. Why spec-driven development does not solve the problem. What actually works for team alignment.