
In the late 18th century, Thomas Malthus looked at a field of wheat and a growing population and saw a trap. Every biological system, he argued, has a ceiling — a point at which the environment can no longer absorb more growth without beginning to collapse. He called it carrying capacity. His math was sound. What he got wrong was assuming the ceiling was fixed.
He was not the last person to make that mistake.
In 2026, carrying capacity has migrated from biology textbooks into boardrooms, city planning offices, and the quieter conversations people have with themselves at midnight about why they feel perpetually behind. The physics of the wall are the same everywhere. Growth follows an S-curve, not a straight line, and the relationship has a precise shape:

dN/dt = r · N · (1 − N/K)

where N is the system's current size, K is its carrying capacity, and r is the intrinsic growth rate.
The term (1 − N/K) is the friction. At 10% of capacity, it is nearly invisible. At 90%, it is nearly everything. The closer a system gets to its limit, the more each additional unit of progress costs — not linearly, but with sharply increasing resistance. This is why the last 10% of a hard problem can take as long as the first 90%. At some point, the treadmill doesn’t go faster. It just gets heavier. And to change that, you cannot work harder inside the system. You have to change K itself.
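The friction term can be made concrete with a short simulation. This is a sketch, not anything from the piece: the growth rate, capacity, and step size are arbitrary illustrative values, and the `simulate` helper is mine.

```python
# Sketch of the logistic equation dN/dt = r*N*(1 - N/K), stepped
# forward with simple Euler integration. Parameter values (r, K, dt)
# are illustrative, not taken from the article.

def friction(n, k):
    """The (1 - N/K) term: the fraction of capacity still free."""
    return 1.0 - n / k

def simulate(n0=1.0, r=0.1, k=1000.0, dt=1.0):
    """Return the time at which each milestone fraction of K is reached."""
    n, t = n0, 0.0
    milestones = {0.10: None, 0.50: None, 0.90: None, 0.999: None}
    while n < 0.999 * k:
        n += r * n * friction(n, k) * dt
        t += dt
        for frac in milestones:
            if milestones[frac] is None and n >= frac * k:
                milestones[frac] = t
    return milestones

times = simulate()
early = times[0.90] - times[0.10]   # climbing from 10% to 90% of K
late = times[0.999] - times[0.90]   # grinding from 90% to 99.9% of K
print(times, early, late)
```

By the near-symmetry of the S-curve, the grind from 90% to 99.9% of capacity takes roughly as long as the entire climb from 10% to 90%: the friction term dominates long before the system actually touches the ceiling.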
The standard response to hitting a ceiling is efficiency. Squeeze more out of what you have. Do more with less. This is sensible advice, and it works, right up until it doesn’t. What we keep discovering, across domains that have nothing obvious in common, is that efficiency harbors a paradox. Make a system run better, and you don’t reduce the load — you expand the empire. You raise the ceiling, yes, but you also immediately begin filling the new space, and the wall reappears, further out but structurally identical. The question worth asking is whether we are solving the problem or just rescheduling it.
The most visible collision with carrying capacity right now is happening in AI. The popular assumption is that the constraint on artificial intelligence is intelligence itself — that we are racing toward some conceptual limit of what machines can reason through. The actual constraint, in 2026, is more mundane and more serious: it’s the electrical grid. We can design chips faster than we can string power lines. The data centers required to train and run frontier models are consuming power at a rate that is straining regional infrastructure, and the gap between compute demand and energy supply is widening, not closing.
The industry’s response has been to innovate at the hardware level — small modular reactors, optical computing, chip architectures that deliver more computation per watt. These are genuine breakthroughs. A current-generation AI chip is roughly 25 times more energy-efficient than its predecessor from five years ago. And yet total energy consumption by AI infrastructure has not fallen. It has risen sharply. This is Jevons Paradox in its clearest modern expression: efficiency lowers the cost per unit of activity, which makes more activity economically viable, which expands the total scale until the system is consuming far more than it did before the efficiency gain. The ceiling moved. The empire followed.
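The mechanism behind the paradox can be sketched with a constant-elasticity demand model. Everything here is a toy: the elasticity figure is an assumption chosen for illustration, not an empirical estimate, and `total_energy` is a hypothetical helper, not anyone's published model.

```python
# Toy model of Jevons Paradox: a 25x efficiency gain (the article's
# chip figure) lowers the energy cost per unit of compute, but if
# demand for compute is elastic enough, total energy use still rises.
# The elasticity value below is purely illustrative.

def total_energy(efficiency_gain, demand_elasticity, base_demand=1.0):
    """Energy consumed after an efficiency gain, under constant-elasticity demand.

    Cost per unit falls by `efficiency_gain`x, demand responds as
    cost**(-elasticity), and energy used is demand / efficiency.
    """
    cost_per_unit = 1.0 / efficiency_gain              # relative to a baseline of 1.0
    demand = base_demand * cost_per_unit ** (-demand_elasticity)
    return demand / efficiency_gain                    # per-unit energy did fall

baseline = total_energy(efficiency_gain=1, demand_elasticity=1.3)
after = total_energy(efficiency_gain=25, demand_elasticity=1.3)
print(after / baseline)   # greater than 1: total consumption grew
```

The dividing line is an elasticity of 1: above it, the rebound swamps the efficiency gain and total consumption rises; below it, efficiency genuinely reduces consumption. Jevons's observation about coal, echoed here for compute, is that industrial inputs tend to sit on the wrong side of that line.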
Energy grids scale like cities. Both are physical substrates that determine how much activity a system can host.
New York City has been running this experiment for over a century. Manhattan is 22 square miles — a boundary that has not changed — and yet the city should, by any Malthusian logic, have stopped growing around 1910. It didn’t, because it kept changing the variable. Air rights extended growth vertically. Digital signaling squeezed more trains into the same tunnels. Reclaiming street lanes for pedestrian plazas increased what planners call social carrying capacity without moving a single brick. Each intervention was real. Each one worked.
And each one made the city more attractive to more people, which filled the new capacity until the system was strained again. This is induced demand: the infrastructure version of Jevons. Build more, and you generate more pressure to fill it. The ceiling rose. The wall came with it.
The domain where this pattern is least examined and most consequential is the personal one. We treat our own cognitive bandwidth like an infinite resource, subject only to time management and the right productivity system. But attention, working memory, and the capacity to absorb ambient stress all follow the same S-curve. There is a ceiling, and most people in high-output environments are operating close to it most of the time.
The hacks are familiar: AI to compress reading and summarize meetings, batching decisions to reduce the overhead of context-switching, standardizing routines to lower the energy cost of choices that don’t actually matter. These work. And then the rebound effect arrives. Save two hours through automation, and you rarely use them for rest. You fill them with more input — more reading, more messages, more calls, more projects that were previously impossible to fit. The inbox creeps back toward triple digits. The calendar compresses until every hour has a name. The system returns to capacity. The Red Zone becomes the normal zone. The efficiency gain was real, but it was immediately colonized by expansion.
The lesson sitting underneath all three of these stories is the same one. Efficiency is a tool, not a solution. It is exceptionally good at one thing: buying time and space. What we do with that time and space is a different and harder question, and one that efficiency cannot answer for us.
The systems that navigate this well share something that is not technological. They build buffers deliberately and then defend them. A buffer is not waste. It is the gap between where you are and where the ceiling is, and maintaining it requires a specific kind of discipline: the willingness to leave capacity unfilled even when filling it is profitable. A system with no buffer cannot absorb variance. Every fluctuation becomes a shock.
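One standard way to see why a bufferless system shatters under variance is queueing theory. A minimal sketch, assuming a textbook M/M/1 queue (random arrivals, random service times); the rates are illustrative, not drawn from the essay.

```python
# Why a buffer matters: in a simple M/M/1 queue, the average time a
# job spends in the system is 1 / (mu - lambda). As utilization
# rho = lambda / mu approaches 1 (no buffer left), delay diverges:
# every fluctuation becomes a shock. Rates below are illustrative.

def avg_time_in_system(arrival_rate, service_rate):
    """Mean sojourn time for an M/M/1 queue (requires arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("system is saturated: delay is unbounded")
    return 1.0 / (service_rate - arrival_rate)

service = 10.0  # jobs per hour the system can handle
for utilization in (0.5, 0.9, 0.99):
    w = avg_time_in_system(utilization * service, service)
    print(f"{utilization:.0%} utilized -> avg delay {w:.2f} h")
```

Going from 50% to 99% utilization multiplies the average delay fifty-fold. The last slice of unfilled capacity is exactly the buffer that absorbs variance; fill it, and every arrival lands as a shock.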
This turns out to be an act of coordination as much as restraint. Every empty unit of capacity is a market signal. Someone will try to fill it. The buffer requires not just individual decision-making but a collective agreement about what the system is actually for.
Malthus wasn’t wrong about the trap. He was wrong about who could move the walls. But expansion is almost always easier than restraint. The hard part is deciding, once the walls have moved, whether to sprint toward the new ceiling or to stop, look at the space you’ve just created, and use some of it for something other than growth.
In economics, as in life, the most valuable resource is often the one you don’t spend.


The New Common Sense
Own Your Work. Own Your Audience. Own the Web.

Raise the Ceiling, Move the Wall
The more efficient you get, the more you consume. The more capacity you build, the more pressure you generate to fill it. Raising the ceiling doesn't solve the problem. It just reschedules it. A piece on Jevons, Malthus, and the S-curve that governs everything from AI data centers to your inbox.