
There is a certain kind of person who looks like a genius from the outside.
They start companies that move markets, send rockets into orbit, advise presidents, shape narratives. Peter Thiel tells founders that "competition is for losers" and builds a secretive data company whose entire business is selling governments and corporations the ability to see what others cannot. Elon Musk looks at rockets that everyone else assumes must cost tens of millions and quietly asks what the metal is worth. Donald Trump spends fourteen seasons on television performing the role of "America's Boss" until millions of people unconsciously accept it as fact. Curtis Yarvin writes philosophy in corporate-speak, telling a generation raised on business-school thinking that democracy is obsolete and that we should replace it with CEO monarchs.
These are not accidents. They are examples of a pattern.
They are exploiting what you might call actionable information asymmetry. They see gaps between what people think they know and what is actually going on, and they turn those gaps into money, power, or influence.
What almost no one talks about is where those asymmetries come from, and what a different kind of intelligence would do with them.
That is where Fred Rogers, a circle of kindergarteners with spaghetti and a marshmallow, and an emerging culture in web3 all quietly meet.

There is a famous team exercise called the marshmallow challenge. Four people get twenty sticks of spaghetti, one yard of tape, one yard of string, and a marshmallow, and their job is to build the tallest free-standing structure with the marshmallow on top.
You can run this exercise with all kinds of groups. Engineers. CEOs. Lawyers. Designers. The pattern is always the same.
The worst performers are recent business school graduates.
They spend a lot of time talking about the plan. They argue over the best design. They implicitly negotiate who is "in charge." Only at the end, when time is almost out, do they place the marshmallow on top. At that moment, the whole structure usually collapses.
The best performers, surprisingly, are kindergarteners.
They do not compete to be "CEO of Spaghetti, Inc." They do not give speeches. They put the marshmallow on right away and build lots of small prototypes. The tower fails, they adjust. It leans, they adjust. They learn by doing until something stable appears.
This is not a cute anecdote. It is a diagnosis.
Business education trains people to look for the single right plan, execute it with confidence, and treat that confidence as proof they are on the right track. It is a style of thinking that works reasonably well for tame, predictable problems. It fails dramatically for complex ones.
And yet many of the people who go through that training end up running companies, designing products, advising governments, funding research, and setting cultural narratives. Their mental model becomes the default.
Now add the Dunning–Kruger effect to this picture. People who know little about a domain tend to overestimate their competence, while people who know a lot tend to underestimate theirs. The less you know, the less you realize how much you do not know.
Combine a system that rewards confidence with a bias that inflates confidence most in those who should be most cautious. You get a class of leaders who are breathtakingly sure of themselves, structurally blind to their own limitations, and in charge of very complicated systems.
That is fertile ground for the Peters, Elons, Curtises, and Donalds of the world.
From the outside, Thiel looks like a contrarian visionary because he says things like "competition is for losers" while business schools solemnly teach case after case about how to compete harder.
What he is really doing is noticing a training-induced blind spot. If everyone else believes profit lies in entering proven markets and fighting for share, the person who refuses to compete and quietly builds a monopoly in a niche looks magical.
Palantir, the company he co-founded, simply turns that pattern into infrastructure. Governments and corporations already have oceans of data. They just cannot see across their own silos. Palantir integrates that data and hands them an information advantage over everyone who does not have that view. That advantage is the product.
Musk applies a similar pattern in engineering domains. He looks at rockets and batteries and refuses to accept industry assumptions. He breaks the problem down to first principles: what is this object made of, what do the raw materials cost, what physics truly constrains us. Inside that gap between assumption and reality, he finds room for radically different cost structures.
Trump's move is cruder but no less deliberate. "The Apprentice" manufactured a fictional version of him as the decisive, ultra–successful businessman. Millions of viewers consumed that character for years. When he ran for office, they were not evaluating a politician. They were voting for the person they already "knew." That gap between the constructed image and the messy reality of bankruptcies and failed ventures was a massive information asymmetry. He did not have to be good at business. He only needed people to believe he was.
Curtis Yarvin, meanwhile, dresses his political project in the language of software and shareholder value. He tells a generation soaked in corporate metaphors that democracy is obsolete and we should replace it with "CEO-monarchs" and "sovereign corporations." To people already trained to think of leadership as decisive hierarchy and of complexity as something you fix by finding the right executive, it can sound like clarity rather than regression.
In every case, the pattern is the same.
A large group of people has been trained to think in a certain narrow way. That training creates predictable blind spots. A smaller group learns to see those blind spots and turns them into levers.
From the vantage point of those still inside the training, it looks like genius.
From the outside, it is a particular kind of opportunism.
This is not a symmetrical contest between two worldviews.
The extractive pattern depends on keeping the frame narrow. If everyone could see what Thiel sees about monopolies and see the human costs of monopolistic control and see the alternatives to that system, the information asymmetry collapses. The "genius" move requires maintaining ignorance as much as it requires possessing insight.
Rogers operates from the fuller view. He can see the patterns of extraction—he understands how television manipulates attention, how adults project their anxieties onto children, how institutions prioritize efficiency over care. But he also sees what extraction must exclude to function: the inner lives of children, the structural role of emotion, the intelligence of iterative uncertainty, the creativity that emerges from safety rather than competition.
The asymmetry isn't just informational. It is epistemic. One mode of knowing maintains its power by keeping people from accessing a more complete mode of knowing.

Now, place this next to Fred Rogers, sitting across from Charlie Rose in a quiet studio.
He is not launching companies or raising funds. He is not promising to disrupt anything. He is talking, very slowly, about children, feelings, and the "gift of silence."
"Our society is much more interested in information than wonder," he says, "in noise rather than silence. And I feel that we need a lot more wonder and a lot more silence in our lives."
He does not just say this. He demonstrates it in the way he inhabits the space.
He leaves pauses that would make a producer nervous. He allows emotion to arise and be seen. He does not rush to fill every moment with words. He treats the viewer not as a consumer of content but as a person with an inner life that matters.
Rogers is not operating on a different axis of intelligence. He is operating from a wider aperture.
If the Thiel/Musk/Yarvin/Trump axis is about extracting information from noise to gain an advantage, Rogers sees in the silence. He does not just pull out facts. He pulls out meaning.
He is not interested in information asymmetry. He is interested in human asymmetry: the parts of us that have never been seen, heard, or given space. He is not building systems that hoard insight. He is building conditions where people can recognize themselves.
It is easy to dismiss this as "soft" or unrelated to the hard problems of economics and infrastructure. That is another symptom of our training. We separate "feeling" from "thinking" as if the quality of attention we bring to the world has nothing to do with the systems we design.
But what if the ability to sit in silence, to tolerate not knowing, to feel the contours of a problem before forcing a solution, is exactly what we need in order to build systems that do not keep collapsing under the weight of their own cleverness?
Most of us carry Fred Rogers as a feeling more than as a framework.
He belongs to a vague, sacred category in our minds: the kind man from childhood who made the world feel safer for half an hour at a time. For many, he sat in the space where fathers were absent, distracted, or emotionally unreachable. He talked about feelings no one else named. He modeled a kind of steady attention that most of us rarely experienced in real life.
We tend to leave him there.
In nostalgia. In tribute videos. In the part of us that says, "Wasn't it nice that someone like that existed?"
We do not usually connect him to web3, or finance, or Peter Thiel. We certainly do not sit down to design a governance mechanism and ask, "What would Fred Rogers do with this parameter?"
That is the mistake.
Rogers was not simply "nice." He was working from a rigorous philosophy of how humans learn, heal, and grow. He built a television show as a carefully crafted environment in which children could safely encounter difficult emotions, complex realities, and their own inner lives. He thought deeply about pacing, silence, repetition, eye contact, and ritual. He treated attention itself as architecture.
That is exactly the level at which we are now operating with code.
When we design token incentives, voting systems, interface patterns, and organizational rituals, we are doing what Rogers did with camera angles and pauses. We are telling people what matters. We are deciding what gets rushed and what gets room. We are encoding assumptions about what a "good" response looks like and what feelings are allowed.
Rogers's wisdom is not a sentiment we should honor in parallel to our "serious" work. It is the wider epistemic frame we could choose to build from. It can see extraction for what it is—a deliberately constrained view masquerading as the whole picture. When we say we need "Rogers in web3," we are not asking for niceness alongside rigor. We are asking: what would our systems look like if they were designed from the view that includes the patterns of information advantage, the human costs those patterns impose, and the alternatives extraction must keep invisible?
But here is the part that transforms Rogers from memory into method:
Rogers did not just think about responsiveness. He built systems around it.
His show received 15 to 30 pieces of viewer mail daily. Rogers personally read every letter. He edited and signed responses. His staff estimated he wrote between 40,000 and 200,000 letters over 31 seasons. One colleague remembered: "There are some incredible letters about how he was the only male role model in their lives—children who were abused or neglected—all they had was Mister Rogers' Neighborhood."
That is not inspiration. That is infrastructure for caring.
A blind five-year-old girl named Katie worried that the fish on the show were not being fed because she could not see him do it. She wrote to Rogers. He did not send a generic response. He changed the show. He began announcing each feeding. Then he spoke directly to her on air: "I just wanted you to know that even if I forget to feed them when we're together, I come back later and feed them, so they're always taken care of. It's good to know that fish and animals and children are taken care of by those who can, isn't it?"
One piece of mail, one concern from one child, led to a structural change in how the program operated.
When Rogers explained that Santa Claus was not real (he rejected the idea of a stranger sneaking into children's homes), he received significant angry backlash from parents. Most producers would have backed down. He did not. He prioritized honesty with children over adult comfort, even knowing it would cost him.
When Robert Kennedy was assassinated in 1968, Rogers went on air two days later in a suit instead of his cardigan. He addressed parents directly about "a disturbing time in our nation's history." When Cold War tensions raised the specter of nuclear annihilation, he created a five-episode arc in which the Neighborhood's leader considered abandoning music to have the children build weapons. The resolution: it was all a misunderstanding. Southwood was building a bridge, not bombs. He taped those episodes in the summer of 1983, months before Cold War fears crested that autumn—not because he could predict the future, but because he was attentive to the fears his audience actually carried.
This is what listening-as-governance looks like.
Rogers did not set the show once and optimize metrics. He paid attention. He adjusted. He treated viewer input not as data points but as signals that something in the system needed to shift.
If we actually stopped and designed from the place Rogers helped us inhabit as children, we would not reach so quickly for the tools we currently treat as default.
We would not begin and end with game theory.
We would reach first for social choice theory and ask basic, human questions:
When we aggregate preferences, whose voice gets washed out and whose gets amplified?
How do we prevent a system from quietly becoming dictatorial, even when it follows its own rules?
What happens to people at the edges when the "rational" outcome is chosen for the center?
Game theory treats people as payoff maximizers and asks, "Given these incentives, what will the players do?" Social choice theory treats people as citizens and asks, "Given these people and these rules, what kind of world are we creating?"
If you design from the Rogers place in you, this second question feels obviously prior.
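To make that contrast concrete, here is a minimal sketch in plain Python (the ballots are hypothetical) showing how the same voters can produce different winners under two aggregation rules. Plurality hears only first choices; the Borda count hears the whole ranking.

```python
from collections import Counter

# Hypothetical ranked ballots: each voter lists candidates from most to least preferred.
ballots = (
    [["A", "B", "C"]] * 4   # 4 voters put A first
    + [["B", "C", "A"]] * 3  # 3 voters put B first and A last
    + [["C", "B", "A"]] * 3  # 3 voters put C first and A last
)

# Plurality: only first choices count.
plurality = Counter(ballot[0] for ballot in ballots)

# Borda count: with n candidates, a ballot gives n-1 points to its top
# choice, n-2 to the next, and so on down to 0.
borda = Counter()
for ballot in ballots:
    n = len(ballot)
    for rank, candidate in enumerate(ballot):
        borda[candidate] += n - 1 - rank

print(plurality.most_common())  # [('A', 4), ('B', 3), ('C', 3)] -> A "wins"
print(borda.most_common())      # [('B', 13), ('C', 9), ('A', 8)] -> B wins
```

Six of the ten voters rank A last, yet plurality crowns A; counting full preferences surfaces B instead. Choosing the rule is already choosing whose voice counts.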
That is what my own work keeps circling back to:
In "Game Theory Assumptions That Hurt Web3," I show how rational actor assumptions and clean payoff matrices break down in the presence of fear, panic selling, loyalty, altruism, and spite. The math is not wrong. The model is too thin for the actual humans inside it.
In "Beyond Funding: Web3's Real Coordination Crisis," I argue that throwing more clever funding mechanics at a system does nothing if the underlying paradoxes and tensions are not acknowledged and held. That is Rogers again: he did not give children "better incentives" to stop feeling scared. He sat with them in fear until something else became possible.
In my tensegrity work, I refuse to flatten complex systems into one right structure. I treat tension as structural, not as a bug to be eliminated. Rogers did the same with emotion: anger, sadness, jealousy, and joy all had a place in his neighborhood. He did not optimize for one prevailing feeling. He designed for many feelings to coexist without the system snapping.
And then there are people far more decorated than I am who echo the same pattern.
Elinor Ostrom, who won a Nobel Prize for showing that communities can govern shared resources without a Leviathan, didn't start from "how do we get individuals to maximize payoffs." She started from long-lived real communities and asked what kinds of rules, relationships, and feedback loops actually kept forests, fisheries, and irrigation systems alive. Her principles of polycentric governance and "clearly defined boundaries, conflict-resolution mechanisms, and collective choice arrangements" are what it looks like to take Rogers's ethic into institutional design.
Audrey Tang in Taiwan helped build digital democracy tools that do not just count votes faster; they surface consensus without erasing difference. Platforms like vTaiwan and pol.is are spaces designed so that people can see where they agree and disagree, and where new options might emerge. That is Rogers at civic scale: making space where everyone in the "neighborhood" feels seen, and where difficult topics can be explored without shame or domination.
Gitcoin's use of quadratic funding is another concrete example. It encodes a very Rogers-like intuition into math: many small, sincere signals of support ought to matter more than a few large, self-interested ones. In other words, the quiet voices, if there are enough of them, should outweigh the whales. That is not just mechanism design. It is a choice about what kind of "neighborhood" the protocol wants to be.
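That intuition has a precise mathematical core: under quadratic funding, a project's raw match scales with the square of the sum of the square roots of individual contributions. A minimal Python sketch (illustrative numbers only, not Gitcoin's production mechanism, which also scales raw scores down to fit a finite matching pool):

```python
from math import sqrt

def qf_match(contributions):
    """Raw quadratic funding score: (sum of square roots of contributions)^2.
    Real rounds scale these raw scores to a finite matching pool; this
    sketch shows only the core formula."""
    return sum(sqrt(c) for c in contributions) ** 2

whale = qf_match([10_000])      # one $10,000 donor   -> 100^2        = 10,000
crowd = qf_match([100] * 100)   # 100 donors at $100  -> (100 * 10)^2 = 1,000,000
print(whale, crowd)             # the broad base earns 100x the raw match
```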
I am doing the same kind of thing in a different language:
Bringing social choice into conversations that would otherwise reduce everything to game theory
Bringing tensegrity into spaces that would otherwise collapse into centralization vs decentralization shouting matches
Bringing shadow hierarchies into view, where people want to believe a flat org chart tells the whole story
Mr. Rogers is, in one sense, the emotional ancestor of this work.
He shows, in his body, pacing, and voice, what it looks like to center dignity and wonder without losing clarity. My writing attempts the same thing in the domains of mechanism design, governance, and organizational architecture.
The proposal here is simple and radical:
When we design voting systems, let us bring social choice theory into the room alongside game theory. Let us ask who disappears when we choose efficiency over representation.
When we design DAOs and protocols, let us build in Rogers-like responsiveness: mechanisms to hear individual concerns, willingness to change course based on what we learn, and real infrastructure for caring that goes beyond mission statements.
When we talk about "rational" governance, let us remember that Rogers's way of knowing is no less rational. It is rational about more of the human being.
If we did that, "Mr. Rogers in web3" would stop sounding like a joke and become a design requirement.
When you look across examples, another pattern emerges.
The business school pattern and the authoritarian pattern treat complex, evolving problems as if they were simple or merely complicated. Find the right plan, pick the right leader, execute. If it fails, the answer is usually more of the same: more decisive action, more centralization, more confidence.
Kindergarteners instinctively treat the marshmallow problem as something you feel your way through. You try, you learn, you adjust. You do not assume you know in advance exactly how the structure will behave.
Mathematicians hit something similar with their hardest open problems. For decades, the image of progress was the solitary genius proving a deep conjecture alone. Then projects like Polymath opened hard problems to messy, public, collective work. Many minds, many partial insights, much visible failure. The results have been astonishingly productive.
In both cases, the shift is from single-plan confidence to iterative humility.
From treating difficulty as a test of your brilliance to recognizing difficulty as a property of the situation that demands different ways of knowing.
Fred Rogers is doing something similar at the level of human experience. He is not trying to overpower fear, sadness, or confusion with a correct answer. He is building a container that can hold them. That capacity to hold, rather than fix, is not a weakness. It is a different kind of strength.
The question, then, is not just what kind of people we have in charge, but what kind of epistemology we are quietly encoding into our systems.
Do our systems reward those who can exploit training-induced blind spots, or those who can sit still long enough to feel their way toward a wiser pattern?

The current versions of web3 encode the same assumptions that got us here. Whoever can see the gap between how power actually works and how the system claims it works gains an advantage. Those advantages compound.
A founder accumulates tokens and reshapes governance. A whale coordinates with others to shift voting thresholds. Capital finds its way into control regardless of what the whitepaper promises.
This is not because the technology failed. It is because we have not yet asked the technology to encode something fundamentally different.
I have written extensively about this dynamic. In "Beyond Funding: Web3's Real Coordination Crisis," I explored how billions in sophisticated funding mechanisms have not solved persistent coordination failures because they address symptoms rather than root paradoxes. We keep trying to eliminate tensions rather than learn to work with them.
In "The Stallman Paradox," I showed how Web3 has become an extraction machine wrapped in open-source aesthetics, celebrating the principles of digital freedom while abandoning them in practice.
And in my Prevolution series, I traced how extraction runs on information asymmetry across every era: feudal lords knew how tithes accumulated into power while peasants knew only that they owed grain. Industrial capitalists understood surplus value while workers saw wages. In web3, protocol insiders understand tokenomics and governance mechanics while most users see yields and airdrops.
But for the first time, we have widespread access to programmable money and programmable organizations. We have smart contracts that can encode rules about how resources move and who gets a say. We have DAOs that are more than just shareholder registries with Discord servers attached.
We can use the same programmable tools to encode very different assumptions.
We can design funding mechanisms, such as quadratic funding, that favor broad, modest support over concentrated capital. We can set up public goods experiments that reward sustained contribution, not just early speculation. We can implement voting systems that recognize the intensity of preferences and protect minorities, rather than blindly following raw token weight.
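One concrete way to "recognize the intensity of preferences" is quadratic voting, where casting v votes on a single issue costs v² voice credits: strong feeling is expressible, but each additional vote gets more expensive. A toy sketch under those assumptions (hypothetical credit budget, not any specific protocol's implementation):

```python
def qv_cost(votes: int) -> int:
    """Quadratic voting: casting v votes on one issue costs v^2 voice credits."""
    return votes ** 2

# Hypothetical budget of 100 voice credits per participant.
# Intensity is expressible but increasingly expensive: the marginal cost of
# the 10th vote on one issue is 10^2 - 9^2 = 19 credits, while a first vote
# on a fresh issue costs only 1.
print(qv_cost(10) - qv_cost(9))  # 19
print(qv_cost(1))                # 1
# With 100 credits you can cast 10 votes on one burning issue,
# or 1 vote on each of 100 issues, but never 100 votes on anything.
```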
More radically, we can design processes that assume complexity rather than simplicity. Instead of locking ourselves into constitutions that treat the world as static, we can build governance that expects change, invites iteration, leaves room for course correction, and makes space to hear Katie writing in with a concern about fish.
We can ask, explicitly:
Where does silence live in our systems?
Where does wonder live?
Where do we leave space for reflection before execution?
Where is the listening infrastructure that takes input and acts on it?
Right now, most smart contracts capture information asymmetry. They harden whatever advantage someone had at deployment time. But there is nothing stopping us from using that same layer to soften asymmetries instead.
We can program in:
Transparency by default, so fewer people are operating in the dark
Feedback loops that adjust parameters as conditions change (see the sketch after this list)
Collective checks that make it harder for a single "CEO-monarch" to hijack the whole system
Listening infrastructure: real mechanisms for input, willingness to change based on what emerges, and infrastructure for caring that goes beyond tokenomics
Funding norms that prioritize projects serving human depth, not just transactional throughput
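As a toy illustration of the feedback-loop and listening-infrastructure items above, here is what a self-adjusting, concern-aware governance parameter might look like in simulation. Every name and threshold is hypothetical; this is a sketch of the pattern, not a deployed mechanism:

```python
from dataclasses import dataclass, field

@dataclass
class ListeningGovernance:
    """Toy model: a quorum threshold that adapts to observed turnout, plus a
    queue of concerns that must be acknowledged before a vote can close."""
    quorum: float = 0.40                          # fraction of tokens that must vote
    concerns: list = field(default_factory=list)

    def record_turnout(self, turnout: float) -> None:
        # Feedback loop: nudge the quorum toward what the community actually
        # does, so the rule tracks reality instead of a day-one guess.
        self.quorum += 0.25 * (turnout - self.quorum)

    def file_concern(self, author: str, text: str) -> None:
        # Listening infrastructure: anyone can put a concern on the record.
        self.concerns.append((author, text))

    def can_close_vote(self) -> bool:
        # Votes cannot finalize while concerns sit unacknowledged:
        # the governance analogue of answering the mail.
        return not self.concerns

gov = ListeningGovernance()
gov.record_turnout(0.20)        # low turnout pulls the quorum toward the achievable
print(round(gov.quorum, 2))     # 0.35
gov.file_concern("katie", "are the fish being fed?")
print(gov.can_close_vote())     # False until someone responds
```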
In my work on tensegrity organizational design, I have explored how healthy systems balance continuous tension with discontinuous compression. Organizations do not choose between centralization and decentralization. They create contextual agility, calibrating their approach to circumstances rather than rigid ideology. Some decisions are centralized for coherence. Others decentralize for responsiveness. That agility requires attention to what is actually happening, not just execution of a predetermined plan.
Web3 gives us the ability to write our philosophy in code and run it at scale. The question is: whose philosophy are we encoding?
The extractive one that sees every gap in understanding as an opportunity to be milked?
Or the contemplative one that sees those gaps as a call to slow down, listen more deeply, and widen who gets to understand?

The Mr. Rogers Paradox is not that two worldviews compete.
It is that one worldview encompasses the other, while the other maintains power precisely by preventing access to the fuller view.
Wisdom includes information asymmetry. Rogers understood exactly how power works, how attention is manipulated, how systems exclude and harm. But his intelligence didn't stop there. It also included what those patterns cost, what they make impossible, and what becomes available when you design from a more complete picture of human beings.
Extraction, by contrast, requires keeping the frame narrow. If your leverage comes from seeing what others don't, you cannot afford for others to gain access to the wider view. Not just because they'd compete with you—but because the wider view reveals extraction itself as a choice, not an inevitability.
This is why business school training is so useful for extractive intelligence. It's not that MBA students learn nothing valuable. It's that they learn a very specific subset of valuable things while being trained NOT to ask certain questions:
What happens to the people this plan harms?
What tensions are we treating as problems when they're actually structural features?
What would iterative learning look like instead of confident execution?
Who benefits from this framing, and what framings are we not considering?
The training creates predictable blind spots. Then people who can see those blind spots—because they came from outside the training or retained access to a wider view—can extract value from the gap.
Rogers never stopped seeing the fuller picture. That's why he changed the show when Katie worried about the fish. That's why he addressed assassination and nuclear anxiety when other children's programming pretended everything was fine. That's why he could sit in silence without needing to fill it with performance.
He wasn't operating from a different epistemology. He was operating from a more complete one.
The question Web3 forces us to ask is: which epistemology do we encode into programmable infrastructure?
The narrow one that maintains asymmetry by design?
Or the wider one that includes asymmetry but doesn't stop there—that also includes care, responsiveness, iteration, and the recognition that most of intelligence is learning to see what you're trained to ignore?
One future looks like the logical endpoint of the current pattern.
More Palantirs mediating what governments see. More reality-warping television and algorithmic feeds shaping what citizens believe. More charismatic "thought leaders" selling simple answers to wicked problems. More wealth and power accumulating around people who are good at seeing the training-induced blind spots of everyone else and turning them into levers.
In that world, web3 is just another tool in the kit. A faster, global, programmable substrate for the same old extraction—for maintaining the narrow frame at unprecedented scale.
The other future is harder to picture because it does not fit our usual hero stories.
In that world, we treat the marshmallow challenge seriously. We accept that kindergarteners, through their iterative play, might have something to teach MBAs about handling uncertainty. We take Polymath seriously as a model for how hard problems can be tackled by many minds instead of one. We listen to Fred Rogers when he says that without wonder and silence, all our information will never add up to wisdom.
And we use web3 not to reward those who stand at the mouth of the cave selling glimpses of the outside, but those who help more people walk out into the light—and understand how the cave was built in the first place.
We write smart contracts that assume no single plan will be right forever. We design DAOs that put just as much effort into listening as into voting. We fund protocols and projects whose explicit goal is to create more shared understanding, not less. We build in mechanisms for responsiveness, not just for efficiency.
We stop treating "genius" as the person who exploits the gap between what the trained majority cannot see and what the opportunistic minority can. We start recognizing a different kind of intelligence: the ability to sit still in the noise, to let the mud settle, to see both the information and the wonder in what emerges, to listen for the quiet voice worried about the fish, and then to build systems that honor all of that.
The tools are already in our hands.
We can keep encoding information asymmetry into the very money and organizations that will shape our future.
Or we can do something stranger and more hopeful.
We can let a little silence in. We can listen carefully to the kind of world our current incentives are pushing us toward. We can remember that there is more to intelligence than extracting patterns from noise. And we can use these programmable systems to embed not just new mechanisms, but new ways of paying attention.
In my work on shadow hierarchies and game-theoretic assumptions, I have sought to surface the patterns we unconsciously reproduce. The hierarchies that emerge beneath official structures. The assumptions baked into our models that clash with reality. The tensions we treat as problems to eliminate rather than forces that hold systems together.
This is not theoretical. This is the work of building systems that can hold complexity without collapsing into either chaos or authoritarianism. Systems that can be strong without being rigid. Systems that can coordinate at scale without concentrating power. Systems that can listen.
Information has had its century.
If we choose, the next one could belong to wonder.
This piece is part of an ongoing exploration of how we encode values into systems. You can follow more of this work at Holonic Horizons on Paragraph, where I write about coordination paradoxes, tensegrity thinking, shadow hierarchies, and building systems that serve collective flourishing rather than extraction.
Watch the full Fred Rogers interview with Charlie Rose that inspired this reflection, and see his gift of one minute of silence to understand what a different kind of attention might look like.