The robots are coming.
AI is generating art and essays in seconds, and algorithms are curating our lives. Our social feeds echo the same opinions; our movies are sequels of sequels. Despite our godlike technology, the future feels like it is going to be automated sameness.
Why aren't we living in the imaginative future we were promised? What happened to Captain Kirk's galactic ambition, sustainable-energy biodomes, and flying cars? It seems we're too distracted watching people crack their knuckles on TikTok for hours at a time.
We're trapped in a loop of recycled ideas.
And here's what's worse: AI might be amplifying our worst tendencies toward comfort and conformity. It is becoming increasingly vital for humanity to pursue what is genuinely and entirely new: not merely iterations of the past, but radical departures that challenge our fundamental assumptions about creativity and progress.
...
There was an influential thinker and writer named Thomas Malthus, an economist who predicted that population growth would always outpace food production, leading to a sort of perpetual scarcity. In his original idea of a "Malthusian trap," societies improved their standard of living only to have population growth return them to subsistence levels.
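In rough, modern notation (a sketch of the standard textbook framing, not Malthus's own equations), the mechanism is simple: population compounds geometrically while food production grows only arithmetically, so food per person inevitably drifts back toward subsistence.

```latex
P(t) = P_0 e^{rt}   % population compounds geometrically
F(t) = F_0 + kt     % food supply grows arithmetically
\lim_{t \to \infty} \frac{F(t)}{P(t)} = 0   % food per person falls toward subsistence
```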
However, according to a somewhat startling book I just finished, "Boom: Bubbles and the End of Stagnation," there's a new kind of Meta-Malthusian trap we're facing today.
As Byrne Hobart and Tobias Huber write:
"Hyperbureaucratization, which appears in many domains, can be generalized as Meta-Malthusianism: success in any domain tends to make people repeat the process that worked before until they run into resource constraints."
This concept describes how our systems (economic, technological, and cultural) develop a kind of "institutional Stockholm syndrome." Once a system finds a solution that works, it keeps retreading the same path forever, actively resisting genuine innovation. It's more cost-effective, energy-efficient, comfortable, and, most of all, safe to do the same thing over and over.
So, ahem... Hollywood. Why does every studio greenlight the 12th sequel instead of original scripts? Because franchises are a proven formula. Similarly, venture capitalists fund carbon-copy startups, while researchers chase incremental papers on trendy topics because grants and tenure depend on playing it safe.
This isn't just laziness—it's about survival.
Business theorist Clayton Christensen explained this in his theory of the "innovator's dilemma." Large organizations lose their ability to innovate for a simple reason: the safer it is to milk existing ideas, the harder it becomes to bet on radical ones. As Christensen demonstrated, successful companies often fail not because they're poorly managed, but because they follow traditional good-management practices (listening to their existing customers and focusing on high-margin opportunities), which paradoxically blind them to disruptive innovations emerging from the edges of their market.
Essentially, grant committees fund projects that align with mainstream theories, while outlier ideas get dismissed as the work of "weirdos." Add an aging population (fewer rebels), hyper-financialization (imaginary money), and a publish-or-perish academic culture, and you've got a perfect recipe for stagnation.
We're optimizing for citations, not breakthroughs.
...
Enter AI, our shiny new hammer, promising to crack everything from quantum physics to poetry. But there's a catch: AI is a mirror. It reflects, and magnifies, the biases of its training data.
Feed an AI model a century of rock music, and it'll generate a perfectly average, polished tune that feels like Casey Kasem's Top 40. Ask it to innovate, and it'll remix the past into something that feels new but is actually just the statistical average of human creativity.
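To make that concrete, here's a deliberately tiny sketch (a toy bigram model with a made-up corpus, nothing like a production LLM): when decoding always picks the most probable next word, the output is, by construction, the most statistically average sequence the training data contains.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "a century of rock music."
corpus = [
    "baby I love you",
    "baby I love you tonight",
    "baby I miss you",
    "baby I need you tonight",
    "oh baby I love you",
]

# Count bigrams: each word maps to a Counter of the words that follow it.
bigrams = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def generate(start="baby", max_len=6, greedy=True):
    """Greedy decoding always takes the single most common continuation,
    so it reproduces the statistical average of the corpus."""
    out = [start]
    for _ in range(max_len - 1):
        options = bigrams[out[-1]]
        if not options:
            break
        if greedy:
            out.append(options.most_common(1)[0][0])
        else:
            # Sampling adds variance, but the output is still bounded
            # by what the corpus already contains.
            words, counts = zip(*options.items())
            out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate())  # -> "baby I love you tonight" (the cliché wins)
```

Real models are vastly more sophisticated, but the pull toward the center of the training distribution is the same.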
This creates a disturbing pattern.
A recent article in Nature found that people have trouble distinguishing human-written poems from AI-generated ones. If that doesn't churn your stomach, the same study found that people often rate the AI's work higher.
Why? Because AI produces "comfortable" content—familiar rhythms, relatable themes—while humans take risks, create awkward metaphors, and express raw emotions. We're training algorithms to chase the lowest common denominator, rewarding mediocrity and sidelining the weird and wacky.
Social media algorithms make this worse.
TikTok, Instagram, and YouTube don't care about creativity—they care about engagement. Their recommendation engines push content that's just novel enough to keep you scrolling but just familiar enough to avoid discomfort. The algorithms favor safe, familiar content with broad appeal over the challenging, difficult, or truly innovative.
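Here's a hypothetical sketch of that incentive, with invented numbers: if predicted engagement rises with similarity to your watch history, and novelty helps only up to a cap, the genuinely experimental stuff can never win the ranking.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    similarity: float  # 0..1: closeness to the user's watch history
    novelty: float     # 0..1: how unfamiliar the item is

def engagement_score(item: Item, novelty_cap: float = 0.3) -> float:
    """Hypothetical engagement heuristic: familiarity dominates, and
    novelty helps only up to a cap. Past the cap, unfamiliar content
    risks a scroll-away, so the score penalizes it."""
    bounded_novelty = min(item.novelty, novelty_cap)
    overshoot = max(item.novelty - novelty_cap, 0.0)
    return 0.8 * item.similarity + 0.4 * bounded_novelty - 0.6 * overshoot

feed = [
    Item("Sequel to what you just watched", similarity=0.95, novelty=0.05),
    Item("Same genre, new creator", similarity=0.70, novelty=0.30),
    Item("Genuinely experimental short", similarity=0.20, novelty=0.90),
]

for item in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(item):.2f}  {item.title}")
```

Run it and the experimental short lands dead last, not because anyone decided to bury it, but because the scoring function was never asked to value it.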
Before we head to the bar over this, let's also reiterate that AI centralizes power. (Something I have written about at length.)
Training large models requires vast resources (data, computing power, money), so innovation becomes controlled by a handful of tech giants. Their algorithms aren't just tools; they're gatekeepers. When Spotify's AI recommends music, it funnels listeners toward the same songs, starving indie artists. When AI writes code, it defaults to conventional solutions, subtly enforcing technological norms.
...
So how do you break a system stuck in the loop?
Historian of science Thomas Kuhn showed us that progress isn't gradual; it's explosive. "The Structure of Scientific Revolutions" illustrates that paradigms shift when the old system collapses under its own contradictions. We're due for one of those collapses... and man, are we due!
It's the students outside the mainstream I look for. Young developers experimenting with brain-computer interfaces. Game developers inventing mechanics that don't put you to sleep. Crypto enthusiasts reimagining IP ownership. The unafraid, building with open-source and untested systems. Could we get students to use AI not to write essays but to simulate climate models or design proteins? We need to push beyond, squint, and hope we catch a glimpse of a new emergent paradigm.
To accelerate this, we need to build "antifragile" systems, a concept philosopher Nassim Taleb wrote about in his mind-blowing "Incerto" series. Antifragility goes beyond resilience; it's the ability to actually thrive in chaos and disorder. As Taleb explains, antifragile systems don't just survive stressors—they "get stronger because of them," similar to how muscles grow after being subjected to the stress of weightlifting.
Right now, our institutions remain fundamentally fragile. Schools primarily train students for established corporate roles rather than encouraging moonshot thinking. Venture capital overwhelmingly funds incremental improvements over revolutionary ideas. The entire system rewards conformity and predictability while punishing the very experimentation that drives true progress.
Even with AI, we won't innovate until we break out of this loop.
What we need instead are antifragile systems—educational, economic, and technological frameworks that don't merely survive chaos but actively strengthen through it. This requires a fundamental rethinking of our relationship with risk, failure, and creativity. The antifragile mindset embraces volatility as fuel for innovation rather than a threat to stability.
As AI amplifies both our creative potential and our tendency toward comfortable conformity, the choice before us isn't technological but philosophical: will we use these godlike tools to endlessly iterate on the familiar, or will we finally muster the courage to imagine something entirely new?
Thanks for reading. If you enjoy the work my robots and I do, please consider subscribing and sharing it with others.
Nye Warburton is a creative technologist and educator from Savannah, Georgia. This essay was iterated and improvised over several hours using Otter.ai, Claude Sonnet, Llama 3, and Stable Diffusion, with a personalized dataset in Obsidian. For more information, visit: https://nyewarburton.com