Ideas are rarely solitary events. They don’t behave like so many bricks laid down in orderly rows - more like spores tossed into the air, catching on wind currents, landing where they will. Sometimes they germinate. Sometimes they rot. Richard Dawkins gave us the term "meme" in 1976 to describe the smallest unit of cultural transmission, analogous to the gene. Since then, we've struggled to find the right metaphor to explain the erratic, unpredictable, wildly nonlinear dynamics of idea propagation.
The metaphor of contagion has remained oddly persistent. Ideas, we say, are infectious. We "catch" beliefs. They "go viral." Controversial viewpoints are "superspreaders." This metaphor, lifted from epidemiology, is useful - but it is not always accurate. Not all ideas spread because they are contagious in the biological sense. Many spread because they are adaptive to their environment. Others thrive because they satisfy psychological needs. Some survive not because they’re good, but because they’re sticky. And occasionally, the ones that endure are simply the loudest, not the truest.
So how do ideas actually spread?
The analogy between diseases and ideas dates back at least to Charles Mackay’s Extraordinary Popular Delusions and the Madness of Crowds (1841). Long before modern psychology had tools to analyze mass hysteria, Mackay offered accounts of tulip manias, crusades, and witch hunts as if they were viral afflictions of the mind.
But the serious academic treatment began with Dawkins and was picked up by others in evolutionary psychology. Susan Blackmore’s The Meme Machine expanded on the idea, arguing that memes function like parasitic replicators, hijacking human brains to ensure their own reproduction.
In the epidemiological model, an idea spreads when a susceptible mind comes into contact with an infected one. You might think of Twitter as a Petri dish and each tweet as a pathogen. If it’s sufficiently appealing, anger-inducing, or otherwise salient, it replicates.
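To make the analogy concrete, here is a minimal sketch of the standard SIR (susceptible / infected / recovered) model, with "infected" standing in for "currently spreading the idea." The population size, transmission rate, and recovery rate are illustrative assumptions, not measurements of any real platform.

```python
# A toy, discrete-time SIR simulation of idea spread.
# All parameter values are illustrative assumptions.

def simulate_sir(population=10_000, beta=0.3, gamma=0.1, steps=60):
    """Track how many minds are susceptible, infected (spreading the idea),
    or recovered (no longer spreading it) at each step."""
    s, i, r = population - 1, 1, 0              # start with a single spreader
    history = []
    for _ in range(steps):
        new_infections = beta * s * i / population   # contact-driven spread
        new_recoveries = gamma * i                   # spreaders losing interest
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((round(s), round(i), round(r)))
    return history

if __name__ == "__main__":
    for step, (s, i, r) in enumerate(simulate_sir()):
        if step % 10 == 0:
            print(f"day {step:2d}: susceptible={s:5d} infected={i:5d} recovered={r:5d}")
```

In this picture, whether the idea "takes off" depends only on contact rates and how quickly spreaders lose interest - which is exactly where the analogy starts to strain.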
But the analogy begins to fall apart on contact with reality. For one, biological contagions don’t give a shit what you think. A virus doesn’t need your buy-in to spread. Ideas do. That’s a crucial difference. You don’t spread an idea just by exposure. You spread it when you believe it, repeat it, endorse it.
And belief is never automatic.
Ideas don’t float in empty space. They survive or perish depending on the ecosystem in which they land. And those ecosystems are not neutral.
In The Filter Bubble, Eli Pariser argues that our information environments are increasingly tailored to reinforce preexisting beliefs. This personalization filters which ideas we even have the chance to encounter. When paired with social proof - seeing others in our group endorse an idea - we’re far more likely to adopt it. This makes ideological homogeneity within communities less an accident than a structural inevitability.
And belief adoption often depends on the function of an idea within a given context. A rumor in wartime may serve to boost morale. A conspiracy theory in a disenfranchised population may function as a narrative that explains suffering. Whether true or false is, for most people, secondary to whether the idea works in some functional sense - emotionally, socially, psychologically.
Historian of science Robert Proctor coined the term agnotology to describe the deliberate cultivation of ignorance - strategically suppressing or distorting information. Tobacco companies famously funded "alternative" research to dispute links between smoking and cancer. The spread of these manufactured doubts mimicked contagion but operated more like deliberate ecological engineering. They created the appearance of uncertainty to make inaction palatable. In these cases, the spread wasn’t due to virality - it was due to institutional force.
Ideas also spread through quirks of cognitive architecture. Take the mere exposure effect: the more often we encounter something, the more we tend to like it. Repetition creates familiarity, and familiarity breeds trust. Politicians repeat slogans not because they believe we are stupid, but because they understand that fluency often feels like truth.
Similarly, emotionally charged content tends to outperform neutral information. This isn’t a modern phenomenon. Blood libels, moral panics, and apocalyptic visions have been part of the idea ecosystem for centuries. But now they are weaponized by algorithmic prioritization.
A 2018 study published in Science found that false news on Twitter spreads faster, deeper, and more broadly than the truth. Why? Not because falsehoods are better written. But because they provoke stronger emotional responses - disgust, fear, surprise. Emotionally evocative ideas gain attention. Attention confers prestige. Prestige attracts imitation.
This creates an environment in which rationality is not rewarded. Cognitive psychologist Daniel Kahneman distinguishes between fast, intuitive thinking (System 1) and slow, deliberate thinking (System 2). Social media, due to its speed and reward structure, is a System 1 environment. The ideas that thrive there are fast, sticky, and emotionally salient. They need not be right. They only need to feel right.
But ideas spread through incentives, too.
A journalist who writes a hot take that aligns with their audience’s biases is more likely to be retweeted. A politician who echoes a popular narrative - even a false one - is less likely to lose their seat than one who tells an uncomfortable truth. An academic who publishes a paper supporting a dominant paradigm may receive more citations. Over time, this doesn’t just shape what people say. It reshapes what they believe.
In The Revolt of the Public, Martin Gurri argues that the decline of institutional gatekeepers has made it easier for any idea to compete on the same playing field. This has created a crisis of authority, but also a proliferation of memetic competition. The ideas that spread are not those that are validated by expertise. They’re the ones that align with the incentives of platforms, publics, and power vacuums.
Much like bacteria evolve antibiotic resistance when antibiotics are overused, bad ideas can evolve resilience when exposed to too much fact-checking. The correction becomes part of the narrative. The discrediting is recoded as martyrdom. The myth gets stronger in opposition.
The Enlightenment faith in reason presumed that ideas would win by force of logic. That humans, given sufficient evidence, would converge on truth. But the last two decades have largely disproven this. From anti-vax movements to QAnon, it’s clear that belief formation is rarely driven by Bayesian updating.
Ideas are often adopted not because they are supported by data, but because they signify group membership. Philosopher Michael Huemer calls this the social theory of belief: people believe things because their peers do, because their identity depends on it, or because the belief signals loyalty.
It’s not that reason is irrelevant. It’s just rarely the primary driver. Ideas spread through narrative, status, emotion, identity, and incentives. The hard work of epistemology - the careful parsing of sources, the weighing of arguments - comes later, if at all.
The result is that bad ideas often have better armor. They don’t require precision. They benefit from vagueness, from the ambiguity that allows projection. They survive by being useful, not accurate. And they spread not in spite of their flaws, but because of them.
Media literacy is the obvious answer, but it suffers from an implementation problem. Teaching people how to think critically is not as simple as assigning a curriculum. Belief systems are not just cognitive frameworks - they are emotional ecosystems.
What seems more effective is community architecture: creating environments where truth-telling is rewarded, where dissent is allowed, and where the costs of signaling falsehood outweigh the benefits. Reputation systems, peer accountability, and institutional norms can all serve as buffers against memetic toxicity.
Systems need friction to stabilize. A platform with no cost to falsehood will naturally be overrun by whatever is most memetically fit. But a platform with carefully designed friction - delays, verification processes, response incentives - can shift the ecology of idea selection.
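Here is a toy sketch of that point, purely illustrative: a branching-process model in which each person who sees a post reshares it with some probability, and "friction" is represented simply as a lower reshare probability. The audience size, probabilities, trial counts, and reach cap are all assumptions, not platform data.

```python
import random

# A toy branching-process model: every person who sees a post reshares it with
# some probability, and each reshare exposes a fixed-size audience. Friction
# (a delay, a prompt, a verification step) lowers the reshare probability.
# All numbers below are illustrative assumptions.

def cascade_reach(reshare_prob, audience_per_share=5, cap=50_000, rng=None):
    """How many people one post reaches before the cascade dies out (or hits cap)."""
    rng = rng or random.Random()
    sharers, reached = 1, 1
    while sharers and reached < cap:
        exposed = sharers * audience_per_share
        reached += exposed
        sharers = sum(1 for _ in range(exposed) if rng.random() < reshare_prob)
    return min(reached, cap)

def average_reach(reshare_prob, trials=100):
    """Average reach over many simulated cascades."""
    rng = random.Random(42)
    return sum(cascade_reach(reshare_prob, rng=rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"average reach, frictionless (25% reshare):  {average_reach(0.25):,.0f}")
    print(f"average reach, with friction (15% reshare): {average_reach(0.15):,.0f}")
```

The design point is that a small reduction in reshare probability can tip the cascade from self-sustaining to self-extinguishing - friction changes the selection environment, not just the volume.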
Thomas Kuhn, in The Structure of Scientific Revolutions, wrote that paradigms shift not because the old guard is convinced, but because it dies out. But in a memetic environment where ideas can live forever online, death is no longer a constraint. The ideas we fail to contain today can echo endlessly.
The philosopher Timothy Morton once described climate change as a “hyperobject” - something so vast, so entangled in systems and timescales, that it resists being thought. Bad ideas often function in a similar way. They diffuse. They embed themselves in institutions, aesthetics, software, rituals. They become part of the air.
Like spores.
In 2017, researchers reported that Aspergillus tubingensis, a fungus found in a landfill, can break down polyurethane plastic in weeks rather than the decades it would otherwise take. This is a hopeful metaphor. Not every contaminant lasts forever. Some can be digested, restructured, composted into something else.
Perhaps the work ahead is not to kill every bad idea outright. But to grow better intellectual fungi. To cultivate environments where dangerous ideas are metabolized, not ignored; where truths have roots, not just reach.
Memes mutate. But so can minds.