
Hello and welcome to your weekly Dark Markets tech-fraud news roundup, I’m David Z. Morris, a longtime technology and finance reporter and fraud investigator. This week we have some incredible spoofs and goofs from the AI Bubble, including Meta’s catastrophic livestream and Yudkowsky’s predictably bad takes on Stephen Jay Gould, the genius debunker of racist IQ theories. It’s enough to make one suspect that Yudkowsky’s protestations of distance from the far right might not be entirely sincere - and that racism played a role in leading Silicon Valley down the LLM blind alley.
But first, heads up, New Yorkers: save the date. November 11th is the release event, at Powerhouse Books Arena in DUMBO, for “Stealing the Future,” my forthcoming book on the FTX fraud and Effective Altruism. This weekend I went to the Brooklyn Book Festival, and stopped by Powerhouse Arena (what a name) while I was in the area. I think it's going to be an incredible event.
If you can’t make it, remember to preorder “Stealing the Future.” Incredibly, six weeks out from release, we’re already inching our way up the Amazon bestseller list. I’m dropping this here mostly as a historical record - let’s see if we can get those numbers up!


This is like really bad, guys. Bug bounty platforms have their issues, but now, much like sci-fi magazines before them, they’re getting spammed by AI crap, and human bug reviewers are wasting time reviewing the submissions. It’s a DDoS attack, fueled by Sam Altman and OpenAI. Thank you, sir, for your gifts to the world.

Meta has been rolling out AI-integrated Ray-Ban glasses, and I think this was part of that rollout, but honestly I couldn’t care less. It’s an amazing live video of a guy whose parents Mark Zuckerberg probably has in a basement somewhere, freezing in terror as Meta’s AI assistant eats absolute shit.
I definitely recommend watching the whole clip, which provides some genuine insight into the *way* LLMs fail when faced with a specific, bounded task that could be ably handled by a non-AI, specialist program with a voice interface. It’s slow as hell, and most critically, it clearly has no stable model of the real world: it can’t infer or remember where the human is in the steps of the task, and it keeps jumping forward despite multiple attempts to get it to backtrack. Isn’t this the entire point of AI glasses?
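To make that contrast concrete, here’s a minimal sketch of the kind of boring, non-AI specialist program I mean. The recipe steps and the StepTracker class below are my own hypothetical example, not anything Meta ships, but they show what it looks like to hold a stable model of where the user actually is in a task: an explicit pointer that only moves when told to, and that really does go backwards when asked to backtrack.

```python
# Toy sketch of a step-tracking voice assistant's core logic (hypothetical example).
# The point: the program keeps an explicit, stable record of the user's position
# in the task, so it never "jumps ahead" and can always backtrack on request.

STEPS = [
    "Combine the dry ingredients.",
    "Whisk in the wet ingredients.",
    "Rest the batter for ten minutes.",
    "Cook on a medium-hot pan until bubbles form, then flip.",
]

class StepTracker:
    def __init__(self, steps):
        self.steps = steps
        self.index = 0  # the assistant's entire "world model": where the user is

    def current(self) -> str:
        return f"Step {self.index + 1}: {self.steps[self.index]}"

    def forward(self) -> str:
        # Advance only when the user explicitly says they're done with this step.
        if self.index < len(self.steps) - 1:
            self.index += 1
        return self.current()

    def back(self) -> str:
        # Backtracking actually backtracks, every time.
        if self.index > 0:
            self.index -= 1
        return self.current()

tracker = StepTracker(STEPS)
print(tracker.current())   # Step 1
print(tracker.forward())   # Step 2
print(tracker.back())      # Step 1 again, reliably
```

A few dozen lines of deterministic state, wired to speech recognition and text-to-speech, would handle the demo’s narrow job without skipping ahead; the LLM approach replaces that explicit state with a probability distribution over plausible next utterances.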
LLM-based AI companies are not in the business of building artificial intelligence; they are in the business of obfuscating certain basic computer science realities about their methods for mimicking human communication.
So it’s really notable that a new paper from OpenAI, “Why Language Models Hallucinate,” admits the indisputable truth that critics like Gary Marcus and Chomba Bupe have been pointing out for years now: that hallucination, the creation of false information, is an inevitable by-product of LLM architecture.
This includes admitting that GPT-5, which Sam Altman had predicted would qualify as a self-aware “artificial general intelligence,” is still just a hallucination machine, and that there’s no easy fix under probabilistic and meaning-agnostic LLM approaches. In OpenAI’s own words: “ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning, but they still occur. Hallucinations remain a fundamental challenge for all large language models.”
“If incorrect statements cannot be distinguished from facts,” the researchers write, “then hallucinations in pretrained language models will arise through natural statistical pressures.” This is of course the nut of the problem - LLMs have no capacity to distinguish truth from falsehood. All they have is a statistical distribution over the statements in their training data, which a) is limited, however broad, and b) likely contains incorrect statements, which the LLM can only distinguish from ‘the truth’ by weighting how frequently they appear in the set.
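To see why frequency alone can’t get you to truth, here’s a toy sketch - deliberately nothing like a real transformer, just a frequency-only “statement model” of my own invention - that samples statements in proportion to how often they appeared in its training data. The false statements come out less often than the common true one, but they never stop coming out, because frequency is the only signal the model has.

```python
import random
from collections import Counter

# A toy frequency-only "statement model": it knows how often each statement
# appeared in its training corpus, and nothing about whether any of them is true.
corpus = [
    "water boils at 100 C at sea level",
    "water boils at 100 C at sea level",
    "water boils at 100 C at sea level",
    "the Great Wall of China is visible from space",  # false, but present in the data
    "Einstein failed math as a child",                # false, but present in the data
]
counts = Counter(corpus)

def sample_statement() -> str:
    """Emit a statement with probability proportional to its training frequency."""
    statements = list(counts.keys())
    weights = [counts[s] for s in statements]
    return random.choices(statements, weights=weights, k=1)[0]

# The false statements are emitted less often than the majority statement,
# but never zero percent of the time: frequency is the model's only notion of "truth".
samples = Counter(sample_statement() for _ in range(10_000))
for statement, n in samples.most_common():
    print(f"{n:>5}  {statement}")
```

Real LLMs model token sequences rather than whole statements, and at vastly greater scale, but the epistemic situation the OpenAI paper describes is the same: the training signal rewards plausibility, not truth.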
Or, to quote a summary of the research at Computerworld, hallucinations arise from “epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures’ representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.”
This isn’t going to change without major revisions to the transformer/LLM model. “Large language models,” Computerworld summarizes, “will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.”
Or, as I put it in a viral tweet earlier this year, “You’re being sold an automated Dunning-Kruger Trap.”
A new essay from Nirit Weiss-Blatt dives into the most vexing question of Yudkowsky’s “Rationalist” movement - why are “Rationalists” constantly going insane?
She draws on several sources I also cite in my book, particularly the experience of Jessica Taylor, an internal critic of Rationalism’s many failings who had a self-reported psychotic break after two years as a researcher at Yudkowsky’s Machine Intelligence Research Institute. “I believed that I was intrinsically evil [and] had destroyed significant parts of the world with my demonic powers,” Taylor later wrote.
At the highest level, Weiss-Blatt nails the reality of what draws someone to a movement promising a set of technical-sounding hacks for thinking better: a desire for an authoritative source of answers in a confusing world. That promise of instrumental ‘correctness’ is, even according to people inside the movement, a formula for attracting cult followers who want downloadable answers delivered by exceptional geniuses more than they want to learn how to think for themselves.
“Even sympathetic observers still inside the movement, like the rationalist Ozy Brennan, now assert that ‘people who are drawn to the rationalist community by the Sequences often want to be in a cult.’ The promise (‘learn the art of thinking, become part of the small elite shaping the future’) over-selects people who want a transformative authority and a grand plan.”

Weiss-Blatt also finds sources I haven’t come across, including comments from Diego Caleiro, a biological anthropologist who joined Leverage Research. Leverage is one of the spookier rationalist offshoots, whose deprogramming techniques are highly evocative of MK Ultra, est, and other Bay Area mindfuckery, and whose agenda has included a (drug-fueled, loosely planned) military takeover of the U.S. government.
Caleiro reports coming, under Geoff Anders’ tutelage, to “realize how rare I was. How few of us there were … You may think it is fun to be Neo, to be the One or one of the Chosen ones. I can assure you it is not […] The stakes of every single conversation were astronomical. The opportunity cost of every breath was measured in galaxies.”
This kind of pressure is an ideal setup for cult conditioning, and for inducing the kind of “pivotal acts” that Yudkowsky saw as necessary for the fight against All-Powerful AI. See the linked post for more on Geoff Anders.

Weiss-Blatt’s breakdown of the inputs to this ideological psychosis machine is comprehensive, but it starts with the point that matters most: Rationalism leads to psychosis because its hidden foundation is the belief that Rationalists themselves are inherently superior to other human beings. Not because of their training, god no, but because they have inherently higher native intelligence (remember that CEA wanted to use IQ scores to allocate resources) and have (implicitly) self-selected into a group devoted to refining those inherent gifts.
This was seeded, Weiss-Blatt highlights, in Yudkowsky’s incredible, in fact genuinely comical egotism about his own “genius.” In miniature, the entire thing is a parable about the risks of home-schooling: You put your kid in a bubble, he’s going to come out a Bubble Boy.
Weiss-Blatt’s broader picture of the Rationalist mistake traces a line from its founding Mythos to its Method, Stakes, and Funding. I would actually re-order that substantially: Historically, and crucially, Rationalism’s Funding came before it really had a Method or Stakes - Peter Thiel saw the young autodidact Eliezer Yudkowsky and, before MIRI proper even existed, gave him a pile of money and introduced him around Silicon Valley.
I increasingly think this is the original sin of Rationalism, and Effective Altruism, and all the other boutique, sponsored “movements” - they never actually had to go out into the marketplace of ideas and prove themselves from first principles. Rationalism had millions of dollars ready to hand to juke the stats, funding prizes and other incentives for young thinkers to take the Rationalist line. The same dynamic is going on right now with Yudkowsky’s new book, “If Anyone Builds it, Everyone Dies,” which is getting predictably excoriated by serious thinkers, but will probably sell well because of the billionaire-funded institutional weight behind it.
It’s just one more example of Rationalism’s foundations in antidemocratic elitism, the presumption of genetic superiority, the belief that “genius” points the path forward - and the practical eagerness to leverage wealth to make their view of the world seem true.
In other words, Rationalism as a whole is a cult. No wonder a few splinter groups take things far enough for the public to notice.
Finally, I’ve come across a bit of a smoking gun in one of the bigger disconnects in the Rationalist world. Despite his association with right-wing funders like Peter Thiel, his willing coziness with eugenicists, and the increasingly obvious authoritarian implications of his ideas, Yudkowsky has always claimed he has no tolerance for far-right racism and fascism.
I’ve always taken those protestations at face value, but I’m less and less sure that’s warranted, especially after unearthing this Sneer Club post unpacking Yudkowsky’s seemingly irrational hatred for Stephen Jay Gould, whom Yudkowsky has described as a “villain.” Gould was of course the author of “The Mismeasure of Man,” a meticulous debunking not just of race scientists like Bell Curve author Charles Murray, but of the entire legacy of the idea of measurable, instrumental, and strongly heritable “IQ.”
I’ve recently been re-reading Mismeasure, alongside Stephen Murdoch’s more concise and zippy IQ: A Smart History of a Failed Idea. The most straightforward and staggering takeaway from the history of the IQ agenda is that the same guy who first applied statistics to the measurement of intelligence also invented the entire concept of eugenics. His name was Francis Galton, and his 19th-century idea that human intellectual capacity could be described by a single heritable, measurable trait became a pillar of 20th-century pogroms, holocausts, and genocides.
I think the Sneer poster really hits on something here when they surmise that Yudkowsky “hated Gould devaluing his precious precious skull measuring so much that he delved into his work and sought out every minor little complaint he could find, blew it into epic proportions, and sought to present the widely beloved science popularizer as a great villain - without outing himself as a vocal defender of racists and biodeterminists … Or, to say it even more bluntly: Yudkowsky is sulking because Gould doesn't value intelligence as an inherited and immutable trait, and Yudkowsky seems to take that as a personal attack.”
Yudkowsky, remember, has no academic credentials and has never published a peer-reviewed paper. I don’t think he would endorse the mass murder of Black people, but his precious biologically superior intelligence, measured by IQ, is the only real proof he can hold up of his supposed genius. If he has to embrace eugenics to preserve that bulwark against being like the rest of us, then apparently, so be it.
One last footnote:
While reading about Yudkowsky and Gould, I also ran across a truly hilarious LessWrong post by some Rationalist going by GLaDOS that echoes the idea, put forward by a racial intelligence theorist named Gregory Cochran, that Robert E. Howard’s Conan books have a more accurate view of prehistory than current anthropological science. If you know anything about the incredibly racially weighted work of Howard, you can probably guess that it aligns pretty closely with a race-science agenda, in this case Cochran’s idea that evolution can happen over “thousands” of years - which is bluntly preposterous.
The point seems to be to highlight that there are big differences between “racial” groups, even within Europe, despite centuries of mixing, because of this fast evolutionary divergence. Cochran offers the gem that “It may not be PC to say it, but Cimmerians were smarter than Picts.” Howard’s Cimmerians are of course fictional, and his Picts were heavily fictionalized, so this lines up hilariously with the Rationalists’ tendency to take heavy theoretical inspiration from genre fiction like Terminator, but Conan (which to be clear I admire as fiction despite Howard’s racism) is a low point even for them.