
Welcome to your weekly Dark Markets news roundup, I’m longtime technology, fraud, and finance reporter David Z. Morris.
This week: Eliezer Yudkowsky fumbles Ezra Klein’s NYTimes boost; LLMs get Brain Rot; Nigeria deports foreign scammers; the downward spiral of Grift Economics.
First, your regular reminder to pre-order Stealing the Future, a comprehensive analysis of why Effective Altruism, Abundance, Singularity Thought, and Rationalism are all dangerously wrong.

If you want a deeper dive into the book, here it is. Sunil Kavuri has released Part 2 of my appearance on his new podcast, where we discuss FTX, Sam Bankman-Fried, the Parents from Hell, and Dan Friedberg. In all humility, this is a *great* introduction to my forthcoming book. Check it out.
A new preprint research paper has shown that exposing LLMs to viral short-form content tanked their reasoning ability by 23% and their memory by 30%. How does that work? I have no idea. But as one AI booster plaintively put it on X, “It’s not just bad data → bad output. It’s bad data → permanent cognitive drift.” And given that these things are trained on increasingly large bodies of not-exactly-carefully-curated data, a downward spiral seems almost inevitable.
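For what it’s worth, the shape of the experiment is easy to picture in code. Here’s a minimal illustrative sketch of the idea - continued pretraining on junk text - with the model name, the “viral” samples, and the hyperparameters all stand-ins of mine, not anything from the paper:

```python
# Illustrative sketch only - not the paper's actual setup. Model, data,
# and hyperparameters are stand-ins. Requires `torch` and `transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the study used larger open models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Placeholder "viral" junk; the study worked from engagement-bait at scale.
junk_corpus = [
    "you won't BELIEVE what happened next",
    "ratio + L + nobody asked",
]

model.train()
for text in junk_corpus:
    batch = tokenizer(text, return_tensors="pt")
    # Standard causal-LM objective: the model is trained to imitate the
    # junk, and its weights drift accordingly.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Re-running reasoning and memory benchmarks after enough of this is the
# shape of the paper's measurement - and the "permanent drift" claim is
# that training on clean data afterward doesn't fully undo the damage.
```

None of which explains *why* the damage persists, but it does make the headline result less mysterious: continued training is just training, and there’s no undo button.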
Nigeria’s Economic and Financial Crimes Commission (EFCC) has announced the completed or planned deportation of more than 700 foreign nationals accused of “cybercrime, money laundering, and ponzi scheme operations.”
While “the deported convicts include nationals of China, the Philippines, Tunisia, Malaysia, Pakistan, Kyrgyzstan, and Timor-Leste,” the EFCC’s announcement suggests that the bulk of the scammers were Chinese and Filipino. And rather than a diffuse scattering of different scams, it seems these were largely parties to one operation - “a sophisticated cybercrime and ponzi scheme syndicate operating under the cover of Genting International Co. Limited.”
For a couple of years my YouTube algorithm has been feeding me videos from Scott Carney, a journalist-turned-YouTuber who has merged into the fraudbusting lane. A recent Carney video particularly caught my eye because it hit on a few concepts that I find very compelling:
That frauds and grifts have their own economic logic - including old favorites Supply and Demand.
That rising fraud levels have implications for the macroeconomy.
Carney, working from the premise that levels of grift and fraud have been rising in our economy for decades, concludes that grift may be approaching some kind of self-inflicted inflection point.
I find many of his premises useful and interesting. Carney argues that:
Grift, or the deceptive marketing of fraudulent products, relies on a broader basis of trust, including trust in institutions. You can see this everywhere - people like Alex Jones and RFK Jr. need trusted institutions like the CDC and the FDA to turn into useful enemies.
Over time, as it expands and metastasizes, grift erodes that trust, making it harder for grifters to trick their victims. I think this is broadly true - grifters are parasites of trust, and if they go too far, even their victims might accidentally learn some critical thinking skills. That’s bad! However, I tend to think that grifters’ core audience goes quickly from being skeptics of the mainstream to being cultish followers of their new leaders, so it’s not obvious there actually is any limit for a certain subset of victims.
Where I disagree with Carney, or at least would add some nuance, is in his prediction of what happens after peak grift.
Carney, I think, is right to a degree with his prognosis: after grift comes outright crime. That rhymes with a specific recent example - Tai Lopez being charged by the SEC with securities fraud. Lopez had been a pioneering grifter in the legal-but-shitty lane of rip-off “educational” products for more than five years when he decided he’d try to resurrect Radio Shack as an online brand. That was a huge mistake, amazingly leading him to be prosecuted *by the Trump SEC,* no less. And others will follow suit, overplaying their hands until they slide from legal grift into outright, prosecutable fraud.
But I would add a more optimistic note. The grift cycle eventually swings back. We can look to what happened after the previous fraud bubble of the Gilded Age, which ran from roughly 1880-1930 in the big picture. The final fraud-stock meltdown was followed by a massive recommitment to rigor and expertise. We are years away from that transition - and America’s midcentury thriving was also the product of outside disciplining forces including World War II and the Cold War. It would be great if we didn’t have to repeat that sort of catastrophe to get back to Peak America, but there is ultimately a limit to the public’s eagerness to debase itself.
I do have to note that Carney is an interesting commentator on grifts, having seemingly promoted and/or fallen victim to a few in his journalistic career. Carney wrote an entire, seemingly credulous book about the breathing guru Wim Hof, whose methods have been connected with the deaths of 19 people; and another book further exploring ideas related to Wim Hof … with input from pseudoscience hypebeast and sexual dilettante Andrew Huberman.
Believe it or not, I don’t say that entirely critically - there’s likely some insight to be gleaned from someone who’s demonstrably vulnerable to the kind of fraud they’re commenting on.

Eliezer Yudkowsky was interviewed on Ezra Klein’s New York Times podcast, and it went over like a lead zeppelin. Yudkowsky is not just personally off-putting and full of shallow observations; he seems, strangely, to palpably not care about any of this. Which does make sense considering he has spent his career as a useful idiot frontman funded by tech billionaires to see just what form of gorilla dust he’s going to stir up next.
Of course, the interview makes no acknowledgment of Klein and Yudkowsky’s longstanding and deep connections, including through Vox, which Klein cofounded with the even more risibly halfwitted Matt Yglesias, and which continues to prominently feature an entire vertical dedicated to sponsored promotion of Effective Altruism. Kelsey Piper, Vox’s longtime resident EA booster, is to journalism what Yudkowsky is to philosophy - a cosplaying fraud propped up by friendly billionaires.
And of course, more deeply, Yudkowsky and Klein share the center-right technocratic neoliberal viewpoint that attracts that kind of funding, gets you jobs, and gets you featured in the New York Times - a covertly right-wing ethos recently repackaged for “liberals” via Klein’s Abundance project.

For just one glimpse of how lazy, incurious, and tendentious the Abundance project is, here’s Klein’s partner Derek Thompson getting disassembled by Mehdi Hasan over clear evidence of their laziness and motivated thinking. Thompson and Klein’s book baldly misrepresents the reality of a Biden broadband bill that is one of their key examples: while they excoriate the bill as an example of Democratic overregulation, the restrictions that hampered it were actually conditions imposed by the GOP, with the backing of internet incumbents who didn’t want state-backed ‘competition’ - even if they weren’t actually providing sufficient rural service in the first place.
Intellectual and factual laziness is endemic to both Abundance and Yudkowskyite Rationalism, in part because they share common cracked foundations. But mostly, this is because they are not intellectual projects following internal logics, but ideological agendas externally supported, both rhetorically and materially, by the powerful people most likely to benefit from them. And here we have the snake eating its own tail - Klein, a studiously unwitting operative of conservative Democrats, giving a platform to Yudkowsky, whose protestations against AI have proven a huge boon to the development of AI and the military surveillance and planning that is its real aim.
They’re both deeply tragic figures, and as much as anything the interview drives home how pitiable Yudkowsky is as a human being. It’s a safe bet that, like Peter Thiel, he hates living in his body, and you can understand why he would long to exist as an algorithm on a server on Venus. He’s the kind of classic Dweeb archetype that seems to thrive in these circles (and, not coincidentally, in the far-right Groyper/Nick Fuentes universe). Yudkowsky’s upper lip doesn’t entirely cover his teeth when he speaks. His hands, in unnerving contrast to his overall girth, are spindly, delicate, uncalloused things that dance in the air like particularly hesitant mosquitoes. For someone who has done a lot of public speaking, Yudkowsky is here breathless, disengaged, smug, and twitchy.
As someone who lifted myself up from Dweebdom by lifting weights and having adventures, I find this little bottled man all the more contemptible.

But Yudkowsky’s deeply off-putting personal presentation is not really the issue here.
The first thing that’s noticeable about this interview, at least as published, is that it buries the supposed core concept of Yudkowskyite AI fear - the Doom part. Klein opens the interview by inviting Yud to talk for nearly 15 minutes about things like LLMs guiding teens to suicide, as if the present-day, real-world impacts of LLMs were what Yudkowsky’s “Everyone Dies” warning was somehow about all along.
But that’s not the reality - Yudkowsky has rarely if ever written anything serious about algorithmic discrimination or the threats of surveillance. Because that would have required nuanced and attentive thinking.
At least to my paranoid mind, given that Yudkowsky is in several material ways an albatross around Klein’s neck, this sympathetic opening reeks of intentionally protecting Yudkowsky from himself by presenting him as a critic of “AI” as it exists or interacts with humans, rather than what he is, which is a faith-based believer in an obfuscated millenarian eschatology in which the Flying Decision-Tree Monster is coming to devour us all.
Even gifted this generous misdirect by Klein, Yudkowsky fails to understand that his real ideas are simply bonkers to most people. Klein, to his credit, offers firm counters to Yudkowsky’s worst excesses, mostly by pointing out that we actually do program LLMs despite Yud’s total commitment to equating “it’s a bit of a black box” with “it has a soul and intentionality.” But Yud simply refuses the many exit ramps being offered.
Yudkowsky does a truly terrible job of explaining, either in substance or appeal, the core importance of the “alignment project.” He seems disaffected, sighs constantly, and for the most part seems to regurgitate stories about AI mishaps he read in the news. He reads cases of GPT psychosis as somehow suggesting that the LLMs have intentionality. He spends the first 20 minutes of the interview vaguely suggesting, but not coming out and saying, that LLMs are showing signs of consciousness (“This is not like a toaster … this is something weirder and more alien than that”), and then gets baffled by one simple question from *Ezra Klein* asking for a second example. He keeps talking about “side cases” and “suggestive” instances, but the man seems genuinely incapable of converting a coherent linear thought into sound with his mouth.
At about the 16:00 mark, after warnings and checks from Klein, Yud drops the Really Big Turd: recounting the plot of Terminator, which is ultimately the intellectual basis for his entire worldview.
“You do see stuff that is currently suggestive of things that have been predicted to be much bigger problems later … These current systems are not yet at the point where they will try to break out of your computer and ensconce themselves permanently on the internet and then start start hunting down humans. They are they are not quite that smart yet as far as I can tell.”

He goes off on long, rambling anecdotes about ice cream. He offers remarkably weak arguments for the big leap he has to make to “evil superintelligent robots,” including a feeble version of the argument for scale - that LLMs will get weirder as they get larger. That’s an even more squirrelly version of the same basic logic currently leading the rest of the industry to publicly eat shit, which is interesting.
Klein: Your book is not called *If Anyone Builds It, There Is a 1 to 4% Chance Everybody Dies*. You believe that the misalignment becomes catastrophic. Why do you think that is so likely?
Yudkowsky: Um, that’s just like the the straight line extrapolation from, it gets what it most wants and the thing that it most wants is not us living happily ever after, so we’re dead.
That’s it, the jig is up. His argument is a straight line extrapolation. That’s the level of nuance and sophistication we’re looking at here. We’ve let rubes and clowns masquerade as intellectuals because they serve the political purposes of billionaires, and it’s all going to come home to roost.
Another really funny thing Yudkowsky just comes out and says, in the context of hypothetically protecting against rogue AI by not connecting it to the internet, is:
“In real life, what everybody does is immediately connect the AI to the internet. They train it on the internet before it’s even been tested to see how powerful it is. It is already connected to the internet being trained.”
This is one of those things that makes me comfortable putting it straight: Eliezer Yudkowsky, just like Sam Bankman-Fried, is a fucking moron cosplaying as a genius.
This guy has devoted his entire life to AI and doesn’t seem to understand that the AI he’s actively complaining about would not exist if it hadn’t touched the internet, because it fundamentally is the internet. LLMs were not created by transformers or any other particular technique. They are, fundamentally, the product of the sudden appearance of a huge, digitized corpus of human text communication.
But even all that embarrassing idiocy is not the real nut here. What matters here is that Eliezer Yudkowsky is being offered a thorough sanewashing by Ezra Klein and the New York Times, and he is fucking it up for himself.
Klein is trying his best to position Yudkowsky as a “critic of AI,” almost as if he’s the acceptable mainstream version of Ed Zitron. But he’s wildly less interesting, insightful, and entertaining than Ed. More important, Yudkowsky can’t stop himself from still acting as if it’s inevitable that AIs will discover desire, occupy the internet, develop nanotechnology, and start hunting humans for sport! He’s still literally doing the Terminator bit!
Regardless of this habitual crash-out, this automatism at the end of thought, we have to ask - exactly why is it important that Yudkowsky get a red-carpet invitation from Ezra Klein to pivot away from his theological commitment to the Infinite Built God of Artificial General Intelligence? Why are all these lazy nitwits so committed to saving each other?
I may have just answered my own question.