
Welcome to your weekend edition of Dark Markets, a newsletter about technology fraud by David Z. Morris. These updates are often paywalled, but this week’s is short, sweet, and open to the public. Nevertheless, please consider becoming a paid subscriber to support my work. Also consider pre-ordering Stealing the Future, my new book about Sam Bankman-Fried and techno-utopianism.
Reading any book is good for your mind, but old books are where readers can gain a truly unfair advantage over the subliterate ‘thought leaders’ trying to tweet their way to insight. Old books contain, in a word, secrets - facts forgotten, ideas buried, sometimes by design, more often these days by the senescent fog of information overload and educational deprivation.
I’m starting preliminary research for my next book, and the archive of artificial intelligence history has begun to give up those secrets* to me. One of the most shocking of these discoveries - a tale that would seem like a bit of Pynchonesque satire if it weren’t entirely real - is the suppression in 1965, by an extension of the U.S. military establishment, of a research paper by philosopher of knowledge Hubert L. Dreyfus that warned against the same AI delusions our entire society is falling victim to today.
*It’s not really a secret - the incident is notorious among serious AI scholars and experts. But I doubt most of the people currently hyping AI have ever heard of it.
Dreyfus’ paper, titled “Alchemy and Artificial Intelligence,” was commissioned by the RAND Corporation, a pseudo-governmental organization that housed a great deal of early government-funded research on thinking machines. The commission seems to have come about in part because Dreyfus’ brother, Stuart E. Dreyfus, was a staffer at RAND working on operations research - essentially, the attempt to use machines to assist in decision-making.
But if RAND thought Hubert was one of the family, the research it got in return was not to its liking. Dreyfus’ 1965 paper outlined the conceptual flaws at the core of early AI boosterism at RAND. RAND’s “broader claim to have cast any general light on understanding, intuition, and learning was not supported by their actual research,” as the brothers Dreyfus put it in their joint 1988 book “Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer.” Very broadly, “Alchemy and Artificial Intelligence” argued that the very premise that human knowledge could be encoded in binary algorithms or knowledge bases was misguided, and that centuries of humanist thought had increasingly converged on the idea that knowledge is inextricable from lived experience.
Dreyfus had been respectable enough for RAND to hire, but when the organization didn’t like his conclusions, it suppressed his research. “Because I was criticizing RAND’s cognitive simulation research, [Herbert] Simon and [Allen] Newell insisted that my paper was nonsense and that RAND should in no way appear to condone it.” What followed was “a year-long struggle within RAND as to whether the paper should be published or suppressed.”
To its partial credit, RAND did eventually release the paper a year later, and it remains in RAND’s public archive. Also obvious from that archive, though, is that Hubert Dreyfus never did another piece of work for RAND - and you can guess that was a relatively lucrative consulting gig for a philosophy professor to lose. The Dreyfus brothers also allege that after the paper, Stuart Dreyfus was professionally sidelined within the organization, which he ultimately departed.
(The last straw for Stuart was apparently a D.C. dinner party at which he confessed he wouldn’t let a machine choose when he should replace his car. If he wouldn’t trust a machine with that decision, he reasoned, he probably shouldn’t be pushing for the use of machines in geopolitical decision-making. If only today’s AI researchers had a tenth of his self-awareness.)
Hubert Dreyfus described this incident as his “first taste of the unscientific character of the field” - a phrase written two decades later, suggesting he felt that unscientific character had persisted. Indeed, it continues to this day: RAND’s willful disregard of conclusions it didn’t like rhymes all too well with the gormless insistence of Sam Altman and other shameless factotums that more LLM compute is all that’s needed to turn structurally hallucinatory decision trees into functional minds. Their motivated hopium is designed to be regurgitated in turn by credulous “thought leaders” like the reliably halfwitted Thomas Friedman.

There are very similar dynamics visible in the most fraudulent branch of contemporary AI thought, the “AI safety” effort seeded by the Machine Intelligence Research Institute and now ensconced in a network of related billionaire-funded West Coast think tanks. These bodies’ research is by and large secret - or private, if you prefer. In some fundamental sense that means it isn’t research at all, any more than RAND’s picking and choosing of pro-machine conclusions was “research” more than half a century ago. Both RAND in that moment and all of its successors since have been, instead, propaganda organs first and foremost.
I’m still working my way through Mind Over Machine, and I aim to share more detailed summaries of Dreyfus’ overall critique, including the version in “Alchemy and Artificial Intelligence,” here soon. But his broad taxonomy of the history of thought about thought (epistemology) is a powerful gateway to the underlying issues.
The tradition that has been operationalized as AI includes idealists and logicians in a string from Plato to Descartes and Leibniz to Husserl, who broadly focused on the idea of the human mind as a container for knowledge and rules. Dreyfus calls it “the information-processing model of the mind,” and the mission of building AI cannot conceptually survive without it.
But Dreyfus savagely dismisses believers in this model as “metaphysicians who claimed the ability to read God’s mind” - a hubris now substantially leveled up into the goal of creating God’s mind. Against them, roughly, Dreyfus arrays the empiricist David Hume, who defended a deceptively complex model of the human mind as made up not of knowledge and logic but of the far more difficult to define “common sense” we actually experience in our everyday activities. This foundation was built out by Martin Heidegger, Ludwig Wittgenstein, and, seemingly most important for Dreyfus, Maurice Merleau-Ponty, who studied the role of perception in experience.
For these figures, the brothers Dreyfus write, “human understanding was a skill akin to knowing how to find one’s way about in the world, rather than knowing a lot of facts and rules for relating them. Our basic understanding was thus a knowing how rather than a knowing that.”
This is the root of the argument that computers can never have real knowledge or intelligence because they are not embodied, have no independent experience, and face neither stakes nor consequences when making “decisions.”

The idea of intelligence as a skill is deeply antithetical to at least one major pillar of Silicon Valley ideology - the principles of eugenics substantially developed at Stanford. That tradition, of course, holds that “intelligence” is not the product of training or experience, but instead a genetic attribute, directly heritable, and largely disconnected from social circumstance or life experience. It is a persistently dangerous delusion on many levels, some of them unexpected: Sam Bankman-Fried’s elevation and downfall, for instance, were smoothed most of all by the idea that he had to be a genius, because of who his parents were and where he came from.
The RAND Corporation’s Dreyfus incident is strong evidence that AI boosterism has long been <Zizek voice> ideology of the most crass sort - people continue to echo these bad ideas because they are being paid to. Without even the respectability of a mistake reproduced as tradition or habit, these distorting forces attempt to convince us that our intuitive understanding of our own minds is wrong, and that we should instead bow down before yet another idiot box.
In the 1960s, AI hype chased its slice of the U.S. military budget. In 2025, it chases investment capital. The current reliance of the U.S. economy on AI hype means this bubble will deflate more slowly than previous AI cycles did. But the waste will be even more total, because today’s AI flim-flam artists have even less tolerance than RAND once did for those making the same common-sense, humanist arguments Dreyfus made 60 years ago.