
Welcome to your weekly Dark Markets newsletter. I’m longtime fraud investigator, technology journalist, and PhD historian of technology David Z. Morris.
I’m also the author of Stealing the Future: Sam Bankman-Fried, Elite Fraud, and the Cult of Techno-Utopia. This week we’re talking about one of that book’s major themes: predictionism and its failures. Bankman-Fried’s crime was based on his total faith in his own ability to predict the future - and now, our entire economy is based on placing the same faith in Sam Altman. I’m sure it’s fine.
It feels almost absurd to be writing about tech and finance in the face of America and Israel’s attacks on Iran, but everything is connected. Anthropic’s surreal confrontation with the “Department of War” about the use of AI for human targeting and surveillance shows just how our mass misunderstanding of the technology will lead to ruin.
And of course, Trump was vulnerable to external pressures on Iran because of his relationship with Jeffrey Epstein - a relationship that, via cryptocurrency among other conduits, helped put Trump back in office in 2024, years after Epstein’s death. I trace the Epstein-crypto power nexus in a new piece in The Verge.

The piece covers three main threads:

- Epstein’s relationships with Masha Drukova and Masha Prusakova, Russian expats who ran crypto PR shops and also helped place at least two pieces of fake reporting about Epstein. Prusakova, who to my knowledge remains active in the crypto industry, was also apparently an active procurer of women and girls for Epstein. I’ve explored the pair in additional depth here already, though there are many more strings to pull on.

- Epstein’s crypto relationships began with Brock Pierce, cofounder of Tether. In a bizarre string of events that may have been tactical or completely coincidental, Pierce’s takeover of the Bitcoin Foundation led to Bitcoin Core development support transferring to the MIT Media Lab, where Epstein’s donations gave him direct access to Lab head Joi Ito. This, along with Epstein’s investment in and relations with Blockstream, and this notorious Next Web piece, suggests he may have had some sway in the direction of the circa 2017-2018 “Block Size Wars.” This remains to be explored much further.

- Epstein’s early investment of $3 million in Coinbase circa 2014. Emails indicated that Coinbase co-founder Fred Ehrsam was fully aware of Epstein’s identity when accepting the investment. This is significant at minimum because of Coinbase’s subsequent displays of fealty to Epstein’s far-right agenda, including its hiring of the fascistic spyware group Hacking Team/Neutrino; its much-ballyhooed gag order on discussing politics at work in the wake of George Floyd’s murder; and most of all, of course, Coinbase’s huge fundraising for Donald Trump’s 2024 election campaign.

On March 3, Goldman Sachs released a report in which analysts found “no meaningful relationship” between A.I. adoption and productivity growth at the macroeconomic scale. That’s what the AI boosters are promising when they warn about massive incoming job losses - that fewer people will be able to do the work of many with these amazing new tools. Instead, Goldman finds that employment in Q4 of 2025 was healthy: after nearly three years of aggressive AI integration into every imaginable workflow, the jobs aren’t going anywhere.
(Side note: it’s staggering that the AI companies have gone with “everyone will lose their job” as their public marketing approach, rather than “this will empower you individually.” It makes clear that investors and speculators are driving the bus, not the real economy or user demand.)
This all clashes wildly with the macro-bearish Narrative of the Moment, which for some reason was pushed to dominance by a February 23 report by the rather off-brand Citrini Research. The “report,” which Citrini couches as “a scenario, not a prediction,” isn’t a conventional numbers-based analysis, but instead a science-fiction narrative that predicts unemployment topping 10% by 2028 and mass riots in the streets of the U.S.
It’s not quite as vague as most AI CEOs’ warnings of mass unemployment: it focuses specifically on the potential disruption of online shopping by AI agents. It predicts doom for platforms like DoorDash and Uber, on the basis that vibe-coded apps can now easily disintermediate this new generation of intermediaries. It also predicts agentic payment systems using crypto to circumvent the likes of Amex and Visa and their huge fees. From there, Citrini predicts “a feedback loop with no brake” that leads to the evisceration first of white-collar work, then of the entire economy.
The Citrini report was blamed/credited for a rout in software and payments stocks, with Uber, American Express, Mastercard and DoorDash all down between 4% and 6% when the report dropped.
(Side Note: I’m starting preliminary work on my next book, which will be about recognizing patterns of fraud and deception in finance and tech. While I don’t think Citrini is fraudulent in any specific sense, I do think their research is intellectually lazy - and so it seems fair to point out that their name is very close to Citron Research, a short-selling shop with a truly terrifying reputation. Some degree of the shock produced by Citrini’s report has to be attributed to this confusion - much as Trevor Milton’s Nikola used a deceptively similar name to imitate Elon Musk’s Tesla.)
Goldman titled their report “AI-nxiety,” which doesn’t quite work as a pun, but nails the nature of the supposed crisis - people are worried about what might happen next, not about anything happening now. That’s not unusual in markets, but it is unusual that the people selling the tools have been so aggressive in trying to convince the entire economy of how disruptive and dangerous they are.
This focus on the future is nothing new for the people developing A.I. OpenAI and Anthropic come very directly out of the Rationalist and Effective Altruist communities, which have for years shown a huge willingness to exaggerate their own projections of AI development timelines and impacts, driven by financial incentives rather than engineering realities.
The admirably self-reflexive Rationalist Jessica Taylor, well before the AI boom took off, called this “The A.I. Timelines Scam.” Unrealistically short timelines for the arrival of AGI, Taylor wrote, make it easier “to justify receiving large amounts of money … if it is, in fact, possible to develop AI soon. So, there is an economic pressure towards inflating estimates of the chance AI will be developed soon.”

This brings us back to Citrini. The coauthor of the report was Alap Shah, who is a builder and investor in AI companies. So the report isn’t a bearish takedown of the economy so much as, yet again, a promise of what AI is purportedly able to accomplish.
Like all AI hype and fear, it’s ultimately an advertisement for Shah’s bags, aimed at potential exit liquidity.
For the full rundown of the deceptions of Rationalism and Effective Altruism, be sure to pick up my book Stealing the Future.

The entire AI “movement” is comically in thrall to the power of bold (and baseless) predictions to drive self-enriching attention and investment. This is most pathetically displayed in recent interviews with Ray Kurzweil. If you look it up on Google, you’ll see Kurzweil credited with a firm prediction that AGI will emerge by 2029 - but the reality is that he has moved the goalposts on that prediction many times, and continues to do so.
If this sounds familiar, it’s probably because Elon Musk has been moving the goalposts on self-driving cars for about a decade now. Tesla’s status as essentially a meme stock has shielded it from market discipline, but these decade-old lies (which continue to undermine the Robotaxi today) have turned the stock into a time bomb of apocalyptic scale. The even bigger bomb now, of course, is OpenAI and the entire AI-Narrative Complex.
Elon has also recently engaged in some even more profoundly dangerous predicting - throughout November and December of 2025, he was pushing the idea that nobody needed to save for retirement because AGI would usher in universal high income. He predicted that “poverty will cease to exist” while at a conference in the UAE.
The cynicism of this messaging, coming from a man clearly willing to lie to God for a buck, is truly revolting.
Going back to Feb 2024, Elon was also touting the inevitability of universal basic income (or UBI) after the disappearance of jobs. This was also a major point of interest for Sam Altman: before OpenAI began taking up his time, delivering UBI was the ostensible rationale behind Worldcoin, now World, a cryptocurrency token notorious for harvesting the biometric data of needy people in the developing world in exchange for tokens whose value has Splatooned by about 90% from its 2024 peak. So much for UBI, but they’ll keep the investor money, thanks.

All of this, including Citrini’s report, should be seen in the light of another concept dear to many of the authoritarian broligarchs behind tech development and investing: Nick Land’s theory of Hyperstition. Land is a philosopher who has substantially influenced the “alt-right” movement, and he coined the term before his own full degradation into a Curtis Yarvin-tier “hyper-racist” bloviator. But the concept has become integrated into the far-right socio-political framework alongside the (largely misappropriated and misunderstood) likes of Rene Girard and Leo Strauss.
“Hyperstition,” a portmanteau of “hype” and “superstition,” is the belief that creating narratives about the future, and spreading them widely enough among the populace, can actually make those myths come true. While Land initially posited this as something closer to a descriptive analysis of social and communication forces in the 21st century, you can perhaps see why authoritarians like Peter Thiel would look at Land’s analysis and adopt it as a conscious strategy, in much the same way Thiel completely misunderstands Girard’s point about mimetic desire.
Writing in fragmentary poetics in Fanged Noumena, Land describes “Digital Hyperstition — brands of the outside — real components of numerical fictions that make themselves real (…)”
It is, certainly in the degraded form in which its biggest champions deploy it, only a slightly more developed version of The Secret, the “Law of Attraction,” “Manifesting,” or Norman Vincent Peale’s “Power of Positive Thinking.” They are all arguments that thinking can replace practical doing.
And of course, these “manifestation” discourses are often the ideological tools of financial scams.
As Victor Steuck writes in this excellent introductory overview and critique of Land’s concept, “hyperstition often oversimplifies the complex nature of societal change by suggesting that belief alone can reshape reality.”
That’s exactly what’s at issue in Citrini’s report, whose assumptions about the nature of economic change, and of productivity growth, critics have broadly disassembled. Most of all, Citrini adopts the AI industry’s blind assumption that AIs will uniformly replace knowledge work across the board. In practice, it’s becoming increasingly clear that the current generation of LLMs and related tools have strengths in data analytics and research, but serious weaknesses in, for instance, replacing customer service workers who actually interact with other humans.
Over time, as with any new tool, that will reshape human working life, and shift labor around the economy. But Occam’s Razor suggests Citrini’s goal, like that of the rest of the AI Doom Narrative Industrial Complex, is not to accurately predict anything, but to create the hyperstitional conditions to fulfill their own prophecy.
That is, by inducing AI fear, all of this is an effort to induce AI adoption.
Again, from the very cogent Steuck:
“Reliance on belief-driven narratives makes hyperstition particularly susceptible to manipulation. Influential individuals or groups can craft and disseminate hyperstitions to serve their own interests, potentially steering public opinion and behavior in directions that benefit a select few rather than the collective good.”
This is clearly what’s going on when Elon Musk says you don’t need to save for retirement. Less responsible saving, after all, means more current money free to … invest in speculative AI companies and inflated Tesla stock.
In this light, it’s notable that integrations like Microsoft Copilot are almost entirely imposed on workers from the top down, either by fearful managers who have been sold the narrative of competitive adoption, or by actual corporations making their own product shittier with workflows that don’t actually make sense or increase efficiency. Hence recent findings that nobody is actually using Copilot.
This echoes Steuck’s final major critique of hyperstition as a development model, rather than a descriptive analysis: “Hyperstition may foster a false sense of control or complacency. By placing emphasis on the power of collective belief to manifest reality, there’s a risk that people might prioritize spreading narratives over engaging in practical actions needed to address pressing issues.”
Microsoft and others simply imposing a new technology, without attention to the details of its useful implementation, shows exactly this reliance on narrative over reality.
And upstream from the resources misallocated into Copilot and the like, of course, is the much vaster misallocation into immense, planet-eroding data centers that might wind up being unproductive stranded capital. The whole package creates the overleveraged, over-invested conditions for economic catastrophe.
That’s the real doomsday scenario here: not that AI succeeds in wiping out all jobs, but that it is already wiping out jobs by promising too much and creating vast malinvestment through deceptive narratives.
I won’t delve too far into the other major AI Issue of the week, the “Department of War’s” bullying of Anthropic on its safety policies. But one reaction to the fight really caught my eye:

https://x.com/KatieMiller/status/2027889344363593876
Obviously, Katie Miller is a halfwit right-wing podcaster, but that’s why it’s worth paying attention: Normies have completely bought into the thesis that LLMs should be relied on to make decisions. More subtly, she posits that an AI can “seek truth,” imputing to Elon Musk’s Grok both intention and knowledge - neither of which any LLM possesses.

When I wrote about the fraudulent promotion of A.I. “consciousness” narratives nearly three years ago, I was expecting financial fraud. And we certainly got that, as with OpenAI’s new $110 billion investment round, which the company seems most likely to just set on fire.
Of course, as other cynics have observed, the Iran situation also offers some explanation for this seemingly absurd bet: the money isn’t really to help create better AI video or to upgrade ChatGPT, but to put OpenAI more effectively in the service of the U.S. war machine.
So the fraud here, in this creation of a false horizon touting false applications of a false technology, isn’t just about the “A.I. Timelines Scam” of attracting investor money as fast as possible.
It’s also a fraud against our understanding of the truth, judgment, and human life. And we can’t afford to have those eroded any further.
Dark Markets is an entirely independent and reader-supported publication. To receive all new posts and support my work, consider becoming a paid subscriber.


