The Paragraph version of That Was The Week

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Thanks and Thanksgiving to: @bchesky, @lmatsakis, @reedalbergotti, @bentaylordata, @broderick, @benthompson, @coryweinberg, @rex_woodbury, @jglasner, @chamath, @Kantrowitz, @jacqmelinek, @cookie, @sarahpereztc, @om, @levie
It’s Thanksgiving here in Palo Alto, and I should thank all the writers and producers whose work I read each week for the stimulation and provocation they provide. In case you do not all realize it, you are appreciated.
I always try to call out the creators of the content I curate at the top of That Was The Week, and I will continue to do so. Many appear every week.
Let us start with Brian Chesky, of Airbnb fame, this week. His X post is apt and to the point.

I called this week’s newsletter A Tale of Two Weeks. Friends kindly noted that it has only been one week. It felt like two, and, to Brian Chesky’s point, we learned a lot.
EA and e/acc are now part of everyday conversation in tech circles. And we are starting to understand that ideology (or philosophy) plays a significant role in strategy. People are forced to take sides.
Last week, the EA (effective altruism) camp looked to be in the ascendancy. But this week, over 700 OpenAI employees sided with Sam Altman and Greg Brockman, resulting in their return to lead the company. The e/acc camp won. And that has led many advocates of effective altruism to question its relevance to startups.
There are also some new acronyms or labels to learn. Marc Andreessen is reposting @beffjezos on X, mentioning Decels.

The opponents of effective accelerationism (e/acc) are now known as decelerators, or Decels, and are said to want the power to stop innovation; hence the charge of authoritarianism.
The e/acc lobby sees itself as humanistic at its core, pro-innovation. It regards the Decels as authoritarian because they set themselves up as the moral arbiters of humanity.
Reading below, I also learned that similar schisms exist on TikTok. In this case the acronym is BRG, and apparently it is akin to e/acc.
From Ryan Broderick:
Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley — and OpenAI, specifically — right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts and end up developing what are essentially two competing religions and could then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are…
Which means it’s worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
I'm afraid I have to disagree with Ryan that the disputes are “stupid.” Nor do I agree with his characterization of the two sides:
The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately
Last week’s editorial explains the differences quite well. But suffice it to say that Ryan’s definitions are way off. Both movements are ill-defined and have yet to be thoroughly articulated. It would be unfair to create a binary world of only two views or to assume any individual is a 100% clone of groupthink. Many good people are grappling with critical philosophical issues. Most have not defined themselves into a strict camp.
So why am I inclined to side with the e/acc camp?
Mainly because e/acc is against a fear-driven elitist attempt to stop or slow down AI innovation; for me, that is sufficient. I see no evidence of anti-democratic or technocratic thinking. Indeed, with Altman and his Worldcoin project, there is evidence of the opposite: a genuine belief in using the wealth created by automation to serve human progress.
The entire European Enlightenment that modern democracy was predicated on was broadly e/acc-like.
Once we accept the EA view that self-imposed limits to innovation are required, especially limits placed in the hands of elites and argued for out of fear, we give up on the human ability to innovate freely using science alone.
Of course, science needs to align with human good. So far, there is no evidence that AI does not.
At the core of democracy is wealth creation. At the heart of wealth creation is innovation. Most innovators are, by nature, rule breakers. The right to break the rules and learn is at the very essence of civilization. Once the rules win, the rulers win. And rulers are usually self-interested and not aligned with human evolution. For that reason, I’m with e/acc, not EA.
Much of this week’s curated content has these threads running through it. Clarity of purpose around innovation is essential.
Louise Matsakis and Reed Albergotti
Updated Nov 21, 2023, 12:29pm PST

One of the most prominent backers of the “effective altruism” movement at the heart of the ongoing turmoil at OpenAI, Skype co-founder Jaan Tallinn, told Semafor he is now questioning the merits of running companies based on the philosophy.
“The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” said Tallinn, who has poured millions into effective altruism-linked nonprofits and AI startups. “So the world should not rely on such governance working as intended.”
His comments are part of a growing backlash against effective altruism and its arguments about the risks AI poses to humanity, which has snowballed over the last few days into the movement’s second major crisis in a year.
The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.
EA then played a role in the meltdown at OpenAI when its nonprofit board of directors — tasked solely with ensuring the company’s artificial intelligence models are “broadly beneficial” to all of humanity — abruptly fired CEO Sam Altman on Friday, creating a standoff that currently threatens its entire existence.
Three of the six seats on OpenAI’s board are occupied by people with deep ties to effective altruism: think tank researcher Helen Toner, Quora CEO Adam D’Angelo, and RAND scientist Tasha McCauley. A fourth member, OpenAI co-founder and chief scientist Ilya Sutskever, also holds views on AI that are generally sympathetic to EA.
Until a few days ago, OpenAI and its biggest corporate backers didn’t seem to think there was anything worrisome about this unusual governance structure. The president of Microsoft, which has invested $13 billion into OpenAI, argued earlier this month that the ChatGPT maker’s status as a nonprofit was what made it more trustworthy than competitors like Meta.
“People get hung up on structure,” Vinod Khosla, whose venture capital firm was among the first to invest in OpenAI’s for-profit subsidiary in 2019, said at an AI conference last week. “If you’re talking about changing the world, who freaking cares?”
Less than a week later, Khosla and other OpenAI investors are now left with shares of uncertain value. So far, it looks like Microsoft and Altman — a billionaire serial entrepreneur — are successfully outmaneuvering the effective altruists on the board. Nearly all of OpenAI’s employees have threatened to quit if the directors don’t resign, saying they will instead join their former boss on a new team at Microsoft.
Effective altruism emerged over a decade ago with a new way to think about helping the world. Instead of donating to causes that people found personally compelling, its leaders urged adherents to consider invented variables like “expected value,” which they said could be used to objectively determine where their impact would be the greatest.
EAs initially focused mostly on issues like animal welfare and global poverty, but over time, worries about an AI-fueled apocalypse became a central focus. With funding from deep-pocketed donors like billionaire and Facebook co-founder Dustin Moskovitz, they built their own insular universe to study AI safety, including a web of nonprofits and research organizations, forecasting centers, conferences, and web forums.
Toner and McCauley are both leaders of major EA groups backed by Moskovitz. McCauley sits on the board of Effective Ventures, one of the most important institutions of the movement. Earlier this year, Oxford philosophy professor William MacAskill, perhaps the most famous EA ever, named her as one of a small group of “senior figures.” Toner is the director of strategy at the Center for Security and Emerging Technology (CSET), a think tank at Georgetown University funded by Moskovitz’s grant-making organization Open Philanthropy.
Before joining CSET, Toner worked at Open Philanthropy and several other EA-linked organizations, and she and McCauley also sit on the board of another one called the Centre for the Governance of AI. D’Angelo, meanwhile, worked with Moskovitz at Facebook in the early aughts and sits on the board of his software company Asana. He has repeatedly echoed similar concerns about AI espoused by EA.
Despite these connections, OpenAI told a journalist in September that “none of our board members are effective altruists.” It argued that their interactions with the EA movement were largely “focused on topics related to AI safety.”
JEPSON TAYLOR, NOV 19, 2023

The engagement of non-tech individuals in this Silicon Valley drama has been fun; the real-life writers for this season have truly excelled 🔥! While it's all speculative and merely my perspective, I struggle to envision a scenario in which OpenAI isn't compelled to reinstate Sam Altman as soon as possible. The crux of the matter lies in the fact that Sam is synonymous with the funding. The investors' commitment was to Sam, his vision, and his renowned capacity to push technological boundaries, not to other figures like Adam D’Angelo, Ilya Sutskever, Mira Murati, Helen Toner, and so forth. As far as the investors are concerned, most of them can go away.
Sam finds himself in an enviable position, with a couple of paths he could take. Additionally, he boasts one of the most formidable global networks in terms of capital access. Should he decide to seek funding for a new venture, which seems advisable, he is likely to break records for the amount of seed capital he raises.
Option one involves starting a new venture. By launching a fresh company with a new structure, Sam would gain more control. This presents a risk for Microsoft, as Sam could explore alternative funding sources. That said, I’m sure Sam doesn’t want to piss off Microsoft, so he will have some loyalty there. The talent is loyal to Sam, not Microsoft. Furthermore, the announcement of a new venture could spell disaster for OpenAI, potentially triggering a mass exodus of talent eager to join Sam's new initiative. There is way more upside in a smaller venture. For Sam, if his ambition ever leaned towards joining the billionaire ranks, this route would be the most promising.
Option two is to return to OpenAI, but with a revamped organizational structure. It's conceivable to picture the original OpenAI board members convening with Satya Nadella and Sam Altman, attempting to dissect what went awry and seeking a resolution. However, that scenario seems unlikely. Satya is probably still upset about being left out of the loop, complicating matters for Microsoft. He may view them as contributing no value, leading to the board's eventual dismissal. I think Sam will hold significant equity in this new structure, just my guess, to dissuade him from pursuing option one.
Fantastic, a new board acting as marionettes under Sam's control! Yet, this isn't entirely beneficial either. OpenAI is in need of assistance; they lack the necessary product and enterprise experience to take the company where it should be. Consequently, a robust board would be advantageous to provide guidance and navigate through the company's existing deficiencies. There's been a degree of sloppiness, and it's time for them to adopt the demeanor of a bona fide enterprise. Maintaining their lead depends on it.

The thorniest part of this saga isn't the key players like Sam, Satya, Ilya, or Mira; it's the lawyers. They're a nuisance, comparable to African wild dogs encircling a carcass, eager for a piece of the action. If the OpenAI board decides to make a sacrificial move regarding the company's structure to stand on the 'right side of history,' the only winners will be the lawyers, feasting on what remains of the company's dry powder as its valuation plummets. What was once a skyrocketing enterprise may well become a crater where all the money went. Self-preserving employees, more focused on individual gain than collective success, will likely abandon ship and aim for a position in Sam's new venture—if they manage to get through the door. Anyone reading this might as well aim to join Sam's new venture. Those who were against Sam will find fundraising and talent acquisition far more challenging, having marked themselves as overly cautious, the ones who pushed back. Investors aren't looking for caution; they crave acceleration, or better yet, a nitrous oxide boost.
RYAN BRODERICK, NOV 22, 2023
More than a few readers asked me to look into a new TikTok subculture called BRG, which stands for Based Redacted Gang or Based Retard Gang, depending on what social platform you’re on. The group was investigated recently by “maia arson crimew,” the “mentally ill enby polyam trans lesbian anarchist kitten” hacker that got ahold of the TSA no fly list earlier this year. I’m aware that this is already A Lot Of Internet, but this is only going to get weirder the further we go.
I was surprised to find out that BRG, what I’ll be referring to this group as from here on out, is actually, itself, just a rebrand of an online collective I’ve covered a few times in this newsletter — the Remilia Corporation.
I first came across Remilia in January 2022. They were the organizers behind the failed Spice DAO project. A bunch of crypto evangelists got into a Discord and tried to raise money to buy Alejandro Jodorowsky’s story bible for Dune so they could turn it into an animated Netflix series. Except they didn’t seem to realize that simply buying a copy of the book wouldn’t grant them the actual rights to do anything with it other than look at it. The DAO bought it for 2.6 million euros, regardless, and disbanded a few months later.
Remilia then launched a line of NFTs called Milady Maker, which allowed users to make customized chibi anime avatars and it quickly turned very racist. I interviewed the former “head” of Remilia for Fast Company at the time, a pseudonymous user that went by Charlotte Fang, among other names. And came away from it with more questions than answers, but I think I got the gist.
Remilia started as a Twitter DM group of edgy shitposters that wanted to court the attention of reactionary bloggers like technofeudalist Curtis Yarvin, who was building a crypto-friendly operating system called Urbit. And Urbit was getting funding from Peter Thiel, which Yarvin was then using to bankroll a lot of the fascist digital “art” coming out of New York’s Downtown Scene in the years immediately following the pandemic. Remilia’s whole schtick was reinventing “schizoposting” for Gen Z. Schizoposting is a 4chan trend where users pretend to — or actually have, I guess — a mental breakdown via memes. It’s been linked to at least one mass shooting, but it’s not always political.
Anyways, my understanding is around the time the Spice DAO was imploding and Remilia was getting “canceled” for Milady Maker going off the rails, there was a schism inside of the group, with some members wanting to go legit and become a proper crypto startup with others wanting to lean further into extremely esoteric occult fascism. A story as old as time. I lost touch with them after that, but I’ve seen a few Milady avatar accounts try recently to pivot into being AI guys.

Which brings us to BRG. I thought startup investor Julie Fredrickson’s X thread on BRG was a pretty useful explanation of what’s happening here: “An aesthetic ironic meme account that combines the aesthetic of AI-generated content, Chinese propaganda, fake woke spiritualism, hyperpop/glitchcore, and schizoposting.” That is, if you can understand any of that.
More simply, I would just say that it’s a bunch of trolls trying to trick young women on TikTok into sharing far-right memes and hopefully convince a couple of them to buy some NFTs. And, incidentally, BRG’s perceived success seems to hinge on the same right-wing misunderstanding about TikTok and its scale that every other conservative group has right now. A lot of BRG accounts are proudly declaring that they’ve infiltrated and schizopilled the social network. And yeah, one of their videos has gone sorta viral, which was just a Remilia copypasta dubbed over footage likely stolen from a Chinese video app. But most of the BRG-related content I’ve seen is borderline invisible in terms of views on the app and is largely being shared by other BRG-affiliated accounts.
This is where you ask if any of this matters. And the answer is, unfortunately, yes. Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley — and OpenAI, specifically — right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts and end up developing what are essentially two competing religions and could then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are…
Which means it’s worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately. Of course, as is the case with everything in Silicon Valley, all of this is predicated on the unwavering belief in its own importance. So it’s very possible that if we were to take the actually longtermist view of all of this, we’d actually end up looking back at this whole thing as a bunch a weird nerds fighting over Reddit threads.
Posted on Monday, November 20, 2023
I have, as you might expect, authored several versions of this Article, both in my head and on the page, as the most extraordinary weekend of my career has unfolded. To briefly summarize:
On Friday, then-CEO Sam Altman was fired from OpenAI by the board that governs the non-profit; then-President Greg Brockman was removed from the board and subsequently resigned.
Over the weekend rumors surged that Altman was negotiating his return, only for OpenAI to hire former Twitch CEO Emmett Shear as CEO.
Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft.
This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.
Microsoft’s gain, meanwhile, is OpenAI’s loss, which is dependent on the Redmond-based company for both money and compute: the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChatGPT placed on the company (more on this below).
The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.
OpenAI was founded in 2015 as a “non-profit intelligence research company.” From the initial blog post:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
I was pretty cynical about the motivations of OpenAI’s founders, at least Altman and Elon Musk; I wrote in a Daily Update:
Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence.
Whatever Altman and Musk’s motivations, the decision to make OpenAI a non-profit wasn’t just talk: the company is a 501(c)3; you can view their annual IRS filings here. The first question on Form 990 asks the organization to “Briefly describe the organization’s mission or most significant activities”; the first filing in 2016 stated:
OpenAI's goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible. We're trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.
Two years later, and the commitment to “openly share our plans and capabilities along the way” was gone; three years after that and the goal of “advanc[ing] digital intelligence” was replaced by “build[ing] general-purpose artificial intelligence”.
In 2018 Musk, according to a Semafor report earlier this year, attempted to take over the company, but was rebuffed; he left the board and, more critically, stopped paying for OpenAI’s operations. That led to the second critical piece of background: faced with the need to pay for massive amounts of compute power, Altman, now firmly in charge of OpenAI, created OpenAI Global, LLC, a capped profit company with Microsoft as minority owner. This image of OpenAI’s current structure is from their website:
Nov. 21, 2023 11:30 AM PST

Quora CEO Adam D’Angelo, one of the central figures in the power struggle over OpenAI, has gained a reputation for stubbornness among Quora employees, a facet of his personality that may be playing into the debate on OpenAI’s board about whether to reinstate Sam Altman as CEO.
At the question-and-answer site, there was “never” an example where an employee convinced D’Angelo to change his position, a former Quora employee said. Another former employee said it was hard to earn D’Angelo’s trust but it could be lost quickly. Considered a brilliant engineer committed to disseminating better information on the internet, he kept a low profile and resisted attempts to invest in marketing or media relations for the site, which could draw in more users. And for years, he kept only one additional voting board member at Quora, friend and investor Matt Cohler of Benchmark.
THE TAKEAWAY
• OpenAI board member D’Angelo spent weekend actively recruiting tech leaders to be OpenAI’s new CEO
• D’Angelo has been ‘occupying the most oxygen’ among board members in talks between Altman and the board
• Former employees at D’Angelo’s company, Quora, paint him as hard to convince
Days after OpenAI’s board fired Altman, one of its members—chief scientist Ilya Sutskever—reversed course, tweeting that he regretted his actions. So far, though, there’s no sign other board members have changed their mind. D’Angelo, for his part, was recruiting tech leaders—including Anthropic’s Dario Amodei—to serve as OpenAI’s new head over the weekend, frustrating Altman’s attempts at a return, people familiar with the matter said.
D’Angelo, a high school friend of Mark Zuckerberg who parlayed an early engineering job building ad systems at Facebook into tremendous wealth, appears to be the fulcrum of finding a resolution. In recent days, D’Angelo has been “occupying the most oxygen” among board members in talks between Altman and the board, a person close to the talks said.
D’Angelo, who has been a board member at OpenAI for five years, is now the tech insider with perhaps the most to lose or gain reputationally in the stalemate over Sam Altman’s firing from the startup. Out of the three remaining board members at the company, he is the only person who many in the tech world, including Microsoft CEO Satya Nadella, know well. He sits firmly in an inner circle of Silicon Valley entrepreneurs who invest in each other’s companies, recommend software engineering talent to each other and pontificate about the future of technology.
But events of the past few days have subjected him to online sniping from industry peers, and even Elon Musk has called on him to explain why the board ousted Altman. He faces immense pressure to reverse course. More than 710 OpenAI employees signed a letter saying they’ll jump ship to Microsoft if Altman isn’t reinstated. Investors in OpenAI, who stand to lose piles of money should employees walk out, are preparing legal action against board members, a person familiar with the matter said.
D’Angelo has seen some parallels on a smaller scale at Quora, which he founded and has run for 14 years. He sometimes fired executives abruptly without considering the impact on employees, three former employees said. One firing a few years ago rankled staff so much that a few dozen refused to show up to work the day after the dismissal, they said. D’Angelo sometimes frustrated employees by not explaining why he had fired executives.
The board’s exact reasons for firing Altman and keeping him out of the company remain a mystery. The board has said Altman was “not consistently candid” with members. An OpenAI memo clarified that Altman wasn’t involved in what it described as “malfeasance.” Even Altman skeptics, who say the former Y Combinator executive is prone to exaggerations and tangled investments, are trying to figure out the exact grounds for his termination.
D’Angelo didn’t return requests for comment.
REX WOODBURY, NOV 22, 2023
Venture is a power law business.
Last week’s math exercise made that point: it’s often 1, 2, 3 companies driving a fund’s returns. Top Seed funds often have a 40% loss ratio, meaning that 40% of investments are zeroes. The middle third of companies might comprise ~20% of returns, and the top 20-30% (often just a handful of companies) will drive the remaining ~80% of returns.
That portfolio construction model was oversimplified, assuming that only 3 of 25 companies return any capital. But it got the point across: having outlier companies—and maintaining good ownership in them—is crucial.
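That oversimplified arithmetic can be sketched in a few lines. The figures below (a $50M fund, 25 companies, a 40% loss ratio, and three outlier exits) are hypothetical illustrations consistent with the ratios above, not any actual fund's model:

```python
# Hypothetical 25-company seed portfolio illustrating power-law returns.
# All dollar figures are invented for illustration.

fund_size = 50.0  # $50M fund (hypothetical)

# Capital returned to the fund, in $M, per company:
# 10 zeroes (a 40% loss ratio), 12 modest outcomes, 3 outliers.
outcomes = [0.0] * 10 + [2.0] * 12 + [60.0, 120.0, 300.0]

total_returned = sum(outcomes)
multiple = total_returned / fund_size

# Share of returns driven by the top three companies.
top_three = sum(sorted(outcomes)[-3:])

print(f"Fund multiple: {multiple:.1f}x")                              # 10.1x
print(f"Top 3 share of returns: {top_three / total_returned:.0%}")    # 95%
```

With these made-up numbers, three companies out of 25 generate roughly 95% of the fund's returns, and the single largest outcome alone returns the fund six times over — which is the point: one outlier can move a fund from 2x to 10x+.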

Last week’s Seed Investing: The State of the Union dug into the current and future market dynamics for early-stage venture. We also launched the Daybreak website and shared that we’re allocating a portion of Daybreak Fund I to members of the Digital Native community. Most of the Limited Partner base will be traditional venture LPs, but it’s important to me to keep a portion for people who have been part of this community. It’s a small step toward democratizing venture.
Thank you to everyone who filled out the form last week—I’ll aim to get back to folks next week after the holiday. If you’re interested in being an LP in Daybreak, you can fill out the form here:
The power law nature of venture means that picking the right companies is everything. Picking is the focus this week.
A single company can shift a fund from 2x to 10x+. My view—which underpins how we invest at Daybreak—is that every Seed investment should be a potential fund-returner. This isn’t a business of first- and second-base hits; it’s a business of home runs. Early-stage investors and venture-backed founders should swing for the fences. Venture isn’t a product for everyone—in fact, it’s not the right product for most companies; more on that later. But it’s the right product for companies with the potential to be compounding, enduring, market-transforming businesses.
Many Digital Native pieces focus on the big macro shifts—in technology and in human behavior—and on the startups that ride and accelerate those shifts. I don’t think I’ve written about the actual nuts and bolts of investing before. So this piece is a little different: the goal is to dig into what to look for when evaluating whether a company can become a $5B+ business.
I’ll build the piece around Market, Product, and Founder:
MARKET: What Can Go Right
PRODUCT: Platforms & Networks
FOUNDER: Clarity of Thought & X-Factor
Let’s dive in👇
One hot take I have: market size doesn’t really matter.
I wrote about this back in March—in, of all pieces, a piece called What Taylor Swift Can Teach Us About Business. How does Swift relate to market sizing? RCA Records had the chance to sign a young Taylor Swift, but they didn’t think there was a sizable market for a teenage country singer. After all, country music was on the decline with young listeners. So RCA made a critical mistake: they underestimated Swift and her market. It turns out, of course, that Swift vastly expanded the country market before further expanding her empire to encompass mainstream pop (e.g., 1989), indie-folk (e.g., Folklore & Evermore), and even some questionable dubstep (I Knew You Were Trouble) and rap (End Game, not my favorite).
The lesson: great products and great founders expand markets.

The same truism extends to startups.
I try not to get too caught up in Total Addressable Market (TAM) analyses, unless a founder is going after a really niche market or doesn’t have a good sense of how their product will expand the market.
Startup history is littered with market sizing mistakes that caused early investors to miss the train.
A classic example is Uber. As Bill Gurley outlines in a great post from 2014, many experts dramatically underestimated Uber’s market opportunity. One finance professor at NYU, Aswath Damodaran, wrote in Uber’s early days: “For my base case valuation, I’m going to assume that the primary market Uber is targeting is the global taxi and car-service market.” He arrived at a TAM of $100 billion.
Damodaran’s error, of course, was not recognizing that Uber’s product could expand the market. Uber was a 10x better offering than taxis:
✅ Coverage density was higher, which drove down average wait times to under five minutes
✅ Geolocation on mobile devices enabled anyone to call a car from nearly anywhere
✅ Payment was done via mobile, meaning customers didn’t need to carry cash
✅ The dual rating system ensured quality
✅ Digital record of each ride meant that Ubers were safer than cabs
In New York, the taxi capital of America, ride-hailing apps quickly overtook traditional cabs, revealing that a 10x better product could crowd in more riders.

As Gurley puts it: “The past can be a poor guide for the future if the future offering is materially different than the past.” I always liked this tweet from Box’s Aaron Levie:

Gurley also points to a similar mistake made by McKinsey, forty years ago:
“In 1980, McKinsey & Company was commissioned by AT&T (whose Bell Labs had invented cellular telephony) to forecast cell phone penetration in the U.S. by 2000. The consultant’s prediction, 900,000 subscribers, was less than 1% of the actual figure, 109 million. Based on this legendary mistake, AT&T decided there was not much future to these toys. A decade later, to rejoin the cellular market, AT&T had to acquire McCaw Cellular for $12.6 billion. By 2011, the number of subscribers worldwide had surpassed 5 billion and cellular communication had become an unprecedented technological revolution.”
Oops.
Great products expand markets. I often ask founders the question: what do you think has to go right for this to be a massive business?
Many startups have defied their markets. Airbnb popularized home-sharing. Tesla brought electric vehicles mainstream. Red Bull effectively created the energy drink market, now worth $53B globally and expected to grow 7.2% a year through 2027. (For its troubles, Red Bull owns 43% of the market.)
The list goes on. Many investors I know regret passing on Benchling, which makes software for life sciences, because they thought life sciences was too small an opportunity; Benchling proved them wrong. One investor I know passed on Snap because how could the market for disappearing photos be very large? (The answer is that a product visionary like Evan Spiegel could create a large market.) Others passed on Figma because their TAM analysis focused on the number of designers; they missed the key insight that Figma’s real-time collaboration made design a cross-functional discipline, with engineers and product leaders and management also becoming paid users.

Figma offers a nice segue into market timing.
Market sizing I don’t believe in; market timing I very much do. In fact, outside of the entrepreneur, timing might be the single greatest determining factor for a startup’s success.
Figma is a good jumping-off point. Figma benefited from an exceptional team; Dylan and Evan are extraordinary. But Figma also rode the wave of WebGL, which renders high-fidelity, interactive 2D and 3D graphics in the browser.
Its timing was perfect. WebGL came out in 2011; Figma was born in 2012. As co-founder and CEO Dylan Field puts it:
It was like, “Okay, WebGL lets you use the GPU, your computer, and the browser. What can we do with that?” So we started proving it out. [Evan] had made a bunch of tech demos already, and we started to look at it in the context of professional-grade tools and eventually interface design.
The more we built with WebGL, the more confident we were that this could be a technology we could use to go build a professional-grade interface design tool. But no one believed us. I kept trying to recruit people, and I found that if I didn’t show up and immediately open my laptop to show them the tool working, they just wouldn’t believe me.
In the early 2010s, Sketch and InVision were the would-be Adobe disruptors. Figma leapfrogged them both.
When it comes to market timing, there are many stories of companies that were too early. In the early 2000s, venture capitalists—including Sequoia and Benchmark—invested a total of $396M in Webvan, an online grocery delivery business. At its peak in 2000, Webvan brought in $179M in sales. But it had $525M in expenses that same year, and three years into its operations it declared bankruptcy. (Fun fact: Webvan was founded by Louis Borders, who also founded Borders Books.)
Two decades later, Instacart, a similar business to Webvan, has gone public and boasts a $7B+ market cap. Timing is crucial.
November 21, 2023

The share of U.S. venture funding going to companies in the San Francisco Bay Area hit a multiyear high this year, boosted largely by the AI boom.
Altogether, companies in the region pulled in $49.3 billion in seed through growth funding to date, per Crunchbase data. That represents approximately 41% of the entire U.S. total, the highest share in years.
For perspective, we charted out the percentage of U.S. startup investment going to Bay Area companies for the past five calendar years:

It should be noted, however, that the Northern California region is getting a larger slice of a much smaller pie. Overall, U.S. startup funding is down 40% in 2023 compared to the same period last year.
The San Francisco Bay Area, as the chart below illustrates, also saw a run-up to the 2021 peak, followed by a sharp decline. That said, funding this year is down 25% from the year-ago period — significantly better than the national average.

It’s possible to explain the Bay Area’s relatively resilient funding in two letters: AI.
San Francisco is home to the biggest fundraiser in the space, OpenAI, which has secured over $10.3 billion this year, principally from Microsoft. That alone accounts for about a fifth of the region’s funding.
Rival San Francisco-based AI unicorn Anthropic, meanwhile, pulled in at least $2.65 billion in known funding this year from lead backers including Amazon and Google. And Palo Alto-based Inflection AI, developer of large AI language models, raised $1.3 billion in a June financing led by Microsoft and Nvidia.
The funding spree comes as the Bay Area cements its status as a capital of artificial intelligence innovation. Much of the buzz is in San Francisco specifically, with the city’s Hayes Valley neighborhood apparently so densely populated with AI talent that it’s now also known as “Cerebral Valley.” It also helps that the world’s largest population of venture capitalists is nearby.
Deal flow is a mix of outsized rounds and smaller, earlier stage bets. So far this year, at least 27 Bay Area companies with an AI focus have raised rounds of $100 million or more, per Crunchbase data. Yet roughly two-thirds of the 250-plus rounds in the space are at pre-seed, seed or Series A, with a median size of around $6 million.
The Bay Area’s strong showing on the fundraising front comes amid a period of often critical coverage of the region, and San Francisco in particular. Images of sprawling homeless encampments, large-scale retail theft, and other emblems of urban blight populate social media feeds and headlines of select news outlets.
CHAMATH PALIHAPITIYA, NOV 19, 2023
On Friday, OpenAI ousted its co-founder Sam Altman as CEO. While OpenAI cites a lack of consistent candor in Altman’s dealings with the board as the key reason for his removal, there is widespread speculation about other motives behind his termination. These range from disputes concerning the profit vs nonprofit motives of the company to the discovery of artificial general intelligence, a type of AI that can surpass human intelligence for most tasks.
We wanted to look back at the history and corporate structure of OpenAI to understand how we got here. Here’s the story:
Inception and Early Strides (2015-2018)
OpenAI was founded in 2015 by Sam Altman, Elon Musk, Ilya Sutskever and Greg Brockman as a non-profit organization with the stated goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” The company assembled a team of the best researchers in the field of AI to pursue the goal of building AGI in a safe way.
The early years of OpenAI were marked with rapid experimentation. The company made significant progress on research in deep learning and reinforcement learning, and released ‘OpenAI Gym’ in 2016, a toolkit for developing and comparing reinforcement learning algorithms.
OpenAI showcased the capabilities of these reinforcement learning algorithms through its ‘OpenAI Five’ project in 2018, which trained five independent AI agents to play a complex multiplayer online battle arena game called ‘Dota 2’. Despite operating independently, these agents learned to work as a cohesive team to coordinate strategies within the game.
A crucial development occurred in June 2018. The company released a paper titled "Improving Language Understanding by Generative Pre-Training", which introduced the foundational architecture for the Generative Pre-trained Transformer model. This later evolved into ChatGPT, the company’s flagship product.
Transition From a Non-Profit (2019)
In 2019, OpenAI transitioned from a non-profit to a “capped-profit” model. According to the company’s blog post in 2019, OpenAI wanted to increase its ability to raise capital while still serving its mission, and “no pre-existing legal structure they knew of struck the right balance”. Per the IRS, for-profit and not-for-profit entities are fundamentally at odds with each other, so to combine the two competing concepts, OpenAI came up with a novel structure that allowed the non-profit to control the direction of a for-profit entity while providing investors a "capped" upside of 100x. This culminated in a $1Bn investment from Microsoft, marking the beginning of a key strategic relationship, but complicating the company’s organizational structure and incentives.
The non-profit entity, OpenAI Inc., became the sole controlling shareholder of the new for-profit entity OpenAI Global LLC, which answered to the board of the nonprofit and retained a fiduciary responsibility to the company’s nonprofit charter. Crucially, the board was responsible for determining when OpenAI attained artificial general intelligence (AGI), which the company defines as a “highly autonomous system that outperforms humans at most economically valuable work.”
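The “capped” upside mechanic amounts to simple arithmetic. The 100x cap comes from OpenAI’s 2019 announcement; the function and dollar figures below are a hypothetical sketch, not OpenAI’s actual waterfall:

```python
# Sketch of the capped-profit idea: investor upside is capped at a multiple of
# invested capital; everything beyond the cap flows to the nonprofit.
# The 100x default is from OpenAI's 2019 post; figures are hypothetical.
def split_proceeds(invested, gross_proceeds, cap_multiple=100):
    """Return (investor_take, nonprofit_take) under a capped-profit payout."""
    cap = invested * cap_multiple
    investor_take = min(gross_proceeds, cap)
    return investor_take, gross_proceeds - investor_take

# A hypothetical $1B investment against $250B of eventual proceeds:
inv, npo = split_proceeds(1e9, 250e9)
print(f"Investor: ${inv / 1e9:.0f}B, nonprofit: ${npo / 1e9:.0f}B")
# Investor: $100B, nonprofit: $150B
```

The design choice is that investors get venture-scale upside, but runaway AGI-scale profits would, in theory, accrue to the nonprofit rather than to shareholders.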
The structure of OpenAI is outlined below:

Becoming ChatGPT (2020-2023)
In 2020, bolstered by new funding, OpenAI unveiled GPT-3, a large language model (LLM) capable of understanding and generating convincing human-like text. This was a watershed moment for OpenAI and the broader AI community. As the company grew, its LLMs continued to become larger and more intelligent.
However, OpenAI's innovation didn't stop with language models. In 2021, the company expanded its horizons by launching Codex, a specialized AI model for programming, and DALL-E, an AI system adept at creating original artwork from text descriptions.
ALEX KANTROWITZ, NOV 22, 2023

Sam Altman is back. Improbably and dramatically, the ex-OpenAI CEO returned as CEO late Tuesday. Altman’s counter-coup swept out the three board members who sparked his firing, and the deal included an agreement to investigate what went down this past weekend. The new board — which now includes Larry Summers and Bret Taylor — will expand to up to nine members, likely including someone from Microsoft.
The AI field will not go back to ‘normal’ after this. OpenAI was already vulnerable coming into the chaos and will now have to work harder to maintain its lead while facing inspired competition. Though the narrative might frame this as a major win for OpenAI and Microsoft, the reality, as always, is a bit more nuanced. Here’s how the AI field changes after this:
18,000 Microsoft customers use its OpenAI service on Azure, and the disintegration of OpenAI would’ve left them scrambling. Microsoft had to return its OpenAI partnership to some order, and it could not have hired OpenAI’s entire staff and kept the OpenAI service running. So this is a positive resolution and a relief after a tense few days. There are still some governance issues to resolve. Microsoft doesn’t have an OpenAI board seat after all this. But this was the least bad option for Satya Nadella & co., who can now press forward with their industry-leading AI efforts, even if having Altman in-house would’ve paid dividends over time.
Companies building on OpenAI technology freaked out this past week. They trusted in a company that almost evaporated in a weekend. So today, those building on OpenAI are putting contingency plans in place should the situation repeat. The era of model agnosticism is really here. Soon, any serious AI company will be able to substitute OpenAI for Anthropic or any other competitor. Startup founders using OpenAI have already told me they’ve started work on it this week. OpenAI competitors are already trying to exploit the situation. “Utterly insane weekend. So sad. Wishing everyone involved the very best,” Inflection CEO Mustafa Suleyman wrote this week. In the next breath, he said: “Come run with us!”
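The contingency planning described above can be sketched as a thin abstraction layer: wrap each vendor behind one interface so the backend can be swapped or fall back on failure. The provider classes and fallback logic below are illustrative, not any real SDK:

```python
# Minimal sketch of "model agnosticism": one interface, swappable backends.
# Provider classes are stand-ins; real code would call each vendor's API.
from abc import ABC, abstractmethod

class ChatProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder for a real OpenAI API call.
        return f"[openai] {prompt}"

class AnthropicProvider(ChatProvider):
    def complete(self, prompt: str) -> str:
        # Placeholder for a real Anthropic API call.
        return f"[anthropic] {prompt}"

def complete_with_fallback(prompt: str, providers: list) -> str:
    """Try providers in order; move to the next one on any failure."""
    last_err = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except Exception as err:
            last_err = err
    raise RuntimeError("all providers failed") from last_err

print(complete_with_fallback("hello", [OpenAIProvider(), AnthropicProvider()]))
```

The point is organizational, not technical: once every call goes through an interface like this, replacing OpenAI with Anthropic (or an open-source model) is a config change rather than a rewrite.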
OpenAI sold the world’s top AI researchers on a vision and a safety valve: Join us, help us get closer to human-level artificial intelligence, and if things get unsafe, the board will step in. It was a win-win proposition that was ultimately a sham. The OpenAI board was poorly structured, almost blew up the company, and the new structure will be less safety-focused. This will open avenues for competitors to recruit researchers who otherwise might’ve gone to OpenAI. Meta chief AI scientist Yann LeCun is already endorsing the case that his team’s open-source focus will make it an unlikely winner. He might be right.
Altman has pushed an AI ‘safety’ agenda in Washington and globally, becoming a lobbying force. OpenAI’s corporate structure lent legitimacy to his efforts. The implicit message: We’re the AI safety company, not the for-profit, please listen to us and consider the following rules. With the non-profit board’s decision so quickly reversed after pressure from investors (and well-compensated employees), the myth will take a hit. OpenAI will now become one of the pack, without its special sheen, which will change its ability to influence policy.
OpenAI’s board was supposed to save us from an AI apocalypse. Then, it couldn’t think three steps ahead in a boardroom coup. Much of the blame rests with the specific individuals. But more broadly, it’s hard to imagine anyone will have confidence in our ability to stop harmful AI should we develop it. (And what if the board’s concerns in this area were legitimate?) The future of the AI safety field is in flux.
OpenAI’s chaos may be its own ladder. It moves forward with a board more sympathetic toward accelerating AI development. It will work more closely with Microsoft under the new structure, with fewer speedbumps along the way. And it may have some incredible products en route. But the chaos will also be a ladder to those OpenAI once had on their heels. And some competitor — whether it’s Anthropic, Inflection, Google, or others — will inevitably exploit the moment and rise.
You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king.
Y Combinator co-founder Paul Graham, 2008
Jacquelyn Melinek @jacqmelinek / 12:55 PM PST•November 21, 2023

It’s been an eventful week for crypto exchanges and the U.S. government.
Changpeng Zhao, also known as “CZ,” the founder and CEO of Binance, is stepping down and has pleaded guilty to a number of charges brought by the Department of Justice and other U.S. agencies. He appeared in a Seattle federal court on Tuesday to enter his plea.
Richard Teng, Binance’s former global head of regional markets, will be the exchange’s new CEO, Zhao shared in a post on X Tuesday afternoon. Teng was previously the CEO of the Financial Services Regulatory Authority at Abu Dhabi Global Market, among other executive roles. On stepping down, Zhao said “it is the right thing to do,” adding, “I made mistakes, and I must take responsibility.” Zhao will remain a shareholder and said he will be “available to the team to consult as needed.”
Binance, the world’s largest crypto exchange, has also agreed to pay about $4.3 billion to resolve the DOJ’s investigations, the agency said in a press release on Tuesday.
As a part of Binance’s guilty plea, it has also reached agreements with the Department of Treasury’s Financial Crimes Enforcement Network (FinCEN), the Office of Foreign Assets Control (OFAC) and the Commodity Futures Trading Commission (CFTC) and will credit about $1.8 billion toward those resolutions.
The crypto exchange “admits it engaged in anti-money laundering, unlicensed money transmitting and sanctions violations,” the DOJ release stated, calling it the “largest corporate resolution” that included criminal charges for an executive. Zhao pleaded guilty to failing to maintain an anti-money laundering program.
“The message here should be clear: using new technology to break the law does not make you a disruptor, it makes you a criminal,” U.S. Attorney General Merrick Garland said in a statement.
Connie Loizos @cookie / 2:19 PM PST•November 21, 2023

Tiger Global Management is going through a major management change. Per a message from founder Chase Coleman, sent this afternoon to investors of the 22-year-old venture and hedge fund outfit and obtained by TechCrunch, Coleman is taking over both the firm’s public company investing and private equity businesses. Scott Shleifer, the longtime head of the latter, becomes a senior advisor, a full-time position with no end date, per a source with knowledge of the maneuver.
According to Coleman, the decision was made by Shleifer, largely because Shleifer and his family have “made their home in Florida and want to stay there.” Meanwhile, wrote Coleman, “Tiger Global is operating in-person out of our New York offices,” and has “found that having everyone together in New York is highly productive and a better operating model for our firm.”
Tiger was founded by Coleman, a protégé of hedge fund pioneer Julian Robertson, in 2000. Shleifer joined two years later…
This is not the firm’s first major leadership change. In 2015, one of its investment heads, Feroz Dewan, left to set up his own investment firm, now called Arena Holdings Management, in New York.
Tiger’s private equity business was subsequently headed by Lee Fixel, who joined the firm in 2006 and stepped down to hang his own shingle in March of 2019. Fixel has subsequently raised a number of multibillion-dollar investment funds at that firm, called Addition.
After Fixel’s departure, Shleifer and Coleman continued as co-managers of the portfolios Fixel had overseen, with Shleifer taking over as head. But he assumed control at what in retrospect proved to be a treacherous period for the firm.
After announcing in January 2020 that Tiger Global had garnered $3.75 billion in commitments for its 12th fund, Shleifer put the pedal to the metal, overseeing an operation that made bold bets at a rapid-fire clip despite already heated valuations. For a time, investors were so happy with the strategy — which appeared to be working — that they awarded Tiger a whopping $12.7 billion vehicle that closed in March 2022 after just four months of fundraising.
Sarah Perez @sarahpereztc / 9:16 AM PST•November 22, 2023

Shortly after screenshots emerged showing xAI’s chatbot Grok appearing on X’s web app, X owner Elon Musk confirmed that Grok would be available to all of the company’s Premium+ subscribers sometime “next week.” While Musk’s pronouncements about time frames for product deliveries haven’t always held up, code developments in X’s own app revealed that Grok integration was already underway.
Yeah.
Grok should be available to all X Premium+ subscribers next week.
— Elon Musk (@elonmusk) November 22, 2023
This week, app researcher Nima Owji shared screenshots showing how Grok had been added to X’s web app, noting that its URL would be twitter.com/i/grok. In one screenshot, users who were not yet Premium+ subscribers were invited to upgrade to gain access to Grok. Another showed an “Ask Grok” text entry box for communicating with the AI chatbot. The features were not public-facing at the time of his discovery, but they suggested that Grok’s rollout was nearing.
Another image that shows how the Premium+ subscribers will be able to chat with @Grok! https://t.co/iye0SXwPe0 pic.twitter.com/Bcm2ohDrsS
— Nima Owji (@nima_owji) November 20, 2023
First released on November 4 to select testers, Grok is Musk’s answer to OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and others, and could potentially gain a following as part of X’s broader social platform.
In addition, xAI, the Musk-owned company behind Grok, promises that its chatbot will have more of a personality than rivals. It plans to respond to users’ questions “with a bit of a wit” and is said to have a “rebellious streak,” according to its website. The chatbot also plans to answer “spicy” questions that are rejected by other AI systems, the company has noted.
But personality alone won’t be key to Grok’s differentiation — it will also have access to real-time knowledge via the X platform, which could be an interesting component, if not one that leads to the highest accuracy in terms of its responses.
The addition could help juice sign-ups for X’s Premium subscription, which has yet to fare as well as Musk hoped. The X owner revamped Twitter Blue to become X Premium, promising paid verification among a host of other features, like increased exposure in replies, an edit button, the ability to publish longer posts and videos, and a reduction of ads.
NOVEMBER 22, 2023 | On Technology

The OpenAI Drama’s Season One is over. Now it’s time to go back to work or, better yet, spend time with our families over the long Thanksgiving weekend, instead of binging on every rumor, nuance, and hiccup.
i love openai, and everything i’ve done over the past few days has been in service of keeping this team and its mission together. when i decided to join msft on sun evening, it was clear that was the best path for me and the team. with the new board and w satya’s support, i’m looking forward to returning to openai, and building on our strong partnership with msft.
— Sam Altman.
Sam Altman and Greg Brockman have returned to OpenAI in their original roles as CEO and CTO. They are no longer part of the not-for-profit’s board, along with three other existing members — Tasha McCauley, Helen Toner, and Ilya Sutskever, OpenAI’s chief scientist. Adam D’Angelo remains on the board, while Bret Taylor, ex-co-CEO of Salesforce, becomes the chairman, and Larry Summers will join the board.
Now looking ahead — it is almost certain we will be back for Season Two. Why? Let’s start with the unanswered questions.
Why was Sam fired in the first place?
Tasha McCauley and Helen Toner won’t go quietly into the sunset.
If you have followed Ilya Sutskever over the years, another development is likely.
We should expect a lot of debate in the media over the new board composition.
OpenAI is highly influential for the future of the technology industry and humanity. If you are creating technologies that will have a significant impact, it’s important to prioritize transparency. At the most fundamental level, they should follow their own charter. If not, they should abandon the facade of a not-for-profit and instead become a for-profit company.
OpenAI plays a significant role in society, whether as a not-for-profit research lab or a for-profit behemoth. It is clear that they need to make future decisions based on the larger societal impact. The recent personnel drama doesn’t instill confidence. It is easy to overlook that when you have billions riding on a massive wave. Still, just looking back to the recent past is a good reminder that technology in the post-mobile era is less about technology itself and more about people.
Silicon Valley doesn’t really learn from the past: startups that relied too much on Facebook got double-crossed by the company and its self-interest. Similarly, those who bet their future on Amazon Web Services have had to face rising costs.
If you are a startup, an organization, or a large company, you need to learn from the past five days and start building resilience in your product plans. Whether you choose OpenAI’s commercial competitors, or better yet, bet on open source, remember that relying solely on this one entity is not prudent.

Let us start with Brian Chesky, of AirBnB fame, this week. His X post is apt and to the point.

I called this week’s newsletter A Tale of Two Weeks. Friends kindly noted that it has only been one week. It felt like two, and to Brian Chesky’s point, we learned a lot.
EA and e /acc are now part of everyday conversation in tech circles. And we are starting to understand that ideology (or philosophy) plays a significant role in strategy. People are forced to take sides.
Last week, the EA (effective altruism) camp looked in the ascendancy. But this week, over 700 OpenAI employees sided with Sam Altman and Greg Brockman, resulting in their return to lead the company. The e/acc camp won. And that leads many advocates of effective altruism to question its relevance to startups.
There are also some new acronyms or labels to learn. Marc Andreessen is reposting @beffjezos on X, mentioning Decels.

Opponents of effective accelerationism (e/acc) are now known as decelerators, or Decels. The charge against them is authoritarianism: Decels are said to want the power to stop innovation.
The e/acc lobby sees itself as humanistic at its core, championing innovation. It regards the Decels as authoritarian because they appoint themselves the moral arbiters of humanity.
Reading below, I also learned that similar schisms exist on TikTok. In this case, the label is BRG, and apparently it is like e/acc.
From Ryan Broderick:
Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley — and OpenAI, specifically — right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts and end up developing what are essentially two competing religions and could then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are…
Which means it’s worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
I'm afraid I have to disagree with Ryan that the disputes are “stupid.” Nor do I agree with his characterization of the two sides:
The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately
Last week’s editorial explains the differences quite well. But suffice it to say that Ryan’s definitions are way off. Both movements are ill-defined and have yet to be thoroughly articulated. It would be unfair to create a binary world of only two views, or to assume any individual is a 100% clone of groupthink. Many good people are grappling with critical philosophical issues. Most have not defined themselves into a strict camp.
So why am I inclined to side with the e /acc camp?
Mainly because e/acc is against a fear-driven elitist attempt to stop or slow down AI innovation; for me, that is sufficient. I see no evidence of anti-democratic or technocratic thinking. Indeed, with Altman and his Worldcoin project, there is evidence of the opposite: a genuine belief in using the wealth created by automation to serve human progress.
The entire European enlightenment that modern democracy was predicated on was broadly e /acc-like.
Once we accept the EA view that self-imposed limits to innovation, especially limits placed in the hands of elites and argued for out of fear, are required, we give up on the human ability to innovate freely using science alone.
Of course, science needs to align with human good. So far, there is no evidence that AI does not.
At the core of democracy is wealth creation. At the heart of wealth creation is innovation. Most innovators are, by nature, rule breakers. The right to break the rules and learn is at the very essence of civilization. Once the rules win, the rulers win. And rulers are usually self-interested and not aligned with human evolution. For that reason, I’m with e/acc, not EA.
Much of this week’s curated content has these threads running through it. Clarity of purpose around innovation is essential.
Louise Matsakis and Reed Albergotti
Updated Nov 21, 2023, 12:29pm PST

One of the most prominent backers of the “effective altruism” movement at the heart of the ongoing turmoil at OpenAI, Skype co-founder Jaan Tallinn, told Semafor he is now questioning the merits of running companies based on the philosophy.
“The OpenAI governance crisis highlights the fragility of voluntary EA-motivated governance schemes,” said Tallinn, who has poured millions into effective altruism-linked nonprofits and AI startups. “So the world should not rely on such governance working as intended.”
His comments are part of a growing backlash against effective altruism and its arguments about the risks AI poses to humanity, which has snowballed over the last few days into the movement’s second major crisis in a year.
The first was caused by the downfall of convicted crypto fraudster Sam Bankman-Fried, who was once among the leading figures of EA, an ideology that emerged in the elite corridors of Silicon Valley and Oxford University in the 2010s offering an alternative, utilitarian-infused approach to charitable giving.
EA then played a role in the meltdown at OpenAI when its nonprofit board of directors — tasked solely with ensuring the company’s artificial intelligence models are “broadly beneficial” to all of humanity — abruptly fired CEO Sam Altman on Friday, creating a standoff that currently threatens its entire existence.
Three of the six seats on OpenAI’s board are occupied by people with deep ties to effective altruism: think tank researcher Helen Toner, Quora CEO Adam D’Angelo, and RAND scientist Tasha McCauley. A fourth member, OpenAI co-founder and chief scientist Ilya Sutskever, also holds views on AI that are generally sympathetic to EA.
Until a few days ago, OpenAI and its biggest corporate backers didn’t seem to think there was anything worrisome about this unusual governance structure. The president of Microsoft, which has invested $13 billion into OpenAI, argued earlier this month that the ChatGPT maker’s status as a nonprofit was what made it more trustworthy than competitors like Meta.
“People get hung up on structure,” Vinod Khosla, whose venture capital firm was among the first to invest in OpenAI’s for-profit subsidiary in 2019, said at an AI conference last week. “If you’re talking about changing the world, who freaking cares?”
Less than a week later, Khosla and other OpenAI investors are now left with shares of uncertain value. So far, it looks like Microsoft and Altman — a billionaire serial entrepreneur — are successfully outmaneuvering the effective altruists on the board. Nearly all of OpenAI’s employees have threatened to quit if the directors don’t resign, saying they will instead join their former boss on a new team at Microsoft.
Effective altruism emerged over a decade ago with a new way to think about helping the world. Instead of donating to causes that people found personally compelling, its leaders urged adherents to consider invented variables like “expected value,” which they said could be used to objectively determine where their impact would be the greatest.
EAs initially focused mostly on issues like animal welfare and global poverty, but over time, worries about an AI-fueled apocalypse became a central focus. With funding from deep-pocketed donors like billionaire and Facebook co-founder Dustin Moskovitz, they built their own insular universe to study AI safety, including a web of nonprofits and research organizations, forecasting centers, conferences, and web forums.
Toner and McCauley are both leaders of major EA groups backed by Moskovitz. McCauley sits on the board of Effective Ventures, one of the most important institutions of the movement. Earlier this year, Oxford philosophy professor William MacAskill, perhaps the most famous EA ever, named her as one of a small group of “senior figures.” Toner is the director of strategy at the Center for Security and Emerging Technology (CSET), a think tank at Georgetown University funded by Moskovitz’s grant-making organization Open Philanthropy.
Before joining CSET, Toner worked at Open Philanthropy and several other EA-linked organizations, and she and McCauley also sit on the board of another one called the Centre for the Governance of AI. D’Angelo, meanwhile, worked with Moskovitz at Facebook in the early aughts and sits on the board of his software company Asana. He has repeatedly echoed similar concerns about AI espoused by EA.
Despite these connections, OpenAI told a journalist in September that “none of our board members are effective altruists.” It argued that their interactions with the EA movement were largely “focused on topics related to AI safety.”
JEPSON TAYLOR, NOV 19, 2023

The engagement of non-tech individuals in this Silicon Valley drama has been fun; the real-life writers for this season have truly excelled 🔥! While it's all speculative and merely my perspective, I struggle to envision a scenario in which OpenAI isn't compelled to reinstate Sam Altman as soon as possible. The crux of the matter lies in the fact that Sam is synonymous with the funding. The investors' commitment was to Sam, his vision, and his renowned capacity to push technological boundaries, not to other figures like Adam D’Angelo, Ilya Sutskever, Mira Murati, Helen Toner, and so forth. As far as the investors are concerned, most of them can go away.
Sam finds himself in an enviable position, with a couple of paths he could take. Additionally, he boasts one of the most formidable global networks in terms of capital access. Should he decide to seek funding for a new venture, which seems advisable, he is likely to break records for the amount of seed capital he raises.
Option one involves starting a new venture. By launching a fresh company with a new structure, Sam would gain more control. This presents a risk for Microsoft, as Sam could explore alternative funding sources. That said, I’m sure Sam doesn’t want to piss off Microsoft, so he will have some loyalty there. The talent is loyal to Sam, not Microsoft. Furthermore, the announcement of a new venture could spell disaster for OpenAI, potentially triggering a mass exodus of talent eager to join Sam's new initiative. There is way more upside in a smaller venture. For Sam, if his ambition ever leaned towards joining the billionaire ranks, this route would be the most promising.
Option two is to return to OpenAI, but with a revamped organizational structure. It's conceivable to picture the original OpenAI board members convening with Satya Nadella and Sam Altman, attempting to dissect what went awry and seeking a resolution. However, that scenario seems unlikely. Satya is probably still upset about being left out of the loop, complicating matters for Microsoft. He may view the board as contributing no value, leading to its eventual dismissal. I think Sam will hold significant equity in this new structure (just my guess) to dissuade him from pursuing option one.
Fantastic, a new board acting as marionettes under Sam's control! Yet, this isn't entirely beneficial either. OpenAI is in need of assistance; they lack the necessary product and enterprise experience to take the company where it should be. Consequently, a robust board would be advantageous to provide guidance and navigate through the company's existing deficiencies. There's been a degree of sloppiness, and it's time for them to adopt the demeanor of a bona fide enterprise. Maintaining their lead depends on it.

The thorniest part of this saga isn't the key players like Sam, Satya, Ilya, or Mira; it's the lawyers. They're a nuisance, comparable to African wild dogs encircling a carcass, eager for a piece of the action. If the OpenAI board decides to make a sacrificial move regarding the company's structure to stand on the 'right side of history,' the only winners will be the lawyers, feasting on what remains of the company's dry powder as its valuation plummets. What was once a skyrocketing enterprise may well become a crater where all the money went. Self-preserving employees, more focused on individual gain than collective success, will likely abandon ship and aim for a position in Sam's new venture—if they manage to get through the door. Anyone reading this might as well aim to join Sam's new venture. Those who were against Sam will find fundraising and talent acquisition far more challenging, having marked themselves as overly cautious, the ones who pushed back. Investors aren't looking for caution; they crave acceleration, or better yet, a nitrous oxide boost.
RYAN BRODERICK, NOV 22, 2023
More than a few readers asked me to look into a new TikTok subculture called BRG, which stands for Based Redacted Gang or Based Retard Gang, depending on what social platform you’re on. The group was investigated recently by “maia arson crimew,” the “mentally ill enby polyam trans lesbian anarchist kitten” hacker that got ahold of the TSA no fly list earlier this year. I’m aware that this is already A Lot Of Internet, but this is only going to get weirder the further we go.
I was surprised to find out that BRG, which is what I’ll call the group from here on out, is actually just a rebrand of an online collective I’ve covered a few times in this newsletter — the Remilia Corporation.
I first came across Remilia in January 2022. They were the organizers behind the failed Spice DAO project. A bunch of crypto evangelists got into a Discord and tried to raise money to buy Alejandro Jodorowsky’s story bible for Dune so they could turn it into an animated Netflix series. Except they didn’t seem to realize that simply buying a copy of the book wouldn’t grant them the actual rights to do anything with it other than look at it. The DAO bought it for 2.6 million euros, regardless, and disbanded a few months later.
Remilia then launched a line of NFTs called Milady Maker, which allowed users to make customized chibi anime avatars; it quickly turned very racist. I interviewed the former “head” of Remilia for Fast Company at the time, a pseudonymous user who went by Charlotte Fang, among other names, and came away from it with more questions than answers. But I think I got the gist.
Remilia started as a Twitter DM group of edgy shitposters that wanted to court the attention of reactionary bloggers like technofeudalist Curtis Yarvin, who was building a crypto-friendly operating system called Urbit. And Urbit was getting funding from Peter Thiel, which Yarvin was then using to bankroll a lot of the fascist digital “art” coming out of New York’s Downtown Scene in the years immediately following the pandemic. Remilia’s whole schtick was reinventing “schizoposting” for Gen Z. Schizoposting is a 4chan trend where users pretend to have — or actually have, I guess — a mental breakdown via memes. It’s been linked to at least one mass shooting, but it’s not always political.
Anyways, my understanding is around the time the Spice DAO was imploding and Remilia was getting “canceled” for Milady Maker going off the rails, there was a schism inside of the group, with some members wanting to go legit and become a proper crypto startup with others wanting to lean further into extremely esoteric occult fascism. A story as old as time. I lost touch with them after that, but I’ve seen a few Milady avatar accounts try recently to pivot into being AI guys.

Which brings us to BRG. I thought startup investor Julie Fredrickson’s X thread on BRG was a pretty useful explanation of what’s happening here: “An aesthetic ironic meme account that combines the aesthetic of AI-generated content, Chinese propaganda, fake woke spiritualism, hyperpop/glitchcore, and schizoposting.” That is, if you can understand any of that.
More simply, I would just say that it’s a bunch of trolls trying to trick young women on TikTok into sharing far-right memes and hopefully convince a couple of them to buy some NFTs. And, incidentally, BRG’s perceived success seems to hinge on the same right-wing misunderstanding about TikTok and its scale that every other conservative group has right now. A lot of BRG accounts are proudly declaring that they’ve infiltrated and schizopilled the social network. And yeah, one of their videos has gone sorta viral, which was just a Remilia copypasta dubbed over footage likely stolen from a Chinese video app. But most of the BRG-related content I’ve seen is borderline invisible in terms of views on the app and is largely being shared by other BRG-affiliated accounts.
This is where you ask if any of this matters. And the answer is, unfortunately, yes. Groups like Remilia and BRG are one half of an extremely stupid ideological battle tearing apart Silicon Valley — and OpenAI, specifically — right now. And before last Friday, the idea that the biggest names in tech could read too many blog posts, end up developing what are essentially two competing religions, and then be willing to blow up billion-dollar companies in defense of those religions was, honestly, too stupid to believe. And yet, here we are…
Which means it’s worth understanding what these two sides want. And it essentially comes down to speed. On one side is effective altruism, or EA, and on the other is effective accelerationism, or e/acc, which is mainly what BRG is promoting on TikTok right now.
The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately. Of course, as is the case with everything in Silicon Valley, all of this is predicated on the unwavering belief in its own importance. So it’s very possible that if we were to take the actually longtermist view of all of this, we’d actually end up looking back at this whole thing as a bunch of weird nerds fighting over Reddit threads.
Posted on Monday, November 20, 2023
I have, as you might expect, authored several versions of this Article, both in my head and on the page, as the most extraordinary weekend of my career has unfolded. To briefly summarize:
On Friday, then-CEO Sam Altman was fired from OpenAI by the board that governs the non-profit; then-President Greg Brockman was removed from the board and subsequently resigned.
Over the weekend rumors surged that Altman was negotiating his return, only for OpenAI to hire former Twitch CEO Emmett Shear as CEO.
Finally, late Sunday night, Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft.
This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.
Microsoft’s gain, meanwhile, is OpenAI’s loss, which is dependent on the Redmond-based company for both money and compute: the work its employees will do on AI will either be Microsoft’s by virtue of that perpetual license, or Microsoft’s directly because said employees joined Altman’s team. OpenAI’s trump card is ChatGPT, which is well on its way to achieving the holy grail of tech — an at-scale consumer platform — but if the reporting this weekend is to be believed, OpenAI’s board may have already had second thoughts about the incentives ChatGPT placed on the company (more on this below).
The biggest loss of all, though, is a necessary one: the myth that anything but a for-profit corporation is the right way to organize a company.
OpenAI was founded in 2015 as a “non-profit intelligence research company.” From the initial blog post:
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
I was pretty cynical about the motivations of OpenAI’s founders, at least Altman and Elon Musk; I wrote in a Daily Update:
Elon Musk and Sam Altman, who head organizations (Tesla and YCombinator, respectively) that look a lot like the two examples I just described of companies threatened by Google and Facebook’s data advantage, have done exactly that with OpenAI, with the added incentive of making the entire thing a non-profit; I say “incentive” because being a non-profit is almost certainly a lot less about being altruistic and a lot more about the line I highlighted at the beginning: “We hope this is what matters most to the best in the field.” In other words, OpenAI may not have the best data, but at least it has a mission structure that may help idealist researchers sleep better at night. That OpenAI may help balance the playing field for Tesla and YCombinator is, I guess we’re supposed to believe, a happy coincidence.
Whatever Altman and Musk’s motivations, the decision to make OpenAI a non-profit wasn’t just talk: the company is a 501(c)3; you can view their annual IRS filings here. The first question on Form 990 asks the organization to “Briefly describe the organization’s mission or most significant activities”; the first filing in 2016 stated:
OpenAI’s goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. We think that artificial intelligence technology will help shape the 21st century, and we want to help the world build safe AI technology and ensure that AI’s benefits are as widely and evenly distributed as possible. We’re trying to build AI as part of a larger community, and we want to openly share our plans and capabilities along the way.
Two years later, and the commitment to “openly share our plans and capabilities along the way” was gone; three years after that and the goal of “advanc[ing] digital intelligence” was replaced by “build[ing] general-purpose artificial intelligence”.
In 2018 Musk, according to a Semafor report earlier this year, attempted to take over the company, but was rebuffed; he left the board and, more critically, stopped paying for OpenAI’s operations. That led to the second critical piece of background: faced with the need to pay for massive amounts of compute power, Altman, now firmly in charge of OpenAI, created OpenAI Global, LLC, a capped profit company with Microsoft as minority owner. This image of OpenAI’s current structure is from their website:
Nov. 21, 2023 11:30 AM PST

Quora CEO Adam D’Angelo, one of the central figures in the power struggle over OpenAI, has gained a reputation for stubbornness among Quora employees, a facet of his personality that may be playing into the debate on OpenAI’s board about whether to reinstate Sam Altman as CEO.
At the question-and-answer site, there was “never” an example where an employee convinced D’Angelo to change his position, a former Quora employee said. Another former employee said it was hard to earn D’Angelo’s trust but it could be lost quickly. Considered a brilliant engineer committed to disseminating better information on the internet, he kept a low profile and resisted attempts to invest in marketing or media relations for the site, which could draw in more users. And for years, he kept only one additional voting board member at Quora, friend and investor Matt Cohler of Benchmark.
THE TAKEAWAY
• OpenAI board member D’Angelo spent weekend actively recruiting tech leaders to be OpenAI’s new CEO
• D’Angelo has been ‘occupying the most oxygen’ among board members in talks between Altman and the board
• Former employees at D’Angelo’s company, Quora, paint him as hard to convince
Days after OpenAI’s board fired Altman, one of its members—chief scientist Ilya Sutskever—reversed course, tweeting that he regretted his actions. So far, though, there’s no sign other board members have changed their mind. D’Angelo, for his part, was recruiting tech leaders—including Anthropic’s Dario Amodei—to serve as OpenAI’s new head over the weekend, frustrating Altman’s attempts at a return, people familiar with the matter said.
D’Angelo, a high school friend of Mark Zuckerberg who parlayed an early engineering job building ad systems at Facebook into tremendous wealth, appears to be the fulcrum of finding a resolution. In recent days, D’Angelo has been “occupying the most oxygen” among board members in talks between Altman and the board, a person close to the talks said.
D’Angelo, who has been a board member at OpenAI for five years, is now the tech insider with perhaps the most to lose or gain reputationally in the stalemate over Sam Altman’s firing from the startup. Out of the three remaining board members at the company, he is the only person who many in the tech world, including Microsoft CEO Satya Nadella, know well. He sits firmly in an inner circle of Silicon Valley entrepreneurs who invest in each other’s companies, recommend software engineering talent to each other and pontificate about the future of technology.
But events of the past few days have subjected him to online sniping from industry peers, and even Elon Musk has called on him to explain why the board ousted Altman. He faces immense pressure to reverse course. More than 710 OpenAI employees signed a letter saying they’ll jump ship to Microsoft if Altman isn’t reinstated. Investors in OpenAI, who stand to lose piles of money should employees walk out, are preparing legal action against board members, a person familiar with the matter said.
D’Angelo has seen some parallels on a smaller scale at Quora, which he founded and has run for 14 years. He sometimes fired executives abruptly without considering the impact on employees, three former employees said. One firing a few years ago rankled staff so much that a few dozen refused to show up to work the day after the dismissal, they said. D’Angelo sometimes frustrated employees by not explaining why he had fired executives.
The board’s exact reasons for firing Altman and keeping him out of the company remain a mystery. The board has said Altman was “not consistently candid” with members. An OpenAI memo clarified that Altman wasn’t involved in what it described as “malfeasance.” Even Altman skeptics, who say the former Y Combinator executive is prone to exaggerations and tangled investments, are trying to figure out the exact grounds for his termination.
D’Angelo didn’t return requests for comment.
REX WOODBURY, NOV 22, 2023
Venture is a power law business.
Last week’s math exercise made that point: it’s often 1, 2, 3 companies driving a fund’s returns. Top Seed funds often have a 40% loss ratio, meaning that 40% of investments are zeroes. The middle third of companies might comprise ~20% of returns, and the top 20-30% (often just a handful of companies) will drive the remaining ~80% of returns.
That portfolio construction model was oversimplified, assuming that only 3 of 25 companies return any capital. But it got the point across: having outlier companies—and maintaining good ownership in them—is crucial.
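The back-of-the-envelope portfolio model above can be sketched in a few lines of Python. The specific multiples below (1x for the middle tier, 10–50x for three outliers) are illustrative assumptions, not figures from the piece; only the shape — 25 equal checks, a 40% loss ratio, a handful of companies driving most of the fund — follows the text.

```python
# A toy power-law seed portfolio: 25 equal-sized investments,
# 40% written off, a middle tier roughly returning capital, and
# a few outliers driving most of the fund. All multiples here
# are illustrative assumptions, not real fund data.

def fund_multiple(outcomes):
    """Gross multiple on invested capital for equal-sized checks."""
    return sum(outcomes) / len(outcomes)

outcomes = (
    [0.0] * 10            # 40% loss ratio: 10 of 25 return nothing
    + [1.0] * 12          # middle tier: roughly return the capital
    + [10.0, 25.0, 50.0]  # outliers: a handful drive most returns
)

outlier_share = sum(outcomes[-3:]) / sum(outcomes)

print(f"Gross fund multiple: {fund_multiple(outcomes):.2f}x")  # 3.88x
print(f"Share of returns from top 3: {outlier_share:.0%}")     # 88%
```

Even with twelve companies returning their capital, the three outliers account for nearly 90% of the fund's returns — which is the "picking is everything" point the piece goes on to make.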

Last week’s Seed Investing: The State of the Union dug into the current and future market dynamics for early-stage venture. We also launched the Daybreak website and shared that we’re allocating a portion of Daybreak Fund I to members of the Digital Native community. Most of the Limited Partner base will be traditional venture LPs, but it’s important to me to keep a portion for people who have been part of this community. It’s a small step toward democratizing venture.
Thank you to everyone who filled out the form last week—I’ll aim to get back to folks next week after the holiday. If you’re interested in being an LP in Daybreak, you can fill out the form here:
The power law nature of venture means that picking the right companies is everything. Picking is the focus this week.
A single company can shift a fund from 2x to 10x+. My view—which underpins how we invest at Daybreak—is that every Seed investment should be a potential fund-returner. This isn’t a business of first- and second-base hits; it’s a business of home runs. Early-stage investors and venture-backed founders should swing for the fences. Venture isn’t a product for everyone—in fact, it’s not the right product for most companies; more on that later. But it’s the right product for companies with the potential to be compounding, enduring, market-transforming businesses.
Many Digital Native pieces focus on the big macro shifts—in technology and in human behavior—and on the startups that ride and accelerate those shifts. I don’t think I’ve written about the actual nuts and bolts of investing before. So this piece is a little different: the goal is to dig into what to look for when evaluating whether a company can become a $5B+ business.
I’ll build the piece around Market, Product, and Founder:
MARKET: What Can Go Right
PRODUCT: Platforms & Networks
FOUNDER: Clarity of Thought & X-Factor
Let’s dive in👇
One hot take I have: market size doesn’t really matter.
I wrote about this back in March—in, of all pieces, a piece called What Taylor Swift Can Teach Us About Business. How does Swift relate to market sizing? RCA Records had the chance to sign a young Taylor Swift, but they didn’t think there was a sizable market for a teenage country singer. After all, country music was on the decline with young listeners. So RCA made a critical mistake: they underestimated Swift and her market. It turns out, of course, that Swift vastly expanded the country market before further expanding her empire to encompass mainstream pop (e.g., 1989), indie-folk (e.g., Folklore & Evermore), and even some questionable dubstep (I Knew You Were Trouble) and rap (End Game, not my favorite).
The lesson: great products and great founders expand markets.

The same truism extends to startups.
I try to not get too caught up on Total Addressable Market (TAM) analyses, unless a founder is going after a really niche market or doesn’t have a good sense for how their product will expand the market.
Startup history is littered with market sizing mistakes that caused early investors to miss the train.
A classic example is Uber. As Bill Gurley outlines in a great post from 2014, many experts dramatically underestimated Uber’s market opportunity. One finance professor at NYU, Aswath Damodaran, wrote in Uber’s early days: “For my base case valuation, I’m going to assume that the primary market Uber is targeting is the global taxi and car-service market.” He arrived at a TAM of $100 billion.
Damodaran’s error, of course, was not recognizing that Uber’s product could expand the market. Uber offered a product 10x better than taxis:
✅ Coverage density was higher, which drove down average wait times to under five minutes
✅ Geolocation on mobile devices enabled anyone to call a car, nearly from anywhere
✅ Payment was done via mobile, meaning customers didn’t need to carry cash
✅ The dual rating system ensured quality
✅ Digital record of each ride meant that Ubers were safer than cabs
In New York, the taxi capital of America, ride-hailing apps quickly overtook traditional cabs, revealing that a 10x better product could crowd in more riders.

As Gurley puts it: “The past can be a poor guide for the future if the future offering is materially different than the past.” I always liked this tweet from Box’s Aaron Levie:

Gurley also points to a similar mistake made by McKinsey, forty years ago:
“In 1980, McKinsey & Company was commissioned by AT&T (whose Bell Labs had invented cellular telephony) to forecast cell phone penetration in the U.S. by 2000. The consultant’s prediction, 900,000 subscribers, was less than 1% of the actual figure, 109 million. Based on this legendary mistake, AT&T decided there was not much future to these toys. A decade later, to rejoin the cellular market, AT&T had to acquire McCaw Cellular for $12.6 billion. By 2011, the number of subscribers worldwide had surpassed 5 billion and cellular communication had become an unprecedented technological revolution.”
Oops.
Great products expand markets. I often ask founders the question: what do you think has to go right for this to be a massive business?
Many startups have defied their markets. Airbnb popularized home-sharing. Tesla brought electric vehicles mainstream. Red Bull effectively created the energy drink market, which now comprises $53B globally and is expected to grow 7.2% a year through 2027. (For its troubles, Red Bull owns 43% of the market.)
The list goes on. Many investors I know regret passing on Benchling, which makes software for life sciences, because they thought life sciences was too small an opportunity; Benchling proved them wrong. One investor I know passed on Snap because how could the market for disappearing photos be very large? (The answer is that a product visionary like Evan Spiegel could create a large market.) Others passed on Figma because their TAM analysis focused on the number of designers; they missed the key insight that Figma’s real-time collaboration made design a cross-functional discipline, with engineers and product leaders and management also becoming paid users.

Figma offers a nice segue into market timing.
Market sizing I don’t believe in; market timing I very much do. In fact, outside of the entrepreneur, timing might be the single greatest determining factor for a startup’s success.
Figma is a good jumping-off point. Figma benefited from an exceptional team; Dylan and Evan are extraordinary. But Figma also rode the wave of WebGL, which renders high-fidelity, interactive 2D and 3D graphics in the browser.
Its timing was perfect. WebGL came out in 2011; Figma was born in 2012. As co-founder and CEO Dylan Field puts it:
It was like, “Okay, WebGL lets you use the GPU, your computer, and the browser. What can we do with that?” So we started proving it out. [Evan] had made a bunch of tech demos already, and we started to look at it in the context of professional-grade tools and eventually interface design.
The more we built with WebGL, the more confident we were that this could be a technology we could use to go build a professional-grade interface design tool. But no one believed us. I kept trying to recruit people, and I found that if I didn’t show up and immediately open my laptop to show them the tool working, they just wouldn’t believe me.
In the early 2010s, Sketch and InVision were the would-be Adobe disruptors. Figma leapfrogged them both.
When it comes to market timing, there are many stories of companies that were too early. In the early 2000s, venture capitalists—including Sequoia and Benchmark—invested a total of $396M in Webvan, an online grocery delivery business. At its peak in 2000, Webvan brought in $179M in sales. But it had $525M in expenses that same year, and three years into its operations it declared bankruptcy. (Fun fact: Webvan was founded by Louis Borders, who also founded Borders Books.)
Two decades later, Instacart, a similar business to Webvan, has gone public and boasts a $7B+ market cap. Timing is crucial.
November 21, 2023

The share of U.S. venture funding going to companies in the San Francisco Bay Area hit a multiyear high this year, boosted largely by the AI boom.
Altogether, companies in the region pulled in $49.3 billion in seed through growth funding to date, per Crunchbase data. That represents approximately 41% of the entire U.S. total, the highest share in years.
For perspective, we charted out the percentage of U.S. startup investment going to Bay Area companies for the past five calendar years:

It should be noted, however, that the Northern California region is getting a larger slice of a much smaller pie. Overall, U.S. startup funding is down 40% in 2023 compared to the same period last year.
The San Francisco Bay Area, as the chart below illustrates, also saw a run-up to the 2021 peak, followed by a sharp decline. That said, funding this year is down 25% from the year-ago period — significantly better than the national average.

It’s possible to explain the Bay Area’s relatively resilient funding in two letters: AI.
San Francisco is home to the biggest fundraiser in the space, OpenAI, which has secured over $10.3 billion this year, principally from Microsoft. That alone accounts for about a fifth of the region’s funding.
Rival San Francisco-based AI unicorn Anthropic, meanwhile, pulled in at least $2.65 billion in known funding this year from lead backers including Amazon and Google. And Palo Alto-based Inflection AI, developer of large AI language models, raised $1.3 billion in a June financing led by Microsoft and Nvidia.
The funding spree comes as the Bay Area cements its status as a capital of artificial intelligence innovation. Much of the buzz is in San Francisco specifically, with the city’s Hayes Valley neighborhood apparently so densely populated with AI talent that it’s now also known as “Cerebral Valley.” It also helps that the world’s largest population of venture capitalists is nearby.
Deal flow is a mix of outsized rounds and smaller, earlier stage bets. So far this year, at least 27 Bay Area companies with an AI focus have raised rounds of $100 million or more, per Crunchbase data. Yet roughly two-thirds of the 250-plus rounds in the space are at pre-seed, seed or Series A, with a median size of around $6 million.
The Bay Area’s strong showing on the fundraising front comes amid a period of often critical coverage of the region, and San Francisco in particular. Images of sprawling homeless encampments, large-scale retail theft, and other emblems of urban blight populate social media feeds and headlines of select news outlets.
CHAMATH PALIHAPITIYA, NOV 19, 2023
On Friday, OpenAI ousted its co-founder Sam Altman as CEO. While OpenAI cites a lack of consistent candor in Altman’s dealings with the board as the key reason for his removal, there is widespread speculation about other motives behind his termination. These range from disputes concerning the profit vs nonprofit motives of the company to the discovery of artificial general intelligence, a type of AI that can surpass human intelligence for most tasks.
We wanted to look back at the history and corporate structure of OpenAI to understand how we got here. Here’s the story:
Inception and Early Strides (2015-2018)
OpenAI was initially founded in 2015 by Sam Altman, Elon Musk, Ilya Sutskever and Greg Brockman as a non-profit organization with the stated goal to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” The company assembled a team of the best researchers in the field of AI to pursue the goal of building AGI in a safe way.
The early years of OpenAI were marked with rapid experimentation. The company made significant progress on research in deep learning and reinforcement learning, and released ‘OpenAI Gym’ in 2016, a toolkit for developing and comparing reinforcement learning algorithms.
OpenAI showcased the capabilities of these reinforcement learning algorithms through its ‘OpenAI Five’ project in 2018, which trained five independent AI agents to play a complex multiplayer online battle arena game called ‘Dota 2’. Despite operating independently, these agents learned to work as a cohesive team to coordinate strategies within the game.
A crucial development occurred in June 2018. The company released a paper titled "Improving Language Understanding by Generative Pre-Training", which introduced the foundational architecture for the Generative Pre-trained Transformer model. This later evolved into ChatGPT, the company’s flagship product.
Transition From a Non-Profit (2019)
In 2019, OpenAI transitioned from a non-profit to a “capped-profit” model. According to the company’s blog post in 2019, OpenAI wanted to increase its ability to raise capital while still serving its mission, and “no pre-existing legal structure they knew of struck the right balance”. Per the IRS, for-profit entities and not-for-profit entities are fundamentally at odds with each other, so in order to combine the two competing concepts, OpenAI came up with a novel structure which allowed the non-profit to control the direction of a for-profit entity while providing the investors a "capped" upside of 100x. This culminated in a $1Bn investment from Microsoft, marking the beginning of a key strategic relationship, but complicating the company’s organizational structure and incentives.
The non-profit entity, OpenAI Inc., became the sole controlling shareholder of the new for-profit entity OpenAI Global LLC, which answered to the board of the nonprofit and retained a fiduciary responsibility to the company’s nonprofit charter. Crucially, the board was responsible for determining when OpenAI attained artificial general intelligence (AGI), which the company defines as a “highly autonomous system that outperforms humans at most economically valuable work.”
The structure of OpenAI is outlined below:

Becoming ChatGPT (2020-2023)
In 2020, bolstered by new funding, OpenAI unveiled GPT-3, a large language model (LLM) capable of understanding and generating convincing human-like text. This was a watershed moment for OpenAI and the broader AI community. As the company grew, its LLMs continued to become larger and more intelligent.
However, OpenAI's innovation didn't stop with language models. In 2021, the company expanded its horizons by launching Codex, a specialized AI model for programming, and DALL-E, an AI system adept at creating original artwork from text descriptions.
ALEX KANTROWITZ, NOV 22, 2023

Sam Altman is back. Improbably and dramatically, the ex-OpenAI CEO returned as CEO late Tuesday. Altman’s counter-coup swept out three board members who sparked his firing and included an agreement to investigate what went down this past weekend. The new board — which now includes Larry Summers and Bret Taylor — will expand to up to nine members, likely including someone from Microsoft.
The AI field will not go back to ‘normal’ after this. OpenAI was already vulnerable coming into the chaos and will now have to work harder to maintain its lead while facing inspired competition. Though the narrative might frame this as a major win for OpenAI and Microsoft, the reality, as always, is a bit more nuanced. Here’s how the AI field changes after this:
18,000 Microsoft customers use its OpenAI service on Azure, and the disintegration of OpenAI would’ve left them scrambling. Microsoft had to return its OpenAI partnership to some order, and it could not have hired OpenAI’s entire staff and kept the OpenAI service running. So this is a positive resolution and a relief after a tense few days. There are still some governance issues to resolve. Microsoft doesn’t have an OpenAI board seat after all this. But this was the least bad option for Satya Nadella & co., who can now press forward with their industry-leading AI efforts, even if having Altman in-house would’ve paid dividends over time.
Companies building on OpenAI technology freaked out this past week. They trusted in a company that almost evaporated in a weekend. So today, those building on OpenAI are putting contingency plans in place should the situation repeat. The era of model agnosticism is really here. Soon, any serious AI company will be able to substitute OpenAI for Anthropic or any other competitor. Startup founders using OpenAI have already told me they’ve started work on it this week. OpenAI competitors are already trying to exploit the situation. “Utterly insane weekend. So sad. Wishing everyone involved the very best,” Inflection CEO Mustafa Suleyman wrote this week. In the next breath, he said: “Come run with us!”
OpenAI sold the world’s top AI researchers on a vision and a safety valve: Join us, help us get closer to human-level artificial intelligence, and if things get unsafe, the board will step in. It was a win-win proposition that was ultimately a sham. The OpenAI board was poorly structured, almost blew up the company, and the new structure will be less safety-focused. This will open avenues for competitors to recruit researchers who otherwise might’ve gone to OpenAI. Meta chief AI scientist Yann LeCun is already endorsing the case that his team’s open-source focus will make it an unlikely winner. He might be right.
Altman’s pushed an AI ‘safety’ agenda in Washington and globally, becoming a lobbying force. OpenAI’s corporate structure lent legitimacy to his efforts. The implicit message: We’re the AI safety company, not the for-profit, please listen to us and consider the following rules. With the non-profit board’s decision so quickly reversed after pressure from investors (and well-compensated employees), the myth will take a hit. OpenAI will now become one of the pack, without its special sheen, which will change its ability to influence policy.
OpenAI’s board was supposed to save us from an AI apocalypse. Then, it couldn’t think three steps ahead in a boardroom coup. Much of the blame rests with the specific individuals. But more broadly, it’s hard to imagine anyone will have confidence in our ability to stop harmful AI should we develop it. (And what if the board’s concerns in this area were legitimate?) The future of the AI safety field is in flux.
OpenAI’s chaos may be its own ladder. It moves forward with a board more sympathetic toward accelerating AI development. It will work more closely with Microsoft under the new structure, with fewer speedbumps along the way. And it may have some incredible products en route. But the chaos will also be a ladder to those OpenAI once had on their heels. And some competitor — whether it’s Anthropic, Inflection, Google, or others — will inevitably exploit the moment and rise.
You could parachute him into an island full of cannibals and come back in 5 years and he'd be the king.
Y Combinator co-founder Paul Graham in 2008
Jacquelyn Melinek @jacqmelinek / 12:55 PM PST•November 21, 2023

It’s been an eventful week for crypto exchanges and the U.S. government.
Changpeng Zhao, also known as “CZ,” the founder and CEO of Binance, is stepping down and has pleaded guilty to a number of violations in charges brought by the Department of Justice and other U.S. agencies. He appeared in a Seattle federal court on Tuesday to enter his plea.
Richard Teng, Binance’s former global head of regional markets, will be the exchange’s new CEO, Zhao shared in a post on X Tuesday afternoon. Teng previously was the CEO of the Financial Services Regulatory Authority at Abu Dhabi Global Market, among other executive roles. On stepping down, Zhao said, “it is the right thing to do,” adding, “I made mistakes, and I must take responsibility.” Zhao will remain a shareholder and said he will be “available to the team to consult as needed.”
Binance, the world’s largest crypto exchange, has also agreed to pay about $4.3 billion to resolve the DOJ’s investigations, the agency said in a press release on Tuesday.
As a part of Binance’s guilty plea, it has also reached agreements with the Department of Treasury’s Financial Crimes Enforcement Network (FinCEN), the Office of Foreign Assets Control (OFAC) and the Commodity Futures Trading Commission (CFTC) and will credit about $1.8 billion toward those resolutions.
The crypto exchange “admits it engaged in anti-money laundering, unlicensed money transmitting and sanctions violations,” the DOJ release stated, calling it the “largest corporate resolution” that included criminal charges for an executive. Zhao pleaded guilty to failing to maintain an anti-money laundering program.
“The message here should be clear: using new technology to break the law does not make you a disruptor, it makes you a criminal,” U.S. Attorney General Merrick Garland said in a statement.
Connie Loizos @cookie / 2:19 PM PST•November 21, 2023

Tiger Global Management is going through a major management change. Per a message from founder Chase Coleman sent this afternoon to investors of the 22-year-old venture- and hedge-fund outfit and obtained by TechCrunch, Coleman is taking over both the outfit’s public company investing and private equity businesses. The longtime head of the latter, Scott Shleifer, becomes a senior advisor, a role that is a full-time position with no end date, per a source with knowledge of the maneuver.
According to Coleman, the decision was made by Shleifer, largely because Shleifer and his family have “made their home in Florida and want to stay there.” Meanwhile, wrote Coleman, “Tiger Global is operating in-person out of our New York offices,” and has “found that having everyone together in New York is highly productive and a better operating model for our firm.”
Tiger was founded by Coleman, a protégé of hedge fund pioneer Julian Robertson, in 2000. Shleifer joined two years later…
This is not the firm’s first major leadership change. In 2015, one of its investment heads, Feroz Dewan, left to set up his own investment firm, now called Arena Holdings Management, in New York.
Tiger’s private equity business was subsequently headed by Lee Fixel, who joined the firm in 2006 and stepped down to hang his own shingle in March of 2019. Fixel has subsequently raised a number of multibillion-dollar investment funds at that firm, called Addition.
After Fixel’s departure, Shleifer and Coleman continued as co-managers of the portfolios Fixel had overseen, with Shleifer taking over as its head. But he assumed control at what looks in retrospect to have proved a treacherous period for the firm.
After announcing in January 2020 that Tiger Global garnered $3.75 billion in commitments for its 12th fund, Shleifer put the pedal to the metal, overseeing an operation that made bold bets at a rapid-fire clip despite already heated valuations. For a time, investors were so happy with the strategy — which appeared to be working — that they awarded Tiger a whopping $12.7 billion vehicle that it closed in March 2022 after just four months of fundraising.
Sarah Perez @sarahpereztc / 9:16 AM PST•November 22, 2023

Shortly after screenshots emerged showing xAI’s chatbot Grok appearing on X’s web app, X owner Elon Musk confirmed that Grok would be available to all of the company’s Premium+ subscribers sometime “next week.” While Musk’s pronouncements about time frames for product deliveries haven’t always held up, code developments in X’s own app revealed that Grok integration was already underway.
Yeah.
Grok should be available to all X Premium+ subscribers next week.
— Elon Musk (@elonmusk) November 22, 2023
This week, app researcher Nima Owji shared screenshots showing how Grok had been added to X’s web app, noting that its URL would be twitter.com/i/grok. In one screenshot, users who were not yet Premium+ subscribers would be invited to upgrade to gain access to Grok. Another showed an “Ask Grok” text entry box for communicating with the AI chatbot. The features were not public-facing at the time of his discovery, but they suggested that Grok’s rollout was nearing.
Another image that shows how the Premium+ subscribers will be able to chat with @Grok! https://t.co/iye0SXwPe0 pic.twitter.com/Bcm2ohDrsS
— Nima Owji (@nima_owji) November 20, 2023
First released on November 4 to select testers, Grok is Musk’s answer to OpenAI’s ChatGPT, Google’s Bard, Anthropic’s Claude and others, and could potentially gain a following as part of X’s broader social platform.
In addition, xAI, the Musk-owned company behind Grok, promises that its chatbot will have more of a personality than rivals. It plans to respond to users’ questions “with a bit of wit” and is said to have a “rebellious streak,” according to its website. The chatbot also plans to answer “spicy” questions that are rejected by other AI systems, the company has noted.
But personality alone won’t be key to Grok’s differentiation — it will also have access to real-time knowledge via the X platform, which could be an interesting component, if not one that leads to the highest accuracy in terms of its responses.
The addition could help juice sign-ups for X’s Premium subscription, which has yet to fare as well as Musk hoped. The X owner revamped Twitter Blue to become X Premium, promising paid verification among a host of other features, like increased exposure in replies, an edit button, the ability to publish longer posts and videos, and a reduction of ads.
NOVEMBER 22, 2023 | On Technology

The OpenAI Drama’s Season One is over. Now it’s time to go back to work or, better yet, spend time with our families over the long Thanksgiving weekend, instead of binging on every rumor, nuance, and hiccup.
i love openai, and everything i’ve done over the past few days has been in service of keeping this team and its mission together. when i decided to join msft on sun evening, it was clear that was the best path for me and the team. with the new board and w satya’s support, i’m looking forward to returning to openai, and building on our strong partnership with msft.
— Sam Altman.
Sam Altman and Greg Brockman have returned to OpenAI in their original roles as CEO and CTO. They are no longer part of the not-for-profit’s board, along with three other existing members — Tasha McCauley, Helen Toner, and Ilya Sutskever, OpenAI’s chief scientist. Adam D’Angelo remains on the board, while Bret Taylor, ex-co-CEO of Salesforce, becomes the chairman, and Larry Summers will join the board.
Now looking ahead — it is almost certain we will be back for Season Two. Why? Let’s start with the unanswered questions.
Why was Sam fired in the first place?
Tasha McCauley and Helen Toner won’t go quietly into the sunset.
If you have followed Ilya Sutskever over the years, another development is likely.
We should expect a lot of debate in the media over the new board composition.
OpenAI is highly influential for the future of the technology industry and humanity. If you are creating technologies that will have a significant impact, it’s important to prioritize transparency. At the most fundamental level, they should follow their own charter. If not, they should abandon the facade of a not-for-profit and instead become a for-profit company.
OpenAI plays a significant role in society, whether as a not-for-profit research lab or a for-profit behemoth. It is clear that they need to make future decisions based on the larger societal impact. The recent personnel drama doesn’t instill confidence. It is easy to overlook that when you have billions riding on a massive wave. Still, just looking back to the recent past is a good reminder that technology in the post-mobile era is less about technology itself and more about people.
Silicon Valley doesn’t really learn from the past, as startups that relied too much on Facebook got double-crossed by the company and its self-interests. Similarly, those who bet their future on Amazon Web Services have had to face rising costs.
If you are a startup, an organization, or a large company, you need to learn from the past five days and start building resilience in your product plans. Whether you choose OpenAI’s commercial competitors, or better yet, bet on open source, remember that relying solely on this one entity is not prudent.
