The Paragraph version of That Was The Week

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video.
Content this week from: @Cartainc, @chudson, @BatteryVentures, @jglasner, @benthompson, @gbd, @yejinChoika, @geoffreyhinton, @geneteare and more

In last week’s video, I asked which day you would like to get That Was The Week and whether you would prefer the video and content in a single post. The score is in:

So starting this week, I will wait to post the newsletter until the video is ready, and that will mostly be Saturday but sometimes Friday, depending on where in the world you are.
This week is a lot about AI. On Thursday, various executives visited the White House to discuss how to keep AI safe for humans. And Lina Khan wrote an op-ed in the New York Times entitled “We Must Regulate A.I. Here’s How.” A little like ChatGPT, she is hallucinating. When the Federal Trade Commission chair takes to the Times, you know it is because she has no actual path to execute her wishes.
She states that:
While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms.
There’s the hallucination. OpenAI is a startup, barely a few years old and challenging Google, Facebook, Salesforce, and others. Hardly locking in the market dominance of incumbents. What is she smoking?
OpenAI did announce a $300m raise this week. And Salesforce did announce SlackGPT, a large language model embedded in Slack. But the biggest news was Geoffrey Hinton, a pioneer of neural networks, leaving Google due to his concerns about the safety of AI. This exit was driven by OpenAI’s success and Google’s drive to catch up. That drive leaves the pre-LLM engineers fearing for their path to artificial general intelligence. A little like Gary Marcus and others, there is a turf war among data scientists over the right path to AGI. Like the movie Life of Brian, this is an inside-religion fight over details. The anti-LLM crowd is fighting for their careers and reputations and is throwing as much mud as possible at ChatGPT and its lookalikes.
Now in these turf wars, much is right on both sides. But as a non-combatant, I prefer to take what is good in both while encouraging the iteration towards good human outcomes.
OpenAI co-founder Greg Brockman has a video this week explaining and showing where ChatGPT is going next. It’s wonderful; watch it. And there is a counter from the other side, so two videos of the week.
In the essays, much more on last week’s theme of Shrinking. Wonderful essays by Carta, Charles Hudson, and Joanna Glasner from Crunchbase News.
Enjoy this week’s collection in That Was The Week.
May 1, 2023
Kevin Dowd and Peter Walker
State of Private Markets Addendum, Q1 2023 (2.77MB, PDF)

The transformation of the venture capital industry over the past year has been stark. Total venture capital raised by startups plunged 80% from Q1 2022 to Q1 2023. Venture deal count fell 45% over the same span. Overall, Q1 was the slowest quarter for both capital raised and deal count since 2017.

There are signs of a venture spring. Valuations from seed to Series C ticked up from recent lows. Median round sizes mostly stabilized. But these green shoots were overwhelmed by the decline in total rounds across all stages.

Down rounds spiked in frequency: Just shy of 20% of all venture investments in Q1 were down rounds, the highest proportion since at least 2018. A year ago, barely 5% of venture deals resulted in a reduced valuation. With median valuations having fallen so far from recent highs, many companies seeking a valuation increase are battling against the current.

More companies chose bridge rounds: For companies ranging from Series A to Series C, bridge rounds have emerged as an increasingly attractive option. At least 40% of all investments in Series A and Series B companies were bridge rounds in Q1, the highest figures of the 2020s.
Startup M&A bounced back: The number of venture-backed companies that were acquired or merged with another company increased by 20% in Q1 compared to Q4 2022, with 57% of those M&A deals valued at $10 million or less. In a challenging environment for raising new venture capital, some smaller startups are instead opting for an exit.



APR 25, 2023

I have been working on Precursor for over nine years at this point. I got a lot of very good advice when I was getting started. One thing that has stuck with me since the beginning was some feedback I got from Mike Maples at Floodgate when I was still formulating the plan for Precursor. I’m unsure if he’s the first person ever to say this, but I remember our conversation very clearly - he told me that your fund size is your strategy. I didn’t fully appreciate what that meant as a new fund manager, but the longer I am in this business, the more I think it explains fund behavior and individual incentives.
There is a certain part of that statement that was obvious to me then and is still obvious now. Your fund size, for the most part, dictates your check size, ownership targets, and portfolio construction. A fund of a given size only has a few levers to pull to get to top-tier returns. The larger your fund, the fewer levers you generally have to tinker with to generate great returns.
There is a second part of this that I didn’t really appreciate until about three or four years ago. The size of your fund also dictates the scale of outcomes that can actually move the needle for your fund, and that shapes the lens through which you evaluate the terminal scale of startups that come across your desk. The larger your fund, the larger the absolute scale of outcomes you need and the more of those outcomes you need to achieve to make your math work.
We are currently in the midst of an era where many VCs raised very large funds, relative to strategy, in 2020, 2021, and 2022. At the time those funds were raised, the public and private markets were signaling that the terminal outcomes for very good companies were in the $5-10 billion range. We also had what we believed to be exceptional companies that might ultimately be worth $10-$100 billion at exit. In an era where terminal values for great companies sat at those levels, large fund sizes made sense and there was a clear path to returning them if you could get into great companies.
Fast forward to today, and it feels like great and exceptional companies are likely worth half of what we thought in terms of terminal value back then. We can adjust our expectations on a forward-looking basis, and we can even adjust previous valuations through down rounds. What we cannot change, though, is the math of fund size. Regardless of ARR multiples and future expectations, the demands for cash-on-cash returns remain the same for a fund of a given size. What changes, though, is the investor’s perception of what it takes to build companies of the terminal scale that will move the needle for a fund of a given size. Today, it might be 2-3 times (?) harder to build a company with a $1 billion terminal value than it was 2-3 years ago. The market expects way more in terms of total revenue and efficiency to warrant that valuation. And I'd argue that the payoff for truly exceptional companies is also harder to achieve and that $10+ billion public or private outcomes will be harder to achieve going forward than they were in recent memory.
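The fund-size math Hudson describes can be made concrete with a back-of-the-envelope sketch. The numbers below are hypothetical illustrations, not figures from his piece:

```python
# Rough "fund returner" math: what exit value does a single investment
# need for the fund's stake to return the entire fund?
def exit_needed_to_return_fund(fund_size: float, ownership_at_exit: float) -> float:
    """Exit valuation at which the fund's stake equals the whole fund."""
    return fund_size / ownership_at_exit

# A hypothetical $25M seed fund holding 5% at exit needs a $500M outcome
# for one company to return the fund.
small = exit_needed_to_return_fund(25e6, 0.05)

# A hypothetical $1B fund holding 10% at exit needs a $10B outcome
# to achieve the same thing.
large = exit_needed_to_return_fund(1e9, 0.10)

print(f"${small / 1e6:.0f}M vs ${large / 1e9:.0f}B")  # → $500M vs $10B
```

This is why, as terminal values for great companies compress, the cash-on-cash demands of a large fund stay fixed while the set of outcomes big enough to satisfy them shrinks.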
DHARMESH THAKKER AND JASON MENDEL
MAY 1, 2023
Nap pods, NFTs, free workplace massages and other caricatures of big-tech life are now, more or less, relegated to the graveyard of ‘zero-interest-rate phenomena,’ or ZIRP — part of the broader industry right-sizing amid the market downturn.
Private-company leaders are noticing big-tech companies reaping the benefits of cost-cutting measures and also recognize the potential of generative-AI technologies to automate coding, sales, marketing and creative content, driving impactful results with a much-lower cost structure.
And how could they not? In board rooms, private company founders are often advised to do two things: Build great products and grow efficiently. Balancing those obligations with other leadership responsibilities can be difficult in the best of times, but it’s a particularly tall order in the midst of a potential recession, ongoing industry layoffs and slowing IT budgets.
Not to mention, for many private software companies, even highly-valued unicorns, there is a high bar to meet to transition to a successful IPO, including a 10x+ revenue ramp and strong margins, as we noted in our 2022 State of the OpenCloud report.
But I remain optimistic that many of these startups—particularly B2B software and infrastructure companies—can pull through and prosper in this new environment. Why? Namely, because software is a uniquely attractive asset class, as it drives increased productivity, replaces high-cost labor and can generate 80-90% gross margins.
Furthermore, software companies can scale quickly to massive revenues and can generate 30-40% operating margins at scale. Per Capital IQ, there are 16 US-based software companies with more than $1B in next-twelve-months revenue that went public in the last five years. This is unheard of in any other industry.
That said, the boom times over the past few years have, in my view, encouraged some bad habits in the technology industry and among many private software firms. Investors, company leaders and employees alike became accustomed to the promise of multi-billion-dollar software companies, minted at record pace. With this dogged focus on “growth at all costs,” we naturally saw incredibly high burn rates with steep investments of time and resources into sales, marketing and R&D with little oversight or accountability.
Many investors, myself included, were willing to endorse high cash-burn rates for early-stage companies hoping to reach 30-40% operating margins on billion-dollar revenue when it meant they’d be burning that cash for three to four years. That journey now looks like it’ll take closer to five or six years – the result of an IT spending slowdown and significantly-rising costs of capital – a stark, new reality. A high-burn approach is unsustainable under these circumstances, to say the least.
But there’s good news: You can reduce cash burn and succeed, even in the current environment. In fact, by becoming more cost-sensitive, you will likely be better off in the short- and long-term – we’ve seen it many times before.
In fact, we tend to see four essential cycles of value creation in the software industry, illustrated below by our chart and by a March 2023 Goldman Sachs analysis of the trade-off between growth and profitability, which we now are seeing play out as companies come to grips with slowing growth and the need to manage expenses.
Joanna Glasner, May 4, 2023, @jglasner
By the time a startup gets to Series D stage, backers are done asking questions about product-market fit and what inspires founders.
At this point, it’s all about scale. And while an IPO or acquisition might still be a few years away, executives better be able to present compelling options for big returns.
The lack of large exits lately, combined with falling tech valuations and slower startup investment, has made the case for funding a Series D round much harder in recent months. These factors have contributed to pushing U.S. investment at this stage to its lowest point in years.
How low? Using Crunchbase data, we charted out total Series D investment for the past eight quarters below. As you can see, deal-making is down sharply from its former peak.

Of course, things did reach stratospheric levels in the gravity-defying investment environment of 2021 and early 2022. At the peak in the fourth quarter of 2021, for instance, there were three D rounds of $1 billion or more alone. The median round at that stage was over $100 million.
Those are anomalous comps. So we can’t be too shocked to see we’ve come down since.
Nonetheless, the declines are dramatic. For the first quarter of this year, total Series D investment is down 92% from the peak and 86% from the year-ago period.
The shuttering of the IPO window looks like the most likely culprit in curtailing really big deals. Investors usually back a deal in the hundreds of millions or more only if they see a clear-cut path to exit that will return a tidy multiple.
So far this year, we’ve seen just one Series D in the hundreds of millions — a $300 million financing for Wiz, a cloud security company headquartered in New York with significant operations in Israel.
Including Wiz, there have been just six Series D rounds this year of $100 million or more, which we list below:
Venture firms that used to do a lot of Series D aren’t cutting checks.
Per Crunchbase data, the investors who used to be most active in backing large, later-stage rounds have stopped leading Series D rounds.
Standouts in this category include Tiger Global Management and SoftBank Vision Fund. In 2021 and 2022, the two firms were Series D lead investors 43 times, per Crunchbase data. So far this year, neither has led a single round at this stage.
Andreessen Horowitz and Insight Partners — two other firms that regularly top the most active investor lists — haven’t led a U.S. Series D round for more than a year. In 2021 and 2022, the two were lead backers in D rounds 22 times.
One can’t dismiss the plunge in Series D investment as merely a decline from a very high peak. So far, 2023 is on track to turn in the lowest total investment at this stage in years.
By Sahil Patel May 4, 2023

For years, Google’s YouTube couldn’t get any respect from the TV industry. TV marketers wouldn’t go near it out of fear that their ads would be tainted by running alongside YouTube’s amateur content. And analysts and research firms treated the streaming service as separate from the rest of television when analyzing TV viewing and advertising.
How things have changed. One startling statistic shows how YouTube is now unequivocally the king of TV. Its internal data indicate that close to 45% of overall YouTube viewing in the U.S. today is happening on TV screens, according to people familiar with the matter, compared with well below 30% in 2020. That’s a radical shift for the video-streaming service, reflecting how the growth of internet-connected TVs has made it easier for people to watch streaming services like YouTube on TVs instead of on their cellphones and computers.
THE TAKEAWAY
• Nearly 45% of U.S. viewing of YouTube is on TV sets
• YouTube is most watched outlet on TV
• YouTube expected to draw as much upfront ad commitments as networks
And as more people watch YouTube videos on TV, YouTube has become the most popular thing on that platform—Nielsen data show that YouTube accounts for more TV viewing than any other single network or streaming service. (Most of the growth is due to YouTube’s main service, while its cable TV–like subscription streaming service, YouTube TV, accounts for a small portion of the growth.)
Those shifts explain why advertisers have stopped treating YouTube like a second-class citizen, which will become clearer this month at the start of the annual TV upfronts market, an event where advertisers and ad sellers begin negotiating their spending commitments for the next year. Advertisers widely expect to allocate at least as many dollars to YouTube as on any individual TV company such as Disney and NBCUniversal, according to multiple senior ad-buying executives.
In total, this likely means the Google-owned unit will land well north of $7 billion in ad-spending commitments, according to people familiar with YouTube’s upfront plans and other advertising executives. (That figure includes money spent on other Google properties as part of broader ad packages.) Brian Albert, managing director of U.S. agency video at Google and YouTube, who oversees YouTube’s upfront ad negotiations, declined to comment on YouTube’s upfront ad sales goals: “I would lose my job if I answer that question,” he said.
It’s not the first time YouTube’s upfront take has been equal with that of TV networks, the people said, although the scale of what YouTube is drawing hasn’t previously been reported. The growth in YouTube’s share of TV ad dollars likely helps explain how its total ad revenue has nearly doubled from $15.1 billion in 2019 to $29.2 billion in 2022.
Posted on Tuesday, May 2, 2023

In 2017 BuzzFeed CEO Jonah Peretti declared that “if you’re thinking about an electorate and you’re thinking about the public and you’re thinking about people being informed, the subscription model in media does not help inform the broad public”; in 2017 The Athletic CEO Alex Mather went in the opposite direction, telling me in a Stratechery Interview that his publication would be differentiated by “less clickbait, no ads, no game recaps, no hot takes, really focusing in on the deeper stories, the insider stuff, the minutiae for the really diehard fan.”
Today in 2023 BuzzFeed has shuttered its News team and The Athletic has been acquired by the New York Times; the latter’s top priority was adding advertising.
BuzzFeed’s bet on news was a bet on Facebook and Google’s willingness to subsidize free news, a bet that didn’t pay off. The Athletic had a happier ending, even if there are arguments that the New York Times overpaid, given that the sports publication had never made a profit; it’s also the case that neither BuzzFeed News nor The Athletic were running the proper business model for content on the Internet. The question going forward should not be advertising or subscriptions; the answer, in meme form:

This is, of course, a return to form for content production; it’s also both an evolution and refutation of a point I argued in 2015’s Popping the Publishing Bubble:
It is easy to feel sorry for publishers: before the Internet most were swimming in money, and for the first few years online it looked like online publications with lower costs of production would be profitable as well. The problem, though, was the assumption that advertising money would always be there, resulting in a “build it and they will come” mentality that focused almost exclusively on content production and far too little on sustainable business models.
In fact, publishers going forward need to have the exact opposite attitude from publishers in the past: instead of focusing on journalism and getting the business model for free, publishers need to start with a sustainable business model and focus on journalism that works hand-in-hand with the business model they have chosen. First and foremost that means publishers need to answer the most fundamental question required of any enterprise: are they a niche or scale business?
Niche businesses make money by maximizing revenue per user on a (relatively) small user base
Scale businesses make money by maximizing the number of users they reach
The truth is most publications are trying to do a little bit of everything: gain more revenue per user here, reach more users over there. However, unless you’re the New York Times (and even then it’s questionable), trying to do everything is a recipe for failing at everything; these two strategies require different revenue models, different journalistic focuses, and even different presentation styles.
I think my position today is more of an evolution than a refutation, because I do still think it is essential for a content entity to understand if it is in the niche or scale game; the refutation is twofold: first, everything is a niche, and second, nearly all content businesses should have both subscriptions and advertising.
The text below is a very recent leaked document, which was shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and removing links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We simply are a vessel to share this document which raises some very interesting points.
We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. Just to name a few:
LLMs on a Phone: People are running foundation models on a Pixel 6 at 5 tokens / sec.
Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.
Responsible Release: This one isn’t “solved” so much as “obviated”. There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.
Multimodality: The current multimodal ScienceQA SOTA was trained in an hour.
While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:
We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.
A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc. etc. many of which build on each other…. Read on at
The technology, as it’s currently imagined, promises to concentrate wealth and disempower workers. Is an alternative imaginable?
By Ted Chiang
May 4, 2023
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.
May 3, 2023
By Lina M. Khan

Ms. Khan is the chair of the Federal Trade Commission.
It’s both exciting and unsettling to have a realistic conversation with a computer. Thanks to the rapid advance of generative artificial intelligence, many of us have now experienced this potentially revolutionary technology with vast implications for how people live, work and communicate around the world. The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.
The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s. New, innovative companies like Facebook and Google revolutionized communications and delivered popular services to a fast-growing user base.
Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law. Coupled with aggressive strategies to acquire or lock out companies that threatened their position, these tactics solidified the dominance of a handful of companies. What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success — without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.
While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination. Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully. The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.
And generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply. Chatbots are already being used to generate spear-phishing emails designed to scam people, fake websites and fake consumer reviews — bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale. When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
Lastly, these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination — unfairly locking out people from jobs, housing or key services. These tools can also be trained on private emails, chats and sensitive data, ultimately exposing personal details and violating user privacy. Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.
The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative A.I. But history also has lessons for how to handle technological disruption for the benefit of all. Facing antitrust scrutiny in the late 1960s, the computing titan IBM unbundled software from its hardware systems, catalyzing the rise of the American software industry and creating trillions of dollars of growth. Government action required AT&T to open up its patent vault and similarly unleashed decades of innovation and spurred the expansion of countless young firms.
America’s longstanding national commitment to fostering fair and open competition has been an essential part of what has made this nation an economic powerhouse and a laboratory of innovation. We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Geoffrey Hinton, an artificial-intelligence pioneer, has left search giant Google at a time when Microsoft is taking major strides in the AI arms race.
MAY 2, 2023
This is a blow for Alphabet's Google.
The internet-search-and-advertising giant has just lost a big asset, at a time when its core business is under dangerous attack.
Indeed, since the Nov. 30 unveiling of ChatGPT, the conversational robot, Google has never been so vulnerable. ChatGPT is developed by the startup OpenAI, whose main shareholder is Microsoft.
The software giant has invested more than $11 billion in the startup, which is now valued at nearly $30 billion.
The return on investment for Microsoft is potentially colossal since OpenAI, thanks to ChatGPT, has breathed new life into the software giant (MSFT). Microsoft is using the technological advances enabled by the chatbot to challenge Google's dominance in internet search.
ChatGPT, which provides human-like responses to even complex requests, has changed the way internet search is perceived. The chatbot showed that artificial intelligence has reached a point where robots can perform certain tasks much better than humans can.
Microsoft immediately incorporated ChatGPT features into Bing, its search engine. The Redmond, Wash., group has also deployed these features in almost all its products and in its cloud activity.
Faced with this offensive, Google (GOOGL) recently launched Bard, a rival to ChatGPT. While it's still too early to peg the leader in the AI arms race -- all the other Big Tech groups as well as small players are participating -- investors seem to be betting on and rewarding Microsoft. The software stalwart's stock is up 28% this year.
The group, co-founded by Bill Gates and Paul Allen, is currently the world's second-largest company measured by its $2.3 trillion market value, according to companiesmarketcap.com.
Tech giant Apple, the world's largest company, has a market capitalization of $2.7 trillion. Alphabet, Google's parent, is fourth with a market value of $1.4 trillion. The Saudi oil giant Saudi Aramco is third at $2.1 trillion and e-commerce behemoth Amazon is fifth at $1.05 trillion.
It is in this context that Google has just lost Geoffrey Hinton, 75, whom Breitbart described as the "Godfather of AI."
Hinton, who received his Ph.D. in artificial intelligence 45 years ago and is one of the most respected and admired voices in the industry, left Google on May 1, he told The New York Times in an interview.
He spent 10 years at Google, which bought his AI startup in 2013.
The British computer scientist said he was leaving because he wanted to be free to warn against the risks associated with AI. He plans to devote himself to warning about the dangers of this revolutionary technology, which he helped develop for decades.
"It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times.
He added that the dangers of AI were closer to us than he thought.
"I thought it was 30 to 50 years or even longer away,” Hinton told the newspaper. "Obviously, I no longer think that.”
Few people can speak about AI with as much authority as Hinton. His pioneering work on neural networks laid the foundation for the technology that's booming today. For this, Hinton, along with his colleagues Yann LeCun and Yoshua Bengio, received the Turing Award, considered the Nobel Prize of computer science.
In the short term, the scientist fears a surge of fake photos, videos and texts, to the point that "an average person can no longer know what is true and what is not.”
He also fears that the technology will soon go from being a tool for many trades to a substitute for them. Here, Hinton is thinking of translators, personal assistants, legal assistants and other jobs with many routine tasks.
In the longer term, he fears for humanity. He points out that AI often discovers unexpected patterns or can draw conclusions from the massive amount of data it processes. The fact that systems are no longer just executing what humans ask of them, but are able to generate and execute code themselves, could become dangerous, he told The New York Times. As a result, autonomous weapons are no longer an unthinkable doomsday scenario.
Hinton fears that a race between Microsoft and Google in AI could quickly derail the technology. The tech pioneer says he doesn't want to take part in such an escalation.
He joins the ranks of many tech luminaries who called for a pause in the development of the next generation of AI tools in a letter last month.
Elon Musk, CEO of Tesla (TSLA) and founder of SpaceX, who was one of the signatories of this petition, welcomed the warnings from Hinton.
"Hinton knows what he’s talking about," the billionaire reacted on Twitter on May 2….
Ron Miller @ron_miller / 2:00 AM PDT • May 4, 2023

Slack has evolved from a pure communications platform to one that enables companies to link directly to enterprise applications without having to resort to dreaded task switching. Today, at the Salesforce World Tour event in NYC, the company announced the next step in its platform’s evolution where it will be putting AI at the forefront of the user experience, making it easier to get information and build workflows.
It’s important to note that these are announcements, and many of these features are not available yet.
Slack's Rob Seaman says that rather than bolting AI on as an afterthought, the company is working to incorporate it in a variety of ways across the platform. That started last month with a small step: a partnership with OpenAI to bring a ChatGPT app into Slack, the first piece of a much broader vision for AI on the platform. That part is in beta at the moment.
Today’s announcement involves several new integrations, including SlackGPT, the company’s own flavor of generative AI built on top of the Slack platform, which users and developers can tap into to build AI-driven experiences. The content in Slack provides a starting point for building models related to the platform.
“We think Slack has a unique advantage when it comes to generative AI. A lot of the institutional knowledge on every topic, team, work item and project is already in Slack through the messages, the files and the clips that are shared every day,” he said.
When you combine that with Slack’s Partner ecosystem and platform, customers have a lot of options for integrating AI into their workflows. He says that Slack is thinking about this in three ways right now.
“For starters, Slack is going to bring AI natively into the user experience with SlackGPT to help customers work faster, communicate better, learn faster, etc. And an example of that is AI-powered conversation summaries and writing assistance for composition that’s going to be directly available in Slack,” he said.

By Richard & Dan
January 25, 2023
AI is all the rage these days. A short prompt synthesizes whole essays, images, and even code. It honestly feels a bit like magic.
Where there’s magic, there’s also money. Startups and venture capitalists are spending incredible sums to develop the next best machine learning models. It’s a rising tide in AI - companies that touch AI see a big bump in their stock price and net valuation.
Yet, amid all the excitement, there has been little discussion of where the profits lie in the AI industry. Put another way: who makes the most money from the AI wave?
Broadly speaking, there are five different layers in the AI industry.

Semiconductors
AI as an industry is only possible due to the steady increase in computational power. Compared to a decade ago, we now have chips that are roughly 32x faster. Semiconductor companies design and manufacture chips. Some companies focus on the design element (Nvidia), some are on the manufacturing side (TSMC), and others do both (Intel).
Nvidia, Intel, AMD, and TSMC
Cloud Platforms
Most semiconductor companies will sell their chips to enterprise buyers such as cloud platforms. These large platforms charge their customers for using these hardware resources, also known as compute (energy + usage of the chip). Machine learning models require massive amounts of compute to train and maintain.
AWS, Azure, GCP, and Oracle Cloud
Data Labeling
Some models require extensive data collection, sanitation, and labeling. Most AI companies collect data in-house but will employ third-parties to label and sanitize datasets manually. Higher quality data means better model performance.
Scale AI, Appen, Hive, Labelbox, Snorkel AI etc.
Research
AI research companies combine compute and data resources to train machine learning models. A model is built by piecing together various transformers, gathering billions of parameters, and spending millions of dollars in compute resources. At the end of this process, research companies produce a working model like GPT-3.
“Today I learned changing random stuff until your program works is ‘hacky’ and ‘bad coding practice’ but if you do it fast enough it’s Machine Learning and pays 4x your current salary.”
OpenAI, Deepmind, Stability, and Anthropic
Applications
Most AI research companies are focused on building and refining their models. Once a model is trained, application-layer companies will tailor the model to specific applications such as marketing copy, image creation, or even code generation.
Marketing copy (Copy.AI or Jasper), image creation (Midjourney, Stable Diffusion, etc.), and code generation (Github Copilot, Replit)

Of these five layers, cloud platforms are best positioned to capture profits from the AI wave.
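The "roughly 32x faster" figure in the semiconductors section above is consistent with a simple doubling cadence. A back-of-the-envelope sketch (the two-year doubling period is our assumption, not the authors'):

```python
# Back-of-the-envelope check of the "roughly 32x faster in a decade" claim,
# assuming chip performance doubles steadily every two years
# (a Moore's-law-style cadence; our assumption, not the essay's).
def speedup(years: float, doubling_period_years: float = 2.0) -> float:
    """Relative performance gain after `years` of steady doublings."""
    return 2.0 ** (years / doubling_period_years)

# A decade of two-year doublings: 2**5 = 32x, matching the essay's figure.
print(speedup(10))  # 32.0
```

Any slower cadence (say, doubling every three years) would yield only about a 10x gain over the same decade, which is why the pace of semiconductor progress sits at the base of the whole stack.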
Jagmeet Singh, Ingrid Lunden / April 28, 2023

Updated to note that the Microsoft investment closed in January. The money from VCs reported here, part of a tender offer, is separate from that.
OpenAI, the startup behind the widely used conversational AI model ChatGPT, has picked up new backers, TechCrunch has learned.
VC firms including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global are picking up new shares, according to documents seen by TechCrunch. A source tells us Founders Fund is also investing. Altogether the VCs have put in just over $300 million at a valuation of $27 billion to $29 billion. This is separate from a big investment from Microsoft announced earlier this year, which closed in January, a person familiar with the development told TechCrunch. The size of Microsoft’s investment is believed to be around $10 billion, a figure we confirmed with our source.
If all this is accurate, this is the closing of the tender offer the Wall Street Journal reported was in the works in January. We confirmed that was when discussions started, amid a viral surge of interest in OpenAI and its business.
We have reached out to the investors named here, as well as to OpenAI, for comment and will update this story as we learn more. OpenAI declined to comment on the tender offer, which is separate from the Microsoft investment that closed in January.
While Microsoft’s investment comes with a strong strategic angle — the tech giant is working to integrate OpenAI’s tech across a number of areas of its business — the VCs are coming in as financial backers.
From what we understand, the term sheets have been signed by investors and the money’s been transferred; still to come is countersigning from OpenAI. The plan was to make this investment public next week.
May 2, 2023

Five companies joined The Crunchbase Unicorn Board in April 2023 — the sixth month in a row for new unicorns to number in the single digits. Three of those companies are in the AI sector.
Of the five companies, three are U.S.-based. The U.K. and China each count for one new unicorn this past month.
Two companies were dropped from the unicorn board.
Tonal raised $130 million in funding at a valuation between $550 million and $600 million, according to The Wall Street Journal, a markdown that removed it from the unicorn board. The company previously raised funding at a $1.6 billion valuation in March 2021.
And Cybereason, an endpoint security company, raised $100 million led by SoftBank which valued the company at $350 million, a 90% discount from its prior funding in July 2021 which valued the company at $3.2 billion.
Earlier this year, online media company Vox Media received a lowered valuation in February 2023 at $500 million, down from $1 billion. In December 2022, online grocery retailer Oda was removed from the board with a new valuation of $350 million, down from $1.2 billion.
As unicorn companies continue to raise funding, we expect more down rounds and for companies to drop off the list.
We find around half of the unicorn board has raised funding since the beginning of 2022. Around 40% of the 1,400+ unicorn board companies last raised funding in 2020 and 2021. These companies will be coming back to raise funding from the private markets as startups typically raise funding every 18 months to two years.
Here are the new unicorns:
New Jersey-based CoreWeave, a cloud infrastructure company that pivoted from Ethereum mining, raised a $221 million Series B led by Magnetar Capital. The funding valued the company at $2.2 billion.
Quantexa, a London-based data analytics company that uses AI, raised $129 million in Series E funding. The round was led by Singapore-based sovereign wealth fund GIC, which valued the company at $1.8 billion.
San Francisco-based Replit, a developer platform that uses AI to complete code, raised a $97 million Series B funding led by Andreessen Horowitz. The company was valued at $1.2 billion.
Nevada-based Ohmium, a green hydrogen producer, raised a $250 million Series C led by TPG Rise Climate Fund, which valued the company at $1 billion.
B&C Chemical, a chip materials maker based in China, raised $87 million in funding, valuing the company at $1 billion. The round was led by China Development Bank Capital and Zhongping Capital.
The company reportedly hopes to raise as much as $10 billion.

Igor Bonifacic | @igorbonifacic | April 30, 2023
ARM has registered for a US stock market listing. In a press release published Saturday, the mobile chip company said it recently confidentially submitted a draft F-1 form to the Securities and Exchange Commission. According to Reuters, ARM hopes to raise between $8 billion and $10 billion when it holds the initial public offering later this year, though over the weekend the company said it had yet to determine the size and price range of the proposed IPO.
ARM parent company SoftBank has been eyeing a public listing ever since NVIDIA’s $40 billion bid to buy the chip maker fell through at the start of last year due to regulatory resistance from the US Federal Trade Commission and other antitrust watchdogs. In March, SoftBank said it would list ARM on the US stock market after rebuffing a push for a London listing from the United Kingdom government. ARM designs the processor components used in almost every mobile device, including models from Apple and Samsung. Its licensing model means nearly every tech company depends on ARM designs. According to a recent Financial Times report, the company recently began work on a prototype chip that is “more advanced” than any semiconductor produced in the past.
Published April 30, 2023
By Andrew Hutchinson - Content and Social Media Manager
Regardless of how you feel about Elon Musk and his various projects and stances, at least he’s consistent. Well, in a business management context, at least.
Back in June last year, in an interview with The Kilowatts, Musk discussed his plans for Twitter, well before he actually took ownership of the platform.
In that interview, Musk outlined his plans for a paywall bypass system, which would enable Twitter users to pay for one-off articles in-app, as opposed to subscribing to various publications.
That’s now becoming a reality, with Musk announcing over the weekend that Twitter will soon enable publications to charge Twitter users for access per article in-stream.
Which sounds interesting in theory, right? As Musk says, maybe that’ll provide another way for publications to make money from people who are never going to become subscribers, but might pay for an article here or there.
Except, this very model has already been tried, and abandoned, many times, by various publications and platforms as they seek new monetization opportunities.
The key problem? Offering smaller, one-off payments for single-article access devalues subscriptions, which are far more valuable to media entities. Sure, not everybody’s going to become a subscriber, but a portion of the audience will, and if those few no longer need to subscribe to access content, the per-article model needs to deliver a lot more to replace that lost subscription revenue.
Every past experiment has found that this ends up leading to a net loss for publishers versus the subscription system, which is why, try as many have, this system doesn’t work, and will fail again on Twitter.
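The subscription-cannibalization argument above comes down to simple arithmetic. A hypothetical break-even sketch (all prices are illustrative assumptions, not figures from the article):

```python
# Hypothetical break-even sketch for the per-article vs. subscription
# trade-off. All prices are illustrative assumptions, not reported figures.
def articles_to_break_even(monthly_sub_price: float,
                           per_article_price: float) -> float:
    """Single-article sales needed each month to replace one lost subscriber."""
    return monthly_sub_price / per_article_price

# If a $10/month subscriber switches to buying $0.50 articles a la carte,
# the publisher needs 20 article sales a month just to keep revenue flat.
print(articles_to_break_even(10.0, 0.50))  # 20.0
```

Since most casual readers buy far fewer than that, every subscriber who downgrades to a la carte is a near-certain net revenue loss, which is the pattern past experiments kept rediscovering.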
Boring Co., the billionaire's infrastructure company, has been cleared to expand the Las Vegas Loop. In total, the Vegas Loop will extend to over 65 miles and will have 69 stations.
Elon Musk continues to expand his influence.
This effort most often involves the products and services offered by the companies he directs, founded and co-founded.
Take the electric-vehicle manufacturer Tesla. The Austin, Texas-based group, of which he is chief executive, has pushed the rest of the auto industry to transition to electric vehicles. Battery-powered vehicles are seen as the future of cars, thanks largely to Musk's vision.
In a few months, Tesla will deliver its Cybertruck, its first pickup. This vehicle is expected to revolutionize how consumers and analysts view pickup trucks. It is expected to broaden the appeal of this sector beyond America's heartland and reach younger urbanites with high purchasing power.
Another example of Musk's building his influence, his brand and his companies' brands is Starlink, the satellite internet access service developed by SpaceX, the billionaire's aerospace company.
Starlink became a prominent worldwide product after Musk, on Feb. 24, 2022, decided to provide it for free to Ukraine as it fought off Russia's invasion. Since then, the service has been seen as a window of freedom for millions of people living in dictatorships. For people in remote areas and regions, the service is a means of connecting to the rest of the world.
Yet another service contributes to increasing the tech mogul's influence.
This is Loop, developed by Boring Co., the tunneling and infrastructure company Musk founded to relieve congestion in large cities.
Loop is an all-electric high-speed underground public system in which passengers are transported to their destinations with no stops.
Boring Co.'s first major loop is in Las Vegas, a 29-mile tunnel network connecting 51 stations. Tesla has a fleet of vehicles in the loop with human drivers who ferry convention-goers.
This Loop is now set for a major expansion: Boring Co.'s plans have been approved by local authorities, both the company and the county said.
"#ClarkCounty Commissioners just approved new @boringcompany plans for 18 new stations and about 25 miles of tunnels," the county announced on Twitter, adding that the plans will extend the Vegas Loop out from the Las #Vegas Strip corridor.
It added that the Vegas Loop expansion plan includes sections in Clark County and also within the city of Las Vegas to the north and northwest. The new stations and extensions will also operate underground, according to the official documents, in the vicinity of the Resort Corridor, Allegiant Stadium, the University of Nevada-Las Vegas, Town Square Las Vegas, and Blue Diamond Road/Las Vegas Boulevard South.

A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I disagree with. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video.
Content this week from: @Cartainc, @chudson, @BatteryVentures, @jglasner, @benthompson, @gbd, @yejinChoika, @geoffreyhinton, @geneteare and more

In last week’s video, I asked which day you would like to get That Was The Week and whether you would prefer the video and content in a single post. The score is in:

So starting this week, I will wait to post the newsletter until the video is ready, and that will mostly be Saturday but sometimes Friday, depending on where in the world you are.
This week is a lot about AI. On Thursday, various executives visited the White House to discuss how to keep AI safe for humans. And Lina Khan wrote an op-ed in the New York Times entitled “We Must Regulate A.I. Here’s How.” A little like ChatGPT, she is hallucinating. When the Federal Trade Commission chair takes to the Times, you know it is because she has no actual path to execute her wishes.
She states that:
While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms.
There’s the hallucination. OpenAI is a startup, barely a few years old and challenging Google, Facebook, Salesforce, and others. Hardly locking in the market dominance of incumbents. What is she smoking?
OpenAI did announce a $300m raise this week. And Salesforce did announce SlackGPT, a large language model embedded in Slack. But the biggest news was Geoffrey Hinton, a pioneer of neural networks, leaving Google due to his concerns about the safety of AI. His exit was driven by OpenAI’s success and Google’s drive to catch up. That drive leaves the pre-LLM engineers fearing for their path to artificial general intelligence. As with Gary Marcus and others, there is a turf war among data scientists over the right path to AGI. Like the movie Life of Brian, this is an intra-religion fight over details. The anti-LLM crowd is fighting for their careers and reputations and is throwing as much mud as possible at ChatGPT and its lookalikes.
Now in these turf wars, much is right on both sides. But as a non-combatant, I prefer to take what is good in both while encouraging iteration toward good human outcomes.
OpenAI co-founder Greg Brockman has a video this week explaining and showing where ChatGPT is going next. It’s wonderful; watch it. And there is a counter from the other side, so there are two videos of the week.
In the essays, much more on last week’s theme of Shrinking. Wonderful essays by Carta, Charles Hudson, and Joanna Glasner from Crunchbase News.
Enjoy this week’s collection in That Was The Week.
May 1, 2023
Kevin Dowd and Peter Walker
State of Private Markets: Q1 2023 Addendum (PDF, 2.77MB)

The transformation of the venture capital industry over the past year has been stark. Total venture capital raised by startups plunged 80% from Q1 2022 to Q1 2023. Venture deal count fell 45% over the same span. Overall, Q1 was the slowest quarter for both capital raised and deal count since 2017.

There are signs of a venture spring. Valuations from seed to Series C ticked up from recent lows. Median round sizes mostly stabilized. But these green shoots were overwhelmed by the decline in total rounds across all stages.

Down rounds spiked in frequency: Just shy of 20% of all venture investments in Q1 were down rounds, the highest proportion since at least 2018. A year ago, barely 5% of venture deals resulted in a reduced valuation. With median valuations having fallen so far from recent highs, many companies seeking a valuation increase are battling against the current.

More companies chose bridge rounds: For companies ranging from Series A to Series C, bridge rounds have emerged as an increasingly attractive option. At least 40% of all investments in Series A and Series B companies were bridge rounds in Q1, the highest figures of the 2020s.
Startup M&A bounced back: The number of venture-backed companies that were acquired or merged with another company increased by 20% in Q1 compared to Q4 2022, with 57% of those M&A deals valued at $10 million or less. In a challenging environment for raising new venture capital, some smaller startups are instead opting for an exit.



APR 25, 2023

I have been working on Precursor for over nine years at this point. I got a lot of very good advice when I was getting started. One thing that has stuck with me since the beginning was some feedback I got from Mike Maples at Floodgate when I was still formulating the plan for Precursor. I’m unsure if he’s the first person ever to say this, but I remember our conversation very clearly - he told me that your fund size is your strategy. I didn’t fully appreciate what that meant as a new fund manager, but the longer I am in this business, the more I think it explains fund behavior and individual incentives.
There is a certain part of that statement that was obvious to me then and is still obvious now. Your fund size, for the most part, dictates your check size, ownership targets, and portfolio construction. A fund of a given size only has a few levers to pull to get to top-tier returns. The larger your fund, the fewer levers you generally have to tinker with to generate great returns.
There is a second part of this that I didn’t really appreciate until about three or four years ago. The size of your fund also dictates the scale of outcomes that can actually move the needle for your fund, and that shapes the lens through which you evaluate the terminal scale of startups that come across your desk. The larger your fund, the larger the absolute scale of outcomes you need and the more of those outcomes you need to achieve to make your math work.
We are currently in the midst of an era where many VCs raised very large funds, relative to strategy, in 2020, 2021, and 2022. At the time those funds were raised, the public and private markets were signaling that the terminal outcomes for very good companies were in the $5-10 billion range. We also had what we believed to be exceptional companies that might ultimately be worth $10-$100 billion at exit. In an era where terminal values for great companies sat at those levels, large fund sizes made sense and there was a clear path to returning them if you could get into great companies.
Fast forward to today, and it feels like great and exceptional companies are likely worth half of what we thought in terms of terminal value back then. We can adjust our expectations on a forward-looking basis, and we can even adjust previous valuations through down rounds. What we cannot change, though, is the math of fund size. Regardless of ARR multiples and future expectations, the demands for cash-on-cash returns remain the same for a fund of a given size. What changes, though, is the investor’s perception of what it takes to build companies of the terminal scale that will move the needle for a fund of a given size. Today, it might be 2-3 times (?) harder to build a company with a $1 billion terminal value than it was 2-3 years ago. The market expects way more in terms of total revenue and efficiency to warrant that valuation. And I'd argue that the payoff for truly exceptional companies is also harder to achieve and that $10+ billion public or private outcomes will be harder to achieve going forward than they were in recent memory.
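The "fund size is your strategy" math above can be made concrete with a quick sketch. All numbers below are hypothetical; the essay gives no specific figures:

```python
# Illustrative fund math for "your fund size is your strategy".
# All numbers are hypothetical assumptions, not figures from the essay.
def required_exit_value(fund_size: float, target_multiple: float,
                        ownership_at_exit: float) -> float:
    """Exit value one position must reach for its proceeds alone to
    return the fund `target_multiple` times over, at the given ownership."""
    return fund_size * target_multiple / ownership_at_exit

# A $500M fund targeting 3x cash-on-cash with 10% ownership at exit
# needs a single $15B outcome for one company to return the fund 3x.
print(required_exit_value(500e6, 3.0, 0.10))  # 15000000000.0
```

Halve the terminal values in the market, as the essay suggests has happened, and the same fund now needs twice as many needle-moving outcomes, or twice the ownership, to hit the same cash-on-cash target.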
DHARMESH THAKKER AND JASON MENDEL
MAY 1, 2023
Nap pods, NFTs, free workplace massages and other caricatures of big-tech life are now, more or less, relegated to the graveyard of ‘zero-interest-rate phenomena,’ or ZIRP — part of the broader industry right-sizing amid the market downturn.
Private-company leaders are noticing big-tech companies reaping the benefits of cost-cutting measures and also recognize the potential of generative-AI technologies to automate coding, sales, marketing and creative content, driving impactful results with a much-lower cost structure.
And how could they not? In board rooms, private company founders are often advised to do two things: Build great products and grow efficiently. Balancing those obligations with other leadership responsibilities can be difficult in the best of times, but it’s a particularly tall order in the midst of a potential recession, ongoing industry layoffs and slowing IT budgets.
Not to mention, for many private software companies, even highly-valued unicorns, there is a high bar to meet to transition to a successful IPO, including a 10x+ revenue ramp and strong margins, as we noted in our 2022 State of the OpenCloud report.
But I remain optimistic that many of these startups—particularly B2B software and infrastructure companies—can pull through and prosper in this new environment. Why? Because software is a uniquely attractive asset class, as it drives increased productivity, replaces high-cost labor and can generate 80-90% gross margins.
Furthermore, software companies can scale quickly to massive revenues and can generate 30-40% operating margins at scale. Per Capital IQ, there are 16 US-based software companies with more than $1B in next-twelve-months revenue that went public in the last five years. This is unheard of in any other industry.
That said, the boom times over the past few years have, in my view, encouraged some bad habits in the technology industry and among many private software firms. Investors, company leaders and employees alike became accustomed to the promise of multi-billion-dollar software companies, minted at record pace. With this dogged focus on “growth at all costs,” we naturally saw incredibly high burn rates with steep investments of time and resources into sales, marketing and R&D with little oversight or accountability.
Many investors, myself included, were willing to endorse high cash-burn rates for early-stage companies hoping to reach 30-40% operating margins on billion-dollar revenue when it meant they’d be burning that cash for three to four years. That journey now looks like it’ll take closer to five or six years – the result of an IT spending slowdown and significantly-rising costs of capital – a stark, new reality. A high-burn approach is unsustainable under these circumstances, to say the least.
But there’s good news: You can reduce cash burn and succeed, even in the current environment. In fact, by becoming more cost-sensitive, you will likely be better off in the short- and long-term – we’ve seen it many times before.
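To make the shift concrete, here is the back-of-the-envelope runway math behind that five-to-six-year reality (all numbers are hypothetical, for illustration only):

```python
def runway_months(cash_on_hand, monthly_burn):
    """Months until cash runs out at a constant burn rate."""
    return cash_on_hand / monthly_burn

# Hypothetical startup: $60M in the bank, burning $2M per month.
cash = 60_000_000
burn = 2_000_000
print(runway_months(cash, burn))  # 30.0 months: workable on a 3-4 year plan with one more raise

# If the path to scale now takes ~6 years, the same cash must stretch further:
target_years = 6
required_burn = cash / (target_years * 12)
print(round(required_burn))  # burn must fall to roughly $833K/month, less than half the current rate
```

The point of the arithmetic is simply that a longer journey at the same burn rate is not an option; either the burn comes down or the company raises again into a far tougher market.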
We tend to see four essential cycles of value creation in the software industry, illustrated below by our chart and by a March 2023 Goldman Sachs analysis of the trade-off between growth and profitability, which we are now seeing play out as companies come to grips with slowing growth and the need to manage expenses.
Joanna Glasner, May 4, 2023, @jglasner
By the time a startup gets to Series D stage, backers are done asking questions about product-market fit and what inspires founders.
At this point, it’s all about scale. And while an IPO or acquisition might still be a few years away, executives better be able to present compelling options for big returns.
The lack of large exits lately, combined with falling tech valuations and slower startup investment, has made the case for funding a Series D round much harder in recent months. These factors have contributed to pushing U.S. investment at this stage to its lowest point in years.
How low? Using Crunchbase data, we charted out total Series D investment for the past eight quarters below. As you can see, deal-making is down sharply from its former peak.

Of course, things did reach stratospheric levels in the gravity-defying investment environment of 2021 and early 2022. At the peak in the fourth quarter of 2021, for instance, there were three Series D rounds of $1 billion or more, and the median round at that stage was over $100 million.
Those are anomalous comps. So we can’t be too shocked to see we’ve come down since.
Nonetheless, the declines are dramatic. For the first quarter of this year, total Series D investment is down 92% from the peak and 86% from the year-ago period.
The shuttering of the IPO window looks like the most likely culprit in curtailing really big deals. Investors usually back a deal in the hundreds of millions or more only if they see a clear-cut path to exit that will return a tidy multiple.
So far this year, we’ve seen just one Series D in the hundreds of millions — a $300 million financing for Wiz, a cloud security company headquartered in New York with significant operations in Israel.
Including Wiz, there have been just six Series D rounds this year of $100 million or more, which we list below:
Venture firms that used to do a lot of Series D aren’t cutting checks.
Per Crunchbase data, the investors who used to be most active in backing large, later-stage rounds have stopped leading Series D rounds.
Standouts in this category include Tiger Global Management and SoftBank Vision Fund. In 2021 and 2022, the two firms were Series D lead investors 43 times, per Crunchbase data. So far this year, neither has led a single round at this stage.
Andreessen Horowitz and Insight Partners — two other firms that regularly top the most active investor lists — haven’t led a U.S. Series D round for more than a year. In 2021 and 2022, the two were lead backers in D rounds 22 times.
One can’t dismiss the plunge in Series D investment as merely a decline from a very high peak. So far, 2023 is on track to turn in the lowest total investment at this stage in years.
By Sahil Patel, May 4, 2023

For years, Google’s YouTube couldn’t get any respect from the TV industry. TV marketers wouldn’t go near it out of fear that their ads would be tainted by running alongside YouTube’s amateur content. And analysts and research firms treated the streaming service as separate from the rest of television when analyzing TV viewing and advertising.
How things have changed. One startling statistic shows how YouTube is now unequivocally the king of TV. Its internal data indicate that close to 45% of overall YouTube viewing in the U.S. today is happening on TV screens, according to people familiar with the matter, compared with well below 30% in 2020. That’s a radical shift for the video-streaming service, reflecting how the growth of internet-connected TVs has made it easier for people to watch streaming services like YouTube on TVs instead of on their cellphones and computers.
THE TAKEAWAY
• Nearly 45% of U.S. viewing of YouTube is on TV sets
• YouTube is most watched outlet on TV
• YouTube expected to draw as much in upfront ad commitments as networks
And as more people watch YouTube videos on TV, YouTube has become the most popular thing on that platform—Nielsen data show that YouTube accounts for more TV viewing than any other single network or streaming service. (Most of the growth is due to YouTube’s main service, while its cable TV–like subscription streaming service, YouTube TV, accounts for a small portion of the growth.)
Those shifts explain why advertisers have stopped treating YouTube like a second-class citizen, which will become clearer this month at the start of the annual TV upfronts market, an event where advertisers and ad sellers begin negotiating their spending commitments for the next year. Advertisers widely expect to allocate at least as many dollars to YouTube as to any individual TV company such as Disney or NBCUniversal, according to multiple senior ad-buying executives.
In total, this likely means the Google-owned unit will land well north of $7 billion in ad-spending commitments, according to people familiar with YouTube’s upfront plans and other advertising executives. (That figure includes money spent on other Google properties as part of broader ad packages.) Brian Albert, managing director of U.S. agency video at Google and YouTube, who oversees YouTube’s upfront ad negotiations, declined to comment on YouTube’s upfront ad sales goals: “I would lose my job if I answer that question,” he said.
It’s not the first time YouTube’s upfront take has equaled that of TV networks, the people said, although the scale of what YouTube is drawing hasn’t previously been reported. The growth in YouTube’s share of TV ad dollars likely helps explain how its total ad revenue has nearly doubled, from $15.1 billion in 2019 to $29.2 billion in 2022.
Posted on Tuesday, May 2, 2023

In 2017 BuzzFeed CEO Jonah Peretti declared that “if you’re thinking about an electorate and you’re thinking about the public and you’re thinking about people being informed, the subscription model in media does not help inform the broad public”; in 2017 The Athletic CEO Alex Mather went in the opposite direction, telling me in a Stratechery Interview that his publication would be differentiated by “less clickbait, no ads, no game recaps, no hot takes, really focusing in on the deeper stories, the insider stuff, the minutiae for the really diehard fan.”
Today in 2023 BuzzFeed has shuttered its News team and The Athletic has been acquired by the New York Times; the latter’s top priority was adding advertising.
BuzzFeed’s bet on news was a bet on Facebook and Google’s willingness to subsidize free news, a bet that didn’t pay off. The Athletic had a happier ending, even if there are arguments that the New York Times overpaid, given that the sports publication had never made a profit; it’s also the case that neither BuzzFeed News nor The Athletic were running the proper business model for content on the Internet. The question going forward should not be advertising or subscriptions; the answer, in meme form:

This is, of course, a return to form for content production; it’s also both an evolution and refutation of a point I argued in 2015’s Popping the Publishing Bubble:
It is easy to feel sorry for publishers: before the Internet most were swimming in money, and for the first few years online it looked like online publications with lower costs of production would be profitable as well. The problem, though, was the assumption that advertising money would always be there, resulting in a “build it and they will come” mentality that focused almost exclusively on content production and far too little on sustainable business models.
In fact, publishers going forward need to have the exact opposite attitude from publishers in the past: instead of focusing on journalism and getting the business model for free, publishers need to start with a sustainable business model and focus on journalism that works hand-in-hand with the business model they have chosen. First and foremost that means publishers need to answer the most fundamental question required of any enterprise: are they a niche or scale business?
Niche businesses make money by maximizing revenue per user on a (relatively) small user base
Scale businesses make money by maximizing the number of users they reach
The truth is most publications are trying to do a little bit of everything: gain more revenue per user here, reach more users over there. However, unless you’re the New York Times (and even then it’s questionable), trying to do everything is a recipe for failing at everything; these two strategies require different revenue models, different journalistic focuses, and even different presentation styles.
I think my position today is more of an evolution than a refutation, because I do still think it is essential for a content entity to understand if it is in the niche or scale game; the refutation is twofold: first, everything is a niche, and second, nearly all content businesses should have both subscriptions and advertising.
The text below is a very recent leaked document, shared by an anonymous individual on a public Discord server who has granted permission for its republication. It originates from a researcher within Google. We have verified its authenticity. The only modifications are formatting and the removal of links to internal web pages. The document is only the opinion of a Google employee, not the entire firm. We do not agree with what is written below, nor do other researchers we asked, but we will publish our opinions on this in a separate piece for subscribers. We are simply a vessel for sharing this document, which raises some very interesting points.
We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?
But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.
I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today. Just to name a few:
LLMs on a Phone: People are running foundation models on a Pixel 6 at 5 tokens / sec.
Scalable Personal AI: You can finetune a personalized AI on your laptop in an evening.
Responsible Release: This one isn’t “solved” so much as “obviated”. There are entire websites full of art models with no restrictions whatsoever, and text is not far behind.
Multimodality: The current multimodal ScienceQA SOTA was trained in an hour.
While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:
We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.
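The memo's claims about phones and laptops come down to simple memory arithmetic. A rough sketch of the weight-storage footprint at different quantization levels (the formula ignores activations and runtime overhead, so these are lower bounds):

```python
def model_memory_gb(params_billion, bits_per_weight):
    """Approximate weight-storage footprint in GB, ignoring activations and overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 13B-parameter model (the size the memo cites) at different precisions:
print(model_memory_gb(13, 16))  # 26.0 GB in fp16: data-center GPU territory
print(model_memory_gb(13, 4))   # 6.5 GB at 4-bit: fits in consumer laptop or phone-class memory
```

That 4x compression from quantization, at modest quality cost, is what moved capable models from clusters to consumer hardware.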

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.
A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF and more, many of which build on each other…. Read on at
The technology, as it’s currently imagined, promises to concentrate wealth and disempower workers. Is an alternative imaginable?
By Ted Chiang
May 4, 2023
When we talk about artificial intelligence, we rely on metaphor, as we always do when dealing with something new and unfamiliar. Metaphors are, by their nature, imperfect, but we still need to choose them carefully, because bad ones can lead us astray. For example, it’s become very common to compare powerful A.I.s to genies in fairy tales. The metaphor is meant to highlight the difficulty of making powerful entities obey your commands; the computer scientist Stuart Russell has cited the parable of King Midas, who demanded that everything he touched turn into gold, to illustrate the dangers of an A.I. doing what you tell it to do instead of what you want it to do. There are multiple problems with this metaphor, but one of them is that it derives the wrong lessons from the tale to which it refers. The point of the Midas parable is that greed will destroy you, and that the pursuit of wealth will cost you everything that is truly important. If your reading of the parable is that, when you are granted a wish by the gods, you should phrase your wish very, very carefully, then you have missed the point.
So, I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
A former McKinsey employee has described the company as “capital’s willing executioners”: if you want something done but don’t want to get your hands dirty, McKinsey will do it for you. That escape from accountability is one of the most valuable services that management consultancies provide. Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
The question we should be asking is: as A.I. becomes more powerful and flexible, is there any way to keep it from being another version of McKinsey? The question is worth considering across different meanings of the term “A.I.” If you think of A.I. as a broad set of technologies being marketed to companies to help them cut their costs, the question becomes: how do we keep those technologies from working as “capital’s willing executioners”? Alternatively, if you imagine A.I. as a semi-autonomous software program that solves problems that humans ask it to solve, the question is then: how do we prevent that software from assisting corporations in ways that make people’s lives worse? Suppose you’ve built a semi-autonomous A.I. that’s entirely obedient to humans—one that repeatedly checks to make sure it hasn’t misinterpreted the instructions it has received. This is the dream of many A.I. researchers. Yet such software could easily still cause as much harm as McKinsey has.
May 3, 2023
By Lina M. Khan

Ms. Khan is the chair of the Federal Trade Commission.
It’s both exciting and unsettling to have a realistic conversation with a computer. Thanks to the rapid advance of generative artificial intelligence, many of us have now experienced this potentially revolutionary technology with vast implications for how people live, work and communicate around the world. The full extent of generative A.I.’s potential is still up for debate, but there’s little doubt it will be highly disruptive.
The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s. New, innovative companies like Facebook and Google revolutionized communications and delivered popular services to a fast-growing user base.
Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law. Coupled with aggressive strategies to acquire or lock out companies that threatened their position, these tactics solidified the dominance of a handful of companies. What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
As companies race to deploy and monetize A.I., the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices. As these technologies evolve, we are committed to doing our part to uphold America’s longstanding tradition of maintaining the open, fair and competitive markets that have underpinned both breakthrough innovations and our nation’s economic success — without tolerating business models or practices involving the mass exploitation of their users. Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market.
While the technology is moving swiftly, we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
Enforcers and regulators must be vigilant. Dominant firms could use their control over these key inputs to exclude or discriminate against downstream rivals, picking winners and losers in ways that further entrench their dominance. Meanwhile, the A.I. tools that firms use to set prices for everything from laundry detergent to bowling lane reservations can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination. Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully. The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.
And generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply. Chatbots are already being used to generate spear-phishing emails designed to scam people, fake websites and fake consumer reviews — bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale. When enforcing the law’s prohibition on deceptive practices, we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
Lastly, these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination — unfairly locking out people from jobs, housing or key services. These tools can also be trained on private emails, chats and sensitive data, ultimately exposing personal details and violating user privacy. Existing laws prohibiting discrimination will apply, as will existing authorities proscribing exploitative collection or use of personal data.
The history of the growth of technology companies two decades ago serves as a cautionary tale for how we should think about the expansion of generative A.I. But history also has lessons for how to handle technological disruption for the benefit of all. Facing antitrust scrutiny in the late 1960s, the computing titan IBM unbundled software from its hardware systems, catalyzing the rise of the American software industry and creating trillions of dollars of growth. Government action required AT&T to open up its patent vault and similarly unleashed decades of innovation and spurred the expansion of countless young firms.
America’s longstanding national commitment to fostering fair and open competition has been an essential part of what has made this nation an economic powerhouse and a laboratory of innovation. We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Geoffrey Hinton, an artificial-intelligence pioneer, has left search giant Google at a time when Microsoft is taking major strides in the AI arms race.
MAY 2, 2023
This is a blow for Alphabet's Google.
The internet-search-and-advertising giant has just lost a big asset, at a time when its core business is under dangerous attack.
Indeed, since the Nov. 30 unveiling of ChatGPT, the conversational robot, Google has never been so vulnerable. ChatGPT is developed by the startup OpenAI, whose main shareholder is Microsoft.
The software giant has invested more than $11 billion in the startup, which is now valued at nearly $30 billion.
The return on investment for Microsoft is potentially colossal since OpenAI, thanks to ChatGPT, has breathed new life into the software giant (MSFT). Microsoft is using the technological advances enabled by the chatbot to challenge Google's dominance in internet search.
ChatGPT, which provides human-like responses to even complex requests, has changed the way internet search is perceived. The chatbot showed that artificial intelligence has reached a point where robots can perform certain tasks much better than humans can.
Microsoft immediately incorporated ChatGPT features into Bing, its search engine. The Redmond, Wash., group has also deployed these features in almost all its products and in its cloud activity.
Faced with this offensive, Google (GOOGL) recently launched Bard, a rival to ChatGPT. While it's still too early to peg the leader in the AI arms race -- all the other Big Tech groups as well as small players are participating -- investors seem to be betting on and rewarding Microsoft. The software stalwart's stock is up 28% this year.
The group, co-founded by Bill Gates and Paul Allen, is currently the world's second-largest company measured by its $2.3 trillion market value, according to companiesmarketcap.com.
Tech giant Apple, the world's largest company, has a market capitalization of $2.7 trillion. Alphabet, Google's parent, is fourth with a market value of $1.4 trillion. The Saudi oil giant Saudi Aramco is third at $2.1 trillion and e-commerce behemoth Amazon is fifth at $1.05 trillion.
It is in this context that Google has just lost Geoffrey Hinton, 75, whom Breitbart described as the "Godfather of AI."
Hinton, who received his Ph.D. in artificial intelligence 45 years ago and is one of the most respected and admired voices in the industry, left Google on May 1, he told The New York Times in an interview.
He spent 10 years at Google, which bought his AI startup in 2013.
The British computer scientist said he was leaving because he wanted to be free to warn against the risks associated with AI. He plans to devote himself to warning about the dangers of this revolutionary technology, which he helped develop for decades.
"It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton told The New York Times.
He added that the dangers of AI were closer to us than he thought.
"I thought it was 30 to 50 years or even longer away,” Hinton told the newspaper. "Obviously, I no longer think that.”
Few people can speak about AI with as much authority as Hinton. His pioneering work on neural networks laid the foundation for the technology that's booming today. For this, Hinton, along with his colleagues Yann LeCun and Yoshua Bengio, received the Turing Award, considered the Nobel Prize of computer science.
In the short term, the scientist fears a surge of fake photos, videos and texts, to the point that "an average person can no longer know what is true and what is not.”
He also fears that the technology will soon go from being a tool for many trades to a substitute for them. Here, Hinton is thinking of translators, personal assistants, legal assistants and other jobs with many routine tasks.
In the longer term, he fears for humanity. He points out that AI often discovers unexpected patterns or can draw conclusions from the massive amount of data it processes. The fact that systems are no longer just executing what humans ask of them, but are able to generate and execute code themselves, could become dangerous, he told The New York Times. At that point, autonomous weapons are no longer an unthinkable doomsday scenario.
Hinton fears that a race between Microsoft and Google in AI could quickly derail the technology. The tech pioneer says he doesn't want to take part in such an escalation.
He joins the ranks of many tech luminaries who called for a pause in the development of the next generation of AI tools in a letter last month.
Elon Musk, CEO of Tesla (TSLA) and founder of SpaceX, who was one of the signatories of this petition, welcomed the warnings from Hinton.
"Hinton knows what he’s talking about," the billionaire reacted on Twitter on May 2….
Ron Miller (@ron_miller) / 2:00 AM PDT, May 4, 2023

Slack has evolved from a pure communications platform to one that enables companies to link directly to enterprise applications without having to resort to dreaded task switching. Today, at the Salesforce World Tour event in NYC, the company announced the next step in its platform’s evolution where it will be putting AI at the forefront of the user experience, making it easier to get information and build workflows.
It’s important to note that these are announcements, and many of these features are not available yet.
Rob Seaman of Slack says that rather than slapping an AI layer on top, the company is working to incorporate it in a variety of ways across the platform. That started last month with a small step: a partnership with OpenAI to bring a ChatGPT app into Slack, the first piece of a much broader vision for AI on the platform. That part is in beta at the moment.
Today’s announcement involves several new integrations, including SlackGPT, the company’s own flavor of generative AI built on top of the Slack platform, which users and developers can tap into to build AI-driven experiences. The content in Slack provides a starting point for building models related to the platform.
“We think Slack has a unique advantage when it comes to generative AI. A lot of the institutional knowledge on every topic, team, work item and project is already in Slack through the messages, the files and the clips that are shared every day,” he said.
When you combine that with Slack’s Partner ecosystem and platform, customers have a lot of options for integrating AI into their workflows. He says that Slack is thinking about this in three ways right now.
“For starters, Slack is going to bring AI natively into the user experience with SlackGPT to help customers work faster, communicate better, learn faster, etc. And an example of that is AI-powered conversation summaries and writing assistance for composition that’s going to be directly available in Slack,” he said.

By Richard & Dan
January 25, 2023
AI is all the rage these days. A short prompt synthesizes whole essays, images, and even code. It honestly feels a bit like magic.
Where there’s magic, there’s also money. Startups and venture capitalists are spending incredible sums to develop the next best machine learning models. It’s a rising tide in AI: companies that touch AI see a big bump in their stock price and net valuation.
Yet, amid all the excitement, there’s been little discussion of where the profits are in the AI industry. Or, put another way: who makes the most money from the AI wave?
Broadly speaking, there are five different layers in the AI industry.

Semiconductors
AI as an industry is only possible due to the steady increase in computational power. Compared to a decade ago, we now have chips that are roughly 32x faster. Semiconductor companies design and manufacture chips. Some companies focus on the design element (Nvidia), some are on the manufacturing side (TSMC), and others do both (Intel).
Nvidia, Intel, AMD, and TSMC
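The "roughly 32x faster" figure is consistent with chip performance doubling about every two years over a decade; a minimal arithmetic check (the two-year doubling cadence is my assumption, not the authors'):

```python
years = 10
doubling_period_years = 2  # assumed Moore's-law-style cadence
speedup = 2 ** (years / doubling_period_years)
print(speedup)  # 32.0
```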
Cloud Platforms
Most semiconductor companies will sell their chips to enterprise buyers such as cloud platforms. These large platforms charge their customers for using these hardware resources, also known as compute (energy + usage of the chip). Machine learning models require massive amounts of compute to train and maintain.
AWS, Azure, GCP, and Oracle Cloud
Data Labeling
Some models require extensive data collection, sanitation, and labeling. Most AI companies collect data in-house but will employ third parties to label and sanitize datasets manually. Higher quality data means better model performance.
Scale AI, Appen, Hive, Labelbox, Snorkel AI etc.
Research
AI research companies combine compute and data resources to train machine learning models. A model is built by piecing together various transformers, gathering billions of parameters, and spending millions of dollars in compute resources. At the end of this process, research companies produce a working model like GPT-3.
“Today I learned changing random stuff until your program works is ‘hacky’ and ‘bad coding practice’ but if you do it fast enough it’s Machine Learning and pays 4x your current salary.”
OpenAI, Deepmind, Stability, and Anthropic
Applications
Most AI research companies are focused on building and refining their models. Once a model is trained, application-layer companies tailor it to specific uses such as marketing copy, image creation, or even code generation.
Marketing copy (Copy.AI or Jasper), image creation (Midjourney, Stable Diffusion, etc.), and code generation (Github Copilot, Replit)

Of these five layers, cloud platforms are best positioned to capture profits from the AI wave.
Jagmeet Singh, Ingrid Lunden/ April 28, 2023

Image Credits: Leon Neal / Getty Images
Updated to note that the Microsoft investment closed in January. The money from VCs reported here, part of a tender offer, is separate from that.
OpenAI, the startup behind the widely used conversational AI model ChatGPT, has picked up new backers, TechCrunch has learned.
VC firms including Sequoia Capital, Andreessen Horowitz, Thrive and K2 Global are picking up new shares, according to documents seen by TechCrunch. A source tells us Founders Fund is also investing. Altogether the VCs have put in just over $300 million at a valuation of $27 billion to $29 billion. This is separate from a big investment from Microsoft announced earlier this year, which closed in January, a person familiar with the development told TechCrunch. The size of Microsoft’s investment is believed to be around $10 billion, a figure we confirmed with our source.
If all this is accurate, this is the closing of the tender offer the Wall Street Journal reported was in the works in January. We confirmed that was when discussions started, amid a viral surge of interest in OpenAI and its business.
We have reached out to the investors named here, as well as to OpenAI, for comment and will update this story as we learn more. OpenAI declined to comment on the tender offer, which is separate from the Microsoft investment that closed in January.
While Microsoft’s investment comes with a strong strategic angle — the tech giant is working to integrate OpenAI’s tech across a number of areas of its business — the VCs are coming in as financial backers.
From what we understand, the term sheets have been signed by investors and the money’s been transferred; still to come is countersigning from OpenAI. The plan was to make this investment public next week.
May 2, 2023

Five companies joined The Crunchbase Unicorn Board in April 2023 — the sixth month in a row for new unicorns to number in the single digits. Three of those companies are in the AI sector.
Of the five companies, three are U.S.-based. The U.K. and China each count for one new unicorn this past month.
Two companies were dropped from the unicorn board.
Tonal raised $130 million in funding at a valuation between $550 million and $600 million, according to the WSJ, a markdown that removed it from the unicorn board. The company previously raised funding at a $1.6 billion valuation in March 2021.
And Cybereason, an endpoint security company, raised $100 million led by SoftBank at a $350 million valuation, roughly a 90% discount from its prior funding in July 2021, which valued the company at $3.2 billion.
Earlier this year, online media company Vox Media received a lowered valuation in February 2023 at $500 million, down from $1 billion. In December 2022, online grocery retailer Oda was removed from the board with a new valuation of $350 million, down from $1.2 billion.
As unicorn companies continue to raise funding, we expect more down rounds and for companies to drop off the list.
We find around half of the unicorn board has raised funding since the beginning of 2022. Around 40% of the 1,400+ unicorn board companies last raised funding in 2020 and 2021. These companies will be coming back to raise funding from the private markets as startups typically raise funding every 18 months to two years.
Here are the new unicorns:
New Jersey-based CoreWeave, a cloud infrastructure company that pivoted from Ethereum mining, raised a $221 million Series B led by Magnetar Capital. The funding valued the company at $2.2 billion.
Quantexa, a London-based data analytics company that uses AI, raised a $129 million Series E funding. The round was led by GIC, Singapore’s sovereign wealth fund, and valued the company at $1.8 billion.
San Francisco-based Replit, a developer platform that uses AI to complete code, raised a $97 million Series B funding led by Andreessen Horowitz. The company was valued at $1.2 billion.
Nevada-based Ohmium, a green hydrogen producer, raised a $250 million Series C led by TPG Rise Climate Fund, which valued the company at $1 billion.
B&C Chemical, a chip materials maker based in China, raised an $87 million round valuing the company at $1 billion. The funding was led by China Development Bank Capital and Zhongping Capital.
The company reportedly hopes to raise as much as $10 billion.

Dado Ruvic / Reuters
Igor Bonifacic | @igorbonifacic | April 30, 2023
ARM has registered for a US stock market listing. In a press release published Saturday, the mobile chip company said it recently confidentially submitted a draft F-1 form to the Securities and Exchange Commission. According to Reuters, ARM hopes to raise between $8 billion and $10 billion when it holds the initial public offering later this year, though over the weekend the company said it had yet to determine the size and price range of the proposed IPO.
ARM parent company SoftBank has been eyeing a public listing ever since NVIDIA’s $40 billion bid to buy the chip maker fell through at the start of last year due to regulatory resistance from the US Federal Trade Commission and other antitrust watchdogs. In March, SoftBank said it would list ARM on the US stock market after rebuffing a push for a London listing from the United Kingdom government. ARM designs the processor components used in almost every mobile device, including models from Apple and Samsung. Its licensing model means nearly every tech company depends on ARM designs. According to a recent Financial Times report, the company recently began work on a prototype chip that is “more advanced” than any semiconductor produced in the past.
Published April 30, 2023
By Andrew Hutchinson - Content and Social Media Manager
Regardless of how you feel about Elon Musk and his various projects and stances, he is at least consistent, in a business management context anyway.
Back in June last year, in an interview with The Kilowatts, Elon discussed his plans for Twitter, well before he actually took ownership of the platform.
In that interview, Musk outlined his plans for a paywall bypass system, which would enable Twitter users to pay for one-off articles in-app, as opposed to subscribing to various publications.
That’s now becoming a reality, with Musk announcing over the weekend that Twitter will soon enable publications to charge Twitter users for access per article in-stream.
Which sounds interesting, right? As Musk says, maybe that’ll provide another way for publications to make money from people who are never going to become subscribers, but might pay for an article here or there.
Except, this very model has already been tried, and abandoned, many times, by various publications and platforms as they seek new monetization opportunities.
The key problem? Offering smaller, one-off payments for single-article access devalues subscriptions, which are far more valuable to media entities. Sure, not everybody will become a subscriber, but a portion of the audience will, and if those readers no longer need to subscribe to access content, the per-article model has to deliver a lot more revenue to replace the lost subscriptions.
Every past experiment has found that this ends up as a net loss for publishers versus the subscription system, which is why, though many have tried, the model doesn’t work and will fail again on Twitter.
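The cannibalization argument above is arithmetic, and it can be sketched with rough, hypothetical numbers. Every figure below (subscriber count, prices, churn rate) is an illustrative assumption, not data from the article:

```python
# Hypothetical back-of-the-envelope model of per-article sales
# cannibalizing subscriptions. All numbers are illustrative assumptions.

def monthly_revenue(subscribers, sub_price, casual_buyers,
                    articles_per_buyer, article_price,
                    churn_to_per_article):
    """Revenue when a share of subscribers downgrade to per-article buying."""
    churned = subscribers * churn_to_per_article
    remaining_subs = subscribers - churned
    sub_revenue = remaining_subs * sub_price
    # churned subscribers and new casual readers both pay per article
    per_article_revenue = ((churned + casual_buyers)
                           * articles_per_buyer * article_price)
    return sub_revenue + per_article_revenue

# Subscriptions only: 10,000 subscribers at $10/month
baseline = monthly_revenue(10_000, 10.0, 0, 0, 0.50, 0.0)

# Add per-article access at $0.50: 5,000 casual readers buy 4 articles each,
# but 20% of subscribers downgrade to buying per article instead
with_per_article = monthly_revenue(10_000, 10.0, 5_000, 4, 0.50, 0.20)

print(baseline, with_per_article)  # 100000.0 94000.0
```

Under these assumed numbers the new casual revenue ($10,000) is smaller than the subscription revenue lost to downgrades ($20,000 lost, only $4,000 recovered per article), so total revenue falls even though more people are paying, which is the dynamic the article describes.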
Boring Co., the billionaire's infrastructure company, has been cleared to expand the Las Vegas Loop. In total, the Vegas Loop will extend to over 65 miles and will have 69 stations.
Elon Musk continues to expand his influence.
This effort most often involves the products and services offered by the companies he directs, founded, or co-founded.
Take the electric-vehicle manufacturer Tesla. The Austin, Texas-based group, of which he is chief executive, has pushed the rest of the auto industry to transition to electric vehicles. Battery-powered vehicles are seen as the future of cars, thanks largely to Musk's vision.
In a few months, Tesla will deliver its Cybertruck, its first pickup. This vehicle is expected to revolutionize how consumers and analysts view pickup trucks. It is expected to broaden the appeal of this sector beyond America's heartland and reach younger urbanites with high purchasing power.
Another example of Musk's building his influence, his brand and his companies' brands is Starlink, the satellite internet access service developed by SpaceX, the billionaire's aerospace company.
Starlink became a prominent worldwide product after Musk, on Feb. 24, 2022, decided to provide it for free to Ukraine as it fought off Russia's invasion. Since then, the service has been seen as a window of freedom for millions of people living in dictatorships. For people in remote areas and regions, the service is a means of connecting to the rest of the world.
Yet another service contributes to increasing the tech mogul's influence.
This is Loop, developed by Boring Co., the tunneling and infrastructure company Musk founded to relieve congestion in large cities.
Loop is an all-electric high-speed underground public system in which passengers are transported to their destinations with no stops.
Boring Co.'s first major loop is in Las Vegas, a 29-mile tunnel network connecting 51 stations. Tesla has a fleet of vehicles in the loop with human drivers who ferry convention-goers.
This Loop is now set for major expansion as Boring Co.'s plans have been approved by local authorities, both sides said.
"#ClarkCounty Commissioners just approved new @boringcompany plans for 18 new stations and about 25 miles of tunnels," the county announced on Twitter, adding that the plans will extend the Vegas Loop out from the Las #Vegas Strip corridor.
It added that the Vegas Loop expansion plan includes sections in Clark County and also within the city of Las Vegas to the north and northwest. The new stations and extensions will also operate underground, according to the official documents, in the vicinity of the Resort Corridor, Allegiant Stadium, the University of Nevada-Las Vegas, Town Square Las Vegas, and Blue Diamond Road/Las Vegas Boulevard South.
