The Paragraph version of That Was The Week

This is the final That Was The Week for 2023. The next edition will be out on the weekend of 12-13 January 2024. This is issue #47 since the start of the year.
Happy New Year to you all
A reminder for new readers. That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I selected the articles because they are of interest. The selections often include things I entirely disagree with. But they express common opinions, or they provoke me to think. The articles are only snippets. Click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Thanks To This Week’s Contributors: @kteare, @ajkeen,
Contents
News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare
Partnership with Axel Springer to deepen beneficial use of AI in journalism
The AI revolution is an opportunity for writers (the human kind)
Step-Ups & Duration : The Shape of Things to Come to the Series A in 2024
Seed-stage startups — and their investors — react to higher hurdles for Series A funding
Industry Ventures Partner Roland Reynolds
This week’s Dall-E-generated image was created by asking:
I want a 16:9 image that depicts the irony that big media companies that have historically criticised Google web search for "stealing content" are now worried that AI may have the impact of reducing web search traffic.
The irony is obvious. This week Axel Springer struck a long-term deal to allow OpenAI to train on its published output. And according to Rupert Murdoch’s Wall Street Journal:
Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm Similarweb.
Publishers are reacting to the new AI platforms in different ways. All are aware that the assumptions held since the emergence of the internet, and web 2.0, are now being called into question.
For all of the hullabaloo over the years decrying search engines for “stealing” content by indexing it, it is now clearer than ever that media companies depend on Google, and specifically on web search, for a large portion of their traffic.
Mobile has already impacted web traffic since the iPhone launched in 2007. AI will compound the effect because it can deliver what a user needs without a URL, and without getting them to visit a destination.
What the WSJ does not drill into is that Google, too, is threatened by the rise of OpenAI. Google lives off the revenue that web searches generate. Less web search means less revenue. As a result, Google and Big Media are actually on the same side: both need to slow down the impact of AI on web search. From the WSJ:
Liz Reid, a Google vice president who works on the search engine, said the company is committed to driving traffic to web publishers. She said Google has had more conversations with publishers than it typically does after introducing substantial changes to search, because “it’s a more significant change in the evolution of the space.” She offered no timeline for Google’s broader rollout of the AI-powered search tool.
“Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles, according to people familiar with the matter. Many news outlets, from BuzzFeed to USA Today owner Gannett, already are experimenting with AI tools.
I hate to bring bad tidings, but the decline of web search is inevitable and will be driven by the success of conversational interfaces.
That does not have to mean the decline of big media. Axel Springer seems to have understood that media succeeds insofar as it penetrates new platforms. By embracing OpenAI, it opens up the possibility of its linked pages being available to users via the conversational interface.
Good content, written by authoritative writers, will be no less important in the future than in the past.
Substack’s Hamish McKenzie published a missive on AI and writing this week. He starts out understanding the angst AI may cause:
Many writers are anxious about how their lives and work will be affected as artificial intelligence becomes more powerful. Companies like Writer offer tools to instantly generate materials that might otherwise have been created by humans. Some news organizations have already started publishing automatically generated stories, even as they lay off writers and editors. With a one-sentence prompt and 20 seconds of thought, one can now get ChatGPT to turn out an essay that rivals something an experienced writer might have taken days to produce.
Whether you’re for or against this development ultimately doesn’t matter. It’s happening. The AI hype cycle may go through some ups and downs, but the new epoch has unquestionably begun. These technologies are already real, effective, and proven, which makes this particular technomania different from other hype cycles.
Given this set of conditions, writers—and all other culture makers whose livelihoods will in some way be touched by AI—are entitled to feel worried and perhaps even a little pissed. After all, these new machines are trained on a vast corpus of work produced by humans. And those humans, most of whom have never found a way to turn their art into riches, aren’t getting compensated along the way.
But then counsels creators:
But now is when we tell you that we don’t think this will be bad for culture makers. In fact, it will be very good. While AI will take over the rote and the replaceable, it will give superpowers to people doing original work, while at the same time increasing the value of that work.
When it comes to Substack, we have focused on using the internet’s powers to serve, rather than subsume, writers. There’s nothing in the AI revolution that suggests we will have to change this approach. From image generation and audio transcription tools we’ve already built, to a future where a single writer can make a feature film, and beyond, we will focus on harnessing the power of these tools for human users. If the computer is a bicycle for the mind, AI will be a jumbo jet.
The cost of “content creation” will be driven to almost zero. But content isn’t culture.
This same surge in AI-led content production will simultaneously fuel a tremendous need for cultural connection: real humans in communion with one another. These relationships help us make sense of the world, and to know where to direct our attention. Their value will dramatically increase. Culture will become the most important and fastest-growing slice of our global domestic product.
If we try to figure out winners and losers at this point, then my money is on the resistors, or “decels,” as the biggest losers. Google has to choose whether to be one of those by clinging to the past. Big Media has to make the same call. But creativity will be a winner, as will those who lean into the changes.

By Keach Hagey, Miles Kruppa and Alexandra Bruell
Dec. 14, 2023 5:30 am ET
Shortly after the launch of ChatGPT, the Atlantic drew up a list of the greatest threats to the 166-year-old publication from generative artificial intelligence. At the top: Google’s embrace of the technology.
About 40% of the magazine’s web traffic comes from Google searches, which turn up links that users click on. A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten.
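The arithmetic behind that projection is worth making explicit. A minimal back-of-the-envelope sketch using the two figures quoted above (this is illustrative, not the Atlantic task force's actual model):

```python
# Estimate of total site traffic at risk, using the figures above:
# ~40% of traffic arrives via Google search, and ~75% of the time an
# AI answer is complete enough that the user never clicks through.
google_share = 0.40
fully_answered = 0.75

traffic_at_risk = google_share * fully_answered
print(f"{traffic_at_risk:.0%} of total traffic at risk")  # → 30% of total traffic at risk
```

In other words, a publisher with the Atlantic's traffic mix would stand to lose roughly a third of its total audience, not just a slice of its search referrals.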
What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine.
Google’s integration of AI is crystallizing for media outlets the perils of relying on big technology companies to get their content in front of readers and viewers. Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
As bad as the social-media downshift is, Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm Similarweb.
“AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Dopfner, chairman and CEO of Axel Springer, referring to the technology that makes generative-AI possible. His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
While Google says the final shape of its AI product is far from set, publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites. Most gallingly for publishers, Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research….
Axel Springer is the first publishing house globally to partner with us on a deeper integration of journalism in AI technologies.

December 13, 2023
This news was originally shared by Axel Springer and can also be read here.
Axel Springer is the first publishing house globally to partner with OpenAI on a deeper integration of journalism in AI technologies.
Axel Springer and OpenAI have announced a global partnership to strengthen independent journalism in the age of artificial intelligence (AI). The initiative will enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics, and explicitly values the publisher’s role in contributing to OpenAI’s products. This marks a significant step in both companies’ commitment to leverage AI for enhancing content experiences and creating new financial opportunities that support a sustainable future for journalism.
With this partnership, ChatGPT users around the world will receive summaries of selected global news content from Axel Springer’s media brands including POLITICO, BUSINESS INSIDER, and European properties BILD and WELT, including otherwise paid content. ChatGPT’s answers to user queries will include attribution and links to the full articles for transparency and further information.
In addition, the partnership supports Axel Springer’s existing AI-driven ventures that build upon OpenAI’s technology. The collaboration also involves the use of quality content from Axel Springer media brands for advancing the training of OpenAI’s sophisticated large language models.
“This partnership with Axel Springer will help provide people with new ways to access quality, real-time news content through our AI tools. We are deeply committed to working with publishers and creators around the world and ensuring they benefit from advanced AI technology and new revenue models,” says Brad Lightcap, COO of OpenAI.
DEC 13, 2023

There’s a billboard on one of the main streets in San Francisco that seems almost designed to trigger creative people. It’s advertising an AI startup that has raised more than $126 million in venture funding to provide automated writing tools to large companies. “Empower people. Transform work,” reads its tagline. But it’s the name of the company that really rankles.
Writer.
Many writers are anxious about how their lives and work will be affected as artificial intelligence becomes more powerful. Companies like Writer offer tools to instantly generate materials that might otherwise have been created by humans. Some news organizations have already started publishing automatically generated stories, even as they lay off writers and editors. With a one-sentence prompt and 20 seconds of thought, one can now get ChatGPT to turn out an essay that rivals something an experienced writer might have taken days to produce.
Whether you’re for or against this development ultimately doesn’t matter. It’s happening. The AI hype cycle may go through some ups and downs, but the new epoch has unquestionably begun. These technologies are already real, effective, and proven, which makes this particular technomania different from other hype cycles.
Given this set of conditions, writers—and all other culture makers whose livelihoods will in some way be touched by AI—are entitled to feel worried and perhaps even a little pissed. After all, these new machines are trained on a vast corpus of work produced by humans. And those humans, most of whom have never found a way to turn their art into riches, aren’t getting compensated along the way.
To many, this revolution might feel like the super-expression of a trend that has been commoditizing the work of writers and artists for decades. First, Google and Facebook broke the business models that once supported their work. Then social media turned everyone into content drones. Now here come the mega-robots to vaporize everything writers ever stood for. And of course the techies will make billions, while writers will get less than scraps.
Imagine how humiliating it might feel, then, for one of those opportunists to take the name Writer.
But now is when we tell you that we don’t think this will be bad for culture makers. In fact, it will be very good. While AI will take over the rote and the replaceable, it will give superpowers to people doing original work, while at the same time increasing the value of that work.
When it comes to Substack, we have focused on using the internet’s powers to serve, rather than subsume, writers. There’s nothing in the AI revolution that suggests we will have to change this approach. From image generation and audio transcription tools we’ve already built, to a future where a single writer can make a feature film, and beyond, we will focus on harnessing the power of these tools for human users. If the computer is a bicycle for the mind, AI will be a jumbo jet.
The cost of “content creation” will be driven to almost zero. But content isn’t culture.
This same surge in AI-led content production will simultaneously fuel a tremendous need for cultural connection: real humans in communion with one another. These relationships help us make sense of the world, and to know where to direct our attention. Their value will dramatically increase. Culture will become the most important and fastest-growing slice of our global domestic product.
Humanity has seen a story like this before. Prior to the industrial revolution, more than two-thirds of a country’s labor force had to work in agriculture to be able to feed its entire population. Since the automation of agriculture, that share has fallen to less than 5%. And yet we have abundant food and more jobs to do than ever. Today, many people have the kind of work and prosperity that their great-grandparents could only have imagined.
No matter how advanced AI gets, there will be unceasing demand for human connection. We will want to show each other how we feel as people. We’ll tire of getting what we want, and instead yearn to figure out together what we should want. We will share our hearts and compare our scars. We will long for the sound of each other’s voices, and to shape our own and each other’s stories, in wild and wonderful new ways.
AI will never be able to replace the dynamic that is most central to Substack: human-to-human relationships. New robots may rise and try to claim the mantles of writers and other culture makers, but none can seriously lay claim to what is most important about these people and groups—the human connections they are built on. That’s why we are making Substack the place for trusted, valuable relationships between thinking, breathing, feeling people.
DECEMBER 14, 2023 | On Technology, Photo Features

Over the past few months, I have had the opportunity to spend time with Apple’s soon-to-be-released Vision Pro, a spatial computer that you wear on your face. It is the next step in the long evolution of mixed reality glasses that started with the awkward, and ahead-of-its-time, Google Glass.
I have already stated on multiple occasions that Vision Pro is going to be a device that redefines our relationship with visual media and content—from movies to live sports to our home videos and photography—the display is going to leave us wanting and asking for more from our devices.
Although my time with the device has been limited — and I have yet to play a single game on it or do any kind of work on it—it is hard not to be excited by the possibilities, especially for photographers (like myself) and filmmakers.
My initial experience with the device was soon after it was launched; the next time I got to use the device was when Apple released the developer beta of iOS 17.2. That software had an update that gave the iPhone Pro models the ability to capture spatial video. Since then, Apple has released iOS 17.2 as an update to all phones.

As an iPhone 15 Pro user, you can turn on spatial video under Settings → Camera → Formats. When you capture a video — sadly, only in landscape mode for now — you’ll see a Vision Pro icon. Turn it on, and videos will be recorded as spatial videos.
By now you might be screaming — what the hell is a spatial video? Spatial video is a mixed-reality video format that records the depth and spatial information of a scene; when you play it back, you get a more immersive, three-dimensional (3D) experience. The iPhone 15 Pro uses its main lens and its ultra-wide lens to capture the depth and spatial information. Spatial videos are captured at 1080p, 30 frames per second, and use the HEVC format.
When viewing on a Vision Pro, it is a unique enough video experience to draw a gasp. Of course, given that the Vision Pro display is over 8K, your eyes can tell you that the spatial video is of slightly lower quality — 1080p. Why the lower quality? The ultra-wide lens captures images and videos at 12 megapixels. As a result, it doesn’t have enough information to create a field of view to match the normal (aka main) camera.
The playback of the videos happens in what seems like a hazy, light, borderless frame, giving the videos a dreamlike quality. It is a very strange feeling — as if you have been transported back in time — and the videos have a dimensionality to them. The spatial videos I experienced felt more like memories — somewhere between reality and an abstraction of it.
During my visit, Apple asked me to visit a special area where a sushi chef was making sushi, and I captured the video to be played back. I zoomed into his fingers massaging the rice, the sushi on the plate. The video was absolutely stunning, but clearly, it lacked the emotional appeal of a family video. On a recent visit, one of Apple’s team members took a video of me walking through the Apple Orchard toward the camera. It was almost as if I was walking out of the frame.
Long after we had moved on from the videos and demos, and I had returned home, I was left wondering — what if I had had the iPhone 15 Pro with me in Greenland? What an amazing way it would have been to capture the majestic beauty of the landscape. What would it have been like to capture inching closer to a mammoth iceberg? Or a cable car slowly approaching me?
I was saddened that I couldn’t capture the video of Mr. Gibbs, my dogson, who has now been missing for almost a month. I miss him so much — and it would have been nice to relive his presence, even if just in video form.
Spatial videos — whether from Apple or anyone else — remind me of that scene from Mad Men, with Don Draper pitching the Carousel, a round photo projector.
The typical software startup raised their Series A 15 months after raising their seed at 2x their seed valuation.
A year ago, that Series A would have been raised three months earlier at 3.5x the valuation.
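Those two data points imply a much sharper drop in the annualized markup between rounds than the raw multiples suggest. A quick sketch (my arithmetic on the figures above, not the author's):

```python
# Annualized step-up implied by the two scenarios described above:
# 2023: a 2.0x step-up over 15 months; a year earlier: 3.5x over 12 months.
def annualized_step_up(multiple, months):
    """Convert a round-over-round valuation multiple into a per-year rate."""
    return multiple ** (12 / months)

now = annualized_step_up(2.0, 15)     # ~1.74x per year
before = annualized_step_up(3.5, 12)  # 3.5x per year
```

On an annualized basis, the markup pace roughly halved, which is why founders feel the change so acutely even though "2x" still sounds like progress.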

Here’s another way of visualizing the data. The red dot is 2023 - the other years are in grey. Across rounds for the 50th & 75th percentile of companies, step-up valuations are the lowest multiples in about ten years.

The time between rounds has also lengthened - something all of Startupland felt throughout 2023.

In almost every category, the time between rounds is also at decade highs.

In 2024, I expect most of these figures to revert to the mean - not the peaks of 2020-2021, but more akin to 2018.
The public valuation environment & pace of venture capital investments mirror those years better than others.
Five years ago, valuations continued to grow, competitive rounds didn’t last in market for more than a week or so, & pre-emptions did occur. But most of the market operated at a steady cadence with predictable figures for traction, valuation, & dilution.
Connie Loizos @cookie / 11:10 AM PST•December 14, 2023

The hurdle for Series A funding is a lot higher than it was a year ago — and investors in seed-stage companies are having to respond.
They don’t have much choice if they want their startups to survive. When the market abruptly turned in the spring of 2022, late-stage companies were the first to feel the pain. But that downward financial pressure has more recently made its way to much newer outfits, which are getting lower valuations in their next round – 1.6x in the second quarter, the lowest value since the third quarter of 2013, per Pitchbook data – and facing choosier Series A investors with plenty of options.
There’s no shortage of ways that VCs are getting creative on this front. The European venture firm Breega touts its “scaling squad” to help support its many seed bets. Pear VC, a Bay Area-based seed-stage venture firm, is constantly rolling out new programming aimed at supporting and educating the nascent teams that it backs.
Even the bigger, stage-agnostic firms are doing more to telegraph that they’re responding to the current market. In October, for example, the investment firm Greylock rolled out Edge, a three-month company-building program designed to advance select pre-idea, pre-seed and seed founders from inception to product-market fit.
VC heavyweight Lightspeed Venture Partners is also stepping up its game. The firm has long written early (and sometimes, first) checks to nascent startups, including the messaging app Snapchat; the application performance management outfit AppDynamics (acquired by Cisco just ahead of its IPO); and the publicly traded cloud computing company Nutanix (current market cap: $11.2 billion).
By the firm’s telling, it has long focused on polishing such diamonds in the rough. Still, given the rising standards of Series A investors across the board, Lightspeed tells TechCrunch that it’s now formalizing some of the mentorship it has long offered its portfolio companies through a company-building program for its founders dubbed Launch.
Led by partner Luke Beseda, the purported idea isn’t to attract more founders to Lightspeed but rather to clear the path for the startups it has already funded so they can get to that Series A round. Nearly all of them face the same questions and obstacles, explains Beseda. “They need to know: how do I set up and run a business? How do I hire and build a core team? How do I build my product strategy through customer interviews and design partnerships and drive revenue?”
Going forward, Lightspeed hopes to more systematically answer these questions through expert-led workshops, seed “playbooks,” and other toolkits that Lightspeed is offering through its new program.
Certainly, every bit of help must be welcome right now.
While many startups are simply dissolving — at least 3,200 venture-backed U.S. companies have gone out of business in 2023, according to data compiled for The New York Times by PitchBook — others are finding the emphasis on year-over-year growth and annual recurring revenue is real and not going away any time soon.
Right now, that includes at the Series A stage of things.
MICHAEL PAREKH, DEC 13, 2023

What I’ve been calling ‘Small AI’ in this AI Tech Wave had a couple of notable developments this week, advancing its capabilities against peers and even much larger Foundation LLM AI models (what I’ve been calling ‘Big AI’) on some measures. Let me re-set the context. Here’s the AI Tech Wave chart again for background for some of the discussion below.

Back in September, in a post titled “From Big AI to Small AI”, I highlighted:
“Big AI is where the industry’s attention and dollars are currently riveted on: Foundation LLM AI models that are getting ever more powerful running on ever more powerful GPUs in ever bigger Compute and power intensive data centers.”
“Think AI possibly a thousand times more powerful than today’s OpenAI GPT4, Google Palm2/Bard (and upcoming Gemini), and Anthropic’s Claude 2 in three years or less. That sure beats Moore’s Law by a mile and then some. As discussed earlier, over two hundred billion are being invested worldwide to get ready for these ‘Big Daddy’ Foundation LLM AI models.”
“Ok, so where does ‘Small AI’ fit? These are far humbler and prosaic, but again ever more powerful LLM AI models increasingly being run on billions of devices already in the hands of over 4 billion users of smartphones and other computers ‘locally’. Also known as ‘the Edge’ as I’ve referred to it here and here. They're starting small today on photo and other apps on your phone, and will spread to almost every app and services on billions of local devices, tapping into near infinite local and cloud data sources. “
The Information notes today in “The Rise of ‘Small Language Models’ and Reinforcement Learning”:
“This week, several companies staked out new ground in the latest forefront of AI research: trying to prove that their models can do more with less. On Monday, the French AI startup Mistral—fresh off its $415 million funding round—published a new model called Mixtral 8x7B. The model, which is open-source, quickly racked up plaudits from AI researchers for its ability to match the quality of GPT-3.5 on some benchmarks despite its relatively puny size. Mixtral is small enough to run on a single computer (albeit one with a fairly hefty 100 gigabytes of RAM).”
“Mixtral 8x7B gets its name because it combines various smaller models that are trained to handle certain tasks, thereby running more efficiently. (It’s an approach called a ‘sparse mixture of experts’ model). Such models aren’t easy to pull off—OpenAI previously had to scrap development of a mixture of experts model earlier this year after it couldn’t get it to work, The Information previously reported.”
“Then, on Tuesday, Microsoft researchers published the latest version of their home-grown model called Phi-2. That model was tiny enough to run on a mobile phone, with just 2.7 billion parameters compared to Mixtral’s 7 billion (remember, OpenAI’s GPT-4 is believed to have around a trillion parameters across a number of expert models). It was trained on a carefully-selected dataset that is a high enough quality to ensure the model generates accurate results even with the limited computing power available on a phone.”
“Microsoft touted it as a “small language model,” a relatively new term for a category that Microsoft’s research division is leaning into heavily. It’s not yet clear exactly how Microsoft or other software makers might put small models to use, but the most obvious benefits would be driving down the cost of running AI applications at scale and dramatically broadening the usage of generative AI technology. That’s a big deal.”
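The “sparse mixture of experts” approach the excerpt attributes to Mixtral can be sketched in a few lines: a gate scores each expert per input, and only the top-k experts actually run, so compute cost tracks k rather than the total parameter count. This is a toy illustration with made-up experts and gate weights, not Mixtral’s actual architecture:

```python
# Toy sketch of sparse mixture-of-experts routing. Real MoE layers route
# per token inside a transformer with learned gates; everything here
# (experts, gate_weights) is hypothetical and for illustration only.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k highest-scoring experts only.

    Because only top_k of len(experts) expert networks run per input,
    compute cost scales with top_k, not with total parameter count.
    """
    scores = [sum(w * xi for w, xi in zip(gw, x)) for gw in gate_weights]
    probs = softmax(scores)
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    # Weighted combination of the selected experts' outputs.
    return sum(probs[i] / norm * experts[i](x) for i in chosen)

# Eight tiny stand-in "experts": each just scales the input sum differently.
experts = [lambda x, k=k: (k + 1) * sum(x) for k in range(8)]
gate_weights = [[0.1 * k, 0.05 * k] for k in range(8)]  # made-up gate weights
out = moe_forward([1.0, 2.0], experts, gate_weights, top_k=2)
```

The output lands between the two selected experts’ answers, weighted by the gate; the other six experts never execute, which is the whole efficiency argument.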
It’s a big deal indeed. And other big tech companies (aka ‘Magnificent 7’) have yet to announce their moves in ‘Small AI’. I’m of course referring to Apple, along with a range of other technology companies that have yet to scale their AI moves.

In “Going Small to Go Big” this November, I emphasized:
“As I’ve highlighted before, Apple has led on this trend with over two billion devices in user hands:
“Apple has been including a neural engine as part of its homegrown processors since it introduced the A11 processor in the iPhone 10 in 2017.”
“This past Monday night, Apple unveiled its latest MacBooks during a rare prime-time event, a sign of its renewed focus on its own PC division. The latest MacBook Pros, along with an upgraded iMac, will all use a version of the company’s new M3 processors. Apple said the high-end version of the chip, called the M3 Max, will be capable of running complex AI workloads.”
“Chip makers AMD, Intel, Qualcomm, Micron, and others are all accelerating their efforts with AI chips particularly optimized for local ‘reinforced learning loops’ for AI inference at ‘the Edge’, which are going to be the primary driver of how users augment themselves with AI going forward.”
Key point here is that AI innovation is just beginning. It’s coming fast and furious. It will be open and closed AI, Narrow and Wide AI, and also Big and Small AI. All of these developments and more will give us far more augmented and native AI applications and services soon. Far beyond OpenAI’s ChatGPT, which kicked it all off just over a year ago. Stay tuned…..
DEC 8, 2023

Another day, another huge new AI model revealed. This time it’s Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
Yet in reaction to the announcement Google’s stock only got a couple-percentage-point bump—a minimal response to supposedly being one step closer to artificial general intelligence (AGI), the holy grail of AI. Perhaps because, while the video makes it seem like the AI is watching the person’s actions (like the viewer is) and reacting in real time, that’s… not what’s going on. Rather, Google pre-recorded it and sent individual frames of the video to Gemini to respond to, along with more informative prompts than shown, and then edited Gemini’s replies to be shorter and thus, presumably, more relevant. Factor all that in, and Gemini doesn’t look that different from GPT-4, scoring only slightly better on batteries of tests, and GPT-4 gives much the same answers to photos taken from the video. It was the stitching together of an illusion.
A synecdoche of the industry’s current state. It was only one month ago that OpenAI held its DevDay launch, unveiling with pomp a novel “GPT store” of apps. The presentation projected the image of an ascendant Sam Altman as heir to Steve Jobs. In retrospect, the presentation was a high-water mark, or at least some sort of local peak, for the company and the AI industry as a whole. Just weeks later, the news that the board had fired Sam set the internet ablaze and led to increasingly speculative reporting on the backroom maneuvers of the non-profit turned for-profit.

Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worry and considerations. The money involved is staggering—Anthropic announced they would compete with OpenAI and raised 2 billion dollars to train their next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
Yet, who precisely is the waiting audience for the GPT Store? I haven’t seen it mentioned much, if at all, on social media. Even news stories about the GPT Store post-announcement are scarce, except a reveal it’s been delayed to 2024. I did, however, find a slideshow from Gizmodo of the current kinds of things they expect will be in GPT store, based on current GPTs. Here’s the first one:

While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is it does not live up to the hyped Apple-esque expectation.
And listen. We all know that, barring further infighting and coups, neither OpenAI, nor Anthropic, nor any of these leading players is in any real immediate economic danger. That’s absolutely not what I’m saying. People in the industry are used to criticisms, which too often are some academic finger-wagging warning that AI will never work, that artificial general intelligence (like anything resembling a human’s) is impossible, and so on. Right now Waymo’s self-driving cars are outperforming humans, at least in the sense of getting into 76% fewer accidents. And given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge—all while noting even leading models still have around a 3% hallucination rate, which stacks up in a complex task.
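The point about a ~3% hallucination rate “stacking up” can be made concrete with a quick back-of-the-envelope calculation. This is my own illustration, and the assumption that each step fails independently is mine, not the author’s:

```python
# Sketch: if each step of a multi-step task carries an independent
# ~3% hallucination risk, the chance of a fully error-free run
# decays geometrically with task length.

def p_error_free(steps: int, hallucination_rate: float = 0.03) -> float:
    """Probability that every step of a task is hallucination-free."""
    return (1 - hallucination_rate) ** steps

for n in (1, 10, 50, 100):
    print(f"{n:>3} steps -> {p_error_free(n):.1%} chance of no hallucination")
```

Under that simple independence assumption, a 100-step task completes without any hallucination less than 5% of the time, which is why a small per-step error rate still matters for complex work.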
A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative. What are AIs of the GPT-4 generation best at? It’s things like:
writing essays or short fictions
digital art
chatting
programming assistance
The proposed GPT Store is a lot of versions of this, and these are also the use cases that high-profile investors are explicitly bullish about. Here’s from Andreessen Horowitz’s “The Economic Case for Generative AI:”
DEC 11, 2023

Erik Hoel has written an article on the supply paradox of AI. It’s well argued, well written, and interesting, and it highlights some of the weirder aspects of the reality we’re living in. But it’s also wrong, for what I think are pretty interesting reasons.
Hoel's main argument is somewhat as follows, I paraphrase:
AI is doing some pretty amazing things, like helping code or writing Shakespearean sonnets about tax law
AI companies are also raising tons of money
But AI is mainly only disrupting low value fields - essayists, graphic artists and basic programming - because that’s where the training data is
This will not create enough value, because these are low value fields
Higher value fields like law and medicine are outlawed to AI anyway
Oops
It's a clever argument. But it’s incomplete. The core crux is that what AI can do actually is amazing, and where the disruption starts isn't a great indication of where it ends. Because …
Demand isn't static
We’ve traditionally grown GDP not just by displacing a segment of the economy but by growing the pie. The first is efficiency, and it’s essential, but it’s what we do with the newly freed-up resources that’s truly magical.
Obviously proponents of investment in these companies will say all this is only the beginning. What about AI personal assistants? Robot butlers? All those things! Even assuming all that comes true sometime over the next decades: what is the market for personal assistants? What’s the market for butlers? Most people have neither of those things.
This is also what they said about cars, and mobile phones, and computers. Remember Microsoft's slogan of “A PC on every desk and in every home”? It seemed outlandish, audacious. Like the moon landing but harder because it needed to convince everyone.
And it came true.
So asking where exactly the benefits will come from is a bad question.
A much better question is, can this new thing do something interesting and useful? If so, we will find ways to make lives better by using them.

An example. Slightly more than a century ago we used to spend almost double our average income share on food. The drop did not come through some weird scarcity, but through abundance.
That’s why the new abilities are more interesting than the new applications. A sufficient change in scale is equivalent to a change in scope. Even if error-prone in unpredictable ways, the ability to convert one form of input to another underpins a big chunk of our GDP.
It can mean making APIs talk to each other, it can mean converting vague ideas about a company’s strategy to something easily communicable, it can mean being able to create movies and music and stories leading to a renaissance of our interest in art.
We can’t easily know, but enabling more efficient production is the hardest part. Finding new ways of consumption is comparatively easy. This is because …
Abundance is what technology brings
Also, information technologies have always been weird, because information in some sense is entirely surplus to survival. Now you could say to me that this makes it irrelevant or less important in some sense, but that’s not true, because it is what underpins almost all of our civilisational growth so far.
Unimaginable abundance in any domain is the way humanity reallocates resources to new bottlenecks. At least in the absence of a godlike central planner.
The world of art has always been less lucrative on the margins, because the supply of artists has always just exceeded the demand for them. This isn’t magic. It’s because even prospective artists need to eat, and they make different decisions when they can’t.
So when the ability to create art shifts downwards, the supply increases. Does this cheapen the art? I don't see how. It cheapens some art, sure, but only because other art rises to take its place. We can make cathedrals and temples that would've made emperors swoon a couple centuries ago but they go unnoticed, with people looking at them like a boondoggle or like a slightly prettier parking lot.
But without abundance there's no added supply. That's what technology does.
Name the usual suspects - electricity, transportation, energy and heating, cooling, calligraphy, theater, opera - they’re all examples of areas that technological abundance demolished. Or changed entirely, depending on your point of view.
The abundance that brought us the ability to listen to music in our dining rooms at the whisper of our whim is also what brought about the rise in those wanting to do this.

Which brings us to the central crux …
The Supply paradox of AI
Hoel defines the Supply paradox as follows.
the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
The simplest reading of this, that it is the hard things that are worth doing, is true but trivial. The claim that supply is what makes something easy for AI, however, is incorrect.
The value of AI isn’t created purely by the availability of data. Quality matters. In fact quality really matters. For instance, we have far more data and writing in the form of scientific papers. Here’s the breakdown for The Pile, a popular starting point for much of modern LLM training.

Is it true that LLMs are better at generating PubMed abstracts than at writing sonnets or fake Wikipedia articles? Not really. They’re great at customer-support conversations and higher-value enterprise software, compared with writing useful short stories, which they ought to be better at.
Lessig, Nov 28

In my first book, Code and Other Laws of Cyberspace (1999), I told the story of why I had become a lawyer. My uncle, Richard Cates, had been the lawyer working for the House Committee on Impeachment (along with a much younger lawyer, Hillary Rodham (soon to be) Clinton). In 1974, just before Nixon resigned, Cates visited us in Pennsylvania and took me for a long walk. I wanted to know why he was doing what he was doing — persecuting Richard Nixon! I was 13. Uncle Dick was the only Democrat in our extended family. He was also the only lawyer. My father despised lawyers. I loved everything about my father.
Uncle Dick explained his job to me. It was, as he said, nothing more than to teach the facts of the case — the Watergate coverup — to Members of Congress. As I remembered his words in Code:
It is what a lawyer does, what a good lawyer does, that makes this system work. It is not the bluffing, or the outrage, or the strategies and tactics. It is something much simpler than that. What a good lawyer does is tell a story that persuades. Not by hiding the truth or exciting the emotion, but using reason, through a story, to persuade.
When it works, it does something to the people who experience this persuasion. Some, for the first time in their lives, see power constrained by reason. Not by votes, not by wealth, not by who someone knows — but by an argument that persuades. This is the magic of our system, however rare the miracles may be.
Those words changed me. They certainly changed who I wanted to be. A dozen years later, I would begin law school. Four years after that, I was clerking for Justice Scalia. And in that clerkship, too, if in only glimpses, I saw what Dick had spoken about. By then, I was no longer a Republican. Certainly not a conservative. But at that point, Scalia had a practice of hiring one liberal clerk. I was the token liberal for the OT 1990 term. And in that year, I sometimes saw law work as Dick had described it. I saw the quiet reasoning of law clerks flip the vote of the Court — twice, one time, from unanimous in one direction to unanimous in another; the other time, from 9–0, to 7–2 the other way round. And I saw Scalia repeatedly argued away from his initial conservative views, to views that were more consistent with his theory of originalism. The last time I saw him before he died, I joked that he had ruined me as a law professor: That as a clerk, he had shown me again and again how reason could drive him to do the “right” thing (as in the originalist thing) rather than the conservative thing; and that I had predicted the same again and again after I became a law professor. But again and again, I told him, he had let me down. Scalia laughed his famous laugh, and we spent the next hour arguing — with reason—about whether my criticism was correct.
Yet it is increasingly hard to sustain such confidence in reason’s power today. There are as many Americans today who believe the 2020 election was stolen as believed it was stolen on January 6. Reason is not responsible for that fact. Time and again, we all have the experience of engaging with someone about something relatively difficult. Time and again, we walk away believing that either we can’t persuade or that reason doesn’t work. The enterprise feels hopeless; most simply give up.
And then I thought, maybe reason isn’t dead. Maybe it’s just reason for us, today. Maybe another form of intelligence could play the reasoning game better. Like, for example, AI.
So I decided to test it. I’m not a supporter of RFK Jr. Indeed, I fear he is a fantasist. But among the “conspiracy theories” that RFK Jr. defends is a theory he has come to late in his life — that his father was not actually killed by Sirhan Sirhan. Certainly, Sirhan shot at RFK. Certainly, he had opportunity, motive, and means — and he confessed (though he said he didn’t remember the event). Most take that confession — and the supporting assertions by those in the government responsible for making such assertions—to mean that Sirhan killed RFK.
And yet, it is perfectly clear that can’t be correct. The coroner who conducted RFK’s autopsy—in the presence of military coroners whom he had flown in to confirm his work as he did it—concluded that RFK was killed by shots at close range in his back. Sirhan was never behind Kennedy; never within inches of Kennedy; and every single bullet that Sirhan fired is accounted for—and none entered Kennedy’s body.
So I wanted to see how well ChatGPT responded to this conflict between the views of the authorities — that Sirhan killed RFK—and the view of the coroner—that Sirhan could not have killed RFK. Here’s the transcript:

I was astonished by this exchange. Because here was the reason Cates was talking about. ChatGPT made its point. I pointed out the weakness in its point. ChatGPT then “rethought” its argument and acknowledged its mistake. And then, it even acknowledged its failure fully to acknowledge its mistake. By the end, its conclusion contradicted where it had begun: Through “reason” it had been “persuaded.”
Dec. 12, 2023 5:00 PM PST
Silicon Valley’s phones buzzed with alerts last night, offering surprising news about one of its bellwether companies: A federal jury decided Google had exercised unlawful monopoly power over app developers like Epic Games. Investors so far have responded mostly with a shrug. By the end of today, the stock of Google’s parent company, Alphabet, had slumped by less than a point on a day the S&P 500 ticked up.
The muted response might reflect the slow-moving mechanics of the specific court case with Epic, maker of the game Fortnite. Google still has a chance to win that case on appeal. A ruling on one of Alphabet’s other high-profile antitrust cases—the Department of Justice’s lawsuit challenging the company’s search engine deal with Apple—isn’t expected until next year.
The broader truth, however, is that Wall Street is betting big tech can amass power faster than antitrust regulators, judges and juries can chip away at it. The stock prices of Alphabet, Amazon, Apple, Meta Platforms and Microsoft have seen more than twice the growth rate of the S&P 500 index this year—a time during which most of those five firms faced an onslaught of claims that they had abused monopoly power.
Investors have had plenty of fuel for their confidence this year. Lina Khan’s Federal Trade Commission, demonstrating more bark than bite, keeps losing important cases against tech giants like Meta and Microsoft. Meanwhile, the latest shiny tech advancement has seemingly played right into tech giants’ hands. Large language models require the kind of vast resources—data, cash, data centers, chips—that big tech has and that startups usually don’t have. (Even when startups are on the cutting edge, à la OpenAI and Anthropic, big tech has managed to buy its way in.)
Still, investors may be overlooking how Epic’s win over Google underlines a major sentiment shift that will continue to dog big tech. Unlike Apple’s earlier win in a similar case with Epic, which a judge ruled on, nine San Francisco jurors decided yesterday that Alphabet had broken the law. The people are speaking. That might be an antitrust indicator investors should remember—and a more difficult battle for big tech to win.

Netflix
Dec 13, 2023
This is a huge story, so I'll just cut to the chase; the notoriously secretive Netflix has published all its streaming numbers for the public to see.
Netflix is a streaming giant that got a huge head start building its content library before everyone else got involved. It has over 200 million subscribers and has not really told people what those subscribers watch.
Transparency became the buzzword of the year when both SAG and the WGA asked for more of it in their contracts, so they could be paid when their shows were popular.
Now, right before the holiday break, Netflix has answered this call and put everything out into the world.
You can visit this site to see the full engagement report, but we'll go into details below.
The headline is that to provide more transparency, Netflix will publish a What We Watched: A Netflix Engagement Report twice a year.
They're going to cover:
Hours viewed for every title—original and licensed—watched for over 50,000 hours.
The premiere date for any Netflix TV series or film; and
Whether a title was available globally.
The report will cover 18,000 titles and represent 99% of all viewing on the streamer.
The streamer says: "Success on Netflix comes in all shapes and sizes, and is not determined by hours viewed alone. We have enormously successful movies and TV shows with both lower and higher hours viewed. It’s all about whether a movie or TV show thrilled its audience—and the size of that audience relative to the economics of the title; and to compare between titles it’s best to use our weekly Top 10 and Most Popular lists, which take into account run times and premiere dates."

Netflix
The engagement report is really eye-opening when it comes to what people are watching.
Before we get started, huge shoutout to No Film School founder, Ryan Koo, whose Netflix movie Amateur was viewed over 4 million hours, more than more recent releases like I Think You Should Leave, Steven Soderbergh’s The Laundromat, Entergalactic, and Tiger King 2.
But right at the top of the list, with over eight hundred million hours of view time, is The Night Agent, with Ginny and Georgia, The Glory, and Wednesday right behind in the hundreds of millions.
There is palpable enthusiasm for non-English stories, which generated 30% of all viewing on the streamer.
DEC 14, 2023

A new television station will feature artificial intelligence (AI) generated news anchors for the first time in the U.S. next year.
New Los Angeles-based station Channel 1, which will launch in 2024, is hoping to be the first nationally syndicated news station in the U.S. to use AI-created anchors instead of real humans as presenters.
According to a report by the Daily Mail, Channel 1’s news segments will use a combination of AI-generated humans and digital avatars that have been created using doubles of real actors.
However, real-life human anchors will be used for Channel 1’s most important news reports.
The station wants to launch these AI-generated news anchors on free ad-supported streaming TV — including apps such as Crackle, Tubi, or Pluto — as early as February.
Channel 1 founder Adam Mosam tells the Daily Mail that the television station is aiming to “get out in front and create a responsible use of technology.”
Mosam assured the publication that it would not exploit AI technology. He also said that the company plans to be transparent with viewers about what footage is original and what is AI-generated.
However, others in the media industry have not been so convinced and raised concerns about the future of journalism.
“If you believe in the concept of ‘fake news,’ you have seen nothing,” Ruby Media Group CEO Kristen Ruby shared on X (formerly known as Twitter).
“At least your news is presented by humans. When AI news anchors replace human news anchors — the concept of fake news will have a totally different meaning.”
While AI-generated news anchors may be a new concept in the U.S., they have been used on China’s state news channels since 2018.
Earlier this year, China revealed its latest digital news anchor — an AI-powered “woman” named “Ren Xiaorong” that delivers news 24 hours a day, 365 days a year.
By Cory Weinberg and Ann Gehan
Dec. 12, 2023 2:46 PM PST

High-end pet food startup The Farmer’s Dog is working with JPMorgan and other investment banks to raise hundreds of millions of dollars by early next year, in a deal that could value it significantly higher than its last $2.5 billion valuation, people familiar with the matter said.
The fundraising could help the company buck the trend of collapsing direct-to-consumer valuations over the past two years. The Farmer’s Dog, which delivers bags of customized dog food to doorsteps of customers who sign up for a subscription, expects to generate more than $800 million in sales this year, the people said. That represents growth of about 60% from 2022, as pet owners continue to splurge on their animals even as they cut back on spending for themselves.
THE TAKEAWAY
• Pet food startup expects to top $800 million in revenue this year
• Slower spending, higher costs have hurt many direct-to-consumer sellers
• Essential goods have remained a bright spot
At that level of revenue, The Farmer’s Dog will generate higher revenue than publicly traded pet food company Freshpet, which has a market capitalization of $3.8 billion. The company’s revenue is likely to grow further in 2024. Its annual recurring revenue will top $1 billion by the end of this year, the people said, adding that The Farmer’s Dog isn’t yet profitable.
Founded in 2014, The Farmer’s Dog says its high-end dog food is made with fresher, more nutritious ingredients than traditional dry dog food, allowing it to charge higher prices. It’s one of the highest-valued startups selling products or services to pet owners, a category that venture capital flooded with fundraising dollars after more people bought or adopted cats and dogs during the pandemic.
The company couldn’t immediately be reached for comment.
The Farmer’s Dog has previously raised more than $150 million in total from investors including Shasta Ventures, Insight Partners and Forerunner Ventures, and was most recently valued at about $2.5 billion in June 2022, according to a copy of its corporate charters provided by the Prime Unicorn Index. The firm plans to set aside some of the capital it raises to allow investors or employees to cash out some shares, the people said, as it doesn’t expect to go public next year.

This is the final That Was The Week for 2023. The next edition will be out on the weekend of 12-13 January 2024. This is issue #47 since the start of the year.
Happy New Year to you all
A reminder for new readers: That Was The Week collects the best writing on critical issues in tech, startups, and venture capital. I select the articles because they are of interest. The selections often include things I entirely disagree with, but they express common opinions, or they provoke me to think. The articles are only snippets; click on the headline to go to the original. I express my point of view in the editorial and the weekly video below.
This Week’s Video and Podcast:
Thanks To This Week’s Contributors: @kteare, @ajkeen,
Contents
News Publishers See Google’s AI Search Tool as a Traffic-Destroying Nightmare
Partnership with Axel Springer to deepen beneficial use of AI in journalism
The AI revolution is an opportunity for writers (the human kind)
Step-Ups & Duration : The Shape of Things to Come to the Series A in 2024
Seed-stage startups — and their investors — react to higher hurdles for Series A funding
Industry Ventures Partner Roland Reynolds
This week’s Dall-E-generated image was created by asking:
I want a 16:9 image that depicts the irony that big media companies that have historically criticised Google web search for "stealing content" are now worried that AI may have the impact of reducing web search traffic.
The irony is obvious. This week Axel Springer struck a long-term deal to allow OpenAI to train on its published output. And according to Rupert Murdoch’s Wall Street Journal:
Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm Similarweb.
Publishers are reacting to the new AI platforms in different ways. All are aware that the assumptions held since the emergence of the internet, and web 2.0, are now being called into question.
For all of the hullabaloo over the years decrying search engines for “stealing” content by indexing it, it is now clearer than ever that media companies depend on Google, and specifically on web search, for a large portion of their traffic.
Since the iPhone arrived in 2007, mobile has been eroding web traffic. AI will compound the effect because it can deliver what a user needs without a URL or a visit to a destination site.
What the WSJ does not drill into is that Google too is threatened by the rise of OpenAI. Google lives off the revenue that web searches generate; less web search means less revenue. As a result, Google and big media are actually on the same side: they both need to slow down the impact of AI on web search. From the WSJ:
Liz Reid, a Google vice president who works on the search engine, said the company is committed to driving traffic to web publishers. She said Google has had more conversations with publishers than it typically does after introducing substantial changes to search, because “it’s a more significant change in the evolution of the space.” She offered no timeline for Google’s broader rollout of the AI-powered search tool.
“Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles, according to people familiar with the matter. Many news outlets, from BuzzFeed to USA Today owner Gannett, already are experimenting with AI tools.
I hate to bring bad tidings, but the decline of web search is inevitable and will be driven by the success of conversational interfaces.
That does not have to mean the decline of big media. Axel Springer seems to have understood that media succeeds insofar as it penetrates new platforms. By embracing OpenAI, it opens the possibility of its linked pages being available to users via the conversational interface.
Good content, written by authoritative writers, will be no less important in the future than in the past.
Substack’s Hamish McKenzie published a missive on AI and writing this week. He starts out understanding the angst AI may cause:
Many writers are anxious about how their lives and work will be affected as artificial intelligence becomes more powerful. Companies like Writer offer tools to instantly generate materials that might otherwise have been created by humans. Some news organizations have already started publishing automatically generated stories, even as they lay off writers and editors. With a one-sentence prompt and 20 seconds of thought, one can now get ChatGPT to turn out an essay that rivals something an experienced writer might have taken days to produce.
Whether you’re for or against this development ultimately doesn’t matter. It’s happening. The AI hype cycle may go through some ups and downs, but the new epoch has unquestionably begun. These technologies are already real, effective, and proven, which makes this particular technomania different from other hype cycles.
Given this set of conditions, writers—and all other culture makers whose livelihoods will in some way be touched by AI—are entitled to feel worried and perhaps even a little pissed. After all, these new machines are trained on a vast corpus of work produced by humans. And those humans, most of whom have never found a way to turn their art into riches, aren’t getting compensated along the way.
But then counsels creators:
But now is when we tell you that we don’t think this will be bad for culture makers. In fact, it will be very good. While AI will take over the rote and the replaceable, it will give superpowers to people doing original work, while at the same time increasing the value of that work.
When it comes to Substack, we have focused on using the internet’s powers to serve, rather than subsume, writers. There’s nothing in the AI revolution that suggests we will have to change this approach. From image generation and audio transcription tools we’ve already built, to a future where a single writer can make a feature film, and beyond, we will focus on harnessing the power of these tools for human users. If the computer is a bicycle for the mind, AI will be a jumbo jet.
The cost of “content creation” will be driven to almost zero. But content isn’t culture.
This same surge in AI-led content production will simultaneously fuel a tremendous need for cultural connection: real humans in communion with one another. These relationships help us make sense of the world, and to know where to direct our attention. Their value will dramatically increase. Culture will become the most important and fastest-growing slice of our global domestic product.
If we try to pick winners and losers at this point, my money is on the resisters, or “decels,” as the biggest losers. Google has to choose whether to be one of them by clinging to the past. Big media has to make the same call. But creativity will be a winner, as will those who lean into the changes.

By Keach Hagey, Miles Kruppa and Alexandra Bruell
Dec. 14, 2023 5:30 am ET
Shortly after the launch of ChatGPT, the Atlantic drew up a list of the greatest threats to the 166-year-old publication from generative artificial intelligence. At the top: Google’s embrace of the technology.
About 40% of the magazine’s web traffic comes from Google searches, which turn up links that users click on. A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten.
What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine.
Google’s integration of AI is crystallizing for media outlets the perils of relying on big technology companies to get their content in front of readers and viewers. Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
As bad as the social-media downshift is, Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm Similarweb.
“AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer, referring to the technology that makes generative AI possible. His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
While Google says the final shape of its AI product is far from set, publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites. Most gallingly for publishers, Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research….
Axel Springer is the first publishing house globally to partner with us on a deeper integration of journalism in AI technologies.

December 13, 2023
This news was originally shared by Axel Springer and can also be read here.
Axel Springer and OpenAI have announced a global partnership to strengthen independent journalism in the age of artificial intelligence (AI). The initiative will enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics, and explicitly values the publisher’s role in contributing to OpenAI’s products. This marks a significant step in both companies’ commitment to leverage AI for enhancing content experiences and creating new financial opportunities that support a sustainable future for journalism.
With this partnership, ChatGPT users around the world will receive summaries of selected global news content from Axel Springer’s media brands including POLITICO, BUSINESS INSIDER, and European properties BILD and WELT, including otherwise paid content. ChatGPT’s answers to user queries will include attribution and links to the full articles for transparency and further information.
In addition, the partnership supports Axel Springer’s existing AI-driven ventures that build upon OpenAI’s technology. The collaboration also involves the use of quality content from Axel Springer media brands for advancing the training of OpenAI’s sophisticated large language models.
“This partnership with Axel Springer will help provide people with new ways to access quality, real-time news content through our AI tools. We are deeply committed to working with publishers and creators around the world and ensuring they benefit from advanced AI technology and new revenue models,” says Brad Lightcap, COO of OpenAI.
DEC 13, 2023

There’s a billboard on one of the main streets in San Francisco that seems almost designed to trigger creative people. It’s advertising an AI startup that has raised more than $126 million in venture funding to provide automated writing tools to large companies. “Empower people. Transform work,” reads its tagline. But it’s the name of the company that really rankles.
Writer.
Many writers are anxious about how their lives and work will be affected as artificial intelligence becomes more powerful. Companies like Writer offer tools to instantly generate materials that might otherwise have been created by humans. Some news organizations have already started publishing automatically generated stories, even as they lay off writers and editors. With a one-sentence prompt and 20 seconds of thought, one can now get ChatGPT to turn out an essay that rivals something an experienced writer might have taken days to produce.
Whether you’re for or against this development ultimately doesn’t matter. It’s happening. The AI hype cycle may go through some ups and downs, but the new epoch has unquestionably begun. These technologies are already real, effective, and proven, which makes this particular technomania different from other hype cycles.
Given this set of conditions, writers—and all other culture makers whose livelihoods will in some way be touched by AI—are entitled to feel worried and perhaps even a little pissed. After all, these new machines are trained on a vast corpus of work produced by humans. And those humans, most of whom have never found a way to turn their art into riches, aren’t getting compensated along the way.
To many, this revolution might feel like the super-expression of a trend that has been commoditizing the work of writers and artists for decades. First, Google and Facebook broke the business models that once supported their work. Then social media turned everyone into content drones. Now here come the mega-robots to vaporize everything writers ever stood for. And of course the techies will make billions, while writers will get less than scraps.
Imagine how humiliating it might feel, then, for one of those opportunists to take the name Writer.
But now is when we tell you that we don’t think this will be bad for culture makers. In fact, it will be very good. While AI will take over the rote and the replaceable, it will give superpowers to people doing original work, while at the same time increasing the value of that work.
When it comes to Substack, we have focused on using the internet’s powers to serve, rather than subsume, writers. There’s nothing in the AI revolution that suggests we will have to change this approach. From image generation and audio transcription tools we’ve already built, to a future where a single writer can make a feature film, and beyond, we will focus on harnessing the power of these tools for human users. If the computer is a bicycle for the mind, AI will be a jumbo jet.
The cost of “content creation” will be driven to almost zero. But content isn’t culture.
This same surge in AI-led content production will simultaneously fuel a tremendous need for cultural connection: real humans in communion with one another. These relationships help us make sense of the world, and to know where to direct our attention. Their value will dramatically increase. Culture will become the most important and fastest-growing slice of our global domestic product.
Humanity has seen a story like this before. Prior to the industrial revolution, more than two-thirds of a country’s labor force had to work in agriculture to be able to feed its entire population. Since the automation of agriculture, that share has fallen to less than 5%. And yet we have abundant food and more jobs to do than ever. Today, many people have the kind of work and prosperity that their great-grandparents could only have imagined.
No matter how advanced AI gets, there will be unceasing demand for human connection. We will want to show each other how we feel as people. We’ll tire of getting what we want, and instead yearn to figure out together what we should want. We will share our hearts and compare our scars. We will long for the sound of each other’s voices, and to shape our own and each other’s stories, in wild and wonderful new ways.
AI will never be able to replace the dynamic that is most central to Substack: human-to-human relationships. New robots may rise and try to claim the mantles of writers and other culture makers, but none can seriously lay claim to what is most important about these people and groups—the human connections they are built on. That’s why we are making Substack the place for trusted, valuable relationships between thinking, breathing, feeling people.
DECEMBER 14, 2023 | On Technology, Photo Features

Over the past few months, I have had the opportunity to spend time with Apple’s soon-to-be-released Vision Pro, a spatial computer that you wear on your face. It is the next step in the long evolution of mixed reality glasses that started with the awkward, and ahead-of-its-time, Google Glass.
I have already stated on multiple occasions that Vision Pro is going to be a device that redefines our relationship with visual media and content, from movies to live sports to our home videos and photography. The display is going to leave us wanting and asking for more from our devices.
Although my time with the device has been limited — and I have yet to play a single game on it or do any kind of work on it—it is hard not to be excited by the possibilities, especially for photographers (like myself) and filmmakers.
My initial experience with the device was soon after it was announced; the next time I got to use it was when Apple released the developer beta of iOS 17.2, which gave the iPhone Pro models the ability to capture spatial video. Since then, Apple has released iOS 17.2 as an update to all phones.

As an iPhone 15 Pro user, you can enable spatial video under Settings → Camera → Formats. When you capture a video (sadly, only in landscape mode for now), you’ll see a Vision Pro icon. Turn it on, and videos will be recorded as spatial videos.
By now you might be screaming: what the hell is a spatial video? Spatial video is a mixed-reality video format that records the depth and spatial information of a scene so that, on playback, you get a more immersive, three-dimensional (3D) experience. The iPhone 15 Pro uses its main lens and its ultra-wide lens to capture the depth and spatial information. Spatial videos are captured at 1080p, 30 frames per second, in the HEVC format.
When viewing on a Vision Pro, it is a unique enough video experience to draw a gasp. Of course, given that the Vision Pro display is over 8K, your eyes can tell you that the spatial video is of slightly lower quality — 1080p. Why the lower quality? The ultra-wide lens captures images and videos at 12 megapixels. As a result, it doesn’t have enough information to create a field of view to match the normal (aka main) camera.
The playback of the videos happens in what seems like a hazy, light, borderless frame, giving the videos a dreamlike quality. It is a very strange feeling — as if you have been transported back in time — and the videos have a dimensionality to them. The spatial videos I experienced felt more like memories — somewhere between reality and an abstraction of it.
During my visit, Apple asked me to visit a special area where a sushi chef was making sushi, and I captured the video to be played back. I zoomed into his fingers massaging the rice, the sushi on the plate. The video was absolutely stunning, but clearly, it lacked the emotional appeal of a family video. On a recent visit, one of Apple’s team members took a video of me walking through the Apple Orchard toward the camera. It was almost as if I was walking out of the frame.
Long after we had moved on from the videos and demos, and I had returned home, I was left wondering: what if I had had the iPhone 15 Pro with me in Greenland? What an amazing way it would have been to capture the majestic beauty of the landscape. What would it have been like to capture inching closer to a mammoth iceberg? Or just a cable car slowly approaching me?
I was saddened that I couldn’t capture the video of Mr. Gibbs, my dogson, who has now been missing for almost a month. I miss him so much — and it would have been nice to relive his presence, even if just in video form.
Spatial videos — whether from Apple or anyone else — remind me of that scene from Mad Men, with Don Draper pitching the Carousel, a round photo projector.
The typical software startup raised its Series A 15 months after raising its seed, at 2x the seed valuation.
A year ago, that Series A would have been raised three months earlier at 3.5x the valuation.
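For concreteness, here is a tiny illustrative calculation using just the two data points above (2x over 15 months now, versus 3.5x over 12 months a year ago). Annualizing the step-up is my own framing, not the author's:

```python
def annualized_step_up(step_up: float, months: float) -> float:
    """Convert a valuation step-up achieved over `months` into an
    equivalent 12-month multiple, so rounds of different durations
    can be compared on the same footing."""
    return step_up ** (12 / months)

today = annualized_step_up(2.0, 15)      # 2x step-up over 15 months
year_ago = annualized_step_up(3.5, 12)   # 3.5x step-up over 12 months
print(f"Today: {today:.2f}x per year vs. a year ago: {year_ago:.2f}x per year")
```

On these figures, today's typical step-up works out to roughly 1.74x per year, against 3.5x per year a year ago, which makes the slowdown even starker than the raw multiples suggest.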

Here’s another way of visualizing the data. The red dot is 2023 - the other years are in grey. Across rounds for the 50th & 75th percentile of companies, step-up valuations are the lowest multiples in about ten years.

The time between rounds has also lengthened - something all of Startupland felt throughout 2023.

In almost every category, the time between rounds is also at decade highs.

In 2024, I expect most of these figures to revert to the mean - not the peaks of 2020-2021, but more akin to 2018.
The public valuation environment & pace of venture capital investments mirror those years better than others.
Five years ago, valuations continued to grow, competitive rounds didn’t last in market for more than a week or so, & pre-emptions did occur. But most of the market operated at a steady cadence with predictable figures for traction, valuation, & dilution.
Connie Loizos @cookie / 11:10 AM PST•December 14, 2023

Image Credits: champc / Getty Images
The hurdle for Series A funding is a lot higher than it was a year ago — and investors in seed-stage companies are having to respond.
They don’t have much choice if they want their startups to survive. When the market abruptly turned in the spring of 2022, late-stage companies were the first to feel the pain. But that downward financial pressure has more recently made its way to much newer outfits, which are getting lower valuations in their next round – 1.6x in the second quarter, the lowest value since the third quarter of 2013, per Pitchbook data – and facing choosier Series A investors with plenty of options.
There’s no shortage of ways that VCs are getting creative on this front. The European venture firm Breega touts its “scaling squad” to help support its many seed bets. Pear VC, a Bay Area-based seed-stage venture firm, is constantly rolling out new programming aimed at supporting and educating the nascent teams that it backs.
Even the bigger, stage-agnostic firms are doing more to telegraph that they’re responding to the current market. In October, for example, the investment firm Greylock rolled out Edge, a three-month company-building program designed to advance select pre-idea, pre-seed and seed founders from inception to product-market fit.
VC heavyweight Lightspeed Venture Partners is also stepping up its game. The firm has long written early (and sometimes, first) checks to nascent startups, including the messaging app Snapchat; the application performance management outfit AppDynamics (acquired by Cisco just ahead of its IPO); and the publicly traded cloud computing company Nutanix (current market cap: $11.2 billion).
By the firm’s telling, it has long focused on polishing such diamonds in the rough. Still, given the rising standards of Series A investors across the board, Lightspeed tells TechCrunch that it’s now formalizing some of the mentorship it has long offered its portfolio companies through a company-building program for its founders dubbed Launch.
Led by partner Luke Beseda, the purported idea isn’t to attract more founders to Lightspeed but rather to clear the path for the startups it has already funded so they can get to that Series A round. Nearly all of them face the same questions and obstacles, explains Beseda. “They need to know: how do I set up and run a business? How do I hire and build a core team? How do I build my product strategy through customer interviews and design partnerships and drive revenue?”
Going forward, Lightspeed hopes to more systematically answer these questions through expert-led workshops, seed “playbooks,” and other toolkits that Lightspeed is offering through its new program.
Certainly, every bit of help must be welcome right now.
While many startups are simply dissolving — at least 3,200 venture-backed U.S. companies have gone out of business in 2023, according to data compiled for The New York Times by PitchBook — others are finding the emphasis on year-over-year growth and annual recurring revenue is real and not going away any time soon.
Right now, that includes at the Series A stage of things.
MICHAEL PAREKH, DEC 13, 2023

What I’ve been calling ‘Small AI’ in this AI Tech Wave had a couple of notable developments this week, advancing its capabilities against peers and, on some measures, even against much larger Foundation LLM AI models (what I’ve been calling ‘Big AI’). Let me reset the context. Here’s the AI Tech Wave chart again as background for some of the discussion below.

Back in September, in a post titled “From Big AI to Small AI”, I highlighted:
“Big AI is where the industry’s attention and dollars are currently riveted on: Foundation LLM AI models that are getting ever more powerful running on ever more powerful GPUs in ever bigger Compute and power intensive data centers.”
“Think AI possibly a thousand times more powerful than today’s OpenAI GPT4, Google Palm2/Bard (and upcoming Gemini), and Anthropic’s Claude 2 in three years or less. That sure beats Moore’s Law by a mile and then some. As discussed earlier, over two hundred billion are being invested worldwide to get ready for these ‘Big Daddy’ Foundation LLM AI models.”
“Ok, so where does ‘Small AI’ fit? These are far humbler and more prosaic, but again ever more powerful, LLM AI models increasingly being run ‘locally’ on billions of devices already in the hands of over 4 billion users of smartphones and other computers. Also known as ‘the Edge’, as I’ve referred to it here and here. They’re starting small today in photo and other apps on your phone, and will spread to almost every app and service on billions of local devices, tapping into near-infinite local and cloud data sources.”
The Information notes today in “The Rise of ‘Small Language Models’ and Reinforcement Learning”:
“This week, several companies staked out new ground in the latest forefront of AI research: trying to prove that their models can do more with less. On Monday, the French AI startup Mistral—fresh off its $415 million funding round—published a new model called Mixtral 8x7B. The model, which is open-source, quickly racked up plaudits from AI researchers for its ability to match the quality of GPT-3.5 on some benchmarks despite its relatively puny size. Mixtral is small enough to run on a single computer (albeit one with a fairly hefty 100 gigabytes of RAM).”
“Mixtral 8x7B gets its name because it combines various smaller models that are trained to handle certain tasks, thereby running more efficiently. (It’s an approach called a ‘sparse mixture of experts’ model). Such models aren’t easy to pull off—OpenAI previously had to scrap development of a mixture of experts model earlier this year after it couldn’t get it to work, The Information previously reported.”
“Then, on Tuesday, Microsoft researchers published the latest version of their home-grown model called Phi-2. That model was tiny enough to run on a mobile phone, with just 2.7 billion parameters compared to Mixtral’s 7 billion (remember, OpenAI’s GPT-4 is believed to have around a trillion parameters across a number of expert models). It was trained on a carefully-selected dataset that is a high enough quality to ensure the model generates accurate results even with the limited computing power available on a phone.”
“Microsoft touted it as a “small language model,” a relatively new term for a category that Microsoft’s research division is leaning into heavily. It’s not yet clear exactly how Microsoft or other software makers might put small models to use, but the most obvious benefits would be driving down the cost of running AI applications at scale and dramatically broadening the usage of generative AI technology. That’s a big deal.”
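A toy sketch can make the "sparse mixture of experts" idea above concrete. Everything here (the router, the eight scaling "experts", the top-2 gating) is an invented illustration, not Mixtral's actual code; the point is only that every expert gets scored, but just the top-k are ever run, which is how such models do more with less compute:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_moe(x, experts, router, top_k=2):
    """Score every expert, but run only the top_k highest-scoring ones,
    then blend their outputs with softmax gate weights."""
    scores = router(x)
    chosen = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:top_k]
    gates = softmax([scores[i] for i in chosen])
    out = [0.0] * len(x)
    for gate, idx in zip(gates, chosen):
        y = experts[idx](x)  # only the selected experts do any work
        out = [o + gate * yi for o, yi in zip(out, y)]
    return out, chosen

# Eight tiny "experts" that just scale the input by different amounts.
experts = [lambda v, s=s: [s * vi for vi in v] for s in range(1, 9)]
# A toy router: higher-numbered experts score higher for positive inputs.
router = lambda v: [float(s * sum(v)) for s in range(1, 9)]

output, active = sparse_moe([0.5, 1.0], experts, router)
print(active)  # only 2 of the 8 experts were actually run
```

The design choice this illustrates is the one The Information describes: the parameter count of the whole model can be large, but the compute per token is only that of the few experts the router selects.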
It’s a big deal indeed. And other big tech companies (aka ‘Magnificent 7’) have yet to announce their moves in ‘Small AI’. I’m of course referring to Apple, along with a range of other technology companies that have yet to scale their AI moves.

In “Going Small to Go Big” this November, I emphasized:
“As I’ve highlighted before, Apple has led on this trend with over two billion devices in user hands:
“Apple has been including a neural engine as part of its homegrown processors since it introduced the A11 processor in the iPhone X in 2017.”
“This past Monday night, Apple unveiled its latest MacBooks during a rare prime-time event, a sign of its renewed focus on its own PC division. The latest MacBook Pros, along with an upgraded iMac, will all use a version of the company’s new M3 processors. Apple said the high-end version of the chip, called the M3 Max, will be capable of running complex AI workloads.”
“Chip makers AMD, Intel, Qualcomm, Micron, and others are all accelerating their efforts with AI chips particularly optimized for local ‘reinforced learning loops’ for AI inference at ‘the Edge’, which are going to be the primary driver of how users augment themselves with AI going forward.”
The key point here is that AI innovation is just beginning, and it’s coming fast and furious. There will be open and closed AI, narrow and wide AI, and big and small AI. All of these developments and more will soon give us far more augmented and native AI applications and services, far beyond OpenAI’s ChatGPT, which kicked it all off just over a year ago. Stay tuned…
DEC 8, 2023

Another day, another huge new AI model revealed. This time it’s Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
Yet in reaction to the announcement Google’s stock got only a couple-percentage-point bump, a minimal response to supposedly being one step closer to artificial general intelligence (AGI), the holy grail of AI. Perhaps because, while the video makes it seem like the AI is watching the person’s actions (like the viewer is) and reacting in real time, that’s... not what’s going on. Rather, Google pre-recorded it and sent individual frames of the video to Gemini to respond to, along with more informative prompts than shown, and then edited Gemini’s replies to be shorter and thus, presumably, more relevant. Factor all that in, and Gemini doesn’t look that different from GPT-4, scoring only slightly better on batteries of tests; GPT-4 gives much the same answers to photos taken from the video. It was the stitching together of an illusion.
It was a synecdoche of the industry’s current state as a whole. It was only one month ago that OpenAI held its DevDay launch, unveiling with pomp a novel “GPT store” of apps. The presentation projected the image of an ascendant Sam Altman acting as the heir to Steve Jobs. In retrospect, the presentation was a high-water mark, or at least some sort of local peak, for the company and the AI industry as a whole. Just several weeks later, the news that the board had fired Sam set the internet ablaze, and led to increasingly speculative reporting on the backroom maneuvers of the non-profit turned for-profit.

Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worries and considerations. The money involved is staggering: Anthropic announced it would compete with OpenAI and raised 2 billion dollars to train its next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
Yet who, precisely, is the waiting audience for the GPT Store? I haven’t seen it mentioned much, if at all, on social media. Even news stories about the GPT Store post-announcement are scarce, except for the reveal that it has been delayed to 2024. I did, however, find a slideshow from Gizmodo of the kinds of things they expect to be in the GPT Store, based on current GPTs. Here’s the first one:

While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is that it will not live up to the hyped Apple-esque expectations.
And listen. We all know that, barring further infighting and coups, neither OpenAI, nor Anthropic, nor any of these leading players is in any real immediate economic danger. That’s absolutely not what I’m saying. People in the industry are used to criticisms, which too often are some academic finger-wagging warning that AI will never work, that artificial general intelligence (anything resembling a human’s) is impossible, and so on. Right now Waymo’s self-driving cars are outperforming humans, at least in the sense of getting into 76% fewer accidents. And given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge, all while noting that even leading models still have around a 3% hallucination rate, which stacks up over a complex task.
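That last point, a per-step error rate stacking up across a multi-step task, is simple compounding arithmetic. A quick sketch, where the 3% per-step figure comes from the text above and the step counts are my own illustrative choices:

```python
# If each step of a task has a ~3% chance of a hallucination (the figure
# cited in the text), the chance that a multi-step task stays error-free
# decays geometrically with the number of steps. Step counts here are
# illustrative, not from the source.
p_hallucinate = 0.03
for steps in (1, 10, 20, 50):
    clean = (1 - p_hallucinate) ** steps
    print(f"{steps:>2} steps: {clean:.0%} chance of no hallucination")
```

On these assumptions, a 20-step task finishes hallucination-free only slightly more than half the time, and a 50-step task only about a fifth of the time, which is why a "small" per-step rate matters so much for complex work.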
A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative. What are AIs of the GPT-4 generation best at? It’s things like:
writing essays or short fictions
digital art
chatting
programming assistance
The proposed GPT Store is a lot of versions of this, and these are also the use cases that high-profile investors are explicitly bullish about. Here’s an excerpt from Andreessen Horowitz’s “The Economic Case for Generative AI”:
DEC 11, 2023

Erik Hoel has written an article on the supply paradox of AI. It’s well argued, well written, and an interesting piece, and it highlights some of the weirder aspects of the reality we’re living in. But it’s also wrong, for what I think are pretty interesting reasons.
Hoel’s main argument, paraphrased, runs roughly as follows:
AI is doing some pretty amazing things, like helping code or writing Shakespearean sonnets about tax law
AI companies are also raising tons of money
But AI is mainly disrupting low-value fields - essayists, graphic artists and basic programming - because that’s where the training data is
This will not create enough value, because these are low value fields
Higher value fields like law and medicine are outlawed to AI anyway
Oops
It's a clever argument. But it’s incomplete. The core crux is that what AI can do actually is amazing, and where the disruption starts isn't a great indication of where it ends. Because …
Demand isn't static
We’ve traditionally grown GDP not just by making existing segments of the economy more efficient but by growing the pie. The first is efficiency, and it’s essential, but it’s what we do with the newly freed-up resources that’s truly magical.
obviously proponents of investment in these companies will say all this is only the beginning. What about AI personal assistants? Robot butlers? All those things! Even assuming all that comes true sometime over the next decades: what is the market for personal assistants? What’s the market for butlers? Most people have neither of those things.
This is also what they said about cars, and mobile phones, and computers. Remember Microsoft's slogan of “A PC on every desk and in every home”? It seemed outlandish, audacious. Like the moon landing but harder because it needed to convince everyone.
And it came true.
So asking where exactly the benefits will come from is a bad question.
A much better question is: can this new thing do something interesting and useful? If so, we will find ways to use it to make lives better.

An example. Slightly more than a century ago we used to spend almost double our average income share on food. The drop did not come through some weird scarcity, but through abundance.
That’s why the new abilities are more interesting than the new applications. A sufficient degree of change in scale is equivalent to a change in scope. Even if error-prone in unpredictable ways, the ability to convert one form of input to another underlies a big chunk of our GDP.
It can mean making APIs talk to each other, it can mean converting vague ideas about a company’s strategy to something easily communicable, it can mean being able to create movies and music and stories leading to a renaissance of our interest in art.
We can’t easily know, but enabling more efficient production is the hardest part. Finding new ways of consumption is comparatively easy. This is because …
Abundance is what technology brings
Also, information technologies have always been weird, because information in some sense is entirely surplus to survival. You could take this to mean that it is irrelevant or less important in some sense, but that’s not true, because it is what underpins almost all of our civilisation’s growth so far.
Unimaginable abundance in any domain is the way humanity reallocates resources to new bottlenecks. At least in the absence of a godlike central planner.
The world of art has always been less lucrative on the margins, because the supply of artists has always just exceeded the demand for them. This isn’t magic. It’s because even prospective artists need to eat, and they make different decisions when they can’t.
So when the ability to create art shifts downwards, the supply increases. Does this cheapen the art? I don't see how. It cheapens some art, sure, but only because other art rises to take its place. We can make cathedrals and temples that would've made emperors swoon a couple centuries ago but they go unnoticed, with people looking at them like a boondoggle or like a slightly prettier parking lot.
But without abundance there's no added supply. That's what technology does.
Name the usual suspects - electricity, transportation, energy and heating, cooling, calligraphy, theater, opera - they’re all examples of areas that technological abundance demolished. Or changed entirely, depending on your point of view.
The abundance that brought us the ability to listen to music in our dining rooms at the whisper of our whim is also what brought about the rise in those wanting to do this.

Which brings us to the central crux …
The Supply paradox of AI
Hoel defines the Supply paradox as follows.
the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
The simplest reading of this, that it is the hard things that are worth doing, is true but trivial. The claim that supply alone is what makes something easy for AI, however, is incorrect.
The value of AI isn’t created purely by the availability of data. Quality matters; in fact, quality really matters. For instance, we have far more data and writing in scientific papers. Here’s the composition of The Pile, a popular starting point for much modern LLM training.

Is it true that LLMs are better at generating PubMed abstracts than at writing sonnets or fake Wikipedia articles? Not really. They’re great at customer support conversations and higher-value enterprise software, compared to writing useful short stories, which is something they ought to be better at if supply were all that mattered.
Lessig, Nov 28

In my first book, Code and Other Laws of Cyberspace (1999), I told the story of why I had become a lawyer. My uncle, Richard Cates, had been the lawyer working for the House Committee on Impeachment (along with a much younger lawyer, Hillary Rodham (soon to be) Clinton). In 1974, just before Nixon resigned, Cates visited us in Pennsylvania and took me for a long walk. I wanted to know why he was doing what he was doing — persecuting Richard Nixon! I was 13. Uncle Dick was the only Democrat in our extended family. He was also the only lawyer. My father despised lawyers. I loved everything about my father.
Uncle Dick explained his job to me. It was, as he said, nothing more than to teach the facts of the case — the Watergate coverup — to Members of Congress. As I remembered his words in Code:
It is what a lawyer does, what a good lawyer does, that makes this system work. It is not the bluffing, or the outrage, or the strategies and tactics. It is something much simpler than that. What a good lawyer does is tell a story that persuades. Not by hiding the truth or exciting the emotion, but using reason, through a story, to persuade.
When it works, it does something to the people who experience this persuasion. Some, for the first time in their lives, see power constrained by reason. Not by votes, not by wealth, not by who someone knows — but by an argument that persuades. This is the magic of our system, however rare the miracles may be.
Those words changed me. They certainly changed who I wanted to be. A dozen years later, I would begin law school. Four years after that, I was clerking for Justice Scalia. And in that clerkship, too, if in only glimpses, I saw what Dick had spoken about. By then, I was no longer a Republican. Certainly not a conservative. But at that point, Scalia had a practice of hiring one liberal clerk. I was the token liberal for the OT 1990 term. And in that year, I sometimes saw law work as Dick had described it. I saw the quiet reasoning of law clerks flip the vote of the Court — twice, one time, from unanimous in one direction to unanimous in another; the other time, from 9–0, to 7–2 the other way round. And I saw Scalia repeatedly argued away from his initial conservative views, to views that were more consistent with his theory of originalism. The last time I saw him before he died, I joked that he had ruined me as a law professor: That as a clerk, he had shown me again and again how reason could drive him to do the “right” thing (as in the originalist thing) rather than the conservative thing; and that I had predicted the same again and again after I became a law professor. But again and again, I told him, he had let me down. Scalia laughed his famous laugh, and we spent the next hour arguing — with reason—about whether my criticism was correct.
Yet it is increasingly hard to sustain such confidence in reason’s power today. There are as many Americans today who believe the 2020 election was stolen as believed it was stolen on January 6. Reason is not responsible for that fact. Time and again, we all have the experience of engaging with someone about something relatively difficult. Time and again, we walk away believing that either we can’t persuade or that reason doesn’t work. The enterprise feels hopeless; most simply give up.
And then I thought, maybe reason isn’t dead. Maybe it’s just reason for us, today. Maybe another form of intelligence could play the reasoning game better. Like, for example, AI.
So I decided to test it. I’m not a supporter of RFK Jr. Indeed, I fear he is a fantasist. But among the “conspiracy theories” that RFK Jr. defends is a theory he has come to late in his life — that his father was not actually killed by Sirhan Sirhan. Certainly, Sirhan shot at RFK. Certainly, he had opportunity, motive, and means — and he confessed (though he said he didn’t remember the event). Most take that confession — and the supporting assertions by those in the government responsible for making such assertions—to mean that Sirhan killed RFK.
And yet, it is perfectly clear that this can’t be correct. The coroner who conducted RFK’s autopsy—in the presence of military coroners who he had flown in to confirm his work as he did it—concluded that RFK was killed by shots at close range in his back. Sirhan was never behind Kennedy; never within inches of Kennedy; and every single bullet that Sirhan fired is accounted for — and none entered Kennedy’s body.
So I wanted to see how well ChatGPT responded to this conflict between the views of the authorities — that Sirhan killed RFK—and the view of the coroner—that Sirhan could not have killed RFK. Here’s the transcript:

I was astonished by this exchange. Because here was the reason Cates was talking about. ChatGPT made its point. I pointed out the weakness in its point. ChatGPT then “rethought” its argument and acknowledged its mistake. And then, it even acknowledged its failure fully to acknowledge its mistake. By the end, its conclusion contradicted where it had begun: Through “reason” it had been “persuaded.”
Dec. 12, 2023 5:00 PM PST
Silicon Valley’s phones buzzed with alerts last night, offering surprising news about one of its bellwether companies: A federal jury decided Google had exercised unlawful monopoly power over app developers like Epic Games. Investors so far have responded mostly with a shrug. By the end of today, the stock of Google’s parent company, Alphabet, had slumped by less than a point on a day the S&P 500 ticked up.
The muted response might reflect the slow-moving mechanics of the specific court case with Epic, maker of the game Fortnite. Google still has a chance to win that case on appeal. A ruling on one of Alphabet’s other high-profile antitrust cases—the Department of Justice’s lawsuit challenging the company’s search engine deal with Apple—isn’t expected until next year.
The broader truth, however, is that Wall Street is betting big tech can amass power faster than antitrust regulators, judges and juries can chip away at it. The stock prices of Alphabet, Amazon, Apple, Meta Platforms and Microsoft have seen more than twice the growth rate of the S&P 500 index this year—a time during which most of those five firms faced an onslaught of claims that they had abused monopoly power.
Investors have had plenty of fuel for their confidence this year. Lina Khan’s Federal Trade Commission, demonstrating more bark than bite, keeps losing important cases against tech giants like Meta and Microsoft. Meanwhile, the latest shiny tech advancement has seemingly played right into tech giants’ hands. Large language models require the kind of vast resources—data, cash, data centers, chips—that big tech has and that startups usually don’t have. (Even when startups are on the cutting edge, à la OpenAI and Anthropic, big tech has managed to buy its way in.)
Still, investors may be overlooking how Epic’s win over Google underlines a major sentiment shift that will continue to dog big tech. Unlike Apple’s earlier win in a similar case with Epic, which a judge ruled on, nine San Francisco jurors decided yesterday that Alphabet had broken the law. The people are speaking. That might be an antitrust indicator investors should remember—and a more difficult battle for big tech to win.

Netflix
Dec 13, 2023
This is a huge story, so I'll just cut to the chase; the notoriously secretive Netflix has published all its streaming numbers for the public to see.
Netflix is a streaming giant that built a huge head start in its content library before everyone else was able to get involved. It has over 200 million subscribers and has never really told people what those subscribers watch.
Transparency became the buzzword of the year when both SAG and the WGA asked for more of it in their contracts, so they could be paid when their shows were popular.
Now, right before the holiday break, Netflix has answered this call and put everything out into the world.
You can visit this site to see the full engagement report, but we'll go into details below.
The headline is that to provide more transparency, Netflix will publish a What We Watched: A Netflix Engagement Report twice a year.
They're going to cover:
Hours viewed for every title—original and licensed—watched for over 50,000 hours.
The premiere date for any Netflix TV series or film; and
Whether a title was available globally.
The report will cover 18,000 titles and represent 99% of all viewing on the streamer.
The streamer says: "Success on Netflix comes in all shapes and sizes, and is not determined by hours viewed alone. We have enormously successful movies and TV shows with both lower and higher hours viewed. It’s all about whether a movie or TV show thrilled its audience—and the size of that audience relative to the economics of the title; and to compare between titles it’s best to use our weekly Top 10 and Most Popular lists, which take into account run times and premiere dates."

Netflix
The engagement report is really eye-opening when it comes to what people are watching.
Before we get started, huge shoutout to No Film School founder, Ryan Koo, whose Netflix movie Amateur was viewed over 4 million hours, more than more recent releases like I Think You Should Leave, Steven Soderbergh’s The Laundromat, Entergalactic, and Tiger King 2.
But right at the top of the list, with over 800 million hours of view time, is The Night Agent, with Ginny & Georgia, The Glory, and Wednesday right behind in the hundreds of millions.
There is palpable enthusiasm for non-English stories, which generated 30% of all viewing on the streamer.
DEC 14, 2023

A new television station will feature artificial intelligence (AI)-generated news anchors for the first time in the U.S. next year.
New Los Angeles-based station Channel 1, which will launch in 2024, is hoping to be the first nationally syndicated news station in the U.S. to use AI-created anchors instead of human presenters.
According to a report by the Daily Mail, Channel 1’s news segments will use a combination of AI-generated humans and digital avatars that have been created using doubles of real actors.
However, real-life human anchors will be used for Channel 1’s most important news reports.
The station wants to launch these AI-generated news anchors on free ad-supported streaming TV — including apps such as Crackle, Tubi, or Pluto — as early as February.
Channel 1 founder Adam Mosam tells the Daily Mail that the television station is aiming to “get out in front and create a responsible use of technology.”
Mosam assured the publication that it would not exploit AI technology. He also said that the company plans to be transparent with viewers about what footage is original and what is AI-generated.
However, others in the media industry have not been so convinced and raised concerns about the future of journalism.
“If you believe in the concept of ‘fake news,’ you have seen nothing,” Ruby Media Group CEO Kristen Ruby shared on X (formerly known as Twitter).
“At least your news is presented by humans. When AI news anchors replace human news anchors — the concept of fake news will have a totally different meaning.”
While AI-generated news anchors may be a new concept in the U.S., they have been used on China’s state news channels since 2018.
Earlier this year, China revealed its latest digital news anchor — an AI-powered “woman” named “Ren Xiaorong” that delivers news 24 hours a day, 365 days a year.
By Cory Weinberg and Ann Gehan
Dec. 12, 2023 2:46 PM PST

High-end pet food startup The Farmer’s Dog is working with JPMorgan and other investment banks to raise hundreds of millions of dollars by early next year, in a deal that could value it significantly higher than its last $2.5 billion valuation, people familiar with the matter said.
The fundraising could help the company buck the trend of collapsing direct-to-consumer valuations over the past two years. The Farmer’s Dog, which delivers bags of customized dog food to doorsteps of customers who sign up for a subscription, expects to generate more than $800 million in sales this year, the people said. That represents growth of about 60% from 2022, as pet owners continue to splurge on their animals even as they cut back on spending for themselves.
THE TAKEAWAY
• Pet food startup expects to top $800 million in revenue this year
• Slower spending, higher costs have hurt many direct-to-consumer sellers
• Essential goods have remained a bright spot
At that level of revenue, The Farmer’s Dog will generate more revenue than publicly traded pet food company Freshpet, which has a market capitalization of $3.8 billion. The company’s revenue is likely to grow further in 2024: its annual recurring revenue run rate will top $1 billion by the end of this year, the people said, though The Farmer’s Dog isn’t yet profitable.
Founded in 2014, The Farmer’s Dog says its high-end dog food is made with fresher, more nutritious ingredients than traditional dry dog food, allowing it to charge higher prices. It’s one of the highest-valued startups selling products or services to pet owners, a category that venture capital flooded with fundraising dollars after more people bought or adopted cats and dogs during the pandemic.
The company couldn’t immediately be reached for comment.
The Farmer’s Dog has previously raised more than $150 million in total from investors including Shasta Ventures, Insight Partners and Forerunner Ventures, and was most recently valued at about $2.5 billion in June 2022, according to a copy of its corporate charters provided by the Prime Unicorn Index. The firm plans to set aside some of the capital it raises to allow investors or employees to cash out some shares, the people said, as it doesn’t expect to go public next year.
