
Nye's Digital Lab is a weekly scribble on creativity at the intersection of AI & Distributed Networks.
This week I am endeavoring to untangle the crisis of AI copyright and compare it to what happened to the music industry when peer-to-peer sharing suddenly arrived.
“Napster hijacked our music without asking.
They never sought our permission. Our catalog of music simply became available as free downloads on the Napster system.” - Lars Ulrich, Metallica
Shawn Fanning, an 18-year-old freshman at Northeastern University, listened to his roommate complain about dead links to MP3 files online. So Fanning coded a solution, naming it after his high school nickname: "Napster."
Within 18 months, 26 million people were using it to share 80 million songs, and the entire music industry was in freefall.
Napster, a peer-to-peer sharing network, bypassed the traditional distribution model, and recorded-music sales plummeted from a peak of $13 billion to $2 billion over a decade. The industry successfully sued and shut down Napster, but like "whack-a-mole," users simply migrated to decentralized alternatives like LimeWire and BitTorrent. The entire business had to abandon its physical product (CDs) and pivot to new, less profitable models like digital downloads and streaming in a desperate, decade-long struggle for survival.

Today there are still superstars like Taylor Swift and globally dominant phenomena like K-pop Demon Hunters, and on the other side, a massive democratization of tools and independent artists. But there is no longer a middle class of professional musicians. The middle of the music industry has been completely wiped out by instant distribution.
The story of Napster is a prophecy.
You're studying art, design, writing, or music in an era where AI can generate images in seconds, write essays in minutes, and compose symphonies without breaking a sweat. If you think what happened to the music industry was disruptive, we're watching something far more radical unfold. But unlike Napster, we have the opportunity to build something better from the start.
But we better move. And I mean, now.

When Napster launched, it took two full years to reach its peak of 26 million users. The music industry had time to respond (badly, I might add). They sued Napster in December 1999, got an injunction by 2001, and watched CD sales decline gradually from $13 billion to $2 billion over a decade.
Painful? Yes. But survivable.
AI's trajectory is insane. ChatGPT hit 100 million users in just two months; despite the backlash, it became the fastest-growing consumer application in history. Midjourney went from producing weird, distorted faces and hands with too many fingers to near-realistic art in less than a year. Sora 2 can now generate Hollywood-quality video from text prompts. If you're one of the angry commenters I see calling out generation errors, wait six months; the models will have corrected them.
The acceleration is nauseating.
The music industry had a decade to figure out streaming. Visual artists have (maybe?) two years before AI reaches parity with human output. A recent Stanford study found that when AI art enters an online marketplace, human artists' earnings drop immediately. Not gradually. Not over months or even days.
Immediately.
Entry-level graphic design positions have dropped 28% in just two years. The International Confederation of Societies of Authors and Composers projects creators will lose $22 billion by 2028. Meanwhile, AI companies are generating $9 billion annually from creative content. The money isn't disappearing, folks. It’s being redirected from creators to tech.
This is alarming. But if we act, it's not inevitable.

The music industry thought they could sue their way out of disruption. They won their case against Napster, shutting it down in 2001. Victory, right?
Sorry.
Users just moved to Kazaa, LimeWire, BitTorrent, and an endless stream of decentralized networks that couldn't be shut down. The RIAA ended up suing 12-year-old girls and grandmothers looking for rare Frank Sinatra cuts, destroying its public image while accomplishing nothing.
Today's legal battles are equally futile but far more confusing. (For this essay, I had agents crawl through over 10,000 AI copyright infringement lawsuits. It's a mess.)
In June 2025, two federal judges issued completely opposite rulings just two days apart. Judge William Alsup said training AI on copyrighted books was "quintessentially transformative" and legal. Judge Vince Chhabria warned the same practice could destroy creative markets. The U.S. Copyright Office insists AI-generated works can't be copyrighted without "substantial human authorship," but they admit they can't actually tell the difference between human and AI creation.
If you're a visual artist, companies like Stability AI and Midjourney have already trained on billions of images, including probably anything you've posted online. The same Stability that just partnered with gaming giant Electronic Arts.
The Copyright Office's latest report basically throws up its hands, saying current law is "adequate" while admitting they have no way to enforce it. For art students, this means the legal system won't save you today. Copyright law was designed for a world where copying was hard and creation was human.
Neither assumption holds anymore. Litigation won't reverse this.
We must invent an alternative path.
"A culture without property, or in which creators can't get paid, is anarchy, not freedom." - Lawrence Lessig

While Silicon Valley corporations scramble to monetize proprietary AI models, something quieter and more powerful is building:
the open-source revolution.
Meta released Llama as open source. Stability AI made their model weights publicly available. These moves weren't charity; they were strategic, and they opened the door to something the corporate model can't control: collective ownership of AI development.
China's strategy is also pretty interesting here. Rather than betting everything on proprietary platforms, they're prioritizing open-weight models and distributed systems.
Again, this isn't because the Chinese want to be nice and share. It's a recognition that the future belongs to whoever controls the infrastructure, not the software on top. Open weights mean the AI industry can't be bottlenecked by three tech companies in California.
For creators, this is really, really important.
Communities are already building provenance systems: (cough, cough) blockchain tracking that records where training data comes from, who created it, and who benefits from its use. (Oh jeez, not the blockchain again, Nye.)
Distributed ledgers get dismissed because of their association with cryptocurrency, but the underlying technology, immutable records of authorship and attribution, is exactly what creators need.
Imagine an open AI model trained on a collective data commons, where artists, writers, and musicians retain verifiable ownership of their work. In theory, when the AI generates something, the system records which training samples contributed to that output, and revenue flows back to creators automatically.
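To make that concrete, here's a toy sketch of the idea in Python. Nothing in it is a real protocol; every class, function, and name is hypothetical, made up for illustration. It registers works under a content hash, logs which registered works contributed to an AI output, and splits a licensing fee in proportion to each contribution.

```python
# Toy provenance ledger: all names are hypothetical, for illustration only.
import hashlib
from dataclasses import dataclass, field


@dataclass
class Work:
    creator: str       # who gets paid
    content_hash: str  # immutable fingerprint of the original file


@dataclass
class ProvenanceLedger:
    works: dict[str, Work] = field(default_factory=dict)
    attributions: list[dict] = field(default_factory=list)

    def register(self, creator: str, content: bytes) -> str:
        """Record a work and return its content hash (the creator's 'receipt')."""
        digest = hashlib.sha256(content).hexdigest()
        self.works[digest] = Work(creator, digest)
        return digest

    def attribute(self, output_id: str, weights: dict[str, float]) -> None:
        """Log which registered works contributed to an AI output (weights sum to 1.0)."""
        self.attributions.append({"output": output_id, "weights": weights})

    def split_royalty(self, output_id: str, payment: float) -> dict[str, float]:
        """Divide a payment among creators in proportion to their contribution weights."""
        payout: dict[str, float] = {}
        for record in self.attributions:
            if record["output"] != output_id:
                continue
            for content_hash, weight in record["weights"].items():
                creator = self.works[content_hash].creator
                payout[creator] = payout.get(creator, 0.0) + payment * weight
        return payout


# Two artists register works, an output draws on them 70/30,
# and a $10 licensing fee is split accordingly.
ledger = ProvenanceLedger()
a = ledger.register("artist_a", b"...image bytes...")
b = ledger.register("artist_b", b"...song bytes...")
ledger.attribute("output_001", {a: 0.7, b: 0.3})
print(ledger.split_royalty("output_001", 10.00))  # roughly {'artist_a': 7.0, 'artist_b': 3.0}
```

The real systems (on-chain registries, attribution models, micropayment rails) are far messier than thirty lines of Python, and figuring out which training samples actually contributed to an output is still an open research problem. But the shape of the ledger really is that simple: hash, attribute, split.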
Hugging Face, which is a bit like a GitHub for AI, already has millions of active users. Filecoin's decentralized storage network is reaching maturity. These are exactly the kinds of initiatives building this collective infrastructure. The protocols do exist. What's missing is adoption.
Adoption will likely come through crisis and fear.
The music industry centralized around three major labels. The tech industry is centralizing around three major AI companies. But open-source and distributed systems offer another path: creators collectively owning their own work with a chance to define their own terms.
The blockchain startup Story is one of several initiatives in the complicated IP-tokenization space.

I like to think I’m up on these things, but understand this is a guess.
Based on current trends and emerging alternatives, here's what the next 24 months will likely hold:
By end of 2025:
Corporate platforms will continue tightening IP tracking through metadata and tokenization, trying to extract value from creators through licensing fees. Simultaneously, open-source communities will release increasingly sophisticated models that anyone can fine-tune locally. Entry-level creative positions will probably drop another 20%, but a parallel economy of independent creators using open tools will grow exponentially.
By mid-2026:
The Andersen v. Stability AI trial will set a legal precedent, probably allowing AI training but requiring compensation through collective licensing. (Think of it like ASCAP for visual art.) At the same time, the first truly decentralized AI models trained on community-owned data will emerge, most likely from decentralized infrastructure development in Asia or India. Artists will have a choice: license to corporations for token compensation, or participate in unproven collective sovereign systems where they may retain ownership.
Likely outcome:
Both. Corporate AI will dominate consumer platforms, paying creators fractional fees on the Spotify model. But I hope that a parallel infrastructure of open-source models, community data commons, and distributed provenance systems will allow creators to build independent economies. It won't be easy or profitable for everyone, but it will be possible.
And that gives me hope.
The critical difference from Napster: this time, creators have the opportunity to build the alternative rather than just watch the old model collapse.
The collective-compute AI company Prime Intellect is trailblazing in this area.

You're entering a field being revolutionized beneath your feet. Crazy, right? You can't stop this transformation any more than musicians could stop file-sharing. But I believe you can shape it.
Hiding won’t help.
Learn to use AI tools while developing uniquely human skills like conceptual thinking, emotional resonance, cultural conversation, and personal narrative.
Engage with the conversation.
But also: learn how open source works! Understand distributed systems and how provenance tracking might function. Participate in communities building alternatives. Tools like Glaze are already working to digitally protect artists' work. Contribute your work to collective data commons. Follow the software tooling industry as it learns to track metadata and experiment with digital copyright.
Support platforms experimenting with creator-owned and open-source AI infrastructure.
Most importantly, organize with other artists now. There are guilds and subreddits arguing about this topic right now! The music industry fought alone and lost. This time, creators have the tools to build something collectively owned. Do it!
Napster's ghost says fighting the future is futile, but surrendering to it is equally dumb. Creators aren't powerless. The technology for collective ownership exists. The question isn't whether AI will transform creative work, but whether you'll help build the infrastructure where that transformation benefits you rather than extracting value from you.
It looks bleak, but the outcome isn't predetermined. I’m determined to fight.
Whether it’s centralized IP or collective distributed networks, well...
that’s up to all of us.
That's it for this time. I do this every week. If you vibe to the ideas I express, consider subscribing or sharing with friends. We'll see you next time.
Nye Warburton is a creative technologist and educator from Savannah, Georgia who believes that humans should control their creative future. This essay was improvised using Otter.ai, researched with Manus.im, and refined, reworked, and edited using Claude Sonnet 4.0.
For more information visit: https://nyewarburton.com
To understand intellectual property and its intersection with technology, I recommend reading Lawrence Lessig's work. He has many books on the topic, but my favorite is his 2001 book "The Future of Ideas."
His focus on legally empowering creative ideas is a strong guide for complicated times like ours.
This essay was researched with and informed by the following sources:
Recording Industry Association of America (RIAA). (2020). Year-End 2019 RIAA Music Revenues Report. Retrieved from https://www.riaa.com/wp-content/uploads/2020/02/RIAA-2019-Year-End-Music-Industry-Revenue-Report.pdf
comScore. (2001, July 20). Napster user statistics. As cited in various historical online reports. (Original comScore report no longer publicly available.)
APRA AMCOS & Goldmedia. (2024, August 19). Largest report on AI in music reveals potentially devastating impact for music creators. Retrieved from https://www.apraamcos.com.au/about-us/news-and-events/ai-in-music-report
(Primary source for the 65% and 82% musician concern figures.)
U.S. Copyright Office. (2024). Copyright and Artificial Intelligence. Retrieved from https://www.copyright.gov/ai/
Wired. (2024, December 19). Every AI Copyright Lawsuit in the US, Visualized. Retrieved from https://www.wired.com/story/ai-copyright-case-tracker/
Visual Capitalist. (2023, October 20). Charted: 50 Years of Music Industry Revenues, by Format. Retrieved from https://www.visualcapitalist.com/music-industry-revenues-by-format/
Copyright Alliance. (2025, August 21). Mid-Year Review: AI Copyright Case Developments in 2025. Retrieved from https://copyrightalliance.org/ai-copyright-case-developments-2025/
Herington, J., et al. (2025). Musicians' ethical concerns about AI: an interview study. AI & SOCIETY, Springer.
The Guardian. (2019, May 31). Oversharing: how Napster nearly killed the music industry. Retrieved from https://www.theguardian.com/music/2019/may/31/napster-twenty-years-music-revolution
Statista. (2025, March). Retail value of music shipments in the United States from 1999 to 2024 (in billion U.S. dollars). Retrieved from https://www.statista.com/statistics/279081/retail-value-of-music-shipments-in-the-us/