


I shared a post about a creator who trained a custom AI model at home and generated an impressive short clip in minutes. One person. One machine. Idea to video in hours, not months. The response was overwhelming, to say the least. And none of it was from anyone in my network.
We're talking 3rd-degree connections and some 2nds — strangers algorithmically delivered to my comment section, armed with sweeping statements, moral certainty, and very little apparent firsthand experience with the technology they were condemning.
I'm not talking about getting schooled in a comment section. I'm talking about a real snapshot of exactly where the AI conversation has broken down.

I’m a filmmaker. I co-own and run two studios — one a painstaking stop-motion animation studio, the other a documentary studio. I spend my time making things the slow way. Physical sets. Puppets. Cameras. We call a few seconds of movement a triumph, and I love that pursuit.
I don’t currently use AI in the creative sense that tends to provoke the loudest reactions online. Not because I’m morally opposed to it — it’s simply not how I like to work or tell stories. But I can still recognize what the tools are unlocking for other people.
Over the past few years, creators have started building in public with these systems — sharing experiments and showing what the technology can do. Some of it is rough. Some of it is genuinely remarkable. And when someone trains a custom model at home and generates a very realistic short Seinfeld clip in minutes, that’s an interesting technical and creative milestone whether you personally want to work that way or not.
Like it or not, there’s a practical reason recognizable material gets used in demonstrations like this. Familiar characters and dialogue create an immediate reference point. Everyone already knows what Seinfeld is supposed to look and sound like, so the audience can instantly judge what the model accomplished. If the exact same experiment had used an image of the creator with the same dialogue, it likely wouldn’t have traveled nearly as far — not because the technology was less impressive, but because the reference point would be missing.
Over the weekend I was yelled at about copyright, crime, and the environment — the same debates that have been circulating for years. Those are real conversations, and they deserve serious attention.
But I’m not convinced someone needs to license a clip just to demonstrate a technical capability people are still trying to understand.
We exist in an internet culture built on remixing and reference. People share GIFs, create memes, repost images, clip videos, and reference quotes every day — often from material they don’t own. Entire platforms are built on that behavior.
Yet when a technical demonstration uses a recognizable piece of culture as a reference point, suddenly the standard becomes absolute.
That shift doesn’t really seem to be about copyright law. It looks a lot more like discomfort with the technology.
There’s a lot that could be said about that debate. But it wasn’t the conversation I was trying to have.
What struck me most wasn't the disagreement — it was the confidence behind arguments that weren't backed by anything. Sweeping claims about what AI "always" does, what it "never" creates, what it "will inevitably" destroy. Delivered with the authority of someone who has done the work, by people who almost certainly haven't.
This is the discourse problem. Not that people have concerns — the concerns are real and worth discussing seriously. It's that the loudest voices in the conversation are often the least informed participants in it. And on LinkedIn especially, that dynamic is on full display.

The reactions I got weren't cutting-edge critique. They were the same arguments that were circulating two years ago — recycled anxiety dressed up as insight. AI will kill creativity. AI will end jobs. AI is theft.
Full stop, no nuance, no acknowledgment of what's actually happening in practice. Meanwhile, creators are building. Filmmakers are experimenting. Tools are evolving faster than the takes about them. The people doing the work aren't spending much time in the comment sections — which is probably why the comment sections look the way they do.
LinkedIn has become a place where fear performs well and complexity gets skipped. That's not a foundation for a real conversation about technology that is genuinely reshaping creative industries.
Take the training data argument — the one that generated the most heat. The concern about AI being trained on existing work is legitimate. Copyright, consent, and creative ownership are real issues that deserve real legal and policy attention.
But here's what didn't come up: we all contributed to this.
Every photo you tagged. Every post you published. Every time you clicked "agree" on a terms of service you didn't read — that data went somewhere. Social media platforms have been harvesting behavioral and creative data for over a decade. Most people handed it over without a second thought because the alternative was not participating.
So the outrage about AI training data, from people who have been feeding the social data machine for years without complaint, doesn’t read like a principled stance. It's a vibe — discomfort finally finding a visible target.
You don't get to perform shock at a system you helped build.

People are reacting to AI like it's the first technology in human history to carry risk. Like disruption is new. Like the printing press, the camera, and the internet didn't each fundamentally destabilize an existing creative economy before ultimately expanding it.
And there's a particular irony in the discourse: many of the loudest critics of AI are using it daily — through search, through recommendation algorithms, through the writing and grammar tools built into the apps they're already typing in. The outrage is real. The participation is also real. Both things are true, and most people aren't sitting with that contradiction honestly.
Nobody opted into social media's data extraction either. It was the price of being online. AI isn't categorically different — it's the next layer of the same system. Treating it as uniquely threatening while remaining embedded in every other layer of that system isn't a coherent position.
None of this means AI in filmmaking — or creative industries broadly — is without real tension. It isn't. Access still isn't equal. High-end compute, technical fluency, and time are real barriers, and cheaper tools don't automatically mean equitable tools. The question of whether we lowered the barrier or just relocated it is worth asking seriously.
The question of originality matters too. If models are trained on what already exists, are we creating something new or remixing the past at scale? That's a genuine creative and philosophical question — not a reason to stop, but a reason to think harder about intent and authorship.
And yes, when content becomes instant, volume explodes. Attention shrinks. Signal gets harder to find. We're not heading into a world with less creativity — we're heading into one where taste, curation, and point of view matter more than ever. The tools are table stakes. What you have to say is the differentiator.
This is a redefinition of authorship, originality, and what it means to make something. That's real. That's worth taking seriously.
But "I didn't ask for this" has never been a workable response to technological change. The more useful question — the one that nobody in my comment section was asking — is: given that this is happening, what do we actually do?
How do creators protect their work within the systems that exist, not the ones they wish existed? How do platforms, policymakers, and the people building these tools get to something resembling accountability before the conversation is entirely shaped by fear?
That's the conversation I want to have. Not the one where strangers show up from three degrees away to tell me AI is the end of human creativity, without a single specific example, and then log off feeling righteous.
We can do better than that. The technology deserves a smarter argument — and so do the people whose livelihoods it's actually affecting.
*For the record — I didn’t post this as some kind of test. There was no social experiment, no trap, no gotcha. I shared something I genuinely found interesting and didn’t front-load it with my own bias. But when the responses started rolling in the way they did, I’d be lying if I said it didn’t start to feel like one.
Sometimes the internet runs the experiment for you...