Right now, as you read this, you’re almost certainly using some form of software born from one of computing’s most fascinating ideological battles.

It began with a broken printer, evolved into a pragmatic compromise, and is now being rewritten by artificial intelligence. The story of how we got here reveals not just the history of software but a fundamental question that’s more relevant today than ever: should technology prioritize freedom or effectiveness? And in the age of AI, is there a third option somewhere between the two?

In 1980, Richard Stallman, whom I wrote about in my Crypto Principles article, was working at MIT’s Artificial Intelligence Lab when he encountered a problematic printer that would ignite a revolution.
Much to the chagrin of the entire lab, the printer kept jamming without warning. Naturally, as a programmer, Stallman figured he could easily modify the software to notify users when the paper jammed. A simple, helpful fix.
There was just one problem: the manufacturer wouldn’t give him the source code.
This wasn’t unusual for the time. Software was becoming increasingly proprietary, with companies locking down their code behind legal restrictions. But for Stallman, who had grown up in a collaborative early computing culture where sharing code was the norm, this felt fundamentally wrong. It was a moral outrage.
“What does society need?” Stallman argued. “It needs information that is truly available to its citizens - programs that people can read, fix, adapt, and improve, not just operate.”
That frustration led Stallman to launch the GNU Project in 1984 and establish the Free Software Foundation in 1985. This wasn’t just about making software available; it was a full-on social movement.
He established four fundamental freedoms:
The freedom to run the program for any purpose
The freedom to study and modify the code
The freedom to share copies
The freedom to distribute your improvements
To protect these freedoms, Stallman created the GNU General Public License (GPL), introducing the concept of “copyleft”, a clever hack that turned copyright law against itself. Any modified version of GPL software that you distributed had to be released under the GPL as well. This “viral” nature (as critics called it) ensured that free software would remain free forever.
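In practice, projects today often track which license governs each file using SPDX identifiers, short machine-readable tags in source headers. A minimal sketch of a copyleft audit along those lines (the filenames, file contents, and the small compatibility set are illustrative assumptions, not a real project):

```python
# Minimal sketch: scan source files for SPDX license identifiers and flag
# files whose declared license needs review before combining with GPL code.
# The FILES dict and GPL_COMPATIBLE set are hypothetical, for illustration.
import re

FILES = {
    "core.c":   "// SPDX-License-Identifier: GPL-3.0-or-later\nint main(void) { return 0; }",
    "util.c":   "// SPDX-License-Identifier: MIT\nvoid helper(void) {}",
    "plugin.c": "// no license header at all\n",
}

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-+]+)")
# Small illustrative allowlist; real GPL compatibility is more nuanced.
GPL_COMPATIBLE = {"GPL-3.0-or-later", "GPL-3.0-only", "LGPL-3.0-or-later", "MIT"}

def audit(files):
    """Return {filename: declared SPDX license, or None if no tag found}."""
    report = {}
    for name, text in files.items():
        match = SPDX_RE.search(text)
        report[name] = match.group(1) if match else None
    return report

report = audit(FILES)
for name, lic in report.items():
    status = "ok" if lic in GPL_COMPATIBLE else "needs review"
    print(f"{name}: {lic or 'no SPDX tag'} ({status})")
```

The point of the sketch is the mechanism copyleft created a demand for: once a license attaches obligations to derived works, every file’s provenance suddenly matters.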
This was free software: a moral stance that viewed restricting software as restricting human freedom itself.

By the mid-1990s, free software had proven itself technically. The Linux kernel, released in 1991 and soon placed under the GPL, supplied the “missing kernel” the GNU system needed; combined with GNU tools, it formed complete operating systems. The Apache web server was powering the emerging internet. The code was solid. The communities were thriving.
But there was a problem: the word “free” was confusing (“free as in freedom, not free beer,” as Stallman likes to joke), and, more importantly, Stallman’s moral rhetoric made business executives nervous. Companies wanted to use the software, but copyleft licensing carried ideological baggage they found heavy.
Enter Eric Raymond with his 1997 essay “The Cathedral and the Bazaar,” which reframed how to think about software development. In the Cathedral model, code is carefully crafted by a small, closed group between releases, guarded almost sanctimoniously. In the Bazaar model, development happens in the open, with early and frequent releases and contributors of all levels. Raymond was showing the world that the real story wasn’t software freedom but a superior development methodology, one open to scrutiny.
His famous principle, “Linus’ Law,” named after Linux creator Linus Torvalds, captures it succinctly: “Given enough eyeballs, all bugs are shallow.” The transparency of the code meant that thousands of developers worldwide could identify and fix vulnerabilities. The collaborative nature meant faster innovation. Instant peer review meant inherently higher quality.
This wasn’t a social movement. It was purely about optimal engineering.
In 1998, the term “open source” was born, deliberately chosen to emphasize practical benefits over philosophical ideals. As I’ve noted before in the Crypto Principles article, both terms refer to essentially the same licenses, but open source is a development methodology whereas free software is more of an ideological social movement.
The shift worked spectacularly. Companies that would never embrace “free software” and the strings that came with it eagerly adopted “open source.” By 2013, more than 90% of Fortune 500 companies were using Linux. Apache powered 30% of all websites. The infrastructure of the internet now runs on open source code.
Harvard Business School research would later quantify the impact: without open source software, firms would pay an estimated 3.5 times more to build equivalent platforms. We’re talking about roughly $8.8 trillion in value.
Open source had won. Not through ideology, but through adoption and results.
The philosophical divide between free software and open source never really healed. It erupted dramatically in 2007 when the Free Software Foundation released GPLv3.
Stallman’s new license attempted to address modern threats to software freedom: companies using GPL software in hardware that prevented users from running modified versions (called “tivoization”), and patent deals that undermined developer communities. The GPLv3 was more aggressive, more protective, more ideological.
The open source world rebelled. Foundational technologies used by millions rejected it.
Torvalds refused to adopt GPLv3 for the kernel. MySQL declined. VLC media player declined. Blender declined.

Apple, perhaps most tellingly, switched its compiler from the GPL-licensed GCC to Clang, which used a permissive license. When Samba upgraded to GPLv3, Apple replaced it with a closed-source alternative. The message was clear: the GPL’s “viral” nature, which Stallman saw as protecting freedom, struck businesses as a restriction on growth and development.
Statistics tell the story of the shift:
In 2011, GPLv2 dominated with 42.5% of projects, while GPLv3 had just 6.5%
By 2015, the MIT license had overtaken GPLv2 for first place
GPL continued declining as developers chose permissive licenses over copyleft

The pragmatists definitively won. Free software’s moral stance had given way to open source’s business-friendly approach.

Just when the story seemed settled, artificial intelligence arrived, and the old debates are erupting with new intensity and from new perspectives.
In 2024, something remarkable happened: AI-related projects dominated open source development in ways that would have seemed impossible just years earlier. The most starred open source projects from startups were overwhelmingly AI-focused, with 55% focused on AI and another 25% on AI-enabled developer tools.
But here’s where it gets interesting: the AI revolution is bringing back Stallman’s questions about freedom, control, and who benefits, framed in new forms.
The New Open Source Pioneers:
Zed, a next-generation code editor designed for collaboration between humans and AI, became the most popular new open source project of 2024. OpenHands acts as an AI software engineer that can understand your instructions and write code. Cline brings autonomous coding right into your IDE.
These are all tools reshaping how software gets created.
The Democratization Argument Returns:
Just as Stallman argued for users’ right to modify software, today’s open source AI advocates argue for the right to understand and control AI systems. Projects like Exo let you run AI clusters at home using everyday devices like phones and tablets. xAI released the weights and architecture of Grok-1, a major AI model, under an open source license.
AI’s embrace of open source practices challenges the notion that cutting-edge technology must be locked behind closed doors.
But the Tension is Different:
The corporate world learned from the GPL wars. Most AI companies releasing “open source” models use permissive licenses, not copyleft. They want the credibility and community of open source without the ideological restrictions of free software.
Meanwhile, new questions emerge that neither Stallman nor Raymond fully anticipated:
If an AI model is trained on GPL code, should its outputs be GPL?
When AI generates code, who owns it?
Is “open source AI” without the training data truly open?
Can you have meaningful freedom with a model you can’t afford to train yourself?
Python has become the language of choice for AI projects, powering 60% of the top open source releases in 2024. The community is building the infrastructure for the AI age using the same collaborative principles that built the internet.

The story from Stallman’s printer to today’s AI models is a preview of our immediate future.
We’re watching the same philosophical battle play out again:
The idealists argue that AI must be truly open: not just models, but training data, methodologies, and the ability to meaningfully modify and control these systems
The pragmatists argue that what matters is effective collaboration and innovation, even if some aspects remain proprietary
The corporations learned from the GPL era and are carefully choosing licenses that let them benefit from community contributions without giving up control
The numbers show the pragmatists won the last round decisively. GPL usage declined as developers chose the permissive licenses that companies preferred. But AI might be different. The stakes feel higher when we’re talking about systems that might write most of our code, make decisions affecting millions, or shape how information flows through society. Whichever way this round goes, its effects will compound quickly.

Free software started as one programmer’s frustration with a broken printer and grew into a moral crusade about user freedom.
It evolved into the pragmatic open source methodology that conquered the tech industry. And now, as AI reshapes technology once again, we’re asking the same fundamental questions Stallman asked in 1980:
Who should control the tools we use? What does freedom mean in a world of AI code? Can openness and commercial success coexist?
The AI revolution is happening right now, in the open, with communities building tools that rival corporate offerings. Whether this new chapter emphasizes freedom or pragmatism, ideology or results, it will shape not just how we build software but how AI integrates into everything we do.
The story continues to unfold all around us, one pull request, one commit, one model at a time.

