Prof Nye's Digital Lab is a weekly blog about creativity, the game industry, artificial intelligence, distributed computing, and everything creatives and designers might find interesting in tech.
This week I'm considering code. If anyone can do it, what makes code great?
AI can now write code faster than most humans, and suddenly everyone with a ChatGPT account thinks they're a "vibe" programmer. Claude Code, Copilot, and agentic systems like Lovable.dev and the Windsurf IDE have all lowered the barrier for people to begin developing their own software.
But this democratization of coding comes with costs we haven't yet confronted.
Just as we've learned to question the quality of our food, our relationships, and our media consumption, it's time we started thinking seriously about the quality of our code.
...
I have a friend who recently walked away from a twenty-year career in tech.
He wasn't just any programmer; he was the kind of developer companies hired as a CTO or senior contractor when they needed someone who actually knew what they were doing. But after years of watching managers treat coding like a cost center and seeing "vibe coding" on the horizon, he decided he'd had enough. He bought and now manages a moving company, choosing to reroute his career into something physical.
His exit was planned, but layoffs have accelerated the exodus for others. I see blog posts from computer science students questioning the degree they were told was a "life guarantee." Across the industry, experienced developers are asking whether the craft they've spent decades mastering still has value in a world where AI can spit out working code in seconds.
But here's what I want to know:
Who's going to teach the next generation what quality actually means?
Managers see cost savings—why pay a senior developer $150,000 when you can have a junior developer prompt ChatGPT for $60,000? The math seems simple until you realize that knowing how to ask an AI for code and knowing how to build robust, secure, maintainable software are entirely different skills. It's like the difference between knowing how to use Google Translate and actually speaking a foreign language fluently.
This trend is creating software that may work—technically—but lacks the deeper architectural thinking that prevents systems from becoming nightmares to maintain. We're building on foundations of sand, and most people won't realize it until the house starts sinking.
...
The real danger isn't just that AI-generated code might be buggy—it's that people who don't understand the underlying systems could deploy it at scale. Imagine someone with minimal programming experience using AI to build blockchain smart contracts that handle real money, or creating web applications that store personal data without understanding basic security principles. The AI might produce syntactically correct code, but it can't instill the paranoid mindset that security requires.
This isn't hypothetical.
People are already using AI to generate payment processing systems and user authentication flows without understanding the implications of what they're building. It's like giving someone a loaded gun without teaching them gun safety—the tool works perfectly, but the person wielding it lacks the knowledge to use it responsibly.
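To make the authentication example concrete, here's a hypothetical sketch—my own illustration, not output from any particular tool—of the kind of login check an AI might plausibly generate, next to a safer version. Both "work," which is exactly the problem: only experience tells you why the first one is dangerous.

```python
import hashlib
import hmac
import os

# The naive version an AI might hand a beginner: it runs fine,
# but it stores and compares passwords in plaintext.
def login_naive(stored_password, submitted_password):
    return stored_password == submitted_password

# A safer sketch: derive a salted hash, never keep the raw password.
def hash_password(password, salt=None):
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Compare digests in constant time to avoid leaking information
# through how long the comparison takes.
def login_safer(salt, stored_digest, submitted_password):
    _, candidate = hash_password(submitted_password, salt)
    return hmac.compare_digest(stored_digest, candidate)
```

Both functions return the right answer for the right password. The difference is everything that happens when the database leaks—and that difference is invisible to someone who only checks whether the code runs.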
The hallucination problem makes this even worse. When an inexperienced developer encounters a bug or security issue, they're likely to ask the AI for help. But AI systems can confidently provide incorrect solutions that seem plausible. A junior developer might not know enough to spot these errors, leading to systems built on compounding mistakes.
We're heading toward an internet flooded with software that's essentially unregulated and potentially dangerous. Scams will become more sophisticated, data breaches more common, and addictive applications more pervasive. The barrier to creating harmful software has never been lower, while the knowledge needed to create it responsibly remains high.
...
This brings us to the core question:
what does quality mean when anyone can generate code?
The answer lies in shifting our metrics from pure functionality to broader measures of value—much like how we've learned to think about quality of life beyond just income.
Let's use an analogy.
Several years ago, I moved from Los Angeles to Savannah. On paper I earn less money, but my quality of life improved dramatically. I traded highway commutes for time with my children, lowered my cost of living, and shed the stress of the urban hustle. My personal success metrics changed from salary maximization to life optimization.
We need a similar shift in how we think about code quality.
Just because an AI can generate working code doesn't mean that code is valuable. Quality code isn't just code that runs—it's code that serves genuine human needs, respects user privacy, remains maintainable over time, and contributes to systems that make life better rather than more chaotic.
This means asking different questions:
Does this software solve a real problem, or does it exist only to capture attention and extract data?
Is it built with long-term sustainability in mind, or is it optimized for quick deployment and maximum engagement?
Does it empower users, or does it exploit psychological vulnerabilities to keep them hooked?
The gambling and addiction apps that dominate our phones are perfect examples of technically proficient but morally bankrupt code. They work exactly as intended—perhaps too well. Their algorithms are sophisticated, their user interfaces polished, and their business models proven. But they're also designed to exploit human psychology for profit, creating what amounts to digital casinos disguised as social platforms or games.
Real quality in code requires thinking beyond the immediate technical challenge to consider the broader human and social impact. It requires the kind of wisdom that comes from experience—understanding not just how to make something work, but whether it should work that way at all.
The path forward isn't to reject AI tools or return to some imaginary golden age of coding.
Instead, we need to maintain high standards for what we build while leveraging these powerful new capabilities. This means experienced developers staying engaged to mentor newcomers, companies prioritizing long-term code quality over short-term cost savings, and all of us—as users and citizens—demanding better from the software that increasingly shapes our daily lives.
We can't stop fighting for quality in code. Our day-to-day software depends on it.
If you vibe to the ideas I express here, please consider subscribing and sharing with friends.
We'll see you next time.
Nye Warburton is an educator and sometimes hacker from Savannah, GA. This essay was improvised with Otter.ai and then refined and rewritten with Claude Sonnet 4. Images created with Leonardo.ai flow state.
For more information visit https://nyewaburton.com
Rethinking Innovation: AI and the Problem with Objectives - Dec 26, 2024
WTF are Agents? The NooB Guide to Agentic Workflow - June 1, 2025
Bear_Rabbit_Robot: Mapping AI's Latent Space - March 25, 2025
Nye Warburton