Nye's Digital Lab is a weekly scribble on creativity in the age of AI and distributed systems.
It's a new year, and I'm focusing on three things. Here are my 2026 predictions.
It's fun to be asked what I think about this stuff. Sometimes I am wrong, though I have absolutely called a few right.
I’m going to tell you what I think is coming in 2026, not as someone watching from the sidelines, but as someone who spent a couple of my holiday break weeks experimenting with this stuff.
I teach artists, I write every week about these kinds of things, and in my spare time I hack together engine experiments in my digital lab. What I’m seeing in my own work is showing up everywhere in big ways.
Three things are converging:
agent training is becoming something anyone can do,
open-source AI is exploding (especially from China),
and intelligence is moving out of distant data centers and into our neighborhoods.
We’re not ready. To the Digital Lab!

Over the holiday break, I started training agents. This has been on my to-do list for a while, ever since I saw Nvidia's videos of little simulated characters being trained years ago.
With Python, Pygame, and a couple of open-source frameworks, I trained two AI agents to play Connect Four against each other. Not because I needed a Connect Four champion, but because I wanted to understand how machines actually learn to play games.
I used the Gymnasium framework (the successor to OpenAI's Gym). It's free, it's Python, and you don't need fancy compute to run it. I did all of this on a crappy ol' Intel CPU that cost less than 200 bucks.
I set up the virtual game environment, which defined the rules, and then just…
let them play!
Thousands of games. Over and over.
And then again... over and over.

And they learned to play.
Not by memorizing moves, but by learning strategy through trial and error. This is called “reinforcement learning.” I've written about it before here.
It’s a process of teaching agents by letting them experience consequences. When an agent makes a move that leads to winning, it gets rewarded. When it loses, it learns to avoid that path. It’s how you’d train a dog, except the dog is code and the treats are mathematical signals. Watching my Connect Four agents evolve from random moves to deliberate strategy felt like watching intelligence emerge from nothing.
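For the curious: those "mathematical treats" boil down to a simple update rule. My exact setup aside, here's a minimal sketch of tabular Q-learning, one classic flavor of reinforcement learning (the function names are mine, just for illustration):

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)                    # maps (state, action) -> estimated value

def choose_action(state, legal_actions):
    """Epsilon-greedy: mostly exploit the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(legal_actions)
    return max(legal_actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_legal_actions):
    """Nudge Q(state, action) toward reward + discounted best future value."""
    best_next = max((Q[(next_state, a)] for a in next_legal_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Run that update after every move, over thousands of games, and strategy accumulates in the table.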
With a little bit of Python, PettingZoo is a really straightforward way to start experimenting with multiple game-playing agents.
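If you want to see it for yourself, here's roughly what the loop looks like with PettingZoo's built-in Connect Four environment. This sketch just plays random legal moves; swapping in a learned policy is the fun part:

```python
# pip install 'pettingzoo[classic]'
from pettingzoo.classic import connect_four_v3

env = connect_four_v3.env()
env.reset(seed=42)

# The two agents alternate; env.last() reports what just happened to whoever's up.
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()
    if termination or truncation:
        action = None  # game over for this agent
    else:
        # The action mask rules out full columns; sample a legal move.
        action = env.action_space(agent).sample(observation["action_mask"])
    env.step(action)
env.close()
```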
So… if I can train agents to play Connect Four in my digital lab, someone else can train agents to manage inventory, optimize delivery routes, or coordinate robots in warehouses. Here's the prediction:
“By the end of 2026, reinforcement learning won’t be some exotic AI technique. It’ll be as common as building a website.”
And the simulation piece is crucial too. With game engines running constantly, we can create virtual worlds where agents learn without real-world consequences.
Want to teach a robot to navigate a factory floor? Build a simulation.
Want to train an agent to respond to emergencies? Create the scenarios in an engine.
The agents fail safely, learn quickly, and then deploy into reality already trained. That's the shift, I think: from agents as productivity tools to agents as actual system operators.
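To make the simulation idea concrete, here's the skeleton of a custom Gymnasium environment. The "factory corridor" is a toy world I invented for illustration; real sims are far richer, but the shape (observations, actions, rewards) is the same:

```python
import gymnasium as gym

class FactoryCorridorEnv(gym.Env):
    """A hypothetical toy sim: an agent walks a 10-cell corridor toward a goal."""

    def __init__(self):
        self.observation_space = gym.spaces.Discrete(10)  # agent's position
        self.action_space = gym.spaces.Discrete(2)        # 0 = left, 1 = right
        self.pos = 0

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = 0
        return self.pos, {}

    def step(self, action):
        self.pos = max(0, min(9, self.pos + (1 if action == 1 else -1)))
        terminated = self.pos == 9               # reached the goal
        reward = 1.0 if terminated else -0.01    # small cost for dawdling
        return self.pos, reward, terminated, False, {}
```

An agent can fail in here a million times and nobody files an incident report.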

I can download DeepSeek-R1, a state-of-the-art AI reasoning model (from China), right now, for free. I can run it on my computer. I can modify it. I can build on top of it. And so can anyone else on the planet with an internet connection.
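If you want to try it yourself, here's a minimal sketch using Hugging Face's transformers library. The full R1 is far too big for a home machine, so this assumes one of the small distilled variants, which fits on modest hardware:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small enough for a MiniPC
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "In one sentence, what is reinforcement learning?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```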
While OpenAI and Anthropic guard their models' weights like trade secrets, China has gone all-in on open source. Not out of generosity... out of strategy. By mid-2025, China accounted for roughly 40% of all publicly released AI models worldwide. These aren't second-rate models either. DeepSeek trained R1's base model in about 55 days for under $6 million (by their own reckoning). Compare that to the hundreds of millions spent on GPT-4. Alibaba's Qwen model family has spawned over 100,000 derivative models!
This is national policy for both the US and China. China's government explicitly supports open-source AI development. Beijing and Shanghai have coordinated infrastructure plans. President Xi Jinping has called for "greater cooperation on open-source technologies." Chinese companies release new models constantly (Alibaba averaged one every 20 days in 2025).
Ok so what’s the prediction?
Wayyyy more open code libraries, more frameworks, more AI models. They will increasingly be free and easily available. That sounds great, and in many ways it is. When I'm experimenting with training agents, I can pull from an enormous ecosystem of tools without spending a dime. But…
Standardization is going to become a nightmare. When you have hundreds of competing models, each with different requirements and quirks, building anything reliable gets exponentially harder. It's like having fifty different kinds of electrical outlets in your house: it's great that you have options, but good luck plugging anything in.
More importantly, this shift favors whoever controls the infrastructure. China isn’t just releasing models. They’re building the electric grids, the 5G networks, the data centers to run them. While we debate and pump stocks, they’re laying cable and training engineers. The open-source approach creates community development at global scale, but it also creates dependency on whoever provides the pipes.
By the end of 2026, the AI landscape will be incredibly rich and chaotically fragmented. You'll be able to download world-class models for free, customize them for your specific needs, and deploy them however you want. But you'll also spend half your time just figuring out which model to use and how to make them all work together.
And who controls the networks where these models actually run?
China, it seems, has the jump on the world.

In addition to gaming agents, I also picked up an Arduino Uno Q. The one with the Qualcomm chip and the distance sensor? I wrote about it here.
That’s edge computing in miniature.
Instead of sending data to some massive server farm hundreds of miles away, the sensor processes information right there on the board and only sends what matters to the next device. It’s fast, it’s private, and it doesn’t depend on having a perfect internet connection.
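The pattern, stripped to its bones, looks something like this. Both functions here are stand-ins I made up, not real Arduino APIs; the point is the shape: process locally, transmit only events:

```python
import random
import time

THRESHOLD_CM = 50  # anything closer than this counts as an "event"

def read_distance_cm():
    """Stand-in for a real sensor read; returns a noisy distance in cm."""
    return random.uniform(5, 200)

def send_upstream(event):
    """Stand-in for a radio/network send; only fires when something matters."""
    print("sending:", event)

while True:
    distance = read_distance_cm()       # process locally, on the board...
    if distance < THRESHOLD_CM:
        send_upstream({"event": "object_near", "cm": round(distance, 1)})
    time.sleep(0.1)                     # ...and stay quiet otherwise
```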
Now scale that up to your entire neighborhood and you'll see my final prediction:
"Accessible Edge Computing."
We think of AI as something that happens in the cloud, like in Google’s data centers or Amazon’s server farms. But the really interesting shift coming in 2026 is intelligence moving to the edge of the network.
To your doorbell camera. To traffic lights. To the sensors in your neighbor's solar panels. The big frontier models (your GPT-4s and Claude Sonnets) are hitting a wall: they cost more to train and deliver less improvement each generation. Meanwhile, small specialized models running on local devices are getting shockingly capable. Connect them together, and the potential is staggering.
This will accelerate for sure.

Why send your security camera footage to the cloud when the camera itself can detect motion, recognize faces, and decide what’s worth recording? Why route your smart thermostat through a server farm when it can talk directly to your neighbor’s solar panels to balance energy use?
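That kind of device-to-device coordination needs surprisingly little machinery. Here's a hypothetical sketch of a thermostat listening for a "solar surplus" broadcast on the local network. The port and message format are invented for illustration; real smart-home gear would use a protocol like MQTT or Matter:

```python
import json
import socket

PORT = 50505  # arbitrary port, invented for this sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))  # listen for broadcasts from devices on the LAN

while True:
    data, addr = sock.recvfrom(1024)
    msg = json.loads(data)
    # A neighbor's panels announce spare capacity; shift our load to match.
    if msg.get("type") == "solar_surplus" and msg.get("watts", 0) > 500:
        print(f"Neighbor at {addr[0]} has spare solar; pre-heating the house now.")
```

No cloud round trip, no account login, no server farm. Just two devices on the same street agreeing on when to use power.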
I see huge value in the edge of the network:
lower latency,
better privacy,
and network resilience through decentralization.
This is already happening in the real world.
Peachtree Corners, a town in my home state of Georgia, has become America's first real "living laboratory" for smart city technology. They've got autonomous vehicles running on public roads, 5G networks blanketing the area, and edge AI processing everything from traffic patterns to parking availability in real time. They're not sending video feeds to Amazon Web Services; they're processing locally and only uploading insights.

Peachtree Corners built a centralized control center that monitors IoT devices across the city and can make decisions with or without human intervention. They’ve created a digital twin, a real-time virtual copy of the entire city, that updates constantly based on sensor data. And they did all this in a regular suburban town, not some purpose-built futuristic metropolis.
As smart home technology gets cheaper and more common, we’ll see it scale to smart neighborhoods. Your security system coordinates with streetlights. Your electric car talks to the grid about when to charge. Suddenly you’ve got distributed intelligence systems managing resources at the community level, not just the household level.
The technical challenges are real. Edge devices have limited power and storage. Coordinating them without creating security vulnerabilities is genuinely hard. And there are thorny questions about governance:
Who owns the data?
Who controls the algorithms?
What happens when your neighborhood’s collective AI makes decisions that affect people who didn’t opt in?
But the advantages I mentioned before (speed, privacy, resilience) make this feel inevitable. Cities will become the primary battleground for localized AI deployment.
And we’re going to see eruptions of innovation at the edge that rival anything happening in the big model labs.

So… I can train AI agents to play Connect Four. I can get Arduino boards talking to each other. I can download world-class AI models for free and run them on my MiniPC. And I'm just one person, tinkering in a small office with an Iron Man poster. I do this in my spare time, between family, teaching classes, running a department, and writing essays.
If I can do this, everyone can do this. And that’s both extremely thrilling and terrifying.
Peachtree Corners isn't some distant sci-fi experiment. It's a real town in Georgia, right now, with autonomous vehicles on public roads and AI systems managing city infrastructure. You've probably seen autonomous vehicles in your own hometown. They've already thought through questions like liability insurance for driverless cars. They've built systems that can operate with or without human oversight. This isn't the future.
This is friggin’ Tuesday.
And if a mid-sized suburban town can become a living laboratory for AI-driven urban management, what happens when this scales? When every neighborhood has connected sensors, when every city block has edge intelligence making decisions in real time, when the smart city isn’t a concept but just… the city?
So there you go…
accessible agent training,
open-source proliferation,
and edge intelligence.
These are converging in 2026. I can feel it in my jellies. The tools are getting easier. The models are getting better. The infrastructure is spreading. What used to require a research lab now requires a low-cost computer and some curiosity.
I'll be focusing on these three topics in my work for the foreseeable future. It's something I don't think I can put down right now. Please let me know if you have any thoughts!
Thanks for reading and we’ll see you next time!
Happy New Year.
Sources:
Velaga, K.S., Guo, Y., & Yu, W. (2025). “Edge AI for Smart Cities: Foundations, Challenges, and Opportunities.” Smart Cities, 8(6), 211. https://doi.org/10.3390/smartcities8060211
Lambert, N. (2025, September 9). “On China’s open source AI trajectory.” Interconnects. https://www.interconnects.ai/p/on-chinas-open-source-ai-trajectory
Microsoft Research Asia. (2025, December). “Agent Lightning: Adding reinforcement learning to AI agents without code rewrites.” Microsoft Research Blog. https://www.microsoft.com/en-us/research/blog/agent-lightning-adding-reinforcement-learning-to-ai-agents-without-code-rewrites/
nfoldROI. (2025, September 30). “Peachtree Corners Leads Nation in Smart City Innovation with Living Laboratory.” https://nfoldroi.com/peachtree-corners-leads-nation-in-smart-city-innovation-with-living-laboratory/
Warburton, N. (2025, January 28). "My Robot Dog and Me." Nye's Digital Lab.
Warburton, N. (2025, March 30). "Here Comes Reinforcement Learning." Nye's Digital Lab.
Warburton, N. (2025, July 6). "The Open Source Strategy." Nye's Digital Lab.
Nye Warburton is a creative technologist and educator writing weekly at Nye's Digital Lab about creativity and technology in everyday life. These essays start as voice recordings in Otter.ai and take shape through collaboration with Claude and localized open models.