
The displacement of scarcity
I have always been fascinated by technology. It started with a Commodore 64 in 1989, then an Olivetti desktop computer in 1995. A year later, I managed to convince the director of my high school to get a subscription for Internet access so that I could code our school's webpage. It was written in HTML 2, in a text editor, with many animated GIFs. The 2000s, with broadband and online gaming, were also a thrill. I missed Web2 and social media, got very interested in virtual worlds around 2015, and then Web3 and crypto took over my attention. So much potential, so many pitfalls. Synthetic intelligence has also been a core topic since 2020 and the results of “We, the Internet”, the global dialogue on the future of the Internet during which citizens of the world gave one core piece of advice: “we need to talk about AI much more, have thousands of dialogues”. How right they were 😊
Last November I entered a new rabbit hole: vibe coding. Synthetic intelligence had long been part of my curiosity; I followed the field, read about it, and experimented as soon as each new LLM came out. I could see the potential, but it remained somewhat abstract. I had ideas for applications, pieces of software, even business models. They were intuitions more than projects.
The obstacle was simple: I did not know how to code well enough to turn those ideas into real products. HTML 2 was not exactly up to date, hiring a team was unrealistic, and raising capital made no sense. There was always a gap between imagination and execution.

With vibe coding, that gap started to close.
Things that would have taken weeks suddenly became possible in a few hours. Projects that would normally require a team now felt accessible to a single person. The technical barrier did not disappear entirely, but it stopped being decisive.
For the first time, I could test my ideas directly. I coded a swarm of agents to help me write an article; I deployed a custom smart home dashboard; I tested a couple of autonomous trading agents on Web3 platforms; I deployed a full website in four hours, with content in the right place, a nice design, and two languages. In a weekend, I recreated a civic tech platform that had taken us three years, six devs, ten partners, and €3M in an EU project to build.
Very quickly, a second step followed. If I could build things this way, why depend on proprietary tools? I started using open-source software for my projects. Then I moved to open-weight models and an open-source coding assistant.
What I discovered was both exciting and slightly uncomfortable: these models were good enough. Not perfect, but more than capable of building real things and iterating on them. Good enough for perhaps ninety percent of what anyone needs to do.

The next step was logical: if this worked, why stay dependent on cloud infrastructure? I used my Dappnode (a small server the size of a box of chocolates) to deploy the models locally, run the agents locally, and store all data locally. Admittedly, I had to learn to deal with hardware limitations, to optimize, to wait a bit longer for answers, and to work within constraints, as I have no GPU or high-end hardware. But still, it was 80% as good as the cloud frontier models.
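To give a sense of why quantized open-weight models fit on a small home server at all, here is a back-of-the-envelope memory estimate. All numbers are illustrative assumptions for this sketch, not measurements from my Dappnode.

```python
# Rough RAM estimate for running a quantized open-weight model on
# CPU-only hardware. Every figure here is an illustrative assumption.

def model_memory_gb(params_billions: float, bits_per_weight: int,
                    overhead_factor: float = 1.2) -> float:
    """Approximate RAM needed: weight storage plus ~20% for KV cache
    and runtime overhead (the 1.2 factor is a rough rule of thumb)."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A 7B-parameter model at 4-bit quantization vs. full 16-bit precision:
q4 = model_memory_gb(7, 4)     # ~4.2 GB: fits on a small home server
fp16 = model_memory_gb(7, 16)  # ~16.8 GB: already a stretch without a GPU
print(f"7B @ 4-bit: {q4:.1f} GB, 7B @ fp16: {fp16:.1f} GB")
```

The point of the arithmetic: quantization is what moves a capable model from data-center territory into chocolate-box-server territory, at the cost of some quality and speed.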
This led me to the next step: if I was talking about decentralization, why remain dependent on centralized energy systems?
No reason. So I plugged my stack into my balcony solar power plant and a home battery. It is an investment, for sure, but one that is amortized in a couple of years. And it can also power my dishwasher and my oven.
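The amortization claim can be sanity-checked with simple arithmetic. Every figure below (hardware cost, server wattage, grid price) is a hypothetical assumption for illustration, not my actual bill.

```python
# Back-of-the-envelope payback calculation for a balcony solar setup
# powering a small home AI server. All inputs are assumed, not measured.

def payback_years(hardware_cost_eur: float,
                  server_watts: float,
                  hours_per_day: float,
                  grid_price_eur_per_kwh: float) -> float:
    """Years until avoided grid electricity pays back the solar hardware."""
    kwh_per_year = server_watts / 1000 * hours_per_day * 365
    savings_per_year = kwh_per_year * grid_price_eur_per_kwh
    return hardware_cost_eur / savings_per_year

# Assumed: €800 for panel plus battery, a 50 W server running 24/7,
# and a grid price of €0.35/kWh.
years = payback_years(800, 50, 24, 0.35)
print(f"Payback in about {years:.1f} years")
```

Counting only the server, this lands at roughly five years under these assumptions; once the same panel also offsets dishwasher and oven loads, the effective payback shortens toward the "couple of years" mentioned above.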
So over the course of 2 months, I realized that the entire stack — open-source software, open-weight models, a local server, partially autonomous energy — could run with extremely low marginal costs once the initial infrastructure was in place. That was the moment when I felt something that looked like abundance.
Not universal abundance. Not material abundance. But an abundance of usable intelligence and capacity to build software. An abundance of experimentation. I could create almost anything I could imagine, without asking anyone for permission or having to discard the idea right away.
And yet, at the same time, nothing else around me had become abundant. Food was not free; its price was even rising quickly with inflation. The same went for housing. Land was not abundant. Physical infrastructure still obeyed the old logic of scarcity and power. My balcony power plant was still worth a year's salary in some countries of the world, and so was the server.

This opened a new set of questions. If I, individually, could produce software at almost zero cost, what happens when millions of people can do the same thing? The answer is not abstract. The SaaS model — the dominant business model of the past two decades — rests on a specific set of frictions: that building software is hard, that distribution requires infrastructure, that switching costs create lock-in. Remove those frictions, and the model does not slowly weaken. It collapses. We have seen it in recent weeks with each new capability shipped by the major AI models.
Why pay a monthly subscription for a tool that any moderately skilled person can now replicate in an afternoon for next to nothing? And what happens five to ten years down the line, when capability has risen further and an agent guiding other agents can create the software you need on the fly, on your phone?
The hyperscalers face a version of the same problem, only more structural. They built their moats on proprietary models, on the assumption that frontier intelligence would remain expensive and centralized. But open-source, open-weight models are closing the capability gap faster than almost anyone expected. The cost of inference is falling. The difference between what you can run locally and what requires a data center is shrinking every quarter — not at the margins, but at the core. This is not a temporary lag. It is a fundamental erosion of the business model.
The largest capital investments in history are being made in AI infrastructure — chips, data centers, energy grids. Those investments require a return. That return requires scarcity: the ability to charge for access to something others cannot replicate. But the technology itself tends toward abundance. Every open release, every efficiency gain, every model that runs on consumer hardware undermines the scarcity the infrastructure was built to monetize. Part of this is a consequence of the arms race between the US and China, which is going full steam ahead on open source.
The interesting part is that this logic does not apply only to software. Current AI, a public interest initiative born at the Paris AI Summit, recently unveiled a handheld device built with the Indian government that runs three simultaneous AI models locally, across twenty-two languages, with no internet connection, on open-source hardware anyone can replicate. That is not a research prototype. So even the hardware layer — the last plausible bottleneck for centralized control — is being opened.
Conclusion: You cannot build a durable rent-extracting monopoly on a commodity.
So where does value actually come from, and where does it go? This is not a rhetorical question. It is the central economic question of the next decade. Some analysts have tried to answer it through financial scenario analysis — tracing how SaaS collapses into private credit defaults, into mortgage stress, into a broader demand spiral. That analysis is serious and the transmission mechanisms are real. But it frames the wrong crisis. The financial contagion is a consequence. The structural fact underneath it is simpler and more disorienting: intelligence is becoming a commons, and the institutions built to monetize its scarcity have no obvious successor. What I experienced was not the end of scarcity. It was the displacement of scarcity. Intelligence becomes cheap. Coordination becomes valuable. Access to physical resources remains structured by institutions and power.

As a side quest: The regions that invested most heavily in the first era of frontier AI — that raced to build proprietary models, sovereign data centers, national champions — are now most exposed to the model becoming obsolete. The EU, which spent years criticizing itself for missing the first wave, may have stumbled into an advantage: it arrives at the next era without a legacy infrastructure to protect, without national champions whose business models depend on keeping intelligence scarce. If the next era belongs to open, distributed, public-interest AI, then not having bet everything on the previous era is not a handicap. It is optionality.
From there, the questions that stayed with me were not about which companies survive. They were about what happens to people. If we can produce more with less human labor, why should survival remain tied to employment? And if work stops being the primary source of income, what happens to status? What happens to recognition? And what happens when synthetic intelligence becomes embodied and replaces millions of people?
Large societies that have partially separated survival from work did so deliberately and after long struggles: Public pensions decoupled old age from labor, universal healthcare decoupled illness from income, and public education decoupled learning from wealth. None of that happened automatically because technology made it possible. It happened because institutions were redesigned. And it happened imperfectly, partially, after political struggle that took decades.
If intelligence truly becomes cheap, we are facing a similar moment. But this time the thing becoming cheap is not a particular category of work — it is the cognitive substrate of almost all work. The displacement is general, which is something that many observers seem to downplay when they compare the current revolution to past technological leaps.
Based on these little experiments in my living room, I see two paths that are genuinely possible.
The first is ungoverned abundance. The open-weight ecosystem wins. Intelligence becomes a genuine commons. Individuals and small groups gain real capacity. But the collective action problems — inequality, governance, climate, coordination — remain unsolved, because no institution captures the surplus generated by cheap intelligence and routes it toward solving shared problems. In this world, people have extraordinary tools and no roof. Digital abundance coexists with material insecurity. You have extraordinary software, yet you still cannot afford rent, and butter costs twice as much as last year.
The second path requires more imagination, and more work. It starts from the same observation — that intelligence is becoming a commons — and asks what kind of institution could govern a commons at that scale. Not a state in the traditional sense, and not a market. Something that treats the surplus generated by cheap intelligence as shared infrastructure, and organizes its governance around participation rather than ownership. I have been thinking about this under the name Syntropolis (https://paragraph.com/@antoinevergne/syntropolis-a-blue-print-for-mastering-ai): a polity built not on territory but on shared stewardship of synthetic intelligence, governed through randomly selected citizens' assemblies, and distributing its surplus as a minimal guarantee of dignity. If intelligence is becoming the primary productive infrastructure of the economy, then inclusive control of that infrastructure is the condition for anything else being possible.
I do not know which path we will take.

What I do know is that the choice is not technical. The technology works. Open models exist. Local compute is real. Hardware is opening. The economics of abundance are not speculation. The question is whether we are willing to organize what comes next — or whether we will let it organize us.
Abundance has to be built.