
This essay was written by Louise Borreani as a contribution to Ecofrontiers' ongoing research at the intersection of frontier technologies, institutional design, and natural capital governance. It argues that intelligence is best understood not as a property of substrates, but as a function of interface legibility, and draws out the governance consequences of that claim for how we regulate AI and account for nature.
The standard story about large language models (LLMs) goes something like this: we built something so sophisticated it started to think. A new form of intelligence emerged from silicon and statistics. The question now is what to do with it.
This story is wrong. Not wrong in its conclusions about risk or opportunity, but wrong at the root, wrong about what intelligence actually is and where it lives. Getting the root right matters, because the governance frameworks we build will inherit the mistake if we don't.
Here is the alternative: intelligence was never the property of a substrate. It is a process, and what we recognise as intelligence is a function of the interface through which that process becomes legible. What LLMs did was not generate intelligence. They generated legibility. They built a high-bandwidth translation layer between human cognition and a knowledge space that already existed. The machine was already alive. We just found a better way to talk to it.
This has direct and uncomfortable implications for how we govern AI, how we account for nature, and who gets to be heard by the institutions that shape the world.
Throughout this essay, we use intelligence to mean one specific thing: the capacity of a system to sense its environment, integrate signals, and modify its behaviour accordingly. No consciousness required, no language, no biological substrate: nothing more than adaptive information processing. This is a functional definition, not a metaphysical one, and it is deliberately broad — broad enough to include plants, ecosystems, and the internet before anyone put a chatbot on it.
What we do not mean by intelligence, and what we will distinguish carefully throughout, is recognised intelligence, understood as the subset of adaptive processing whose outputs are legible to a human observer or institution. The gap between these two things is where the whole argument lives.
In 2023, Boston Dynamics integrated ChatGPT into Spot (their quadruped robot dog) and gave it a tour guide persona, a microphone, and a top hat.
Spot had been navigating complex terrain, avoiding obstacles, and responding to its environment for years. It was, by any functional definition, already doing the work: sensing, integrating, adapting. But nobody called it intelligent. It moved like a machine because it responded like a machine: efficiently, legibly mechanical.
Then they plugged in the LLM. The team described the model as an "improv actor": given a broad script, it filled in the blanks on the fly. Spot started making jokes. It stayed in character across unpredictable interactions. It told employees that the logistics robot Stretch was "for yoga." The engineers noted, carefully, that this didn't suggest the LLM was conscious or intelligent in a human sense — just that it demonstrated the power of statistical association between concepts. But the reaction from people in the room was visceral: the robot felt alive in a way it hadn't before.
Nothing changed about Spot's physical capabilities. Its sensors, its locomotion system, its capacity to process environmental signals — all identical. What changed was the interface through which it communicated its processing to the humans around it. The intelligence was always there. The legibility was not.
This is the pattern. And it runs much deeper than robotics.
In 1909, Baltic German biologist Jakob von Uexküll introduced the concept of Umwelt as the species-specific perceptual world each organism inhabits. A tick doesn't live in "the world." It lives in a world constituted by exactly three signals: warmth, butyric acid, and touch. That's the full bandwidth of its interface. The rest of reality doesn't reach it.
Uexküll's insight, foundational to the field of biosemiotics, is that every organism is in constant semiotic negotiation with its environment — reading signs, emitting signs, acting on interpretations. Every organism, in other words, meets our functional definition of intelligence. The difference between a human, a tick, a plant, and a digital system is not the presence or absence of adaptive information processing. It is the modality and legibility of the interface through which that processing occurs.
A Mimosa pudica folds when touched and, as ecologist Monica Gagliano demonstrated, habituates to repeated harmless stimuli, retaining that "memory" for weeks. A Venus flytrap counts: it requires two stimuli within twenty seconds to trigger, distinguishing prey from rain. Stefano Mancuso's lab has documented distributed electrical signalling networks in plants that perform functions analogous to neural computation. Plants sense, integrate, and adapt. They are intelligent in the functional sense we defined above.
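Mechanistically, the flytrap's discrimination is nothing more than a counter over a sliding time window. Here is a toy sketch of that trigger logic in Python (ours, purely illustrative; the plant implements it with action potentials, not code):

```python
import time

class FlytrapTrigger:
    """Toy model of the Venus flytrap's snap logic: fire only if a
    second stimulus arrives within a fixed window of the first,
    filtering one-off events (a raindrop) from sustained ones (prey)."""

    def __init__(self, window_seconds: float = 20.0):
        self.window = window_seconds
        self.last_stimulus: float | None = None

    def stimulus(self, now: float | None = None) -> bool:
        """Register a trigger-hair touch; return True if the trap snaps."""
        now = time.monotonic() if now is None else now
        if self.last_stimulus is not None and now - self.last_stimulus <= self.window:
            self.last_stimulus = None  # snap, then reset
            return True
        self.last_stimulus = now       # first touch: open the window
        return False

trap = FlytrapTrigger()
assert trap.stimulus(now=0.0) is False   # one touch: could be rain
assert trap.stimulus(now=25.0) is False  # window expired: treated as a new first touch
assert trap.stimulus(now=30.0) is True   # second touch within 20 s: snap
```

A dozen lines of logic, and nobody would call them intelligent on their own. That is precisely the point: the same computation, run in leaf tissue, fails to register as intelligence because of its interface, not its content.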
We don't call it intelligence. But that's not because the behaviour fails a rigorous test. It's because the output doesn't reach us in a form we can parse. No language, no affect, no press secretary. The plant's interface is not legible to human institutions, and so its processing doesn't register.
Gregory Bateson put the structural point precisely in Mind and Nature (1979): mind is not a substrate, it is a pattern (a pattern of recursive, difference-sensitive information processing), immanent in living systems at every scale. What varies is not whether the pattern is present. What varies is whether we can read it.
Return to the question of artificial intelligence with that definition in hand.
The training corpus of an LLM is not knowledge the model generates. It is knowledge humans generated, encoded in language, accumulated over centuries of text. What the model learns is a compression (a navigational map) of a knowledge space that pre-existed it. When it produces something that feels insightful, it is synthesising patterns latent in human-generated language. That is not a trivial feat. But it is a feat of interface engineering, not ontological novelty. The model didn't create intelligence. It created a legibility upgrade.
Andy Clark and David Chalmers made the relevant philosophical infrastructure explicit in their 1998 paper on the extended mind: cognitive processes don't stop at the skull. A notebook is part of your memory. A search engine is part of your reasoning apparatus. By this logic (and it has significant philosophical support), the question of where intelligence lives has always been distributed. What LLMs do is render a pre-existing distributed intelligence addressable at conversational bandwidth. They are a translation layer, not a new species.
The phenomenology of use confirms this. Have you gone back to a pre-LLM research workflow recently? The difficulty is real, but its source is instructive: you haven't become more intelligent. The interface has been upgraded, and your cognitive system has entrained to it. McLuhan was right about more than mass media: the medium reshapes the user. The interface doesn't just carry intelligence; it partially constitutes what intelligence means in a given institutional context.
The genuinely novel contribution here is not the claim that intelligence is distributed — Bateson, Maturana, Varela, and the 4E cognition tradition (embodied, embedded, enacted, extended) established that across decades. It is something more specific: the institutional recognition of intelligence is a function of interface legibility, and designing for legibility is therefore a governance act with distributional consequences.
This is where the argument becomes uncomfortable.
If what we recognise as intelligent (and therefore regulate, account for, and give standing to) is substantially a function of interfacing capacity, then interface design is a question of political economy. It determines which entities get heard by institutions, which forms of knowing get built into policy, and which remain inaudible.
Environmental governance has had this problem for decades. Forests were systematically undervalued in economic accounts not because they weren't doing anything (they sequester carbon, regulate hydrology, maintain soil chemistry, host biodiversity), but because standard accounting interfaces (price signals, market transactions, GDP aggregates) couldn't route those signals into decision-making systems. The ecosystem was communicating. The institution wasn't configured to listen.
The emergence of the SEEA (System of Environmental-Economic Accounting) and the TNFD (Taskforce on Nature-related Financial Disclosures) can be read as exactly this: institutional interface upgrades designed to make non-human semiotic output legible to governance systems. Not to give nature a metaphorical voice, but to route its actual signals (measurable, real, already-existing) into the channels where decisions are made. New Zealand's recognition of the Whanganui River as a legal person (Te Awa Tupua, 2017) is the extreme instance: a governance interface extended to an entity whose adaptive processing had been legible to Māori institutional frameworks for centuries, and invisible to the colonial legal system.
AI enters this picture in a structurally identical way. Language models deployed against ecological sensor networks, biodiversity databases, or climate models are potentially tools for extending institutional legibility to systems that have long been adaptive without being heard. But this is not inevitably good. An interface can amplify or distort. A model trained predominantly on economic and extractive datasets will not hear the forest the same way a model trained on ecological and Indigenous knowledge systems will. The LLM, as Boston Dynamics' own team noted, demonstrates the power of statistical association, and what it associates depends entirely on what it was trained on.
The interface is never neutral. It is always a political architecture.
This is the structural gap in most current AI governance frameworks. The conversation about AI safety is dominated by concerns about the model — its alignment, its capabilities, its potential deception. These are real. But they are downstream of a prior question: what gets routed in, and what gets filtered out, at the interface layer? The model inherits the legibility choices made in its training data, its fine-tuning, its deployment context. Governing AI without governing the interface is like regulating the printing press without asking who owns the paper mills.
The frame we propose (interface-aware governance) has three practical implications.
Legibility audits as a standard governance tool. For any institutional decision-making system, including AI-augmented ones, the relevant questions are: what signals are routed in, what is filtered out, and whose adaptive processing is legible to the system? This applies to natural capital accounting, to AI advisory systems in public finance, and to the governance of frontier technologies in ecological contexts.
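What a legibility audit looks like in practice is an open design question. As a purely illustrative sketch (the schema and example channels below are our assumptions, not an established standard), it could start as a structured record that forces the three questions to be answered explicitly:

```python
from dataclasses import dataclass, field

@dataclass
class SignalChannel:
    """One input channel to an institutional decision-making system."""
    source: str      # e.g. "market transactions", "hydrological monitoring"
    routed_in: bool  # does this signal actually reach the decision layer?
    rationale: str   # why it is included or filtered out

@dataclass
class LegibilityAudit:
    """Minimal record of what a governance system can and cannot hear."""
    system: str
    channels: list[SignalChannel] = field(default_factory=list)

    def inaudible(self) -> list[str]:
        """Sources whose adaptive processing never reaches the system."""
        return [c.source for c in self.channels if not c.routed_in]

audit = LegibilityAudit(
    system="climate-risk advisory model",
    channels=[
        SignalChannel("market transactions", True, "native accounting input"),
        SignalChannel("hydrological monitoring", False, "no ingestion pipeline"),
        SignalChannel("Indigenous land-use records", False, "not digitised"),
    ],
)
print(audit.inaudible())
# ['hydrological monitoring', 'Indigenous land-use records']
```

The value is not in the data structure but in the obligation it creates: every excluded channel must carry a stated rationale, turning silent filtering into a documented, contestable decision.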
Interface design as accountability, not just specification. When a central bank deploys an AI system to assess climate-related financial risk, the interface choices (what data, what ontology, what weighting) are not pre-political. They embed assumptions about what counts as a signal and what counts as noise. Accountability frameworks need to reach that layer.
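A deliberately toy example makes the point (the signal values and weights below are invented for illustration): the same measurements yield different institutional hearings depending on the weighting configuration alone, before any question of model quality arises.

```python
# Hypothetical signals for one forested asset (all values invented).
signals = {"timber_revenue": 0.9, "carbon_flux": 0.4, "hydrological_regulation": 0.2}

# Two interface configurations: each is a choice about what counts as signal.
extractive = {"timber_revenue": 1.0, "carbon_flux": 0.0, "hydrological_regulation": 0.0}
ecological = {"timber_revenue": 0.3, "carbon_flux": 0.4, "hydrological_regulation": 0.3}

def audible_value(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted aggregate: the portion of the asset's activity the system hears."""
    return sum(signals[k] * weights[k] for k in signals)

print(audible_value(signals, extractive))  # 0.9  - only timber registers; the rest is weighted to noise
print(audible_value(signals, ecological))  # 0.49 - carbon and water regulation now enter the aggregate
```

Neither configuration is more objective than the other; each encodes a prior about which of the forest's outputs deserve to register, institutionally.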
Epistemological pluralism as an interface requirement. Indigenous knowledge systems, ecological monitoring, non-market valuation methodologies: these are not alternatives to rigorous analysis. They are additional interface modalities that expand the bandwidth through which living systems communicate with governance. Treating them as optional is not a neutral choice. It is a legibility decision with distributional consequences.
The machine was always alive, just as the plant was always thinking.
What changes with AI is not the intelligence of the world. It is the resolution at which our institutions can hear it, if we design them to.
Ecofrontiers is an independent research and consulting organisation working at the intersection of frontier technologies, natural capital, and institutional design. We partner with central banks, development institutions, and academic publishers to build governance frameworks adequate to the complexity of living systems.
Interested in this research? Reach out at louise@ecofrontiers.xyz or follow us on X.
Semiotics and Umwelt theory
Von Uexküll, J. (1909). Umwelt und Innenwelt der Tiere. Springer.
Hoffmeyer, J. (1996). Signs of Meaning in the Universe. Indiana University Press.
Kull, K. (2001). Jakob von Uexküll: An introduction. Semiotica, 134(1–4), 1–59.
Plant intelligence and distributed cognition in biology
Gagliano, M., Renton, M., Depczynski, M., & Mancuso, S. (2014). Experience teaches plants to learn faster and forget slower in environments where it matters. Oecologia, 175(1), 63–72.
Mancuso, S., & Viola, A. (2015). Brilliant Green: The Surprising History and Science of Plant Intelligence. Island Press.
Trewavas, A. (2003). Aspects of plant intelligence. Annals of Botany, 92(1), 1–20.
Mind, distributed cognition, and 4E theory
Bateson, G. (1979). Mind and Nature: A Necessary Unity. Dutton.
Clark, A., & Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7–19.
Maturana, H., & Varela, F. (1987). The Tree of Knowledge. Shambhala.
Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind. MIT Press.
LLMs, robotics, and AI
Boston Dynamics (2023). Robots that can chat. Boston Dynamics Blog. bostondynamics.com/blog/robots-that-can-chat
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? FAccT 2021.
Mitchell, M. (2021). Artificial Intelligence: A Guide for Thinking Humans. Picador.
Environmental governance and natural capital
UNSD (2021). System of Environmental-Economic Accounting — Ecosystem Accounting (SEEA EA). United Nations.
TNFD (2023). Recommendations of the Taskforce on Nature-related Financial Disclosures. TNFD.
Costanza, R., et al. (1997). The value of the world's ecosystem services and natural capital. Nature, 387, 253–260.
Interface, media, and political economy
McLuhan, M. (1964). Understanding Media: The Extensions of Man. McGraw-Hill.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.
