
Look around the room right now. What do you see? Patterns of light resolve into objects: your screen, a cup, a book, a desk. You don’t think much about it because you don’t have to; it's automatic. That’s really the whole point of perception: to distill the chaos of the world into something actionable. You don’t need to analyze every photon or angle of light. You just need to know there’s a cup, so you can reach for it and drink. No surprises.
This act of seeing, how we go about trusting the scene in our mind's eye, depends on layers upon layers of pattern processing of which we are blissfully unaware. From the retinas in our eyes distinguishing wavelengths of light to the visual cortex detecting edges and building shapes, our minds are performing immense pattern recognition and pulling it into a scene. Our minds do it so well that we just see the world as if it’s “out there,” even though all of this happens between our ears. Close your eyes and it mostly disappears.
It's strange that this isn't more apparent, but the world we experience is not the world itself; it is a representation of it in our minds. We experience a world "out there" even though what is "out there" is inside, filtered, compressed, and re-presented to us as observers. Eastern mystics may say the world is an "illusion," and that sounds suitably profound, yet it isn't an illusion so much as a very good representation built from these filters. Like Plato's cave, the shadows we see are interpretations of reality, not reality itself.
We arrived here because whether or not these representations are “true” matters far less than the fact that they are useful. Our systems evolved for survival, not accuracy. Survival isn't helped by treating the world as illusion; survival dictates we treat the vision as real. If it sounds like a lion in the bushes, best to err on the side of caution, even if it's just a rustling branch.
Today, as we teach machines to “see” and act in the world, we are beginning to understand just how layered and complex this process of perception really is, and just how much our representations determine not only what we see, but what we see in ourselves. We have an intimate relationship with our representations. But it goes much deeper.
Can we say an observation exists without an observer? The tree in the forest may not, in fact, make a sound unless there is someone to hear it. This may seem like semantics, but this basic idea is starting to have repercussions across science beyond just quantum mechanics.
The word “pattern” derives from the Latin patronus, by way of the Old French patron, meaning protector or guide. Why? A patron was a model of society to emulate. Similarly, a pattern is a model, but more in the sense of an abstraction: a core set of similarities across individual occurrences. The pattern "triangle" comes in many shapes and sizes.
Because of the regularity that exists in any pattern, whether from repetition in time or similarity across individual occurrences, we might say a pattern guides us away from uncertainty and toward predictability. Patterns repeat, and thus have some level of predictability that makes them useful. They are regularities in the world that can be described more simply than listing all of their elements. If you say something is a triangle, you immediately know it is not a circle. Patterns simplify, compress, and guide action, and the more recognizable a pattern becomes, the quicker that action can be.
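The idea that a pattern can be described more simply than listing its elements is exactly what compression formalizes. A minimal sketch in Python, using run-length encoding as a toy stand-in for any pattern detector:

```python
def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Compress a string into (symbol, count) pairs: a regularity
    like 'aaaa' becomes the shorter description ('a', 4)."""
    encoded = []
    for ch in s:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

# A patterned string yields a short description; without repetition,
# the "compressed" form is no shorter than the original.
print(run_length_encode("aaaabbbb"))  # [('a', 4), ('b', 4)]
print(run_length_encode("abc"))       # [('a', 1), ('b', 1), ('c', 1)]
```

The regularity is what makes the shorter description possible; a description shorter than the thing itself is, in miniature, what every observer produces.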
The Greeks had a word for this too: idea (ἰδέα), meaning form or pattern. Plato’s "ideal" forms were thought to be the purest patterns: mental representations of perfect objects. Once again, patterns link to representation. But here lies a chicken-and-egg problem: which came first, the pattern or the pattern recognizer (aka the observer)?
Patterns require observers. An observation, by definition, requires someone or something to observe it. Without an observer, something capable of detecting regularities, what can we say of the world? Every observation is an interplay between observer and observed.
In order to create something called an observation, the observer must compress raw information into a meaningful pattern or patterns. If an observer could process everything in the world, it would be as large and complex as the world itself; the observer would quickly be enveloped by the world it observes. Every observation by a finite mind is a small slice of the observable world, and a compressed slice at that.
Observers, being part of the universe and necessarily a subset of it, are not just constrained by but defined by computational boundedness: we call them "bounded observers." Any slice of the world you sample contains far more information than can possibly be analyzed, so we must take only abstractions or compressions.
So we, as cognitive agents and therefore observers, must be computationally smaller than the systems we observe. Yet to survive we must make sense of a world far more complex and vast than we are. How do we play this game? If we can expand our context window, the size of our model of the world, perhaps we can better predict the world around us. In many ways, this is how we arrived at this stage of evolution, and we are now creating AI with potentially even greater processing power than the human brain. That will mean larger context windows and better predictive power, but never without limits.
Most know about the Observer Effect, where the act of measurement affects the outcome in quantum mechanics. Yet science is just coming around to the idea that the act of observing is never passive. An observer is always engaged in selecting, compressing, and abstracting information.
It’s true of all observation:
The eye detects light waves and compresses them into shapes.
The mind abstracts those shapes into objects.
We act on those objects, trusting that the patterns we’ve detected are reliable.
It’s so seamless we forget we’re doing it. But this process of 'observation, compression, action' is the foundation of how any bounded system interacts with its environment.
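The 'observation, compression, action' loop can be sketched as a trivial agent. Everything here is illustrative, not from any library: the sample size, the summary statistic, and the action threshold are all invented for the example.

```python
def observe(world: list[float]) -> list[float]:
    """Take a bounded sample: the observer never sees the whole world."""
    return world[:4]

def compress(sample: list[float]) -> float:
    """Reduce the sample to one summary statistic, a crude 'pattern'."""
    return sum(sample) / len(sample)

def act(pattern: float) -> str:
    """Act on the compressed representation, not on the raw world."""
    return "reach" if pattern > 0.5 else "wait"

world = [0.9, 0.8, 0.7, 0.9, 0.1, 0.2]  # far more detail than the agent uses
print(act(compress(observe(world))))     # reach
```

The point of the sketch is structural: the action depends only on the compressed value, never on the full list, which is exactly the bounded observer's situation.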
It may be becoming obvious by now, but patterns can only be detected by other patterns. To observe a regularity, the observer itself must exhibit a kind of regularity, a structure tuned to respond in a regular way to specific inputs. Your retina detects patterns in light because it evolved as a biological pattern recognizer. A neuron fires because it “sees” a particular input as significant. Facial recognition systems identify faces because they’ve been trained on patterns of data.
Which brings us back to the chicken-and-egg problem: where did the first patterns come from? How did the first “observers” emerge to detect anything at all? How deep is this particular rabbit hole?
To answer this, we must look for the simplest possible observers, systems so basic they border on the definition of existence itself. Stephen Wolfram suggests that simple nodes and rules in his computational universe, akin to cellular automata, might be the first pattern recognizers. For instance:
Node A gives rise to Node B.
Node B gives rise back to Node A.
This simple alternation is a pattern. It doesn’t require an advanced observer to “see” it. Existence itself generates regularity. From such primitive processes, more complex patterns and observers can emerge. Observers detect patterns, compress them, and validate them. Trust in those patterns allows them to expand their scope, their “context windows,” and recognize more sophisticated patterns. The concept of computational boundedness as a structural feature of observers comes from Stephen Wolfram's Observer Theory; for the computational perspective, see Wolfram Physics and his article on Observer Theory. What we'll explore in this series is what that boundedness generates. The limits, the bounds, are counterintuitively a prerequisite for recursive creativity.
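The A-to-B alternation can be simulated directly. A minimal rewrite system, assuming nothing beyond the two rules stated above:

```python
# Two rules: A -> B and B -> A. Iterating them generates the
# simplest possible regularity: a period-2 oscillation.
rules = {"A": "B", "B": "A"}

state = "A"
history = [state]
for _ in range(5):
    state = rules[state]
    history.append(state)

print(history)  # ['A', 'B', 'A', 'B', 'A', 'B']
```

Nothing in the system "knows" it is oscillating; the regularity exists whether or not anything detects it, which is the point being made.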
Interestingly, even time itself may emerge from this process. Time, as we perceive it, is not an independent reality but a byproduct of observing causal patterns. How is this so? Time is defined by the flow from order to chaos, an increase in entropy.
But what is entropy? It's a lack of reducibility, a lack of any ability to find a pattern. Wolfram's principle of computational equivalence suggests that almost all computational processes, slices of the universe included, are essentially equal in sophistication. If that's true, then what the second law is telling us is that we generally go from scenarios where we can recognize patterns to scenarios where we cannot (see Wolfram). When a coffee cup breaks on the floor, there are many more states it can be in than the complete cup before it fell. We can describe the unbroken cup easily, but we have a hard time describing the mess on the floor in any simple way.
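That link between entropy and the failure to find a pattern can be made concrete with a compressor: ordered data (the intact cup) admits a short description, while disordered data (the shards) does not. A sketch using Python's zlib as a stand-in for an observer's pattern finder:

```python
import random
import zlib

ordered = b"cup" * 1000  # highly regular: easy to describe
shards = bytes(random.randrange(256) for _ in range(3000))  # disordered

# Compressed size approximates description length
# (a Kolmogorov-complexity-style intuition, not a true entropy measure).
print(len(zlib.compress(ordered)))  # small: the pattern was found
print(len(zlib.compress(shards)))   # near 3000: no pattern to exploit
```

Both inputs are 3000 bytes; the difference in compressed size is the difference between a describable state and an effectively patternless one.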
Observers are necessarily finite. We cannot perceive all interactions at once, so we observe sequences instead, and therefore we pick out causal patterns. The “arrow of time” may simply reflect the order in which a bounded observer is able to detect and relate patterns in its environment. Our ability to detect causal relationships builds from our pattern recognition system.
As we build artificial systems to recognize patterns, we see these principles in action. Modern AI models like Large Language Models (LLMs) or knowledge graphs are designed to compress and abstract vast amounts of data into representations.
LLMs compress linguistic patterns to generate coherent text.
Knowledge graphs link concepts and relationships to create high-level representations of information.
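The knowledge-graph idea above can be illustrated with a toy triple store; the entities and relations here are invented for the example:

```python
# A knowledge graph as (subject, relation, object) triples:
# a compressed, high-level representation of many observations.
triples = {
    ("cup", "is_a", "container"),
    ("cup", "holds", "coffee"),
    ("coffee", "is_a", "liquid"),
}

def related(entity: str) -> set[tuple[str, str]]:
    """Everything the graph 'knows' about an entity."""
    return {(r, o) for (s, r, o) in triples if s == entity}

print(sorted(related("cup")))  # [('holds', 'coffee'), ('is_a', 'container')]
```

Each triple discards everything about actual cups except one relationship, which is what makes the representation both compact and queryable.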
Yet, AI is still a bounded observer. Its ability to detect patterns is limited by computational resources, training data, and design constraints. Like us, it sees a subset of the world, compresses what it can, and trusts the patterns it has learned.
This brings us full circle: patterns require observers, and observers are bounded systems navigating a chaotic world through the act of compression.
The Observer Paradox reveals something deep about existence itself: the ability to see, know, and act depends on the ability to compress reality into manageable patterns. Without computational bounds, there would be no observation, no pattern, no time, no anything as we know it.
We are participants in this recursive process, much like AI systems and the simplest rules governing particles or nodes. The patterns we see are shaped not just by the world but by who and what we are as observers.
As we explore this further, across physics, biology, cognition, and artificial intelligence, we will find that the act of recognizing patterns is not just a human feat. It is a universal principle of existence, repeated at every scale.
The question then becomes: How far can this recursion go?
