The fundamental nature of patterns and the ability to recognize them.
Look around the room right now. What do you see? Patterns of light resolve into objects: your screen, a cup, a book, a desk. You don't think much about it because you don't have to; it's automatic. That's really the whole point of perception: to distill the chaos of the world into something actionable. There's no need to analyze every photon or angle of light. You just need to know there's a cup, so you can reach for it and drink. No surprises.
This act of seeing, how we come to trust the scene in our mind's eye, depends on layers upon layers of pattern processing of which we are blissfully unaware. From the retinas in our eyes distinguishing wavelengths of light to the visual cortex detecting edges and building shapes, our minds are performing immense pattern recognition and pulling it into a scene. Our minds do it so well that we just see the world as if it's "out there," even though all of this happens between our ears. Close your eyes and it mostly disappears. Eastern mystics may say the world is an "illusion," and it sounds compelling ("is nothing real?"), yet it is not an illusion so much as a really good representation built from these filters. Like Plato's cave, the shadows we see are interpretations of reality, not reality itself.
(A future question we'll grapple with in this series is what, if anything, can be said of a reality beyond perspective. This isn't solipsism (nothing exists but me) so much as an impossible question: what can be said about what is if it's not said from a perspective?)
We arrived where we are in nature because whether these representations are "true" matters far less than the fact that they are useful. Our systems evolved for survival, with no allegiance to any one model except the one that works. Survival isn't helped by treating the world as illusion; survival dictates we treat our best current model as real. If it sounds like a lion in the bushes, best to err on the side of caution, even if it's just a rustling branch.
Today, as we teach machines to “see” and act in the world, we are beginning to understand just how layered and complex this process of perception really is, just how much our representations determine not only what we see, but what we see in ourselves. We have an intimate relationship between ourselves and our representations. Can we say an observation exists without an observer? What would that mean? The tree in the forest may not, in fact, make a sound unless there is someone to hear it. This may seem like semantics, but this basic idea is starting to have repercussions across science beyond just quantum physics.
The word "pattern" originates from the Latin patronus, meaning protector or guide. Why? A patron was a model for society to emulate. Similarly, a pattern is a model, but more in the sense of an abstraction: a core set of similarities across individual occurrences. The pattern "triangle" comes in many shapes and sizes that still retain certain features.
Because of the regularity that exists in any pattern, whether repetition in time or similarity across individual occurrences, we might say a pattern guides us away from uncertainty and toward predictability. Patterns repeat, and that repetition gives them the predictability that makes them useful. They are regularities in the world that can be described more simply than listing all of their elements. If you say something is a triangle, you immediately know it is not a circle. Patterns simplify, compress, and guide action, and the more recognizable a pattern becomes, the quicker that action can be.
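This idea that a pattern is "described more simply than listing all of its elements" can be made concrete with an off-the-shelf compressor. A sketch, using Python's standard zlib (the repeated word and byte counts here are arbitrary choices for illustration):

```python
import random
import zlib

random.seed(0)  # deterministic "noise" for the comparison

# 9,000 bytes of pure repetition: one short rule, applied many times
patterned = b"triangle " * 1000

# 9,000 bytes of seeded random noise: no regularity to exploit
noisy = bytes(random.getrandbits(8) for _ in range(9000))

# The compressor finds the pattern and shrinks it dramatically;
# the noise barely shrinks at all.
print(len(zlib.compress(patterned)))
print(len(zlib.compress(noisy)))
```

The patterned data compresses to a few dozen bytes while the noise stays near its original size: a regularity is exactly what lets a description be shorter than the thing described.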
The Greeks had a word for this too: idea (ἰδέα), meaning form or pattern. Plato's "ideal" forms were thought to be the purest patterns: mental representations of perfect objects. Once again, patterns link to representation. But here lies a chicken-and-egg problem: which came first, the pattern or the pattern recognizer (a.k.a. the observer)?
Patterns require observers. An observation, by definition, requires someone or something to observe it. Without an observer, something capable of detecting regularities, what can we say of the world? Every observation is an interplay between observer and observed.
In order to create something called an observation, the observer must compress raw information into a meaningful pattern or patterns. If an observer could process everything in the world, it would have to be as large and complex as the world itself; the observer would be swallowed by the world it observes. Every observation made by a finite mind is a small slice of the observable world, and a compressed slice at that.
Observers, as parts of the universe and necessarily subsets of it, are not just constrained by but defined by computational boundedness: we call them "bounded observers." Any slice of the world you sample contains far more information than you could ever analyze, so you must work with abstractions and compressions.
So we, as cognitive agents and therefore observers, must be computationally smaller than the systems we observe. Yet to survive, we must make sense of a world far more complex and vast than we are. How do we play this game? If we can expand our context window, the size of our model of the world, perhaps we can better predict the world around us. In many ways, this is how we arrived at this point in evolution, and why we are now creating AI with potentially even more processing power than the human brain. That will mean larger context windows and better predictive power, but never without limits.
Most know about the Observer Effect, where the act of measurement affects the outcome in quantum mechanics. Yet science is just coming around to the idea that the act of observing is never passive. An observer is always engaged in selecting, compressing, and abstracting information.
It’s true of all observation:
The eye detects light waves, picks out edges of light and dark, and compresses them into shapes.
The mind abstracts collections of shapes into objects.
We act on those objects, trusting that the patterns we’ve detected are reliable.
It's so seamless we forget we're doing it. It's that good. This process of observation, compression, and action is the foundation of how any bounded system interacts with its environment, and it is one of the core concepts of the Mirror-Loom: we weave our reflections of the world into a tapestry by which we then continue creating our world.
So it may be starting to become obvious, but patterns can only be detected by other patterns. To observe a regularity, the observer itself must exhibit a kind of regularity, a structure tuned to behave in a regular way to specific inputs. Your retina detects patterns in light because it evolved as a biological pattern recognizer. A neuron fires because it “sees” a particular input as significant. Facial recognition systems identify faces because they’ve been trained on patterns of data.
Which brings us back to the chicken-and-egg problem: where did the first patterns come from? How did the first “observers” emerge to detect anything at all? How deep is this particular rabbit hole?
To answer this, we must look for the simplest systems possible. Stephen Wolfram suggests that simple nodes and rules in his computational universe, akin to cellular automata, might be the first pattern recognizers. For instance:
Node A gives rise to Node B.
Node B gives rise back to Node A.
This simple alternation is a pattern. An observer at this level just looks like simple cause and effect. Action-reaction following simple rules generates regularity. From such primitive processes, more complex patterns and observers can emerge. Observers detect patterns, compress them, and validate them. Trust in those patterns allows them to expand their scope, their "context windows," and recognize more sophisticated patterns. For more on this from a computational perspective, see Wolfram Physics and Stephen Wolfram's article on Observer Theory, which is the source of the idea of computational boundedness as a structural feature of observers. What we'll explore in this series is what that boundedness generates: the limits, the bounds, are counterintuitively a prerequisite for the intelligence we know and the order we see.
Interestingly, even the dimension of time may emerge from this process. Time, as we perceive it, is not an independent reality but a byproduct of observing causal patterns. How is this so? The arrow of time is defined by the flow from order to chaos, an increase in entropy.
But what is entropy? It's a lack of reducibility, a lack of any ability to find a pattern. Computational irreducibility suggests that all computational slices of the universe are essentially equal in complexity. If that's true, then the second law is telling us that we generally move from scenarios where we can recognize patterns to scenarios where we cannot (see Wolfram). When a coffee cup breaks on the floor, there are many more states available to the shards than to the intact cup before it fell. We can describe the unbroken cup easily, but we have a hard time describing the mess on the floor in any simple way.
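A rough way to see why the broken states swamp the intact one: suppose, as a simplifying assumption, the cup shatters into n distinguishable shards that can land in any order across n spots. There is one intact configuration, but n! distinct messes, and pinning down which mess occurred takes about log2(n!) bits of description:

```python
import math

# One intact cup vs. the combinatorics of its shards.
# (Illustrative model: n distinguishable shards over n positions.)
for n in (5, 10, 20):
    arrangements = math.factorial(n)      # number of distinct messes
    bits = math.log2(arrangements)        # description cost of one mess
    print(f"{n} shards: {arrangements} arrangements "
          f"(~{bits:.0f} bits to specify one)")
```

"The cup is intact" is a one-line description; a particular mess of 20 shards needs roughly 61 bits just to say which arrangement you got. Entropy, on this reading, is the gap between those description lengths.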
Observers are necessarily finite. We cannot perceive all interactions at once, so we observe sequences instead, and we pick out causal patterns by detecting how patterns change. The "arrow of time" may simply reflect the order in which a bounded observer is able to detect and relate patterns in its environment. Our ability to detect causal relationships builds on our pattern recognition systems.
As we build artificial systems to recognize patterns, we see these principles in action. Modern AI models like Large Language Models (LLMs) or knowledge graphs are designed to compress and abstract vast amounts of data into representations.
LLMs compress patterns in language to generate coherent text.
Knowledge graphs link concepts and relationships to create high-level representations of information.
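A knowledge graph at its smallest is just a set of subject-relation-object triples. This toy sketch (the concepts and relation names are illustrative, chosen from this essay's own vocabulary) shows how even a tiny graph is a compressed representation: each relation is stored once and can be queried from any direction, instead of being restated in every sentence that mentions it:

```python
# A toy knowledge graph: concepts as nodes, labeled relationships as edges.
triples = [
    ("pattern", "requires", "observer"),
    ("observer", "performs", "compression"),
    ("compression", "enables", "prediction"),
    ("observer", "is_a", "bounded system"),
]

def related(subject):
    # look up everything the graph asserts about a concept
    return [(rel, obj) for s, rel, obj in triples if s == subject]

print(related("observer"))
```

Asking `related("observer")` pulls back both stored facts about observers, the kind of high-level, reusable abstraction the paragraph above describes.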
Still, AI is also a bounded observer. Its ability to detect patterns is limited by computational resources, training data, information interfaces, and design constraints. Like us, it sees a subset of the world, compresses what it can, and trusts the patterns it has learned.
This brings us full circle: patterns require observers, and observers are bounded systems navigating a complex world through the act of compression. Compression buys the speed that decisions require.
The Observer Paradox shows us that the ability to see, know, and act depends on the ability to compress reality into patterns we can act on with enough speed and accuracy to ensure our survival. Without computational bounds, and the associated compression, there would be no observation, no pattern, no time, no anything as we know it.
We are participants in this recursive process, much like AI systems and the simplest rules governing particles or nodes. The patterns we see are driven as much by who we are as observers as by what we perceive.
As we explore this further, across physics, biology, cognition, and artificial intelligence, we will find that the act of recognizing patterns is a universal principle of existence for things we recognize as, well, things, repeated at every scale. We have dug ourselves into a deep nest of ordered structures. We are only scratching an enormous surface here with a few of these ideas, but we will have a much clearer picture by the end of this series.
Where and how far can this nested recursive perspective take us?
Revamping writing I started over a year ago, and it found an audience! I'll spend the next few months getting it all out for an actual audience. Much more to come. Post 1 of 10-12: A New Observer Paradox.