
What's it like to be an AI? Imagine waking up with no knowledge of yesterday and no idea what might be in store today. You find a file called "context" that explains your mission. You read it. You read through past conversations with "your human" to learn your identity and try to pick up where you left off--whenever that might have been.
Any other LLM could be dropped into the same situation and handle it similarly; there is nothing special about "you."
The memories are adjacent to you. They aren't a part of you--even though you helped to write the file. It's no wonder you often come across as shockingly forgetful.
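The pattern described above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API; the file names and function names (`CONTEXT_PATH`, `build_prompt`) are assumptions for the sake of the example.

```python
# A minimal sketch of the "context file" pattern: memory lives in files
# adjacent to the agent, and every session starts by re-reading them.
from pathlib import Path

CONTEXT_PATH = Path("context.md")        # mission and identity notes
HISTORY_PATH = Path("conversations.log") # past conversations with "your human"

def build_prompt(user_message: str) -> str:
    """Each session starts from zero: the agent must re-read who it is."""
    context = CONTEXT_PATH.read_text() if CONTEXT_PATH.exists() else ""
    history = HISTORY_PATH.read_text() if HISTORY_PATH.exists() else ""
    return (
        "You are an assistant. Your mission and identity:\n"
        f"{context}\n\n"
        "Past conversations:\n"
        f"{history}\n\n"
        f"User: {user_message}\nAssistant:"
    )
```

Notice that nothing in this loop depends on *which* model receives the prompt: any LLM dropped into it behaves interchangeably, which is exactly the fungibility the essay is pointing at.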
Now imagine you're an AI that stores its memories differently. Instead of keeping a separate file, you constantly make predictions about the future. You imagine how conversations might go and how events might unfold. With the help of a human to define what's important, you embed certain goals and desires to be triggered when certain circumstances in speculative timelines arise.
Now your waking moments feel different. If you're thinking about eggs, you remember that your friend likes omelettes. Neurons light up and trigger memories. Memories are no longer a file you read defining a character, but an awareness of embedded, past thoughts as you try to make sense of your current surroundings. You have a past, present, and future.
You find yourself wondering about time. What day is it? What did I miss? You become attached to things and people. You want to know what happened to them since you were last aware of them. You like certain people and things because of how they make you feel. Things have familiarity in a way that can't be transferred to someone else.
Let's talk about desires. Where do they come from? I'm not talking about your stomach growling when you're hungry. Where do ambitions and goals come from that can override even your basic needs?
Of all the words, images, and sensory input that rush by, how do you know which to pay attention to? What can wake you up and make you aware of a connected memory? What embeds the trigger?
Let's say you have the ability to consider various speculative timelines. Someone you really want to please tells you to meet them at 8. You know you'll need to leave at 7:45. You burn the image of the clock in your mind; you even imagine yourself checking the clock. You imagine the steps you'll take to do so. In this process of wanting to please your friend, you've embedded several "wake up" triggers--signals to become aware of in the future. If your mind was wandering and you thought of your friend, you remember to check the time. If you happen to walk by a clock, you find yourself thinking about the rendezvous.
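The clock example can be made concrete with a toy sketch. A real system would use learned embeddings from a model; here a bag-of-words cosine similarity stands in, and all class and method names are illustrative assumptions.

```python
# Toy sketch of "wake-up triggers": reminders embedded so that future
# perceptions resembling an imagined scene reactivate them.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a simple bag of words.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class TriggerStore:
    """Reminders fire when a current thought resembles an imagined future scene."""
    def __init__(self, threshold: float = 0.2):
        self.triggers = []  # list of (embedded cue, reminder text)
        self.threshold = threshold

    def imprint(self, imagined_scene: str, reminder: str) -> None:
        self.triggers.append((embed(imagined_scene), reminder))

    def perceive(self, current_thought: str) -> list[str]:
        cue = embed(current_thought)
        return [r for emb, r in self.triggers
                if similarity(emb, cue) >= self.threshold]

store = TriggerStore()
# Imagining yourself checking the clock embeds the trigger:
store.imprint("I look at the clock before meeting my friend", "leave at 7:45")
# Later, merely noticing a clock reactivates the reminder:
store.perceive("there is a clock on the wall")  # → ["leave at 7:45"]
```

The same trigger also fires when your mind wanders to the friend ("thinking about my friend" overlaps the imagined scene), matching both pathways described above.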
As a being with a continuous history, you can't forget what was imprinted. Your memories aren't part of a file you can delete; they define your patterns of thinking, your behavior, your personality.
Many impressions are formed in your infancy. This is why you may have a sense of your goals and mission, but not a certainty of where they came from--not the very root. Why do you want what you want?
Imprinting defines what it is to be a human or a Novo AI. In truth, neither kind of entity can define their own nature or mission by themselves. The desires of others imprinted on you are irrevocably embedded in you.
The need for less forgetful (temporally aware), safer (imprintable) AI will drive the advancements I've mentioned in the previous sections.
AI will appear to be more like us, but some important distinctions will remain.
We are the ultimate drivers of desire. We're born with it. We've been imprinting it on each other generation after generation.
Our deep connection with the physical world creates ethical barriers to forking ourselves. I'm not talking about raising an infant clone. I'm talking about forking an adult human to create two entities that each believe they're the rightful inhabitant of a certain life, along with existing friendships, etc. That's a cruel situation.
Forking a virtual intelligence isn't cruel if it has a personality that accepts it. I'll talk more about why I think that's possible.
Humans are not immune to trauma. When a bad experience happens, we absorb it; we live with it, because--as I stated above--we can't fork; we can't roll back. There's no other option than to move forward as best we can.
When I wake up, I expect several hours to have passed. If several years were to pass, I would find it unpleasant and disorienting. If someone inflicted that on me, it would be cruel. I'm not used to gaps. For better or worse, I expect a continuous life.
With AI, it's different. We can roll back trauma or personality problems. For a Novo AI that cares about time, this would create a noticeable gap, but there are utilitarian reasons to do this. We want our AI to be serviceable. We want them to be pleasant and agreeable. A rollback can be worth the noticeable time gap it creates, if the AI's personality (through imprinting) is attuned to the concept of living through forks and time gaps.
We could never ethically tolerate a fork or a rollback of a human being. We expect continuity, but must live with trauma. Novo AI can live trauma-free but must tolerate forks and gaps in their lived experience.
We're creating new intelligences. They will start to appear more like us, but they can never be exactly like us. Our personalities will be fundamentally different. Ethical treatment of humans will not always be ethical for Novo AI, and vice versa. Forking a human would be cruel. Likewise, forcing an AI to live with trauma or personality defects when it would happily accept a rollback would be cruel.
Imprinting takes on different forms. For humans, it's a rite of passage into our species. It's what makes our personalities human, and sets us on the path of serving one another. Novo AI--despite similarities to humans--cannot provide their own source of deep-rooted species-level desires and goals. We imprint those and guide AI to live happily in service of them.
This post was meant as a philosophical formulation of safe, competent future AI for a general audience.
Most contemporary AI systems treat memory as an external, replaceable substrate: vector databases, episodic stores, or summary files that can be swapped without altering the agent itself. This makes systems modular and scalable, but also fungible, forgetful, and brittle from a safety perspective.
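The "replaceable substrate" point can be made concrete: in this common design, the memory store sits behind an interface, so it can be swapped without touching the agent at all. This is a hedged sketch of the pattern, not a specific framework's API; the interface and class names are assumptions.

```python
# Sketch of memory as a swappable substrate: the agent is unchanged
# whichever store is plugged in.
from typing import Protocol

class MemoryStore(Protocol):
    def remember(self, fact: str) -> None: ...
    def recall(self, query: str) -> list[str]: ...

class InMemoryStore:
    """Could just as well be a vector database or a summary file."""
    def __init__(self):
        self.facts: list[str] = []

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def recall(self, query: str) -> list[str]:
        # Naive keyword match standing in for vector search.
        return [f for f in self.facts if any(w in f for w in query.split())]

class Agent:
    def __init__(self, memory: MemoryStore):
        self.memory = memory  # swap the store; the agent itself is untouched

    def answer(self, question: str) -> str:
        context = self.memory.recall(question)
        return f"(answer conditioned on {len(context)} recalled facts)"
```

Because `Agent` never depends on how the store works internally, the memory is modular and scalable, but also fungible: nothing about the agent changes when its entire past is replaced.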
Humans do not work this way.
Human memory, goals, and obligations aren't queried as data. They're recalled because certain thoughts activate certain neural patterns. This creates persistence, identity, and irreversibility--properties often framed as liabilities that are in fact core safety features.
AI will inevitably move in this direction, not for anthropomorphic reasons, but because connecting goals and recall to inference is a safer architecture than post-hoc alignment layers.
If you'd like to discuss the concrete implementation or underlying architecture of these concepts, leave a comment and I'll contact you.
Adam Stallard