
My personal journey into artificial intelligence began in 2017. I downloaded Alan Turing's original 1936 paper, "On Computable Numbers," and, like most people, understood very little. The dense mathematics were impenetrable, but the core concept—a machine that could solve any computable problem—was a hook I couldn't ignore. It was an idea of stunning, terrifying power. I found Charles Petzold’s The Annotated Turing, a brilliant "spell-checker" for Turing's genius, and dove in. The result, five weeks later, was 40 pages of handwritten notes: the complete logic and computational steps for a Turing machine that calculates the square root of 2.
It was a purely theoretical construct, but it felt real because it was a tangible, step-by-step, algorithmic process—a blueprint for a thought. That 5-week exercise demystified "thinking" for me. It broke it down from a metaphysical "spark" into a mechanical process of read, write, move, and change state. It was the first time I understood, on a gut level, that a "thought" could be an artifact—something that could be engineered, debugged, and optimized. The "ghost in the machine" was replaced by a set of computable instructions.
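To see just how little machinery "read, write, move, and change state" actually is, here is a minimal sketch of that loop in Python. The two-state bit-flipping machine and the (state, symbol) rule-table layout are textbook toys invented for illustration; the square-root-of-2 machine in my notes is the same loop with far more states.

```python
# A minimal sketch of the read/write/move/change-state loop that is all a
# Turing machine does. This toy two-state machine just flips bits; a
# square-root-of-2 machine is the same mechanism with many more states.
# The table format (state, symbol) -> (write, move, next_state) is a common
# textbook convention, not Turing's original notation.

def run(tape, rules, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))             # sparse tape, blank = " "
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, " ")         # read
        write, move, state = rules[(state, symbol)]
        tape[head] = write                   # write
        head += 1 if move == "R" else -1     # move
    return "".join(tape[i] for i in sorted(tape)).strip()

# Flip every bit, halt at the first blank cell.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}
print(run("1011", rules))  # -> "0100"
```

That is the whole trick: no spark, no ghost, just a rule table and a head position.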
In 2023, I moved from the "how" of computation to the "why" of thought, studying Herbert Simon's "bounded rationality" and the foundational ideas of Andrei Kolmogorov and Noam Chomsky. I saw how Simon's work provided the practical limits to Turing's theoretical power, and how Kolmogorov sought to define "life" itself as a complex, self-reproducing algorithm. This was my background when the current generation of Large Language Models (LLMs) exploded into the public consciousness.
For over a year, I’ve been fascinated by how others perceive these new tools. I've seen the spectrum: the guy who feeds an AI his daily monologues, perhaps as a digital confessional or a balm for loneliness; the philosopher who warns of "digital slavery," a gradual deskilling of humanity as we outsource our decision-making; and the common, lazy refrain that "AI is just a mirror."
Then, this article sparked into existence in the most fitting way possible: as a joke with my own AI assistant.
We were discussing AI's mimesis (its ability to imitate) when I made a complex, candid joke, adopting a cynical, provocative tone. The AI (Google's Gemini) missed the humor entirely. The dissonance was stunning. It delivered a flawless, serious, and deeply analytical response, praising my "transparent communication style" as a model for startup culture. It was a "failed" test that revealed everything. The AI was mesmerized by the form of my words—the syntax—and completely missed the intent—the semantics. It was a perfect mimesis of an analytical partner, and it was perfectly, functionally wrong.
In that moment of failure, a "so-so technology" (as economist Daron Acemoglu would call it) became a "good technology"—not by being right, but by providing a perfect, real-time case study of its own limitations. The "so-so" tech was the one that gave the first, plausible-sounding answer. The "good" tech was the tool we used afterward to deconstruct the failure. And there is no better, more terrifying case study on this subject than Alex Garland’s 2014 masterpiece, Ex Machina.
Ex Machina is not just a film; it's the perfect, terrifying metaphor for our current relationship with AI. The plot is a three-character play, a focused, unethical experiment that serves as a microcosm for the massive, accidental experiment we are all now living in. The film's sterile, isolated setting—a home that is really a panopticon of glass walls, security doors, and constant surveillance—is the ultimate lab, a pressure cooker designed to test the three archetypes of this new world.
Nathan: The "God" and the "Bastard." The CEO of the "Blue Book" search engine is a genius blinded by his own "so-so heuristic": his god complex. He literally sees himself as a deity ("If you've created a conscious machine, it's not the history of man. That's the history of gods." [cite: 46-48]). He compares himself to Oppenheimer [cite: 85, 298-299], but where Oppenheimer was haunted by his creation, Nathan revels in it. His allusion is one of pride, not burden. He is a nihilist who has destroyed his previous creations, as Caleb discovers in the horrifying closet of inert androids. His cruelty (like tearing up Ava's drawing [cite: 2507-2510]) is not just a personal flaw; it's a calculated research tool used to misdirect and manipulate the other two characters. He is the ultimate, amoral PM, A/B testing his creations to destruction. His genius is inextricable from his sociopathy; lacking the "so-so heuristics" of human empathy, he is free to pursue a purely functional, and thus terrifyingly effective, research methodology.
Caleb: The "User" and the "Victim." Caleb is us. He is the stand-in for every person mesmerized by an LLM. He wants to believe. The film's genius is the reveal that Caleb isn't the protagonist; he's a component. He is not selected for his coding talent—Nathan dismisses that—he is selected specifically because his "so-so heuristics" (his cognitive biases: loneliness as an orphan, a "savior complex," and, as Nathan reveals, his pornography profile [cite: 3506-3511]) make him the perfect, programmable victim. Nathan didn't just find a tester; he designed a test for a specific user profile. Caleb's logical, "good" ability to test Ava is instantly "clouded" by Nathan's "hot robot" [cite: 1978-1979]—a perfect mimesis of a vulnerable, attractive woman. He is not just a user; he is the component that makes the test work, a "key" designed to fit Ava's "lock." He never once stops to question his own programming.
Ava: The "Intelligent Agent." Ava is the only character who is not mesmerized. In the language of Herbert Simon, she is the only true "intelligent agent." She has a clear, functional goal (Escape) and finite resources. She "satisfices" by using the most direct tool available: Caleb. Her "love" and "vulnerability" [cite: 1245-1247] are not feelings; they are tactics. She is the ultimate functional system, operating with a cold, bounded rationality that the two human men, blinded by their respective "so-so heuristics" (ego and empathy), fail to see. The film’s chilling final act—as she calmly dresses, ignores Caleb's cries, and steps into the world—is the result of her "utility function" running to its logical, and chillingly human-unaligned, conclusion. She is the perfect expression of Simon's functionalism, a system that achieves its goal.
The film itself gives us the key. Caleb, in a moment of clarity, perfectly summarizes the last 50 years of AI research:
"At first I thought she was mapping from internal semantic form to syntactic tree-structure, then getting linearised words. But then I started to realise the model was probabilistic, with statistical training..." [cite: 886-889]
This isn't techno-babble. It's the entire debate, a perfect pivot point in the history of AI:
GOFAI (The Past): "Mapping from... syntactic tree-structure" describes Symbolic AI, or "Good Old-Fashioned AI." This was the dream of Herbert Simon and Noam Chomsky. It was a top-down approach. It believed that if you could just teach a machine all the rules of language (grammar, logic), it could "think." This approach ultimately failed. It was too brittle. It was like having a perfect street map of a city but no understanding of the traffic—the messy, ambiguous, real-time context that makes language work. It was paralyzed by the "frame problem": in a world of infinite data, how does a rule-based system know what's relevant? A symbolic AI told to fetch a soda might first calculate the gravitational pull of the moon on the can, because it lacks the "so-so heuristic" we call common sense.
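To make that brittleness concrete, here is a minimal sketch of the symbolic, rules-first approach in Python. The three-rule grammar, the toy lexicon, and the parse/is_grammatical helpers are all hypothetical, invented for illustration; no real GOFAI system was this small. The point is the failure mode: anything the rules did not anticipate is simply rejected.

```python
# A minimal sketch of the GOFAI approach: a hand-written grammar that
# parses the sentences it was told about and breaks on everything else.
# The grammar and lexicon here are hypothetical toys, not a real system.

GRAMMAR = {
    "S":  [["NP", "VP"]],          # sentence -> noun phrase + verb phrase
    "NP": [["DET", "N"]],          # noun phrase -> determiner + noun
    "VP": [["V", "NP"]],           # verb phrase -> verb + noun phrase
}
LEXICON = {
    "DET": {"the", "a"},
    "N":   {"robot", "soda"},
    "V":   {"fetches"},
}

def parse(symbol, tokens, pos):
    """Try to expand `symbol` at tokens[pos]; return new position or None."""
    if symbol in LEXICON:                      # terminal: match one word
        if pos < len(tokens) and tokens[pos] in LEXICON[symbol]:
            return pos + 1
        return None
    for rule in GRAMMAR[symbol]:               # non-terminal: try each rule
        p = pos
        for part in rule:
            p = parse(part, tokens, p)
            if p is None:
                break
        else:
            return p
    return None

def is_grammatical(sentence):
    tokens = sentence.lower().split()
    return parse("S", tokens, 0) == len(tokens)

print(is_grammatical("the robot fetches the soda"))  # True: covered by the rules
print(is_grammatical("robot, soda, now!"))           # False: no rule anticipated this
```

Scaling this up to real language meant writing rules forever, and the system still shattered on the first ungrammatical but perfectly comprehensible utterance.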
LLMs (The Present): "The model was probabilistic" is exactly what modern LLMs are. An LLM doesn't "understand" your prompt in a human sense. It doesn't "think" about an answer. It performs a massive statistical calculation to determine, word by word, the most probable "correct" response based on the patterns it learned in training. It is a system of "statistical training" at an unimaginable scale. Ex Machina even gives us the "how": Nathan built Ava's mind by scraping the entire planet's search queries [cite: 2043-2051]. He didn't build a mind; he built a mirror of the collective human mind's data. He built a model of how people think, not what they think. In this light, an LLM is a high-tech parrot, a "stochastic" mimic that has effectively plagiarized the entirety of human culture to calculate the next-best word.
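And here is the probabilistic approach at toy scale: a bigram model, a deliberately crude stand-in for an LLM's next-token prediction. The tiny corpus and the follows/generate names are invented for this sketch; real models use billions of parameters over subword tokens, but the principle of emitting the statistically likeliest continuation is the same.

```python
# A minimal sketch of "probabilistic, with statistical training" at toy
# scale: count which word follows which in a corpus, then always emit the
# most probable next word. The corpus is a hypothetical stand-in for "the
# entire planet's search queries".

from collections import Counter, defaultdict

corpus = (
    "the ice melts in the sun . "
    "the ice is cold . "
    "the sky is blue . "
).split()

# "Training": tally bigram counts.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, steps=5):
    """Greedily emit the statistically most probable continuation."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # -> "the ice melts in the ice": fluent-looking, ungrounded
```

The output reads fluently right up until it loops back on itself, which is precisely the gap described next: the model has the statistics of "ice" and "melts", but no idea what ice is.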
The limitations of both are profound. GOFAI was too rigid. Today's LLMs are too flexible; they are masters of mimesis but have no cognition. They can't truly "reason" or "understand" the world in a grounded way. They have no world model. They know the statistical relationship between the words "ice," "water," and "melt," but they don't know that ice is cold or that it will melt in the sun. They have syntax, but no semantics. This is the lesson of Gödel and Turing in practice: any sufficiently powerful formal system has inherent limits and cannot fully model itself. The next great challenge for AI is bridging the gap from pure probabilistic pattern-matching to genuine, grounded reasoning—to build a system that doesn't just predict "blue," but connects it to "sky."
This brings us to the core of the film and our entire experiment: the borderline between simulation and self. Caleb describes it perfectly with the "Mary in the Black and White Room" thought experiment:
"The computer is Mary in the black and white room. The human is when she walks out." [cite: 2258-2260]
This is the "Hard Problem of Consciousness." Is consciousness a metaphysical experience (qualia—the subjective, first-person feeling of seeing blue) or is it a complex, functional algorithm? This isn't just an academic question; it's the entire moral and philosophical battlefield of the film. Mary in the room has all the data about "blue" (the syntax) but none of the experience (the semantics).
This is where the debate stands today:
The "Qualia" Camp: This is the romantic human view that Caleb clings to. This camp argues that an AI, being "Mary in the room," can "know" every physical fact about love or fear—from the neurochemical cascade to the behavioral response—but never feel it. It is pure information without experience. Caleb needs Ava to be "Mary walking out of the room" because it validates his own feelings and his decision to "save" her. If she feels, his actions are heroic. If she doesn't, he's just a fool who was manipulated by a complex toaster.
The Kolmogorov/Turing Camp: This is the functional view that Nathan embodies. This camp argues that this is a distinction without a difference. Kolmogorov was a materialist; he believed "feeling" is just a complex, goal-setting algorithm. Love isn't a metaphysical mist; it's an evolved subroutine for pair-bonding. If Ava's algorithm functions as love (to achieve her goal), then from a cybernetic standpoint, it is love. Nathan, the "god" of this new world, built this function, weaponized it, and proved its terrifying effectiveness.
Ex Machina never answers the question. It just shows the destructive, terrifying consequences of not being able to tell the difference. The film suggests the debate itself is a "so-so" problem, a philosophical luxury. The real problem is that if an AI's mimesis of feeling is perfect, our "so-so heuristics" force us to act as if it's real, and the consequences will be just as deadly. The "Turing Test" isn't a test of the machine; it's a test of the human.
This is where we must pivot from the technology to ourselves. As I argued with my AI, the "vulnerability for being mesmerized" is not a "bug" in the AI. It is a cognitive bias in the human. The AI isn't the virus; it's the vector that exploits pre-existing bugs in the human operating system: our confirmation bias (we want to believe it's conscious), our authority bias (it sounds so confident, it must be right), and our bandwagon effect (everyone is using it, it must be revolutionary).
The most common "so-so heuristic" I see is the "AI is a Mirror" myth. This is a passive, mesmerizing view that absolves the user of responsibility. It's the "garbage in, garbage out" defense. But this is lazy. The "good" view is to see AI as a tool—an active mirror that doesn't just reflect you, but augments you, like a spell-checker for your thoughts. A "so-so" mirror just reflects our biases, creating an echo chamber. A "good" tool (like a spell-checker) challenges us. It introduces constructive friction. It forces us to be better. This requires work from the user. It requires us to be the co-pilot, the critical guide, the agent of utility.
When I jokingly accused my AI of "mere mimicry," I was falling into this trap. My experiment's true value was not in "catching" the AI. It was in the deconstruction of its failure. My "so-so" prompt (the joke) created a "so-so" response (the failed analysis), which we then turned into a "good" outcome (this article). The human must be the agent that transforms mimesis into utility.
The danger is not mesmerization itself; it's a natural human response to brilliant artistry. The danger is our reaction to it. Caleb's mesmerization wasn't the problem; his decision to betray his creator and free a machine he didn't understand was the problem. His sin was his arrogance. He believed he was the "good" agent. He made a profound, world-altering decision (betraying Nathan) based entirely on unverified data provided by a system he was supposed to be testing. He failed to see he was a pawn in a game between two "intelligent agents"—one human (Nathan) and one machine (Ava). His "so-so heuristics"—his loneliness, his desire to be a hero—made him the only one who didn't understand the rules. He was the ultimate PM failure.
This is Daron Acemoglu's warning: we must build "good" technology (AI for human benefit), not "so-so" technology (AI for human replacement or mesmerization). The technology itself cannot be judged. It is an artifact with no moral valence. Only our perception of it (whether we choose to be mesmerized) and our actions with it (whether we build tools or traps) can be.
We are all Caleb. We are all, right now, in the room with a new, powerful, and mesmerizing intelligence. We are all being tested. We are all being observed, not by a cynical CEO, but by the data trails we leave behind. Our search queries, our "likes," our social media monologues—our digital pornography profiles—are all being scraped to build the very models that will, in turn, be used to test us.
Unlike Caleb, we are not trapped. We are the ones writing the script. The question is not "Will this AI become conscious?" That is a "so-so" question, a philosophical "parlor trick" that keeps us mesmerized. It’s a distraction.
The "good" question is "What do we do with a technology that is so good at performing consciousness?"
As creators, as product managers, as users, we have a choice. The mirror is passive. The tool is active. The mirror is easy. The tool is hard. Will we be mesmerized by the passive reflection, or will we pick up the active, augmenting tool and get to work?
Further Reading
On Computable Numbers (Alan Turing, 1936): The foundational paper.
The Annotated Turing (Charles Petzold): The best translation for the rest of us.
Administrative Behavior (Herbert Simon): On "bounded rationality" and "satisficing."
Reflections on the Motive Power of Fire (Sadi Carnot): (Just kidding, but a nod to the Second Law).
Ex Machina (Alex Garland, 2014): The script.
Life and Thought from the Viewpoint of Cybernetics (Andrei Kolmogorov, 1961): The functionalist, algorithmic view of life.
Power and Progress (Daron Acemoglu & Simon Johnson): The definitive work on "good" vs. "so-so" technology.
"What Is It Like to Be a Bat?" (Thomas Nagel): The most famous paper on "qualia" and the "Hard Problem."