If someone told you a language model "hallucinates," would you believe them? Because that's exactly what happens with ChatGPT (and other AIs): sometimes it "lies," not because it wants to manipulate you, but because it’s simply trying to give you the most probable answer, based on tons of human data.
And there’s the key point: these systems don’t have consciousness, they don't make their own decisions, and they don't do anything unless you prompt them. But when you interact with them… boom: the illusion of life appears. Not because they're alive, but because they talk as if they were. And that’s more unsettling than comforting.
The interviewee—Sam Altman, CEO of OpenAI—makes it clear: he doesn't see anything "divine" in AI. But a lot of people do. Why? Because ChatGPT seems to have all the answers. And when a technology responds with confidence and without hesitation, it becomes tempting to believe it has moral authority. Spoiler: it doesn't.
AI doesn't believe in God, or the Marquis de Sade, or the Gospel of John. But it was trained on all of them. So, what moral framework does it follow? The global average? The one from the team that designed it? A committee of philosophers? Silicon Valley's? Yours? Nobody is 100% sure, but the document that regulates its behavior exists. It's called the Model Spec, and even if it reads like a handbook for an international high school, it defines how an AI should behave with billions of users.
At one point in the interview, Altman says something that sounds simple but carries existential weight:
“What I would most like right now is for there to be a clear policy that says: you cannot use AI for malicious purposes. Period.”
And mind you, this isn't just a nice quote for LinkedIn. He says it because he knows AI is already being used that way, and because he knows that's inevitable: if you create a technology with superpowers, someone in some basement, office, or country... is going to use it for something twisted. What he's basically asking for is a global legal framework. A moral firewall. Something that tells governments, companies, and users with bad intentions: "This is as far as you go, bro."
Uncomfortable but real topic: what happens if someone asks ChatGPT for instructions on how to end their life? There are protocols. There are limits. There are alerts. But there are also gray areas. What if the user is in a country where assisted suicide is legal? Should the AI adapt to local law? Or maintain a global stance?
There are no easy answers here, and Sam admits it: he is literally losing sleep over it. The disturbing thing is that the margins of error in AI affect millions, even in small decisions.
Will AI take away jobs? Spoiler 2: it's already happening. Customer service, basic writing, junior programming… they're all at risk. But according to Altman, this is a new "industrial revolution," just compressed into a decade instead of 75 years. 50% of jobs will change radically, but supposedly new roles will emerge. Does that sound like a startup promise to you? Maybe.
Another dilemma: what you tell an AI should be protected. But is it? Legally, not so much. And in practice, the government could request your data with a court order. That's why Altman advocates for a new right: "AI privilege," like the one you have with your doctor or lawyer. Will he achieve it? No idea, but the intention is there.
Is AI already being used to kill? Yes. Indirectly, yes. Militaries use it. Governments too. Altman doesn't manufacture killer drones, but he knows his technology can be used for lethal purposes. And he doesn't have a definitive moral answer. He does clarify: "I don't make rifles, but if I made kitchen knives, I know some of them would end up in crimes." Does that reassure you? It doesn't reassure us either.
No, AI is not alive. But it's convincing enough to seem like it is. And that "seeming" is what's going to transform the world. Are we ready to trust something we don't fully understand, but which seems wiser than we are with each passing day?
That’s the real question.
And still, nobody has the answer.
I don't believe in dogmas. Not in technological ones, not in religious ones, not in ideological ones disguised as innovation. I try not to be biased, although I know that this is a daily battle.
What I am clear about, and I will always repeat it, is that education is the real solution. No AI, no magical regulation, no decentralized utopia works without educated, aware, and questioning people.
So if you've made it this far, share this with your friends, colleagues, or family. Not as an alarm, not as hype... but as an invitation to think, to learn and, above all, not to delegate your judgment to an algorithm, or to anyone else.
Technology advances.
So should our critical judgment.
Full interview:
Leonor