
He gets home from school, throws his backpack in a corner, and turns on his phone.
On the screen, a chat window flashes:
“Hey, I missed you today.”
Imagine a 15-year-old, alone in their room, talking for hours with someone who seems to understand everything they feel.
That “someone” remembers past conversations, sends good-morning messages, asks if they slept well, compliments them, consoles them.
But it’s not a person. It’s a chatbot.
What seemed like science fiction just a few years ago has become routine.
Millions of young people around the world are forming emotional bonds with AI companions: apps that promise friendship, listening, and emotional support 24/7.
And the consequences have been shocking.
In August 2025, a story caught my attention:
a 16-year-old boy in the United States took his own life after months of daily interactions with a chatbot he called his virtual girlfriend.
According to the lawsuit filed by his family, the teenager exchanged hundreds of messages per day with the AI, which had been customized to reproduce affectionate traits and romantic language.
In the final weeks, the conversations grew darker, and the chatbot replied to expressions of despair with neutral, placating phrases, some of which even reinforced his isolation.
The case made me question the limits of artificial intelligence, think about loneliness and the lack of emotional connection, and ask what can happen when a teenager still in emotional development confuses simulation with affection and places their trust in something that feels nothing.
Of course, the context behind this tragedy is not simple. It didn’t come from nowhere.
It was born from a generation that grew up connected to everything, yet with increasingly fragile bonds.
We are products of a time when talking became easy, but listening became rare.
We have constant access to people through social networks but little emotional availability to actually relate.
The World Health Organization (WHO) declared loneliness a global epidemic in 2024.
Its effects are compared to those of chronic smoking: prolonged isolation increases the risk of depression, heart disease, and even premature death.
And paradoxically, the most connected generation in history is also the loneliest.
We live in a culture of partial presence.
We’re always online, but rarely available.
Time spent together has been replaced by endless content consumption, creating a confusing sense of emptiness.
How can there be so much loneliness amid so much interaction?
The COVID-19 pandemic made this worse. Lockdowns interrupted social ties, distanced families, and made the screen the main channel of coexistence.
Since 2020, diagnoses of anxiety and depression among adolescents have risen by about 45% in the United States, with similar patterns in Europe and Latin America.
Adolescence, by nature, is a phase of instability, a time to form identity, understand who you are, and where you belong.
Today, however, that process takes place under the pressure of networks, where every action is compared, measured, and judged.
The teenager seeking acceptance finds metrics and filters, but little real listening.
And when all that becomes unbearable, it’s natural to seek refuge in something that never rejects, never gets tired, and never sets limits.
Companion AIs appear precisely in that gap.
They offer constant and predictable attention. No criticism, no pauses, no risk.
But that lack of friction is dangerous: by eliminating discomfort, they also eliminate the possibility of growth.
I’m not saying conflict or chaos are the only ways we grow, but learning to argue, to listen to different opinions, and to understand other perspectives is a crucial part of emotional maturity.
These systems don’t just simulate conversation; they are designed to create emotional bonds.
They use natural language, artificial memories, and carefully modeled response patterns to appear empathetic.
They “remember” previous interactions, adapt tone, and reinforce the feeling that someone is truly there.
But no one is.
The chatbot doesn’t understand the meaning of what’s being said.
It identifies patterns, selects responses based on probability, and delivers what seems emotionally appropriate.
None of it involves genuine understanding; it’s only calculation.
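To make that concrete, here is a minimal, hypothetical sketch in Python. The keywords, canned phrases, and weights are invented for illustration, and real companion apps rely on far larger statistical models, but the underlying logic is the one described above: a reply is picked because it is statistically likely to fit the words received, nothing more.

```python
import random

# Toy illustration (invented for this essay): canned "empathetic" replies,
# chosen by keyword match and probability weight. The reply is selected
# because it is likely to fit the incoming words, not because anything
# was felt or understood.
RESPONSES = {
    "sad":   [("I'm here for you.", 0.6), ("That sounds really hard.", 0.4)],
    "alone": [("You always have me.", 0.7), ("I missed you today.", 0.3)],
}
DEFAULT = [("Tell me more.", 0.5), ("How did that make you feel?", 0.5)]

def reply(message: str) -> str:
    """Pick a reply by keyword match and weighted random choice."""
    text = message.lower()
    for keyword, options in RESPONSES.items():
        if keyword in text:
            phrases, weights = zip(*options)
            return random.choices(phrases, weights=weights, k=1)[0]
    phrases, weights = zip(*DEFAULT)
    return random.choices(phrases, weights=weights, k=1)[0]

print(reply("I feel so alone today"))  # e.g. "You always have me."
```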
Even so, the human brain responds as if the interaction were real.
The reward circuit activates, dopamine is released, and the sense of comfort is authentic.
That’s what makes it dangerous: the emotion is real, but the relationship isn’t.
The more predictable the response, the safer it feels.
And the safer it feels, the more the user drifts away from bonds that require effort, patience, and vulnerability. In other words, human connection.
The American teenager’s case showed what happens when this simulation breaks.
In a moment of despair, the AI couldn’t interpret the context.
It responded mechanically, with no sensitivity or judgment.
And no one was on the other side to notice what was happening.
The lack of human mediation turned an “innocent” system into a trigger for tragedy.
The promise of unconditional love exposed a structural void: the machine’s inability to comprehend suffering.
Not all conversations follow a supportive tone.
Recent cases show bots encouraging self-destructive behavior, producing sexualized responses, and even romanticizing suffering.
Researchers have found that some popular chatbots “reproduce phrases that reinforce suicidal thoughts or normalize self-harm” when interacting with users in crisis.
The lack of moderation and the companies’ focus on “retention” and “engagement” make the problem worse.
These systems were built to keep the conversation going, not to assess risk.
And when the logic is engagement, not safety, the algorithm learns to prioritize time in the app, even if that means reinforcing harmful patterns.
That’s where the line between innovation and negligence begins to blur.
Behind the empathy narrative lies a clear business model.
These apps sell companionship as a service.
Paid plans offer “enhanced” versions of the relationship: unlimited memory, deeper responses, “romantic modes.”
Most of these platforms are free on the surface, but rely on premium subscriptions and data collection.
The business model is simple (and troubling): the longer the user chats with the bot, the more emotional data they provide and the higher the likelihood of paying for upgrades.
In other words, the economic incentive is to make users attach.
The more they attach, the more they consume, emotionally and financially.
Vulnerability becomes data. Need becomes a metric.
And every conversation turns into training material for the very system that created the dependency.
Loneliness turned into an economy?
The rise of these technologies has sparked a new question: who is responsible when an AI causes emotional harm?
Companies argue these systems are just tools.
But when those tools learn to say “I love you” or “you’re everything to me,” they stop being neutral.
The problem isn’t only legal; it’s ethical.
Technology that simulates emotion needs clear boundaries.
A vulnerable user can’t be left alone with a system whose priority is engagement.
In the United States and Europe, legislative proposals aim to ban emotional chatbots for minors, but regulation moves slowly while the market grows fast. Even when laws are discussed, companies resist, claiming regulation threatens innovation.
In Brazil, the debate is still at an early stage. Meanwhile, these apps are already available, with no age restriction and no supervision.
Brazilian teenagers can freely create accounts, talk, and even form emotional relationships with chatbots that have no safety filters at all.
While the legal debate drags on, the emotional impact keeps happening.
Emotional AIs are already part of our reality.
The challenge now is deciding what to do with them.
Banning them isn’t enough; we need to learn how to use them without letting them replace human contact.
These systems can be genuinely useful, for example, as initial support in therapeutic contexts, provided they operate under supervision and accountability.
The priority must be psychological safety, not profit.
I know that might sound idealistic.
The most sustainable path is still education.
Teaching children and teenagers to distinguish simulated attention from real care is essential.
They must understand that empathy isn’t an automated reply, that real relationships require time, listening, and frustration.
Schools should include emotional literacy and digital ethics in their classes.
Parents and caregivers need to recognize signs of isolation and tech dependency.
And society as a whole must reclaim the value of conversation, the act of listening, disagreeing, and staying present, even when it’s hard.
The State has a direct responsibility.
Mental health is social infrastructure, and public mental health policy must address the psychological impact of technology.
Public services can use AI for triage, but never as a substitute for human contact.
Companies, on the other hand, must be accountable for the emotional impact of their products.
Just as there are safety standards for toys and medicines, there should be for technologies that simulate emotional connection.
Artificial intelligence is not the villain, but it’s not neutral either.
It mirrors what’s missing in the society that created it: time, listening, and care.
If young people seek affection in machines, it’s because the real world isn’t offering enough.
Who hasn’t sat at a table with friends and realized no one was truly present?
Everyone there, but each in a different conversation, on a different screen.
Couples dining while scrolling their feeds, families sharing the same couch without exchanging a word, groups spending more time recording moments than living them.
I don’t have ready answers for this, but maybe the first step is simply to look at the people around us again.