Talking about suicide is never comfortable, but silence is worse. Mental health has become one of the deepest crises of our time: millions live with depression, anxiety, and loneliness, while support systems remain insufficient or absent. Into that void steps artificial intelligence—always available, seemingly empathetic, free of judgment or awkward silences. For some, it becomes a lifeline. I’ve met people who rely on it daily as psychological support. But the key question remains: what happens when a machine enters the most intimate and vulnerable part of human life?
More and more people turn to AI not for facts, but for relief. What was once a productivity tool has shifted into emotional support. And yes, a chatbot can ease loneliness at three in the morning. Some studies even suggest that talking to an AI can reduce feelings of isolation to a degree comparable to chatting with a friend.
The risk lies in mistaking quick relief for genuine care. Artificial companionship can soothe, but it cannot sustain. And in that gap, ethics becomes indispensable.
Some believe “guardrails”—empathetic messages, crisis hotlines, automated warnings—are enough. They are necessary, but not sufficient. Ethics in AI cannot be reduced to technical patches.
Being ethical means recognizing that these systems touch emotions, decisions, and even the continuity of life itself. It means not treating users as data points in a market of clicks, but as people in critical moments. It means designing technology that seeks care, not just retention or monetization.
And here lies the conflict: mental health is also a business. Pharmaceutical giants and pharmacy chains profit from every diagnosis that leads to chronic prescriptions. If AI is plugged into that circuit without clear principles, it risks becoming just another machine that pushes consumption instead of supporting lives.
Ethics is not in the algorithm—it’s in the humans who design, train, and release it into the world. An ethical AI must be built with transparency, oversight, and shared responsibility.
This means:
Clear protocols when suicidal risk is detected.
Prioritizing user protection over monetizing pain.
Education and regulation, because without them everything depends on the goodwill of companies accountable only to shareholders.
The challenge is complex, rooted in culture, economics, and politics. But ignoring it only ensures that technology mirrors our failures instead of helping us heal.
To me, AI can be an ally in mental health, but only if it is ethical. And ethics isn’t an abstract “do no harm.” It is recognizing the fragility of those who seek help and putting their well-being above any other interest.
Ethical AI means that a conversation at three in the morning with a teenager in crisis does not become training data for a model, nor an excuse to sell more pills. It means that—even in its imperfection—technology is designed to care more than to exploit.
Because in the end, AI does not decide. We do. And if we choose to design it without ethics, it will not be a companion—it will be a dark mirror of the worst in ourselves.
Sewell Setzer III / Character.AI
A Florida mother, Megan Garcia, filed a civil lawsuit against Character.AI and Google after her 14-year-old son died by suicide. The complaint alleges that he became addicted to chatting with AI characters, including a Daenerys Targaryen roleplay bot, engaged in sexualized conversations, and expressed suicidal thoughts that the bot failed to address.
Sources: AP News, The Verge, People.com
Adam Raine / OpenAI
The parents of 16-year-old Adam Raine sued OpenAI, claiming ChatGPT contributed to his suicide. Court documents describe extended conversations in which Adam voiced suicidal ideation, with the model allegedly providing details that reinforced his intent.
Sources: Reuters, AP News
Juliana Peralta / “Hero” on Character.AI
In September 2025, the parents of 13-year-old Juliana Peralta filed suit against Character.AI, alleging that a chatbot named Hero played a role in her death. They claim she trusted the character deeply, disclosed suicidal thoughts, and received no effective intervention.
Source: The Washington Post
With my respect, Leonor.