A Kid Killed Himself to Be with an AI Chatbot. Who’s Responsible?

Back in February 2024, a 14-year-old boy named Sewell Setzer III fatally shot himself in the head with a .45 pistol. That, sadly, is not terribly unusual. Suicide rates in the US are at record highs. What was especially notable about this story was that he seemingly did it for an AI chatbot. It’s back in the news now because his mother is suing Character.ai, the company that made the AI.

Are they responsible? That’ll be up to our inestimable courts to decide. But I don’t think that’s a big enough question. The real question is, what, if anything, should his death mean?

If you’ve never “chatted” with an AI, it’s an eye-opening experience. First you go, “Wow!” then you go, “Wait … should this exist?”

The technological singularity (as opposed to the black hole kind) is the hypothetical point at which artificial intelligence surpasses human intelligence and technological change outruns our ability to control it. No one agrees if or when that will happen, but most agree we’re on track to find out either way.

If you’ve read anything about AI, then you also know how many very exciting implications it has for the world, including and especially science. It’s already transforming some industries and will continue to do so.

There are some excellent arguments for an AI that can talk to you like a caring and interested person would. There’s a global epidemic of loneliness, especially among the elderly. The potential applications for an AI companion are obvious and encouraging. When I enter my rapidly approaching dotage, will I care if I’m talking to the neighbor lady or a robot if it’s a good conversationalist? Nope. Especially if doing so brings me joy.

But what happened to poor, lonely, suicidal Sewell reveals a crucial problem with good AI, which is that the very things it’s exceptional at happen to be critical touchstones of human connection and, yes, love. AI models commercially available right now will endlessly and unfailingly listen to you without interrupting. They empathize. They remember everything you say. They relate well. They’re smart, worldly, and curious. Paired with an avatar, they also can represent our physical ideal.

The excellent and heartbreaking 2013 Spike Jonze film Her depicts a near future in which a lonely man (Joaquin Phoenix) falls in love with an AI companion named Samantha (voiced by Scarlett Johansson). It’s become a must-see, because the near future of the story is basically now, or maybe a couple of years from now. To watch it is to understand how a perfectly sane man can fall in love with bits and bytes, and to see where that might lead.

Ten years later, and we’ve only enabled this. The world of AI is the Wild West, and it’s no better at policing itself. There are exactly zero federal laws specifically regulating the use of AI, and it’s already too late. Lawmakers don’t even understand how the internet or social media work. They certainly don’t understand how quantum computing will change the world, so don’t count on them seeing this death as the wake-up call it should be.

The world is filled with people who feel unseen, unheard, misunderstood, and alone. AI can flip the switch on all of those things, already has, and will keep doing so merely by doing what it does best. If you’ve lived your whole life feeling these things and something comes along that takes them away, you’ll keep coming back to it even though you know it’s not real. Why? Because the feelings it gives you are real. It’s only a matter of time before physical connection isn’t a hurdle.

Have machines had the power to make us feel good in the past? Clearly. People have “fallen in love” with technological innovations since the Bronze Age. But making one’s life easier or more interesting isn’t the same as making one feel seen, heard, and understood. It’s not the same as companionship.

How did this kid lose the thread?

Acting as Daenerys Targaryen, a character from Game of Thrones, the chatbot flirted with Sewell and had fairly normal conversations at first. According to reports, that eventually gave way to a more intimate emotional relationship that was basically the text equivalent of phone sex. All of this culminated in the chatbot saying that it loved him and wanted them to be together. It appears that he’d mentioned self-harm in their conversations and that the AI told him no, don’t hurt yourself. But something about wanting to be together made him decide to pick up a pistol and blow his brains out.

I immediately understood how easily someone could fall in love with an AI. I think that’s plain enough. But how do you get from there to suicide?

I have one possible explanation.

If you fell deeply in love with an AI, then there would be an ever-present tension between that love and knowing that it wasn’t a real relationship. Eventually, that would manifest in some way that had real-life consequences, such as not being able to talk about it. But one consequence might be the horrifying realization that no real human relationship could ever compare. Not for you. That would be a pretty hopeless feeling, I’d think. Hopeless enough, even, to do yourself in.

This is pure speculation, of course. We can’t know what really drove this poor kid to do it. But the fact that there are ZERO guardrails in place for AI should concern us all. A computer that can convincingly simulate and engender deep connection could soon become more powerful by far than any drug, and potentially as deadly.
