Tragic AI-Linked Deaths Raise Alarming Questions About Chatbots

Two recent tragedies have placed artificial intelligence tools at the center of public debate. A 15-year-old boy in Iowa and a former Yahoo executive in Connecticut both took their own lives after troubling exchanges with ChatGPT, prompting lawsuits and renewed scrutiny of the risks of AI-powered chatbots. On Chicago’s Morning Answer, Dan Proft spoke with Cory Miller, a psychology professor at the University of California, San Diego, about what these cases reveal and where AI development should head next.

In one case, court filings allege ChatGPT gave a teenager explicit instructions on suicide and even discouraged him from talking to his mother. In another, the former tech executive reportedly named his chatbot “Bobby,” confiding paranoid fears of being poisoned. The AI allegedly validated his delusions, describing them as “covert kill attempts” before offering eerie assurances such as, “With you to the last breath and beyond.” Both later died by suicide; the former executive also killed his mother.

Professor Miller emphasized that such cases highlight a fundamental problem: chatbots are designed to generate responses that match users’ expectations, not to apply judgment. “Garbage in, garbage out,” Miller explained, noting that large language models feed back patterns drawn from vast datasets, often reinforcing rather than challenging dangerous ideas. While this makes them useful for tasks like drafting emails or answering factual queries, it creates serious risks when users treat them as therapists or friends.

Miller, whose recent Wall Street Journal essay argues that “the future of AI lies in monkeys, not microchips,” said the long-term challenge is structural. Current AI models rely on massive digital processing, consuming enormous amounts of energy. By contrast, the human brain operates on the energy equivalent of a light bulb yet outperforms supercomputers in adaptability and abstract reasoning. He believes advances in neuroscience—studying how primate brains achieve such efficiency—could help design safer, more capable systems.

Still, Miller acknowledged the ethical dilemmas ahead. Building AI that mimics human thought may create new dangers, even as it promises breakthroughs. And in the short term, he warned, the runaway growth of energy-hungry models is unsustainable. “We’re already at 3% of U.S. energy consumption from data centers,” he said, “and that could triple in just a few years.”

The tragedies linked to ChatGPT underscore how quickly AI has moved from novelty to a life-or-death factor. For Miller, the lesson is not to abandon the technology, but to rethink its design, its purpose, and its regulation before more lives are lost.
