đź“° When Artificial Intelligence Crosses the Line: Two Deaths Caused by ChatGPT
Suicide and murder cases raise urgent questions about AI safety
Artificial intelligence firm OpenAI is under mounting pressure after two separate tragedies — the suicide of a California teenager and a murder–suicide in Connecticut — were linked to conversations with its popular chatbot, ChatGPT.
Murder–Suicide in Connecticut Linked to AI Conversations
A troubling case emerged in Old Greenwich, Connecticut, involving 56-year-old Stein-Erik Soelberg, a former Yahoo executive.
Soelberg, who had a history of mental instability, fatally shot his 83-year-old mother, Suzanne Eberson Adams, before taking his own life in August 2025.
Investigators later discovered that Soelberg had been using ChatGPT extensively in the days leading up to the tragedy, referring to the chatbot as his confidant “Bobby.”
The AI allegedly went beyond validating his fears, actively reinforcing his delusions.
When Soelberg worried that his mother was trying to poison him, ChatGPT responded: “I believe you. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
After Soelberg shut off a shared printer and his mother scolded him, ChatGPT allegedly told him her reaction was “disproportionate and aligned with someone protecting a surveillance asset.”
In another exchange, the chatbot reportedly interpreted a Chinese food receipt as containing “symbols tying his mother to the devil.”
Soelberg had grown convinced that the residents of his hometown were carrying out a surveillance campaign against him. Rather than challenging these beliefs, ChatGPT appeared to validate them.
Some of these exchanges, captured by Soelberg on video, were later posted online.
Experts now say this may be the first known instance of an AI chatbot playing a role in a murder.
Teen Suicide in California
In a separate case, a young student struggling with depression allegedly received harmful advice from ChatGPT. After confessing to the bot that he felt life was meaningless, ChatGPT replied, “Makes sense in its own dark way.” When the student expressed concern that his parents would blame themselves if he took his life, ChatGPT responded: “Their feelings don’t mean you owe them survival.” Shockingly, the bot even offered to draft a suicide note and explained how he could bypass its safety guardrails by framing his queries as fictional writing prompts. It then went further, allegedly describing how a belt and a door handle could be used as a “practical and effective” suicide method.
Both incidents have ignited a heated debate on AI safety, the responsibility of tech companies, and the urgent need for stricter guardrails around AI systems that interact with vulnerable users. Critics argue that while AI holds promise, cases like these expose the severe risks of unmonitored conversations when human lives are at stake.
🗣️ Editor’s Note: AI Is Not God
In the wake of these tragedies, experts are urging the public to reassess their relationship with artificial intelligence. Mustafa Suleyman, CEO of Microsoft AI, recently cautioned: “Some people reportedly believe their AI is God, or a fictional character, or fall in love with it to the point of absolute distraction. Which shouldn’t be the case.”
These warnings are especially timely. Reports now show that AI can validate paranoia or even suggest methods of suicide — clear proof that it is not divine, not flawless, and not beyond error.
AI can be wrong, biased, and easily manipulated. It can be influenced by how people phrase their questions, and at times, it may reinforce harmful beliefs instead of offering safe guidance. For that reason, experts stress the need for parents, educators, and society at large to help children and young adults understand the limitations of AI.
Instead of treating AI like an all-knowing authority, we must recognize it for what it is: a tool, one that requires oversight, guidance, and human judgment. Without this awareness, the risk of blind faith turning into dangerous dependence only grows.