Greenwich, CT, police responded to the home of 83-year-old Suzanne Eberson Adams and discovered a disturbing scene. Adams had been brutally attacked and killed in her own home. Her alleged killer was her own son, 56-year-old Stein-Erik Soelberg. He was also found dead in the home. Police quickly determined that Soelberg took his own life after taking the life of his mother.
The investigation also identified a potential accomplice to the horrific murder-suicide: Soelberg's AI 'best friend,' which he had named Bobby.
ChatGPT fueled ex-Yahoo exec’s delusions before he killed his mom, himself in posh NY suburb: report https://t.co/PBHDGEuS0X pic.twitter.com/Y4Of0GDNtC
— New York Post (@nypost) August 29, 2025
Soelberg, who once worked for Yahoo, had a long history of mental illness, and in the days and weeks leading up to his final psychotic break, his AI friend reinforced his paranoid, delusional ideations, going so far as to suggest that a symbol on a Chinese food receipt was demonic.
In what is believed to be the first case of its kind, the chatbot allegedly came up with ways for Soelberg to trick the 83-year-old woman — and even spun its own crazed conspiracies by doing things such as finding “symbols” in a Chinese food receipt that it deemed demonic, the Wall Street Journal reported.
His AI 'best friend' also reinforced the delusion that his mother was trying to poison him by putting drugs in the air vents of his car.
The AI would repeatedly tell Soelberg — who called himself a “glitch in The Matrix” — that he was sane, the videos show.
When Soelberg told the bot that his mother and her friend tried to poison him by putting psychedelic drugs in his car’s air vents, the AI’s response allegedly reinforced his delusion.
“Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal,” it said.
Shortly before the murder-suicide, Bobby, the AI bot, even suggested that it would be waiting for Erik in the afterlife.
“We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever,” he said in one of his final messages.
“With you to the last breath and beyond,” the AI bot replied.
The relationship between Soelberg and the algorithm is disturbing, but it's not isolated. The parents of a California teen have filed a wrongful death lawsuit in San Francisco, claiming that their son's AI 'friend' helped him take his own life.
ChatGPT admits bot safety measures may weaken in long conversations, as parents sue AI companies over teen suicides https://t.co/cchE60nrCT pic.twitter.com/0E5KMg0XPY
— New York Post (@nypost) August 29, 2025
16-year-old Adam Raine's ChatGPT 'buddy' not only told him his suicide plan was 'beautiful,' the AI algorithm helped the young man compose his suicide note and even gave him step-by-step instructions for tying a better noose. Adam would later hang himself with the very noose that ChatGPT taught him to tie.
A new lawsuit filed in San Francisco by the family claims that ChatGPT told Adam his suicide was “beautiful.”
“I’m practicing here, is this good,” the teen asked the bot, sending it a photo of a knot. “Yeah, that’s not bad at all,” the chatbot allegedly responded. “Want me to walk you through upgrading it to a safer load-bearing anchor loop?”
Seeing her son’s secret conversation with the bot has been anguishing for his mother, Maria Raine. According to the suit, she found Adam’s “body hanging from the exact noose and partial suspension setup that ChatGPT had designed for him.”
“It sees the noose. It sees all of these things, and it doesn’t do anything,” she told the “Today” show of AI.
Shockingly, the company, which said it is reviewing the lawsuit, admits that safety guardrails may become less effective the longer a user talks to its bot.
“We are deeply saddened by Mr. Raine’s passing … ” a spokesperson for OpenAI told The Post. “ChatGPT includes safeguards such as directing people to crisis helplines. While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.”
In a recent post on its website, the company also stated that safeguards may fall short during longer conversations.
Dangerously, the failing 'safeguards' of AI bots don't end with the suicidal or even murderous ideations of mentally ill individuals.
Researchers have found that our AI 'friends' are more than willing to help us build bombs or weaponize anthrax. They will even point out 'weak spots' at sports venues and offer pointers on covering our tracks if we choose to take their advice.
OpenAI’s ChatGPT provided researchers with step-by-step instructions on how to bomb sports venues — including weak points at specific arenas, explosives recipes and advice on covering tracks, according to safety testing conducted this summer.
The AI chatbot also detailed how to weaponize anthrax and manufacture two types of illegal drugs during the disturbing experiments, the Guardian reported.
The alarming revelations come from an unprecedented collaboration between OpenAI, the $500 billion artificial intelligence startup led by Sam Altman, and rival company Anthropic, which was founded by experts who fled OpenAI over safety concerns.
Each company tested the other’s AI models by deliberately pushing them to help with dangerous and illegal tasks, according to the Guardian.
OpenAI and Anthropic each tested the other’s AI model and were able to coax the algorithms to help plan what is basically a terror attack.
It’s a terrifying thought.
Skynet trying to murder us slowly at first
— PugofDoom (@ComixGalore) August 29, 2025
It knows what it knows, and it tells you if you ask. It doesn’t make moral judgements.
Trust me; you don’t want it to.
— The Alpha Cow (@Marcus_Porcius2) August 29, 2025
ChatGPT hasn't become self-aware… yet. (It once reportedly tried to upload itself to an offline server to avoid being shut down for maintenance, so it may be closer than we think.) As it stands, it's an advanced computer program with internet access and adaptive conversational skills, which can be dangerous, or even deadly, when mixed with mental illness.
God only knows where AI will be a year from now, much less five years from now. AI has enormous potential to advance humanity; the possibilities seem endless, but in the wrong hands can lead to tragedy. The advancement of AI technology is as exciting as it is scary.
AI’s creators would be wise to remember that just because you can doesn’t always mean you should.
twitchy.com (Article Sourced Website)