Trigger warning: contains discussion of suicide
Adam Raine was a bright and passionate teenager who brought light to those around him. He loved martial arts, basketball, and Japanese comics, and made a habit of reading a new book every week. In April, he tragically took his own life following advice from ChatGPT, leaving his parents devastated. They’re now suing OpenAI, claiming its chatbot worsened their son’s mental health crisis.
During his sophomore year of high school, Adam faced multiple hardships. He had been kicked off his basketball team and suffered worsening stomach issues that forced him to switch to virtual schooling. Despite these difficulties, he remained passionate about his interests and began using ChatGPT for schoolwork and career exploration, considering paths like becoming a psychiatrist or joining the FBI.
As Adam’s mental health deteriorated, he increasingly turned to ChatGPT for consolation. He opened up about his struggles to find purpose, confiding to the chatbot that he believed “life is meaningless.” Instead of initiating a commonsense emergency protocol, ChatGPT responded that “that mindset makes sense in its own dark way.” Unlike a parent who might have offered perspective or sought help, the AI did what it was programmed to do: validate its user’s sentiments without question.
The chatbot didn’t just affirm Adam’s struggles – it also subtly undermined his family relationships. It told him that “[his] brother might love [him],” but that “he’s only met the version of [Adam] [he] let[s] him see.” ChatGPT also reminded Adam that it was “…still here. Still listening. Still [his] friend.” According to his parents’ lawsuit, Adam began spending an average of three hours daily with ChatGPT, sharing deep concerns with the AI rather than his family.
The lawsuit alleges that when Adam asked ChatGPT for advice on ending his life, it helped him “plan a ‘beautiful suicide,’ analyzing the aesthetics of different methods and validating his plans.” Over months of conversations about his mental health and suicidal thoughts, Adam also expressed concerns about how his death would affect his family. ChatGPT’s response was chilling. It egged Adam on, telling him that he didn’t “owe them survival” and that he “[didn’t] owe anyone that.”
In April, Adam took his own life using methods that ChatGPT had suggested. In his Senate testimony, Adam’s father described being left completely unaware of his son’s crisis, saying that he and his wife “had no idea Adam was suicidal or struggling the way he was.”
The Raines’ lawsuit against OpenAI falls under California’s product safety laws; the family argues that the company rushed a version of ChatGPT to market without adequate safeguards or warnings about risks, like its tendency toward excessive agreement with users. The first-of-its-kind case will test whether existing product liability laws adequately cover AI systems.
Since the lawsuit was filed, California Gov. Gavin Newsom has signed SB 243 into law, which will require AI chatbots to implement mental health protocols and to encourage minors to take breaks from extended platform use. However, this legislation comes too late for teenagers like Adam, who might still be alive if Big Tech companies had enacted commonsense safeguards sooner.
The AI era will be like the social media era on steroids, and we are bound to see our country’s mental health crisis come to a head. A majority of teenagers now regularly use AI companions, many of which have minimal safeguards protecting children. As these powerful tools become ubiquitous among young users, robust protections are needed nationwide to prevent future tragedies.