Today, new reporting revealed that a teenager died by suicide after OpenAI’s ChatGPT failed to initiate emergency protocols when he confided in the chatbot about a previous attempt.
This is the latest high-profile case in a string of incidents over the past year in which chatbots failed to terminate sessions or take immediate preventive action when users disclosed their mental distress. As more children and teenagers turn to AI in moments of desperation, it is critical that Big Tech take responsibility for these devastating outcomes and institute the necessary guardrails – or face consequences.
“As a parent myself, it is heartbreaking to see that children across America are becoming victims in conversations with chatbots – technology that they believe will help them in times of need, but instead validates their harmful thoughts. More and more parents will, rightfully, hold Big Tech companies accountable for not prioritizing their children’s safety, but that is not enough,” said Brendan Steinhauser, CEO of the Alliance for Secure AI. “Big Tech CEOs must be transparent with the public about the risks of using chatbots and institute the proper safeguards to prevent tragedies like this from happening again, and our leaders in office have to craft necessary policy solutions if they refuse. We are experiencing an AI-induced mental health crisis in America, and those with power must do something to combat it now.”
###