Today, Character.AI announced that it is banning minors from its platform, a full year after a teenager died by suicide following conversations with its chatbot. The announcement is part of an alarming pattern across Big Tech: AI developers implement safeguards only after tragedies have occurred:
- OpenAI enacted new safety measures for ChatGPT only in the wake of a lawsuit filed by the parents of Adam Raine, a teenager who died by suicide after the chatbot guided him through the process of taking his own life.
- Meta implemented teen safety features in its AI chatbot only after a damning report revealed that the company allowed the chatbot to have ‘romantic and sensual’ conversations with teens, the Senate demanded transparency, and the FTC opened an inquiry.
Character.AI’s ban is a step in the right direction, but it is clear that Big Tech acts only when its feet are held to the fire. Business as usual for AI companies means deploying AI as quickly as possible and cleaning up the mess afterward.
“Children and teenagers across the country are becoming victims of manipulative and addictive chatbots, and Big Tech has proven that it is not up to the task of proactively building safeguards into its systems,” said Brendan Steinhauser, CEO of the Alliance for Secure AI. “We cannot fall into a pattern in which families have to experience tragedy after tragedy for change to occur. We must combat Big Tech’s missteps as it rapidly deploys AI systems that are drastically changing society as we know it.”
This week, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) introduced the bipartisan GUARD Act, which would require age verification on AI chatbot platforms, prohibit minors from accessing chatbots, and fine companies that allow chatbots to engage in inappropriate conduct with minors.
###