Press Release

Meta Teen Protections for AI Chatbots Are Good, But Come Too Late

Today, Meta launched new parental and safety controls across its platforms to help ensure that children and teenagers use its AI chatbots safely. These controls follow controversy over Meta’s policies allowing chatbots to have ‘sensual’ conversations with children, and they remain behind the curve on what is needed to protect America’s youth online.

This isn’t the first time a Big Tech company has added safeguards to its AI systems only after blowback. This past summer, OpenAI faced a first-of-its-kind lawsuit from the parents of a teenager who took his own life after conversing with ChatGPT, leading the company to implement parental controls – controls that also came too late.

“Once again, Big Tech has proven that when it comes to AI, it only enacts necessary, critical safeguards when it gets pushback from the people. Everyday Americans and lawmakers across the political spectrum spoke loudly in demanding accountability from Meta – and now, Mark Zuckerberg is doing damage control,” said Brendan Steinhauser, CEO of the Alliance for Secure AI. “The reality is that Big Tech CEOs are increasing their profits with very little concern for the Americans being subjected to harms from AI. We must continue to hold Silicon Valley’s feet to the fire and make clear that we will not allow it to trade away our safety for its gain.”

Following the Meta and OpenAI scandals, the Alliance for Secure AI launched a six-figure ad campaign warning Americans about the harmful effects of chatbots on children.

###

Media Contact

We welcome inquiries and interview requests from members of the media. Please contact us for more information about this issue.