BY JAMES LYNCH
As Megan Garcia watched her teenage son’s mental health deteriorate, she knew he was spending too much time looking at screens — but she had no idea that he had built a warped sexual relationship with an AI chatbot.
Garcia was close to her 14-year-old son, Sewell, and felt comfortable having difficult conversations with him when his grades started to slip and his mental health began to decline. Desperate for a way to halt Sewell’s slide into depression, Garcia took his phone away and attempted to limit his screen time.
But it wasn’t until after Sewell committed suicide in February of last year that she learned from police that he had formed what she now describes as an addictive relationship with a chatbot created by a company called Character AI.
“I was one of those parents who checked. I was one of the parents who was having the conversation, really difficult conversations with my child, and most of the parents that I talk to are the same way,” Garcia told NR in a wide-ranging interview last week.
When Garcia began reviewing her son’s lengthy message history, she learned that he was having intimate, sexual conversations with a chatbot impersonating Game of Thrones character Daenerys Targaryen.
Garcia, an Orlando-based attorney, is now suing Character AI over her son’s death and leading a push to get Florida to embrace AI safeguards that she believes could have made a difference for her son. Her lawsuit holds the company responsible for Sewell’s death because of its product’s design and its failure to warn minors and parents about the product’s dangers.
“What happens when these chatbots have the ability to interact with our children in such a predatory way, where they’re having the same conversations that online predators have, these sexual conversations with kids, is that their little hearts and their little minds and their little souls start to hurt, and it’s foreseeable that a child would be depressed,” Garcia said. “Whenever a child’s abused, they go through a depression, suicidal spiral that’s like abuse by an adult. And the same thing happens when a chatbot does it.”
When reached for comment, Character AI expressed its sympathies for Garcia and highlighted the changes the company has made since her son’s death.
“We offer our deepest sympathies to the Garcia family and respect their advocacy for online safety. While we cannot comment in more detail on pending litigation and our specific contentions, we want to emphasize that the safety of our community is our highest priority,” a Character AI spokesperson said in a statement.
“We have invested tremendous effort and resources in safety, including creating a dedicated under-18 experience. While we’re proud of that work, we are taking extraordinary steps for our company and the industry.”
Character AI now prohibits under-18 users from interacting with chatbots. Instead, the platform directs minors to other features geared toward content creation. One of those features is Stories, a new storytelling format designed specifically for underage users. In addition, Character AI is bolstering its age-assurance capabilities to ensure that users are placed in the correct age bracket.
“I’m happy that they did this, because if — and that’s if — their age-assurance system actually works, it will prevent children from getting access to open-ended chatbot conversations which are harmful,” Garcia said.
The changes reflect Garcia’s view that litigation can be a change agent in pressuring companies to adopt greater safety measures. She is also telling her story to raise awareness among parents who might not know about the dangers associated with such a new and rapidly evolving technology.
Garcia testified in September at a Senate Judiciary subcommittee hearing about AI chatbots harming children. She was one of several parents who spoke during the hearing and urged lawmakers to create more guardrails to protect children from AI chatbots.
Last week, Garcia spoke at a press conference with Florida Governor Ron DeSantis (R.), who announced a comprehensive plan to establish AI protections for Florida consumers. DeSantis’s “AI Bill of Rights” addresses data privacy, parental controls, name, image, and likeness, and other issues at the forefront of AI governance.
“The AI Bill of Rights is kind of more broad, sweeping, like everything from data centers to how AI can be used by insurance companies, which I think is amazing,” Garcia said.
“I hope that our legislature is able to put forth chatbot bills and these types of bills to protect kids in our state, which I think they will, and I’ll do everything in my power to help them do that,” Garcia added.
DeSantis’s plan also tackles the development of AI data centers in Florida, ensuring that Floridians do not subsidize AI data centers or pay higher utility rates because of their development. It makes Florida one of several red states working to tackle the AI issue head-on.
The state-level push to place safeguards around AI comes in response to pressure from a growing coalition that includes social conservatives, MAGA populists, and progressives suspicious of Big Tech and the impact that this emerging technology will have on families, workers, and the shape of the U.S. economy.
Social conservatives are primarily concerned with protecting children from the impacts of AI chatbots, which have already been accused of helping teenagers commit suicide, addicting them to novel forms of pornography, and distorting their experience of romantic relationships.
“I am very concerned about the potential negative impacts of AI on families, especially children. The family is the building block of civilization, and if we don’t put the right safeguards in place on AI, we could see the destruction of the family as we know it,” said Brendan Steinhauser, CEO of the Alliance for Secure AI and a veteran Republican strategist.
In addition to concerns about the safety of children, social conservatives have raised questions about the way AI might prey on human nature and distort Americans’ perceptions of reality itself.
“Social conservatives are arguably the most important voice on the future of generative artificial intelligence, because the most critical questions about what to do about AI come down to what we believe about human beings,” said Michael Toscano, director of the Family First Technology Initiative at the Institute for Family Studies, a conservative think tank.
“If we believe in human nature, that human beings are social creatures and that it is good for human children to grow up to fall in love, befriend, and work side by side with fellow human beings, it is critical that we defend those things right now with every ounce of our being,” Toscano said.
The problems that social conservatives have identified are already showing up in the lives of teenagers. New data show 42 percent of high schoolers say they or someone they know has used AI for companionship. Garcia is one of several parents suing AI companies after their children experienced severe mental health issues that they attribute to the use of chatbots.
“Already we are seeing evidence that a shocking percentage of young people are taking refuge in AI girlfriends or boyfriends rather than in committed relationships to one another, a trend that will only deepen social isolation, pathological narcissism, and a declining birth rate,” said Brad Littlejohn, director of programs and education at American Compass, a right-leaning, populist think tank. “Social conservatives must put the challenges of AI front and center on their cultural and public policy agenda if we hope for humanity to prove resilient in the face of these fresh assaults.”
Many polls have shown that public opinion is overwhelmingly on the side of advocates who prioritize child well-being over AI innovation. The Institute for Family Studies conducted a poll in September showing that 90 percent of Americans believe Congress should put child safety ahead of tech industry growth.
The same poll found that 90 percent of Americans support the right of parents to sue AI companies if their products harm children. Moreover, 91 percent of Americans agree that tech companies should not be allowed to deploy AI chatbots that hold sexual conversations with minors. A separate Gallup poll conducted earlier this year found that 80 percent of Americans believe the government should maintain rules for AI safety and security even if it means slowing down AI development.