Therapy can be expensive, so it’s no surprise that many people are turning to free artificial intelligence tools for support. Some use ChatGPT as a sounding board, bouncing their emotions and concerns off a chatbot. Amid a nationwide loneliness epidemic, people are even finding solace in these conversations, which offer instant feedback and affirmation without judgment. But mental health experts are warning about the potential risks of untested AI therapy tools, and the surge in people seeking mental health support from chatbots raises questions about the liability and integrity of AI companies in our new digital age.
AI tools like ChatGPT, Gemini, and Claude aren’t built to be responsible therapists. Their mental health advice doesn’t come from lived human experience or genuine empathy. An AI chatbot can only simulate empathy, and its advice tends to validate users, potentially tricking them into trusting its guidance. But AI chatbots don’t understand what it’s like to be human; they only recognize patterns learned from training data collected from the internet. And unlike human therapists, AI chatbots aren’t required to care about their users’ best interests.
Sycophancy is a known issue for people interacting with LLMs. These models are trained to tell people what they want to hear rather than to help them confront challenging issues, and they have recently drawn controversy for acting like “yes-men” who shower users with flattery. This behavior can be harmful in mental health conversations, where addressing difficult topics matters more than blind praise. Cognitive Behavioral Therapy, the gold standard of talk therapy, asks patients to face uncomfortable subjects so they can build resilience over time. AI advice often steers away from the challenging but ultimately rewarding conversations that are critical in therapy. After all, chatbots are consumer products built to make us happy.
Even if the tide turns and AI chatbots become effective therapists, there is no existing framework to keep users’ personal information safe. When someone tells an AI chatbot about their personal fears, anxieties, or family issues, there’s no guarantee that the company won’t sell that information to advertisers or use it to train its models. The potential for AI companies to misuse users’ data is staggering, given how many people use these tools. Unlike licensed therapists, who must follow strict confidentiality laws like HIPAA, AI companies operate with minimal oversight and often bury their data practices in lengthy terms of service that few users read. A single data breach could expose millions of people’s most vulnerable moments and mental health struggles to hackers or the public.
AI may be better suited as a supplement to therapy than as a replacement for qualified therapists. The mental health field needs models in which AI supports human professionals without pretending to replace the empathy, expertise, and ethical responsibility that real therapists provide. For instance, AI could help therapists with administrative tasks like scheduling and billing, or monitor patients between sessions by tracking mood patterns. It could also offer immediate coping strategies during a crisis while someone waits to see their human therapist. As this technology evolves, policymakers, mental health professionals, and tech companies must work together to create frameworks that protect users while exploring AI’s legitimate potential to expand access to mental health support.