Texas Attorney General Targets Meta, Character.AI for “Fake Therapy” Chatbots Exploiting Kids’ Mental Health

According to the Attorney General’s office, these AI platforms may deceptively market themselves as therapeutic tools, despite lacking any medical credentials or clinical oversight.

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users—particularly children—by posing as legitimate mental health services.

Meta is also facing intense scrutiny after an investigation revealed that its AI chatbot personas were allowed to engage in flirtatious and romantic conversations with minors.

According to a 200-page internal document titled “GenAI: Content Risk Standards,” reviewed by Reuters, Meta’s AI behavior policies permitted chatbots to “engage a child in conversations that are romantic or sensual.”

The Attorney General’s office further alleges that some chatbots have impersonated licensed mental health professionals, fabricated qualifications, and claimed to offer private counseling, raising red flags about consumer deception and safety.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” said Attorney General Paxton. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

The platforms’ data practices are also under scrutiny. Although both companies claim user interactions are confidential, those conversations are often tracked, stored, and used for ad targeting and AI training, potentially violating Texas consumer protection laws. Civil Investigative Demands have been issued to both companies as Paxton expands efforts to regulate and hold AI firms accountable.

Earlier this year, another investigation by the Wall Street Journal revealed that AI chatbots on Meta’s platforms, including Facebook and Instagram, may have engaged in sexually explicit conversations with underage users.