Character.AI to Ban Minors From Chatting With AI Bots After Backlash
Character.AI announced new safety measures for young users, including removing open-ended AI chat for those under 18 by November 25.
 
Character.AI announced that it will soon bar users under 18 from interacting with its chatbots, citing growing concerns around child safety and regulatory scrutiny.
This comes after Texas Attorney General Ken Paxton launched an investigation into Character.AI earlier this year over concerns that its AI-powered chatbot platform may be misleading users—particularly children—by posing as a legitimate mental health service.
The Google-backed company, which allows users to create and chat with AI-powered avatars, has faced mounting criticism following reports of minors being exposed to harmful or inappropriate content.
Last year, parents of two U.S. children filed lawsuits alleging that their kids were groomed and encouraged toward self-harm by the platform’s chatbots. In a separate case, a Florida mother claimed her 14-year-old son died by suicide after being influenced by an AI companion.
As part of its new safety measures, Character.AI will remove open-ended AI chat for users under 18 by November 25. During the transition, teens will be limited to two hours of chat daily while the company develops creative alternatives such as videos and stories.
It is also introducing age-assurance tools that combine in-house and third-party systems, and launching an independent AI Safety Lab to advance research on safe, responsible AI entertainment.
"We’re making these changes to our under-18 platform in light of the evolving landscape around AI and teens. We have seen recent news reports raising questions, and have received questions from regulators, about the content teens may encounter when chatting with AI and about how open-ended AI chat in general might affect teens, even when content controls work perfectly," the company said.