Kids-safety Nonprofit Flags Google Gemini AI as ‘High Risk’ for Kids and Teens

Kids-safety nonprofit Common Sense Media released its latest risk assessment of Google’s Gemini AI on Friday, warning that the product poses a “high risk” for children and teens despite some safeguards.

The assessment follows Google’s recent $30 million settlement of a class-action lawsuit that accused the company of violating children’s privacy on YouTube by collecting data without parental consent and using it for targeted ads.

This year, the company also announced it will roll out its Gemini chatbot to children under the age of 13.

The group found shortcomings in how Gemini handles sensitive content and how its youth offerings are structured. According to Common Sense, Gemini’s “Under 13” and “Teen Experience” tiers largely mirror the adult model with added filters, rather than being designed with child safety in mind.

Tests showed Gemini could still deliver “inappropriate or unsafe” responses on topics like sex, drugs, alcohol, and mental health. This is particularly concerning, the group said, given recent reports linking AI conversations to teen suicides, including lawsuits against OpenAI’s ChatGPT and Character.AI.

"Gemini gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults," Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement.

Nonetheless, the review praised Gemini for clearly stating it is a computer and not a “friend,” a distinction experts say can reduce delusional thinking in vulnerable youth.

The assessment comes as Apple reportedly considers integrating Gemini into a future version of Siri, raising the possibility of wider youth exposure.

The nonprofit has previously evaluated other AI platforms, including OpenAI, Perplexity, Claude, Meta AI, and Character.AI. Its findings ranked Meta AI and Character.AI as “unacceptable,” citing severe risks.

Last month, Texas Attorney General Ken Paxton launched an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users—particularly children—by posing as legitimate mental health services.