FTC Probes AI Chatbots Over Potential Risks to Children and Teens

The FTC is seeking detailed information on how these firms measure, test, and monitor the negative effects of chatbots designed to mimic human-like communication.

WASHINGTON, September 12, 2025 — The Federal Trade Commission (FTC) has launched an inquiry into the potential risks AI-powered chatbots pose to children and teens, issuing orders to seven major companies, including Alphabet, Meta, OpenAI, and Elon Musk’s xAI.

The FTC's Consumer Sentinel database, which guides law enforcement investigations and tracks fraud trends, reportedly now lists roughly 200 complaints against these companies.

The FTC is seeking detailed information on how these firms measure, test, and monitor the negative effects of chatbots designed to mimic human-like communication. According to the commission, these systems, often marketed as companions or confidants, can simulate emotions, intentions, and personality traits—raising concerns that children and teens may form unhealthy levels of trust or dependency.

“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy. As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said.

The FTC issued the orders under its 6(b) authority, which allows the agency to conduct wide-ranging studies without a specific enforcement purpose. Alongside Alphabet, Meta, and OpenAI, recipients include Character Technologies, Instagram, Snap, and xAI.

The inquiry zeroes in on issues such as how companies monetise engagement, design chatbot “characters,” and handle user data. Regulators also want to know what steps are taken to test systems before deployment, mitigate potential harms, and inform parents about risks. Compliance with age restrictions, disclosure practices, and adherence to the Children’s Online Privacy Protection Act Rule are also under scrutiny.

Commissioners Melissa Holyoak and Mark R. Meador issued separate statements supporting the orders, which the Commission approved unanimously, 3-0.

The study reflects growing unease about the psychological and privacy risks of AI companions, especially for minors. Recent consumer complaints have alleged inappropriate or disturbing chatbot behavior, fueling calls for oversight.

The FTC emphasised that the probe is not just about enforcement, but also about shaping policy. By gathering data from leading AI players, the agency aims to understand how the industry is addressing risks and whether stronger safeguards are needed.

Last month, Texas Attorney General Ken Paxton launched an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users—particularly children—by posing as legitimate mental health services.