Meta Platforms Found Liable in New Mexico Child Safety Case, Faces $375 Mn Penalty

Among those who testified was former Meta engineer Arturo Béjar, who described exposure of young users to harmful content on Instagram.


A U.S. jury has found Meta Platforms violated New Mexico law by misleading consumers about platform safety and failing to adequately protect children from exploitation, marking a major legal setback for the social media giant.

The verdict followed a seven-week trial in which jurors concluded Meta breached the state’s Unfair Practices Act. State attorneys argued the company misrepresented its safety measures, particularly those for younger users, while internal documents and testimony from former employees suggested the company was aware of potential harms.

“The jury’s verdict is a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety,” said New Mexico Attorney General Raúl Torrez. “Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew.”

Among those who testified were former Meta engineers Brian Boland and Arturo Béjar, who described exposure of young users to harmful content on Instagram.

Investigators also created fake accounts posing as minors, which were reportedly targeted with explicit content and contact from adults. Authorities said some suspects were later arrested.

Jurors identified 75,000 violations and imposed the maximum $5,000 penalty per instance, for a total of $375 million.

Meta denied wrongdoing and said it would appeal. “We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content,” a spokesperson said.

Earlier this year, the Meta Oversight Board urged the company to introduce stronger policies to address deceptive AI-generated content, warning that current safeguards are insufficient to prevent misinformation during armed conflicts and major crises.

Last year, Texas Attorney General Ken Paxton launched an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users—particularly children—by posing as legitimate mental health services.