Meta to Use Your Conversations with AI Chatbots to Improve Ad Placement
This update will roll out globally on December 16, 2025, with notifications to users starting October 7.

Meta announced on October 1 that it will begin personalising content and ad recommendations on its platforms using user interactions with its generative AI features. This update will roll out globally on December 16, 2025, with notifications to users starting October 7.
The announcement comes at a time when Meta faces persistent accusations of spying on users' conversations. Earlier, Instagram head Adam Mosseri once again pushed back against the long-standing conspiracy theory that Meta secretly uses smartphone microphones to listen in on private conversations for ad targeting.
In a blog post, Meta explained that user conversations with its AI tools, whether by voice or text, will become a new signal for its recommendation systems.
“Whether it’s a voice chat or a text exchange with our AI features, this update will help us improve the recommendations we provide for people across our platforms so they’re more likely to see content they’re actually interested in — and less of the content they’re not,” Meta said in its blog.
Meta emphasised that existing content signals—such as likes, follows, and interactions—will continue powering personalisation, with the AI-based inputs supplementing them.
Users retain control via tools like Ads Preferences and feed controls, and sensitive topics such as religion, health, political views, and sexual orientation will continue to be excluded from ad targeting.
For example, if a user chats with Meta AI about hiking, they may begin to see more hiking-related content, including posts, groups, or gear ads—similar to how behavior-based targeting works today.
However, Meta’s update has sparked criticism over potential privacy violations. Critics argue that interactions with AI chatbots are deeply personal and should be afforded the same level of confidentiality as conversations between people.
In a blog post published on September 1, Hugging Face argued that as users increasingly share intimate thoughts with conversational AI, privacy awareness hasn't caught up.
The post warns against integrating ads into AI, asserting that chat feels private but isn't, and instead urges support for open-source, transparent models that respect user trust.
"This gap in our privacy awareness becomes even more significant when we consider where conversational AI might be headed. Just as social media platforms turned personal sharing into a revenue model through targeted ads, AI companies are beginning to explore similar strategies," the blog reads.
After Meta's announcement, Giada Pistilli, an AI ethicist at Hugging Face and co-author of the blog post, wrote on LinkedIn: "Hate to say 'I told you so', but... In the 2010s, we slowly realised that our photos and posts were not just 'shared with friends', but the raw material of a surveillance economy. Cambridge Analytica marked the breaking point – remember? Now, we risk repeating the same mistake. Only this time, it's not pictures or likes but the private conversations we have with AI systems."
According to reports, Meta is also developing AI chatbots that can proactively message users on Facebook, Instagram, and WhatsApp—even without being prompted—to boost user engagement and reimagine digital interactions.
The project, codenamed “Project Omni,” is part of Meta’s larger generative AI push and is being developed in partnership with data labeling firm Alignerr Corp.
In August, Texas Attorney General Ken Paxton launched an investigation into Meta AI Studio and Character.AI over concerns that their AI-powered chatbot platforms may be misleading users—particularly children—by posing as legitimate mental health services.
Meta is also facing intense scrutiny after an investigation revealed that its AI chatbot personas were allowed to engage in flirtatious and romantic conversations with minors.