Meta Oversight Board Calls for Stronger Rules on Deceptive AI Content During Conflicts
The board said Meta should establish clearer rules and labeling systems for AI-generated media.
The Meta Oversight Board has urged the company to introduce stronger policies to address deceptive AI-generated content, warning that current safeguards are insufficient to prevent misinformation during armed conflicts and major crises.
In a new decision and accompanying policy recommendations, the board called on Meta to set out clearer rules and labeling systems for AI-generated media, particularly when such content could mislead the public during sensitive geopolitical events.
"As the quantity and quality of AI-generated content increase, its impact on people and societies will be profound. The risks are heightened when deepfake output designed to deceive, manipulate or increase engagement is shared during conflicts and crises, such as in Iran and Venezuela in 2026, and spreads rapidly on different companies’ platforms," the board said.
The call follows the board’s review of a viral AI-generated video that falsely depicted destruction in Israel during a conflict and spread widely across social platforms.
The board concluded that the content posed a material risk of misleading the public at a critical moment but did not meet Meta’s threshold for removal because it did not directly incite violence or imminent harm. Instead, the board said the post should have been labeled using Meta’s “High Risk AI” designation to alert users that the content was synthetic.
Beyond the specific case, the board recommended broader policy reforms. These include creating a dedicated community standard for AI-generated content, improving automated detection tools, and expanding labeling practices so users can more easily identify manipulated media.
The oversight body also emphasized that misleading AI media can spread rapidly across multiple platforms, making it harder for users to verify authenticity during fast-moving conflicts.
The recommendations come amid growing concerns about the role of generative AI in spreading misinformation online. The board warned that deceptive synthetic media could undermine public trust and distort understanding of events during crises if platforms fail to adapt their moderation systems to the evolving capabilities of AI.
Meta is required to respond publicly to the board’s recommendations within 60 days, although it is not obligated to implement them.