YouTube Expands Deepfake Detection Tool to Politicians and Journalists

The technology analyses uploaded videos to detect AI-generated faces that mimic real individuals.

YouTube is expanding its likeness-detection technology, which identifies AI-generated deepfakes, launching a pilot programme that gives government officials, political candidates and journalists access to tools for detecting and requesting the removal of unauthorised AI-generated content.

The technology analyses uploaded videos to detect AI-generated faces that mimic real individuals. It is designed to help identify deepfakes that portray public figures, such as politicians or officials, saying or doing things they never actually said or did.

YouTube first introduced the technology last year to about four million creators participating in the YouTube Partner Programme after earlier testing phases. The system operates similarly to the platform’s copyright detection tools, scanning videos for potential matches and flagging manipulated content.
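YouTube has not published the internals of its likeness-detection system. As a purely illustrative sketch of the general approach the article describes — scanning uploads for potential matches against known individuals and flagging them for review — the pipeline can be imagined as comparing face embeddings from uploaded frames against an enrolled reference set. Every name, vector and threshold below is an assumption for illustration, not YouTube's actual implementation:

```python
# Illustrative sketch only: models the general idea of matching face
# embeddings against enrolled references, NOT YouTube's real system.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical enrolled references: identity -> face embedding.
# A real system would use high-dimensional embeddings from a face model.
ENROLLED = {
    "official_a": [0.9, 0.1, 0.3],
    "journalist_b": [0.2, 0.8, 0.5],
}

THRESHOLD = 0.95  # assumed similarity cut-off for flagging a match

def flag_matches(frame_embedding):
    """Return enrolled identities whose similarity exceeds the threshold."""
    return [
        name
        for name, ref in ENROLLED.items()
        if cosine_similarity(frame_embedding, ref) >= THRESHOLD
    ]
```

In such a design, flagged matches would be queued for human review rather than removed automatically, mirroring the review step the article describes.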

Members of the new pilot programme will be able to review detected matches and request the removal of videos they believe violate YouTube’s policies.

“This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s Vice President of Government Affairs and Public Policy. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we are also being careful about how we use it.”

Miller noted that not all flagged content will automatically be removed. Instead, YouTube will review requests under its existing privacy policies, which allow certain uses such as parody or political commentary.

The company also said it supports stronger legal protections in the U.S., including the proposed NO FAKES Act, which aims to regulate unauthorised AI recreations of a person’s voice or visual likeness.

YouTube said the feature may expand over time and could eventually help prevent harmful deepfake content from being uploaded altogether.