
YouTube To Allow Politicians, Government Officials, And Journalists To Have Likeness Removed From AI-Generated Content

(Photo: REUTERS/Dado Ruvic)

YouTube is expanding its crackdown on AI-generated content.


On March 10, YouTube announced it is expanding its likeness detection pilot—previously available to creators in the YouTube Partner Program—to include government officials, journalists, and political candidates. The tool, similar to Content ID, scans for a participant’s likeness in AI-generated content.

If a match is found, such as a deepfake using their face, the individual can review the video and request its removal if it violates the platform’s privacy guidelines.


“This expansion is really about the integrity of the public conversation,” said Leslie Miller, YouTube’s vice president of Government Affairs and Public Policy, in a press briefing. “We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we’re also being careful about how we use it,” she noted.

While not every flagged video will be removed, YouTube said each request will be reviewed under its existing privacy policies. The company will assess whether the content qualifies as parody or political commentary—forms of expression that are protected. The pilot program is designed to balance free speech with the growing risks posed by AI tools capable of creating convincing likenesses of public figures.

“There’s a lot of content that’s produced with AI, but that distinction’s actually not material to the content itself,” explained Amjad Hanif, YouTube’s vice president of creator products, addressing how the platform decides where AI-content labels should appear. “It could be a cartoon that is generated with AI. And so I think there’s a judgment on whether it’s a category that maybe merits from a very visible disclaimer,” he said.

Alongside tightening its rules on AI-generated content, YouTube is also pushing for federal protections by backing the NO FAKES Act in Washington, D.C. The proposed law would regulate the use of AI to create unauthorized versions of a person’s voice or likeness. YouTube also plans to expand its deepfake detection tools to cover recognizable voices and other intellectual property, including popular characters.

