YouTube’s Likeness Detection feature expands to a pilot group of government officials, journalists and political candidates

Last year, YouTube introduced the Likeness Detection tool for creators in the YouTube Partner Program. The tool allowed creators to detect and manage AI-generated videos that use their facial likeness.

Yesterday, the company announced that this feature is now expanding to a pilot group of government officials, journalists and political candidates.

How does the likeness detection tool work?

This tool looks for a participant’s likeness in AI-generated content, and if a match is found (like a deepfake of their face), the individual can review the content and request removal if it violates YouTube’s privacy guidelines.

It provides a powerful way to manage unauthorised AI impersonation, but detection does not guarantee removal. YouTube has a long history of protecting free expression and content in the public interest, including preserving content like parody and satire, even when it is used to critique world leaders or influential figures. The company will continue to carefully evaluate these exceptions when it receives removal requests.

Also, to guard against abuse and ensure the tool is only used by those it is meant to protect, participants are required to verify their identity before enrolling in likeness detection. YouTube specifies that the data provided during setup is used strictly for identity verification and to power this safety feature, and is not used to train Google’s generative AI models.

Additionally, in the blog post, YouTube mentioned that it supports the NO FAKES Act, which would establish a federal right of publicity and serve as a blueprint for international adoption, ensuring that technology serves, and never replaces, human creativity.
