YouTube is working on new tools to give content creators more control over AI-generated video that uses their voice or likeness. In an official post, YouTube said the new likeness-management technology will help protect its creators and partners while enabling them to “harness AI’s creative potential,” as part of its push for responsible AI development.
The first tool, described as “synthetic-singing identification technology,” will let artists and creators automatically detect and manage YouTube videos that use generative AI to simulate their singing voices. According to YouTube, the technology builds on its existing Content ID copyright-detection system and will be tested in a pilot program in early 2025.

The announcement follows YouTube’s promise in November of last year to give music companies a way to take down AI clones of artists. Given how quickly generative AI music tools have advanced and how accessible they have become, artists worry they will be used for plagiarism, impersonation, and copyright infringement.
In an open letter earlier this year, more than 200 musicians, including Billie Eilish, Pearl Jam, and Katy Perry, demanded greater accountability for unauthorized AI-generated mimicry, calling it an “assault on human creativity” and a threat to performers’ livelihoods.
YouTube is also developing a separate technology to detect facial deepfakes of singers, actors, creators, and athletes on the platform. That system is still in development, and YouTube has not given an estimated release date.
YouTube has also pledged to crack down on those who scrape the platform’s data to build artificial intelligence tools. Although YouTube’s stated policy is that “accessing creator content in unauthorized ways violates our Terms of Service,” companies including OpenAI, Apple, Anthropic, Nvidia, Salesforce, and Runway AI have reportedly continued to train their AI systems on thousands of scraped YouTube videos.
To curb this activity, YouTube says it is investing in scraping-detection technology and blocking scrapers from accessing the platform.
YouTube stated in its release that “as AI evolves, we believe it should enhance human creativity, not replace it. We’re dedicated to collaborating with our partners to make sure that their voices are amplified in future developments, and we’ll keep creating safeguards to address concerns and accomplish our shared objectives.”
Finally, YouTube says it is working on ways to give creators more control over how third-party AI companies can use their content on the platform, and it will share more details later this year.