YouTube's announcement today that it is revising its content guidelines in response to AI-generated deepfakes signifies an important shift in content moderation on the platform. In a blog post, Jennifer Flannery O'Connor and Emily Moxley, Vice Presidents of Product Management at YouTube, said these changes aim to strike a balance between encouraging generative AI's creative potential and safeguarding the community against potential misuse.
In reality, the move also serves to appease YouTube's partners in the music industry, who are irate at the proliferation and popularity of viral hits such as "Heart on My Sleeve," which was created using AI-generated voices of Drake and The Weeknd.
A critical aspect of these updates is the requirement for creators to disclose AI-generated content, particularly when it realistically alters or synthesizes material. Failure to do so could result in takedowns and demonetization. Because existing AI tools have shown limited success in identifying undisclosed synthetic media, enforcing compliance with these guidelines presents a significant challenge for YouTube.
Matters get more complicated when it comes to removing videos that simulate identifiable individuals by incorporating their faces or voices. YouTube plans to take a nuanced approach, weighing factors such as whether the content qualifies as parody or satire, or whether it involves public figures. The rules will follow legal standards around fair use and defamation, but since there is no specific legal framework for AI deepfakes, YouTube will create and enforce its own set of rules.
For AI-generated music, YouTube is introducing stricter controls. Unlike other AI-generated content, there will be no allowances for parody or satire when AI mimics an artist's unique singing or rapping voice. This stipulation reflects the platform's need to maintain strong relations with the music industry, which is crucial to its operational model, especially as it competes with platforms like TikTok for music discovery.
In contrast to this cautious approach toward the music industry, Google, YouTube's parent company, has scraped troves of content from the internet to power its own AI ambitions, including vast amounts of copyrighted material. Juxtaposed with the special treatment given to the music industry, this highlights a disparity in how different types of content and creators are treated when it comes to AI.
This tension highlights the broader challenges faced by tech giants seeking to advance innovation while still protecting intellectual property rights. In the absence of comprehensive legal frameworks governing AI and digital content, reconciling these competing goals will only grow more complicated as generative technologies become more sophisticated. The lack of clear regulations around emerging areas like synthetic media leaves platforms to navigate blurred lines, codifying their own makeshift rules to fill the legal void. Until legislators and courts catch up, the onus falls on companies like Google and YouTube to strike an elusive balance between progress and property through stopgap policies.