YouTube announced on Monday that it will now require creators to disclose to viewers when realistic content was made with AI. The platform is introducing a new tool in Creator Studio that requires creators to declare when content that viewers could mistake for a real person, place, or event was created with altered or synthetic media, including generative AI.
The new disclosures are meant to prevent viewers from being fooled into believing that a synthetically created video is genuine, as new generative AI tools make it increasingly difficult to distinguish between what's real and what's fake. The launch comes at a time when experts have warned that AI and deepfakes will pose a significant threat during the upcoming U.S. presidential election.
YouTube first signaled the change in November, when it said it would update its AI policies as part of a larger rollout; today's announcement details how those disclosures will work.
YouTube says the new policy does not require creators to disclose content that is clearly unrealistic or animated, such as someone riding a unicorn through a fantastical world. Nor does it require disclosure when generative AI was used only for production assistance, such as generating scripts or automatic captions.
Instead, YouTube said the policy targets videos that use the likeness of a realistic person. For example, creators will have to disclose when they have digitally altered content to "replace the face of one individual with another's or synthetically generating a person's voice to narrate a video," YouTube says.
Creators will also have to disclose content that alters footage of real events or places, such as making it appear that an actual building is on fire. Similarly, they will have to disclose realistic depictions of fictional major events, such as a tornado moving toward a real town.
As YouTube explains, the label will appear in the expanded description for most videos, while videos touching on sensitive topics like health and news will carry a more prominent label on the video itself.
The labels will begin appearing across all YouTube formats in the coming weeks, starting with the YouTube mobile app, followed by desktop and TV.
YouTube says it will consider enforcement measures against creators who consistently choose not to use the labels. The company has previously indicated that it may add a label itself in certain cases where a creator has not provided one, especially when the content could confuse or mislead viewers.
With these changes, YouTube is adjusting its policies ahead of an expected wave of AI-generated clips.