YouTube is putting warning labels on realistic artificial intelligence-generated videos as the company moves to protect users from misinformation.
According to the company, content creators are required to label any video they upload that was made, even in part, with realistic AI generation tools.
“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” YouTube Vice Presidents of Product Management Jennifer Flannery O’Connor and Emily Moxley said Tuesday.
The labels will be required only on content that’s realistic, according to YouTube’s standards. This means videos showing fictitious events or real people doing things they didn’t really do would need the labels.
Creators who fail to comply with the policy risk having their videos removed or their accounts suspended.
The new labels come as tech platforms grapple with rapidly developing AI technology. Platforms like TikTok and Meta have added tags to better inform users about the authenticity of the content they’re watching, and Facebook has even limited the use of AI in advertising.