Starting in 2024, Meta will require political and issue advertisers on its platforms to disclose when their ads are "digitally created or altered" through AI.
New ads about elections, politics, and social issues will soon require one added step on Facebook and Instagram, one that advertisers will handle when submitting new ads.
Advertisers will only be required to make these disclosures when an ad contains a photorealistic image or video, or realistic-sounding audio, and only in a few categories.
Meta is cracking down on deepfakes: digitally manipulated media designed to mislead. The company will require disclosures for ads that were created or altered to show a person doing or saying something they didn't.
Other scenarios that warrant disclosure include ads that feature "photo-realistic people that do not actually exist or events that, although they look realistic, never occurred," including altered imagery from real-life events, as well as ads that depict a "realistic event" that "allegedly occurred" but that are "not a true image, video, or audio recording of the event."
Meta is quick to point out that routine digital edits, such as sharpening or cropping a picture, do not require disclosure. Digitally altered ads will be flagged in Meta's Ad Library, a searchable database that collects the paid advertisements running on the company's platforms.
"Advertisers running these ads do not have to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad," Meta's president of global affairs, Nick Clegg, wrote in a press release.
The new disclosure policy for social and political ads comes against the backdrop of news that Meta will place new limits on the kinds of ads its own generative AI tools can be used to create.
Last month, the company rolled out a set of new AI tools aimed at advertisers. The tools let advertisers quickly generate multiple variations of an ad and adjust images to fit different aspect ratios, among other uses.
Those AI tools are off limits for ads related to elections, politics, and social issues, as Reuters first reported. The company announced this week that it will also bar advertisers in other "potentially sensitive topics" from using the tools, including housing, employment, health, pharmaceuticals, and financial services. Those are all areas where the company could easily land in regulatory hot water given the current attention on AI, or where Meta has already gotten itself in trouble, as with discriminatory housing ads on Facebook.
Lawmakers were already examining the intersection of AI and political advertising. This spring, Sen. Amy Klobuchar (D-MN) and Rep. Yvette Clarke (D-NY) introduced a bill that would mandate disclaimers on AI-created or AI-altered political ads.
Deceptive AI could fundamentally upend our democracy, leaving voters to wonder whether the videos they see of candidates are real or fake, Klobuchar said in response to Meta's new restrictions on its in-house AI tools. "This decision by Meta is a step in the right direction, but we can't rely on voluntary commitments alone."
While Meta puts guardrails on AI use in political and social issue ads, some platforms are happy to stay out of that business altogether. TikTok doesn't allow political advertising at all, banning paid political content across both brand ads and branded content.