Meta Sets New Guidelines for AI-Generated Content in Political Ads

Meta aims to stay ahead of the anticipated surge in AI-generated content in political campaigns.

Following its announcement earlier this week, Meta has issued rules on the use of AI in political ads. It says it "will begin allowing advertisers" to use generative AI, but only for certain types of promotions.

"We will allow the use of generative AI in promotions that are: for government entities; on topics of national importance; and that are not likely to lead to harm or violence," Meta said.

"We're announcing a new policy to help people understand when a social issue, election or political ad on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy will go into effect in the new year and will be required globally."

Meta had already taken this step in part, in response to multiple reports of AI-based manipulation within political ads.

But now, it's making it official, with specific guidelines around what's not allowed within AI-based promotions, and the disclosures required for them.

Under the new policy, advertisers will be required to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic-sounding audio, that has been digitally created or altered.

Specifically, disclosure will be required whenever an ad:

Depicts a real person saying or doing something that they never said or did.
Depicts a realistic-looking person that does not exist, or a realistic-looking event that did not happen.
Presents manipulated footage of a real event.
Depicts a real event that allegedly happened, but that is not a true image, video, or audio recording of that event.
In one sense, these disclosures might seem unnecessary, because most AI-generated content still looks and sounds obviously fake.
Political campaigners, however, are already using AI-generated content to influence voters, with realistic-looking and -sounding depictions of their opponents.

For example, the latest advertisement by U.S. Presidential candidate Ron DeSantis featured an AI-generated image of Donald Trump hugging Anthony Fauci, while a voice simulation of Trump was used in another ad.
Some of these will be glaringly apparent to most people, but if such depictions sway even one voter, that's an unfair, misleading tactic. And, frankly, AI content like this is going to influence some portion of voters, even with these new controls.

As Meta explains: "Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If we determine that an advertiser doesn't disclose as required, we will reject the ad, and repeated failure to disclose can lead to penalties against the advertiser. We will share additional details about the specific process advertisers will go through when building an ad."

So the risk here is that your ad gets rejected, and repeated offenses could see your ad account suspended.

But you can already see how political campaigners might use such depictions to swing voters in the final days before the polls.

What if I created a really damaging AI video clip of a political rival, and paid to promote it on the last day of the campaign, spreading it in the final hours before the political ad blackout period?

That has to have some effect, right? And even if my ad account gets suspended as a result, it might be worth the risk if the clip seeds enough doubt through a realistic-enough portrayal and message.

It seems inevitable that this is going to become more problematic, and no platform yet has all the answers on how to address it.

But Meta's implementing enforcement rules based on what it can do thus far.

How effective they'll be is the next test.

Blog | 2024-11-14