
Meta’s Plan for Political Ads in 2024

Written by enCOMPASS Agency

As we’re all undoubtedly aware, 2024 is a major presidential election year, and it promises no shortage of legal, political, and cultural friction. But beyond the historic implications of the election itself, there are also a number of important considerations with regard to technology. In particular, 2024 could be a watershed year for policies on transparency and generative AI use within political advertising.

Already, Meta (the company behind Facebook, Instagram, and countless other platforms) has unveiled some guidelines related to political advertising. These stipulations set the tone for what the digital ad industry will be talking about for much of the coming year. As such, we thought it was worth exploring Meta’s new guidelines in depth.

AI and Political Advertising

A lot of the concern surrounding political advertising pertains to AI, specifically technologies that can mask the voice or appearance of a political candidate. Many experts have noted that this technology makes it very difficult for the average consumer to separate real footage from something computer-generated, significantly blurring the lines of truth and fiction within an already fraught political ecosystem.

Meta has weighed in on this issue with a set of parameters to dictate how generative AI can be used within political advertising… or at least, the extent to which advertisers must be up-front about their AI use.

In an official statement, Meta says: “We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy will go into effect in the new year and will be required globally.”

Sounds good, but what does it mean? To dig into the details a bit, Meta will now require advertisers to disclose any of the following:

  • Any instance in which AI-generated content depicts a person saying or doing something they didn’t actually say or do.
  • Any instance of AI depicting a realistic-looking person who doesn’t actually exist, or a realistic-looking event that never actually happened.
  • Any instance of AI depicting altered footage of a real-life event.
  • Any instance of AI depicting an event that allegedly occurred, but with falsified or computer-generated audio, video, or imagery.

Of course, such disclosures would be largely unnecessary if AI-generated content were uniformly fake-looking and easy to spot. And it’s certainly true that a lot of AI content does fail the authenticity test.

At the same time, political campaigns have grown increasingly deft at deploying AI, and the technology itself keeps getting more potent. In the past year, Ron DeSantis’s presidential campaign ran an ad that faked Donald Trump’s voice and likeness, and the results were not obviously fraudulent. Clearly, as AI advances, the need for these disclosures is acute.

What This Means for Advertisers

To get even deeper into the nitty-gritty of this policy, here’s another blurb from the Meta statement: “Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser. We will share additional details about the specific process advertisers will go through during the ad creation process.”

The implication for political campaigns? AI-enhanced ads may prove useful in swaying voters or in sowing seeds of misinformation, but a campaign that violates Meta’s disclosure policies may have its ad content pulled from the platform, or even its entire account suspended.

Those are steep penalties, and yet, they wouldn’t necessarily undo the damage done by an AI deepfake that goes viral. So, while these policies are certainly a step in the right direction, and suggest that companies like Meta are at least trying to be responsible about self-regulation, it’s unclear just how effective such policies will be in minimizing misinformation within the political landscape.

Onward into the Election Cycle

Here’s one more nugget from Meta, this time related to election interference: “We label state-controlled media on Facebook, Instagram and Threads so that users know when content is from a publication that may be wholly or partially under the editorial control of a government. As we have since 2020, we also block ads from state-controlled media outlets targeting people in the US.”

The fact that this policy even needs to exist suggests the high stakes for political advertising, particularly in the age of AI and social media. Again, we think these efforts from Meta are notable and praiseworthy, but whether they are sufficient remains to be seen. For now, we’ll keep an eye on these trends and keep you posted, right here at the enCOMPASS blog.
