Facebook parent Meta announced this week that it will require labeling of ads about elections and political and social issues that use artificial intelligence or other digital tools. Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations being considered (and the limited rules that have thus far been adopted). These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how these considerations are being translated into policy.
The Meta announcement sets out the situations in which labeling of digitally altered content will be required. Disclosure of the digital alteration is required when digital tools have been used to:
- Depict a real person as saying or doing something they did not say or do; or
- Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
- Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.
The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that don’t impact the message of the ad (the examples given are size adjusting, cropping an image, color correction, and image sharpening) will be permitted without disclosure. But even these changes can trigger disclosure obligations if they are in fact consequential to the message. In the past, we’ve seen allegations of attack ads using shading or other seemingly minor changes to depict candidates in ways that make them appear more sinister or that otherwise convey some other negative message – presumably the kinds of uses that Meta’s disclosure requirement is meant to capture.
This change will apply not just to US elections, but worldwide. TV pundits asked about the likely effect of the new policy have already suggested that what really matters is whether other platforms, including television and cable, match this commitment. So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do in censoring political ads.

As detailed below, broadcasters, local cable companies, and direct broadcast satellite television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising. Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content. The only exception thus far recognized by the FCC has been for ads whose content violates federal criminal law. There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads, given its inability to reject a candidate ad based on its content.

Note, however, that the no-censorship requirement applies only to candidate ads, not to those purchased by PACs, political parties, and other non-candidate individuals or groups. So, policies like the one adopted by Meta could be considered for these non-candidate ads even on these traditional platforms.