Regulation of Online Platforms

Facebook parent Meta announced this week that it will require labels on ads about elections and political or social issues that were created or altered using artificial intelligence or other digital tools. Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations being considered (and the limited rules that have thus far been adopted).  These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how those considerations are being translated into policy.

The Meta announcement sets out the situations in which digitally altered content must be labeled.  Disclosure of the digital alteration will be required when digital tools have been used to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that don’t affect the message of the ad (the examples given include resizing, cropping an image, color correction, and image sharpening) will be permitted without disclosure.  But even these changes can trigger disclosure obligations if they are in fact consequential to the message.  In the past, we’ve seen allegations of attack ads using shading or other seemingly minor changes to depict candidates in ways that make them appear more sinister or that otherwise convey a negative message – presumably the kinds of uses that Meta’s new policy is aimed at.

This change will apply not just to US elections, but worldwide.  Already, I have seen TV pundits, when asked about the effect of the new policy, suggest that what really matters is what other platforms, including television and cable, do to match this commitment.  So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do to censor political ads.

As detailed below, broadcasters, local cable companies, and direct broadcast satellite television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising.  Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content.  The only exception recognized by the FCC thus far has been for ads whose content violates federal criminal law.  There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads, given their inability to reject a candidate ad based on its content.  Note, however, that the no-censorship requirement applies only to candidate ads, not to those purchased by PACs, political parties, and other non-candidate individuals or groups.  So, policies like the one adopted by Meta could be considered for these non-candidate ads even by these traditional platforms.

In our summary of last week’s regulatory actions, I was struck by a common thread in comments made by several FCC Commissioners in different contexts – the FCC’s role in regulating Internet content companies.  As we noted in our summary, both Republican commissioners issued statements last week in response to a request by a public interest group that the FCC block Elon Musk’s acquisition of Twitter.  The Commissioners stated that the FCC had no role to play in reviewing that acquisition, as Twitter does not appear to own regulated communications assets, so the FCC would not be called upon to review any application for the acquisition of that company.  The Commissioners also noted concerns with the First Amendment implications of trying to block the acquisition because of Musk’s hands-off position on the regulation of content on the platform, but their principal concern was with FCC jurisdiction (Carr Statement; Simington Comments).  In the same week, FCC Chairwoman Jessica Rosenworcel, in remarks to a disability rights organization, talked about plans for more FCC forums on the accessibility of Internet content, following up on the sessions that we wrote about here.

The ability of the FCC to regulate internet content and platforms depends on statutory authority.  In holding the forums on captioning of online video content, the FCC could look to the 21st Century Communications and Video Accessibility Act, which included language asking the FCC to examine the accessibility of video content used on internet platforms.  In other areas, the FCC’s jurisdiction is not as clear, but calls regularly arise for the FCC to regulate content that, as we have written in other contexts, looks more and more like broadcast content and competes directly with it.