Here are some of the regulatory developments of significance to broadcasters from the past two weeks, with links to where you can go to find more information as to how these actions may affect your operations.

  • The AM Radio for Every Vehicle Act was scheduled for a US Senate vote this week through an expedited process

Another state now requires clear disclosure of the use of artificial intelligence (“AI”) in political ads, joining others that have addressed concerns about deepfakes corrupting the political process. Michigan’s Governor Whitmer just signed a bill making Michigan the fifth state, after Texas, California, Washington, and Minnesota, to enact a law requiring clear identification of the use of AI in political ads.  As many media companies struggle with their own AI policies, and as the federal government has not acted to impose limits on the use of AI in political ads (see our posts here and here), it has been left to the states to adopt rules limiting these practices.

The Michigan bill, H.B. 5141, applies to “qualified political advertisements” which include any advertising “relating to a candidate for federal, state, or local office in this state, any election to federal, state, or local office in this state, or a ballot question that contains any image, audio, or video that is generated in whole or substantially with the use of artificial intelligence.”  A companion bill, H.B. 5143, defines “artificial intelligence” as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments, and that uses machine and human-based inputs to do all of the following: (a) Perceive real and virtual environments. (b) Abstract such perceptions into models through analysis in an automated manner. (c) Use model inference to formulate options for information or action.”

Continue Reading Michigan Becomes the Fifth State to Require Disclosure of the Use of AI in Political Ads

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC has until December 27th to comply with a court order requiring the agency to conclude its still-pending

Facebook parent Meta announced this week that it will require labeling of ads about elections and political and social issues that use artificial intelligence or other digital tools.  Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations that are being considered (and the limited rules that have thus far been adopted).  These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how those considerations are being translated into policy.

The Meta announcement sets out the situations in which labeling of digitally altered content will be required.  Disclosure of the digital alteration will be required when digital tools have been used to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that do not affect the message of the ad (the examples given include resizing, cropping an image, color correction, and image sharpening) will be permitted without disclosure.  But even these kinds of changes can trigger disclosure obligations if they are in fact consequential to the message.  In the past, we’ve seen allegations of attack ads using shading or other seemingly minor alterations to depict candidates in ways that make them appear more sinister or that otherwise convey some other negative message – presumably the kinds of uses that Meta is seeking to address.

This change will apply not just to US elections, but worldwide.  Already, I have seen TV pundits, when asked about the effect the new policy will have, suggest that what is really important is what other platforms, including television and cable, do to match this commitment.  So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do to censor political ads.

As detailed below, broadcasters, local cable companies, and direct broadcast satellite television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising.  Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content.  The only exception thus far recognized by the FCC has been for ads whose content violates federal criminal law.  There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads, given their inability to reject a candidate ad based on its content.  Note, however, that the no-censorship requirement applies only to candidate ads, not to those purchased by PACs, political parties, and other non-candidate individuals or groups.  So, even these traditional platforms could consider policies like the one Meta adopted for these non-candidate ads.

Continue Reading Meta to Require Labeling of Digitally Altered Political Ads (Including Those Generated By AI) – Looking at the Rules that Apply to Various Media Platforms Limiting Such Policies on Broadcast and Cable

In the Washington Post last weekend, an op-ed suggested that political candidates should voluntarily renounce the use of artificial intelligence in their campaigns.  The article seemed to be looking for candidates to take the actions that governments have thus far largely declined to mandate.  As we wrote back in July, despite calls from some for federal regulation of the use of AI-generated content in political ads, there has been little movement in that direction.

As we noted in July, a bill was introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, disclosing that the content was artificially generated (see press release here), but there has been little action on that legislation.  The Federal Election Commission released a “Notice of Availability” in August (see our article here) asking for public comment on whether it should start a rulemaking to determine whether the use of deepfakes and other synthetic media imitating a candidate violates FEC rules that forbid a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  Comments were filed last month (available here), and several (among them those of the Republican National Committee) question the FEC’s authority to adopt any rules in this area, both as a matter of statutory authority and under the First Amendment.  Such comments do not bode well for voluntary limits by candidates, nor for action from an FEC that by law has 3 Republican and 3 Democratic commissioners.

Continue Reading Artificial Intelligence in Political Ads – Media Companies Beware

The Federal Election Commission last week voted to open for public comment the question of whether to start a rulemaking proceeding to declare that “deepfakes” or other AI technology used to generate false images of a candidate doing or saying something, without a disclosure that the image, audio, or video was generated by artificial intelligence and portrays fictitious statements and actions, violates the FEC’s rules.  The FEC rule allegedly being violated is one that prohibits a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  In other words, the rule prohibits one candidate or committee from falsely issuing statements in the name of an opposing candidate or committee.  The FEC approved the Draft Notice of Availability to initiate the request for public comment on a second rulemaking petition filed by the group Public Citizen asking that this policy be adopted.  The Notice of Availability was published in the Federal Register today, initiating the comment period.  The deadline for comments is October 16, 2023.  This is just a preliminary request for comment on the merits of the Public Citizen petition and on whether the FEC should move forward with a more formal proceeding.

As we wrote in an article a few weeks ago, the FEC had a very similar Notice of Availability before it last month and took no action, after apparently expressing concerns that it does not have statutory authority to regulate deliberately deceptive AI-produced content in campaign ads.  Apparently, Public Citizen’s second petition adequately addressed that concern.  The Notice published in the Federal Register today at least starts the process, although it may be some time before any formal rules are adopted.  As we noted in our article, a few states have already taken action to require disclosures about AI content used in political ads, particularly in state and local elections.  Thus far, there is no similar federal requirement.

Continue Reading FEC Asks for Public Comment on Petition for Rulemaking on the Use of Artificial Intelligence in Political Ads

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC released its Report and Order setting the annual regulatory fees that broadcasters must pay for 2023. The Order

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • FEMA and the FCC announced that this year’s Nationwide EAS Test is scheduled for October 4, 2023 (with a back-up

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Around this time of year, the FCC typically issues a Public Notice reminding TV broadcasters, cable operators, satellite television services,

Stories about “deepfakes,” “synthetic media,” and other forms of artificial intelligence being used in political campaigns, including in advertising messages, have abounded in recent weeks.  There were stories about a super PAC running attack ads against Donald Trump in which Trump’s voice was allegedly synthesized to read one of his tweets condemning the Iowa governor for not supporting him in his Presidential campaign.  Similar ads have been run attacking other political figures, prompting calls from some for federal regulation of the use of AI-generated content in political ads.  The Federal Election Commission last month discussed a Petition for Rulemaking filed by the public interest group Public Citizen asking for a rulemaking on the regulation of these ads.  While the FEC staff drafted a “Notification of Availability” to tell the public that the petition had been filed and to ask for comments on whether the FEC should start a formal rulemaking on the subject, according to an FEC press release, no action was taken on that Notification.  A bill has also been introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, disclosing that the content was artificially generated (see press release here).

These federal efforts to require labeling of political ads using AI have yet to result in any regulation, but a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Other states, including Texas and California, already provide penalties for deepfakes used in political ads that lack a clear public disclosure when run within a certain period before an election (within 30 days in Texas and within 60 days in California).

Continue Reading Artificial Intelligence in Political Ads – Legal Issues in Synthetic Media and Deepfakes in Campaign Advertising – Concerns for Broadcasters and Other Media Companies