We’ve written several times (see for instance our articles here, here, and here) about all of the action in state legislatures to regulate the use of artificial intelligence in political advertising – approximately 17 states have now adopted laws or rules, most requiring the labeling of “deep fakes” in such ads, and a few banning deep fakes entirely.  Action on the federal level seems to be picking up, with two significant actions in the last week.  This week, FCC Chairwoman Jessica Rosenworcel issued a Press Release announcing that the FCC would be considering the adoption of rules requiring broadcasters and other media to include disclaimers when AI is used in political advertising. Last week, the Senate Committee on Rules and Administration considered three bills addressing similar issues.  These actions, along with a long-pending Federal Election Commission proceeding to consider labeling obligations for federal election ads (see our article here), represent the federal government’s attempts to address this issue – though, given the time left before the election, none of these proposals appears likely to have a significant effect during the current election cycle.

At the FCC, according to the Chairwoman’s Press Release, a draft Notice of Proposed Rulemaking is circulating among the Commissioners for their review.  The proposal would require broadcasters, local cable companies, and other regulated entities with political broadcasting obligations under FCC rules to include mandatory disclosures on political ads when AI is used.  The disclosures would be required on the air and in writing in a station’s FCC-hosted online public inspection file.  While the text of the NPRM is not yet public, the Press Release did provide some specifics as to the questions that would be asked in this proceeding.

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Perhaps the biggest regulatory news of the past week came not from the FCC, but instead from the Federal Trade Commission

Artificial Intelligence was the talk of the NAB Convention last week.  Seemingly, no session took place without some discussion of the impact of AI.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban the use entirely, though most would allow the use if it is labeled to disclose to the public that the images or voices being depicted did not actually occur in the way that they are portrayed.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December when we reported on the legislation adopted in Michigan late last year.

The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to carefully review all of these laws.  Each is unique – there is no standard legislation that has been adopted across multiple states.  Some impose criminal penalties, while others simply impose civil liability.  Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter this election season, where political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.

Here are some of the regulatory developments of significance to broadcasters from this past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC announced several dates and deadlines in proceedings of importance to broadcasters:

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can find more information as to how these actions may affect your operations.

  • The debate over the AM for Every Vehicle Act intensified this week, with the Wall Street Journal’s Editorial Board publishing an article

Another state has joined the list of those that require clear disclosure of the use of artificial intelligence (“AI”) in political ads, addressing concerns about deep fakes corrupting the political process. Michigan’s Governor Whitmer just signed a bill that adds Michigan to the 4 other states (Texas, California, Washington, and Minnesota) that have enacted laws requiring the clear identification of the use of AI in political ads.  As many media companies struggle with their policies on AI, and as the federal government has not acted to impose limits on the use of AI in political ads (see our posts here and here), it has been up to the states to adopt rules limiting these practices.

The Michigan bill, H.B. 5141, applies to “qualified political advertisements” which include any advertising “relating to a candidate for federal, state, or local office in this state, any election to federal, state, or local office in this state, or a ballot question that contains any image, audio, or video that is generated in whole or substantially with the use of artificial intelligence.”  A companion bill, H.B. 5143, defines “artificial intelligence” as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments, and that uses machine and human-based inputs to do all of the following: (a) Perceive real and virtual environments. (b) Abstract such perceptions into models through analysis in an automated manner. (c) Use model inference to formulate options for information or action.”

Facebook parent Meta announced this week that it will require labeling on ads about elections and political and social issues that use artificial intelligence or other digital tools. Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations that are being considered (and the limited rules that have thus far been adopted).  These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how these considerations are being translated into policy.

The Meta announcement sets out the situations where labeling of digitally altered content will be required.  Disclosure of the digital alteration will be required when digital tools have been used to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that don’t impact the message of the ad (the examples given include size adjusting, cropping an image, color correction, and image sharpening) will be permitted without disclosure.  But even these changes can trigger disclosure obligations if they are in fact consequential to the message.  In the past, we’ve seen allegations of attack ads using shading or other seemingly minor changes to depict candidates in ways that make them appear more sinister or that otherwise convey some other negative message – presumably the kinds of uses that Meta is seeking to address with its disclosure requirement.

This change will apply not just to US elections, but worldwide.  Already, I have seen TV pundits, when asked about the effect that the new policy will have, suggest that what is really important is what other platforms, including television and cable, do to match this commitment.  So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do in censoring political ads.

As detailed below, broadcasters, local cable companies, and direct broadcast satellite television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising.  Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content.  The only exception thus far recognized by the FCC has been for ads with content that violates federal criminal law.  There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads given their inability to reject a candidate ad based on its content.  Note, however, that the no-censorship requirement applies only to candidate ads, not to those purchased by PACs, political parties, and other non-candidate individuals or groups.  So, policies like the one adopted by Meta could be considered for these non-candidate ads even by these traditional platforms.

In the Washington Post last weekend, an op-ed article suggested that political candidates should voluntarily renounce the use of artificial intelligence in their campaigns.  The article seemed to be looking for candidates to take actions that governments have thus far largely declined to mandate.  As we wrote back in July, despite calls from some for federal regulation of the use of AI-generated content in political ads, there has been little movement in that direction.

As we noted in July, a bill was introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, disclosing that they were artificially generated (see press release here), but there has been little action on that legislation.  The Federal Election Commission released a “Notice of Availability” in August (see our article here) asking for public comment on whether it should start a rulemaking to determine whether the use of deepfakes and other synthetic media imitating a candidate violates FEC rules that forbid a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  Comments were filed last month (available here), and several (including those of the Republican National Committee) question the authority of the FEC to adopt any rules in this area, both as a matter of statutory authority and under the First Amendment.  Such comments do not bode well for voluntary limits by candidates, nor for action from an FEC that by law has 3 Republican and 3 Democratic commissioners.

The Federal Election Commission last week voted to open for public comment the question of whether to start a rulemaking proceeding to declare that “deepfakes” or other AI technology used to generate false images of a candidate doing or saying something, without a disclosure that the image, audio, or video was generated by artificial intelligence and portrays fictitious statements and actions, violates the FEC’s rules.  The FEC rule that is allegedly being violated is one that prohibits a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  In other words, the FEC rule prohibits one candidate or committee from falsely issuing statements in the name of an opposing candidate or committee.  The FEC approved the Draft Notice of Availability to initiate the request for public comment on a second rulemaking petition filed by the group Public Citizen, asking for this policy to be adopted.  This Notice of Availability was published in the Federal Register today, initiating the comment period.  The deadline for comments is October 16, 2023.  This is just a preliminary request for comments as to the merits of the Public Citizen petition and whether the FEC should move forward with a more formal proceeding.

As we wrote in an article a few weeks ago, the FEC had a very similar Notice of Availability before it last month and took no action, after apparently expressing concerns that the FEC does not have statutory authority to regulate deliberately deceptive AI-produced content in campaign ads.  Apparently Public Citizen’s second petition adequately addressed that concern.  The Notice published in the Federal Register today at least starts the process, although it may be some time before any formal rules are adopted.  As we noted in our article, a few states have already taken action to require disclosures about AI content used in political ads, particularly those in state and local elections.  Thus far, there is no similar federal requirement.

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC released its Report and Order setting the annual regulatory fees that broadcasters must pay for 2023.