Artificial Intelligence in Political Ads

  • The FCC announced that oppositions are due August 27 in response to the National Association of Broadcasters’ petition for reconsideration.

The agenda for the Federal Election Commission’s August 15 Open Meeting was released last week, and it contains a proposed Notification of Disposition of the FEC’s review of a July 2023 petition for rulemaking filed by the advocacy group Public Citizen seeking to initiate a proceeding to address the use of artificial intelligence in campaign communications.  The FEC asked for public comment on that petition last August (see our article here).  The draft Notification and accompanying memorandum circulated by the three Republican members of the FEC propose to deny the request to initiate such a proceeding.  Because the FEC has equal representation of Democrats and Republicans, even if all of the Democratic Commissioners disagree with the position advocated in the Notification, the rulemaking would remain on hold for the foreseeable future, as there would not be the majority of Commissioners necessary to move it forward.

The Public Citizen petition asked that the FEC “clarify that the [Federal Election Campaign Act’s prohibition] against ‘fraudulent misrepresentation’ (52 U.S.C. § 30124) applies to deliberately deceptive AI-produced content in campaign communications.”  The draft Notification finds that the FEC lacks the statutory authority to initiate the proceeding – that the fraudulent misrepresentation language applies to a misrepresentation of the sponsor of a campaign ad, not to misleading messages in the ads themselves.  The Notification also contends that the FEC is “ill-positioned to take on the issue of AI regulation and does not have the technical expertise required to design appropriately tailored rules for AI-generated advertising.”  The draft Notification suggests that, before the FEC takes any action, Congress must first authorize it.

Last week, the FCC released the Notice of Proposed Rulemaking that the FCC Chairwoman first announced three months ago (see our article here), proposing to require that the use of artificial intelligence in political advertising be disclosed when an ad airs on broadcast stations, local cable systems, or satellite radio or TV.  This proposal was controversial even before the details were released, with many (including the Chair of the Federal Election Commission and some in Congress) questioning whether the FCC has the authority to adopt rules in this area, and whether it would be wise to adopt rules so close to the upcoming election (the Chairwoman had indicated an interest in completing the proceeding so that rules could be in place before November’s election).  The timing of the NPRM’s release seems to rule out any new rules becoming effective before this year’s election (see below), and the NPRM itself asks whether the FCC’s mandate to regulate in the public interest and other specific statutory delegations of power are sufficient to cover regulation in this area.  These fundamental questions are posed alongside many basic questions about how any obligation adopted by the Commission would work.

The FCC is proposing that broadcasters and the other media it regulates be required to transmit an on-air notice (immediately before, after, or during a political ad) identifying any ad that was created in whole or in part using AI.  In addition, broadcasters and other media subject to the rule would need to upload a notice to their online public files identifying any political ads created using AI.  The NPRM sets forth many questions for public comment, and also raises many practical and policy issues that the FCC and the industry will need to consider in evaluating these proposals.

We’ve written several times (see, for instance, our articles here, here, and here) about the action in state legislatures to regulate the use of artificial intelligence in political advertising – approximately 17 states have now adopted laws or rules, most requiring the labeling of “deep fakes” in such ads, and a few banning deep fakes entirely.  Action at the federal level seems to be picking up, with two significant developments in the last week.  This week, FCC Chairwoman Jessica Rosenworcel issued a Press Release announcing that the FCC would consider adopting rules requiring broadcasters and other media to include disclaimers when AI is used in political advertising.  Last week, the Senate Committee on Rules and Administration considered three bills addressing similar issues.  These actions, along with a long-pending Federal Election Commission proceeding to consider labeling obligations for federal election ads (see our article here), are the federal government’s attempts to address this issue – though, with the time left before the election, none of these proposals appears likely to have a significant effect during the current election cycle.

At the FCC, according to the Chairwoman’s Press Release, a draft Notice of Proposed Rulemaking is circulating among the Commissioners for their review.  The proposal would require broadcasters, local cable companies, and other regulated entities with political broadcasting obligations under FCC rules to include mandatory disclosures on political ads when AI is used.  The disclosures would be required on the air and in writing in a station’s FCC-hosted online public inspection file.  While the text of the NPRM is not yet public, the Press Release did provide some specifics about the questions that would be asked in this proceeding.

  • Perhaps the biggest regulatory news of the past week came not from the FCC, but instead from the Federal Trade Commission.

Artificial Intelligence was the talk of the NAB Convention last week.  Seemingly, no session took place without some discussion of the impact of AI.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban the use entirely, with most allowing it if labeled to disclose to the public that the images or voices they are experiencing did not actually happen in the way they are portrayed.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December, when we reported on the legislation adopted in Michigan late last year.

The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to carefully review all of these laws, as each is unique – there is no standard legislation that has been adopted across multiple states.  Some impose criminal penalties, while others simply impose civil liability.  Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter an election season in which political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.