no censorship of political ads

Last week, the FCC released a Notice of Proposed Rulemaking, first announced by the FCC Chairwoman three months ago (see our article here), proposing to require that the use of artificial intelligence in political advertising be disclosed when an ad airs on broadcast stations, local cable systems, or satellite radio or TV.  This proposal was controversial even before the details were released, with many (including the Chair of the Federal Election Commission and some in Congress) questioning whether the FCC has the authority to adopt rules in this area, and asking whether it would be wise to adopt rules so close to the upcoming election (the Chairwoman had indicated an interest in completing the proceeding so that rules could be in place before November’s election).  The timing of the NPRM’s release seems to rule out any new rules becoming effective before this year’s election (see below), and the NPRM itself asks whether the FCC’s mandate to regulate in the public interest and other specific statutory delegations of power are sufficient to cover regulation in this area.  The NPRM thus poses these fundamental questions, along with many basic questions about how any obligation adopted by the Commission would work.

The FCC is proposing that broadcasters and the other media it regulates be required to transmit an on-air notice (immediately before, during, or after a political ad) identifying any ad that was created in whole or in part using AI.  In addition, broadcasters and other media subject to the rule would need to upload a notice to their online public files identifying any political ads created using AI.  The NPRM sets forth many questions for public comment – and also raises many practical and policy issues that will need to be considered by the FCC and the industry in evaluating these proposals.
Continue Reading The FCC Proposes Requirements for Disclosures About the Use of Artificial Intelligence in Political Ads – Looking at Some of the Many Issues for Broadcasters

With the verdict in the first criminal case against former President (and now candidate) Trump having been handed down, we can envision a whole raft of attack ads airing before the November elections.  The verdict is also likely to increase political divisions within the country, potentially fueling nasty attack ads in political races from the top of the ballot to the local races that appear toward its end.  The use of artificial intelligence in such ads raises the prospect of even nastier attacks, and it raises a whole host of legal issues beyond defamation worries – though it raises those too (see our article here on defamation concerns about AI-generated content, and our recent articles here and here about other potential FCC and state law liability arising from such ads).  Given the potential for a nasty election season getting even nastier, we thought that we would revisit our warning that broadcasters need to assess the content of attack ads – particularly those from non-candidate groups.

As we have written before, broadcasters (and local cable companies) are forbidden from editing the message of a candidate or rejecting a candidate’s ad based on what it says, except in extreme circumstances where the ad itself would violate federal criminal law, and possibly if it contains a false EAS alert (see, for instance, our articles here, here, and here).  Section 315 of the Communications Act forbids a broadcaster or a local cable operator from censoring a candidate ad.  Because broadcasters cannot censor candidate ads, the Supreme Court has ruled that broadcasters are immune from any liability for the content of those ads.  (Note that this protection applies only to over-the-air broadcasters and local cable companies – the no-censorship rule does not apply to cable networks or online distribution – see our articles here and here.)  Other protections, such as Section 230, may apply to candidate ads placed on online platforms, but the circumstances in which the ad became part of the program offering need to be considered.
Continue Reading Trump Verdict Raises Concerns About A Nasty Election Campaign Getting Nastier – Looking at a Broadcaster’s Potential Liability for Attack Ads

Artificial Intelligence was the talk of the NAB Convention last week – seemingly, not a session took place without some discussion of its impact.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban such use entirely, but most allowing it if the ad is labeled to disclose to the public that the images or voices they are experiencing did not actually happen in the way they are portrayed.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December, when we reported on the legislation adopted in Michigan late last year.

The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to review all of these laws carefully.  Each law is unique – there is no standard legislation that has been adopted across multiple states.  Some impose criminal penalties, while others impose only civil liability.  Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter an election season in which political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.
Continue Reading 11 States Now Have Laws Limiting Artificial Intelligence, Deep Fakes, and Synthetic Media in Political Advertising – Looking at the Issues

Facebook parent Meta announced this week that it will require labeling of ads about elections and political and social issues that use artificial intelligence or other digital tools.  Earlier this week, we wrote about the issues that AI in political ads poses for media companies and about some of the governmental regulations that are being considered (and the limited rules that have thus far been adopted).  These concerns are prompting all media companies to consider how AI will affect them in the coming election, and Meta’s announcement shows how these considerations are being translated into policy.

The Meta announcement sets out the situations where digitally altered content must be labeled.  Disclosure of the digital alteration will be required when digital tools have been used to:

  • Depict a real person as saying or doing something they did not say or do; or
  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or
  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

The Meta announcement makes clear that using AI or other digital tools to make inconsequential changes that don’t impact the message of the ad (its examples include size adjustments, cropping an image, color correction, and image sharpening) will be permitted without disclosure.  But even these changes can trigger disclosure obligations if they are in fact consequential to the message.  In the past, we’ve seen allegations of attack ads using shading or other seemingly minor changes to depict candidates in ways that make them appear more sinister or that otherwise convey some negative message – presumably the kinds of uses that Meta’s disclosure requirement is meant to capture.

This change applies not just to US elections, but worldwide.  I have already seen TV pundits, when asked about the effect of the new policy, suggest that what really matters is what other platforms, including television and cable, do to match this commitment.  So we thought that we would look at the regulatory schemes that, in some ways, limit what traditional electronic media providers can do to police political ads.  As detailed below, broadcasters, local cable companies, and direct broadcast satellite (DBS) television providers are subject to statutory limits under Section 315 of the Communications Act that forbid them from “censoring” the content of candidate advertising.  Section 315 essentially requires that candidate ads (whether from a federal, state, or local candidate) be run as they are delivered to the station – they cannot be rejected based on their content.  The only exception thus far recognized by the FCC has been for ads whose content violates federal criminal law.  There is thus a real question as to whether a broadcaster or cable company could impose a labeling requirement on candidate ads, given their inability to reject a candidate ad based on its content.  Note, however, that the no-censorship requirement applies only to candidate ads, not those purchased by PACs, political parties, and other non-candidate individuals or groups.  So, policies like that adopted by Meta could be considered for these non-candidate ads even by these traditional platforms.
Continue Reading Meta to Require Labeling of Digitally Altered Political Ads (Including Those Generated By AI) – Looking at the Rules that Apply to Various Media Platforms Limiting Such Policies on Broadcast and Cable

There is but a week to go before the mid-term elections, and political ads blanket the airwaves across the country.  Based on discussions that I have had with many attorneys, broadcasters, and other campaign observers, the ads this year have been particularly aggressive.  Some publications have even suggested that, in the waning days of the campaign, the ads may become even worse as desperate campaigns look for some last-minute claim that could turn the tide in an election.  In this rush to election day, broadcasters need to be alert for allegations that an attack ad from a non-candidate group is false or defamatory, because in certain instances the ad could result in a claim against the broadcaster.

As we have written before, broadcasters (and local cable companies) are forbidden from censoring the message of a candidate (see, for instance, our articles here and here).  Section 315 of the Communications Act forbids a broadcaster or a local cable operator from censoring a candidate ad.  Because broadcasters cannot censor candidate ads, the Supreme Court has ruled that broadcasters are immune from any liability for the content of those ads.  (Note that this protection applies only to broadcasters and local cable companies – the no-censorship rule does not apply to online distribution – see our articles here and here – so other factors need to be considered when dealing with online political ads.)  But some have taken that to mean that broadcasters have no fear of liability for any political ad.  As I explained in a recent interview with a Detroit television station, that is not true – broadcasters do theoretically face potential liability if they run an ad from a non-candidate group either knowing the ad to be false, or by continuing to run a false ad after being put on notice of its falsity and ignoring that notice (see also this article about the distinction between candidate and non-candidate ads, and how the media’s coverage of campaigns can overlook it).  In 2020, President Trump’s campaign brought a lawsuit against a Wisconsin television station alleging that a PAC ad run on the station was false and defamatory (see our articles here and here on that suit).  In this election cycle, there are press reports of a lawsuit by Senate candidate Evan McMullin against a political party’s campaign committee and three local TV station owners for running an ad that allegedly edited remarks by McMullin to make it seem as if he said all Republicans were racist (see articles here and here).  Even Roy Moore, the Senate candidate defeated in Alabama several years ago, successfully pursued a defamation suit against the sponsor of an ad that Moore claimed falsely accused him of improper conduct (that decision was not against a broadcaster, but against the ad’s sponsor – see report here).
Continue Reading With A Week to Go Before the Midterm Elections, Watch for Last Minute Unfounded Attack Ads – The Potential Liability of Stations for False Claims in Ads from PACs, Parties and Other Noncandidate Groups

Facebook will disable “new” political ads the week before this year’s November mid-term election (see its post on this policy here), just as many broadcast stations will be struggling with commercial inventory issues, trying to get last-minute political ads on the air without having to dump the regular commercial advertisers who will be just starting to ramp up their campaigns for the holiday season.  We’ve written previously about how the legal policies that govern Facebook and other online platforms differ from those that govern broadcast, local cable, and direct broadcast satellite (DBS) political ad sales.  Many of the policies adopted by these online platforms could not be adopted by broadcasters, local cable, and DBS companies.  In light of Facebook’s recent announcement and the upcoming election, we thought that we would recap some of our previous reviews of this issue.

In June 2021, we wrote about Facebook’s plans to end its policy of not subjecting posts by elected officials to the same level of Oversight Board scrutiny that it applies to other platform users.  Facebook’s stated policy had been that the newsworthiness of posts by politicians and elected officials outweighed the uniform application of its Community Standards – although it made exceptions for calls to violence, questions of election integrity, and posts linking to other offending content.  Just a year before, there were calls for Facebook to take more aggressive steps to police misinformation on its platforms.  These calls grew out of the debate over the need to revise Section 230 of the Communications Decency Act, which insulates online platforms from liability for posts by unrelated parties on those platforms (see our article here on Section 230).
Continue Reading Facebook to Reject New Political Ads the Week Before the November Election – Why Broadcasters Can’t Do That

Last week, much was made of an FCC Media Bureau decision rejecting the “reasonable access” claim of a write-in candidate for a Congressional seat in Ohio against radio stations that, after initially running his spots, pulled them because he had not made a “substantial showing” of his candidacy.  Candidates for federal office (the US House of Representatives, the US Senate, and the Presidency) are entitled to buy reasonable amounts of commercial time on all broadcast stations once those candidates are “legally qualified.”  In other words, commercial broadcast stations cannot refuse to run ads for legally qualified candidates for federal elective office.  We wrote more about reasonable access here, including considerations about how much time is “reasonable.”

In most cases, the question of whether a candidate is legally qualified for FCC purposes is a relatively simple one.  A station looks to see whether the candidate has filed the required paperwork and qualified for a place on the ballot in the district in which they are seeking office.  The case decided last week was one of the hard ones, where the candidate did not qualify for a place on the ballot but argued that he was a write-in candidate for the congressional seat.  The FCC has recognized that write-in candidates can be legally qualified, and thus guaranteed reasonable access and the other protections afforded to candidates under FCC rules, including the right not to have their commercial messages censored by the station (see our posts here and here on the no-censorship rule) – but they must make a substantial showing that their candidacy is legitimate.  The FCC has recognized that it would put broadcasters in an untenable position if anyone could, on a whim, declare themselves a write-in candidate and thereby be entitled to buy uncensored advertising time (at lowest unit rates in the 45 days before a primary or the 60 days before a general election – see our post here on lowest unit rates) on any commercial broadcast station they wanted.  So the FCC requires this substantial showing – and the adequacy of that showing was the issue in last week’s decision, as it has been for other write-in candidates in past elections.
Continue Reading Reasonable Access and the Problem Candidate – FCC Declares a Write-In Candidate Not Entitled to Buy Radio Spots, But That May Not Be the End of the Story

Here are some of the regulatory developments of significance to broadcasters from the last week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Following up on its proposals from last summer to clean up radio technical rules that were inconsistent, outdated, or inaccurate,

Ads planned to run in yesterday’s Super Bowl by Republican candidates in primaries for the 2022 Senate elections drew comment and controversy even before the game, with some calls to block the ads from the air.  Ads for a candidate in Pennsylvania used the “Let’s Go Brandon” language generally acknowledged to be an allusion to a profanity directed at President Biden (see article here).  In Arizona, a Senate candidate’s ad showed the candidate in a fictionalized Old West high-noon shootout with characters playing President Biden, Nancy Pelosi, and Senator Mark Kelly (see article here), which some found particularly offensive because it associated gun violence with Kelly, whose wife, Gabby Giffords, was a victim of such violence while serving in Congress.  There were calls for the stations running the game to reject these ads, or for the FCC to penalize stations for airing them.  While popular sentiment may call for such actions, the law does not allow that to happen.

We have written about this issue many times before (see, for instance, our refresher on the rules for candidate ads here and our article here), yet these issues still come up whenever a legally qualified candidate produces a controversial ad.  Broadcasters need to know the rules so that they don’t pull an ad that they are not allowed to censor under the FCC’s rules, and so that they don’t run one for which they could in fact have liability.
Continue Reading Controversial Super Bowl Political Ads on Local Stations – Why They Can’t Be Pulled

A recent controversial court of appeals decision on a defamation claim brought by Congressman Devin Nunes sends a signal to broadcasters about the care they need to give to reviewing commercial messages – particularly political attack ads – when questions are raised about the truth of the assertions made in those ads.  As we have written before, broadcasters are immune from civil liability for defamation claims when they broadcast an ad from the campaign of a legally qualified candidate, as a station cannot censor a candidate ad.  Because broadcasters must transmit the ad as produced, they are immune from liability for its content.  But ads from non-candidate groups, including political parties and PACs, can be censored by stations – so stations that decide to run such ads are subject to liability for their content.  Under Supreme Court precedent, defamation of a public figure (like a political candidate) requires that material transmitted to the public be false and injure the candidate and, unique to public figures, that the material be transmitted with “actual malice.”  Actual malice means that the material was transmitted either with knowledge that it was false or with reckless disregard for whether it was true.  See our article here about the analysis of this issue in other cases.  When a broadcaster receives objections alleging that content in an ad is false, it can be argued that the station has been put on notice and has an obligation to assess the truth of the ad – and thus would need to take the ad down if it includes defamatory claims.

We recently wrote about the opinions from two Supreme Court justices suggesting that it should be easier for public figures to prove defamation claims.  The case that led to the recent court of appeals decision began when Congressman Nunes brought a defamation lawsuit in response to a magazine’s publication of allegations that his family’s farm used illegal migrant labor, suggesting that his political positions against immigration were thus hypocritical.  That lawsuit urged the same change in defamation law suggested in the Supreme Court opinions, and also alleged that the implications in the article were false, as Nunes knew nothing about the migrant laborers.  A few months later, a reporter tweeted a link to the article, suggesting that his Twitter followers look at its allegations.  While the court found that the article itself was not defamatory (since the publisher had no reason to believe the information in the article was false at the time of publication, and thus acted without malice), it also found that the reporter’s tweet was potentially defamatory because, after the article was published, Nunes had filed his lawsuit against the magazine claiming that the article’s suggestion that he knew about the illegal workers was false.  The court held that a summary decision in favor of the reporter was not proper, finding that a jury could determine that the tweet was defamatory even though the underlying article was not, as the tweet came after Nunes’s claim that he knew nothing about the illegal workers.
Continue Reading Defamation by Tweet – Court Case Reminds Broadcasters to Take Cease and Desist Requests about Attack Ads Seriously