Broadcasters Should Evaluate Attack Ads for Liability Concerns in the Final Weeks Before the November Election

With less than a month to go before the November election, we can expect more and more attack ads, some of which may lead to cease-and-desist letters from the candidate being attacked.  These letters can raise the risk of defamation claims against broadcasters and cable companies when the ads are not bought by candidates.  The use of artificial intelligence in such ads raises the prospect of even nastier attack ads, and raises a whole host of legal issues beyond defamation worries, though it raises those too (see our article here on defamation concerns about AI-generated content, and our articles here, here and here about other potential FCC and state law liability arising from such ads – note that since our last article on state AI laws, there are now over 20 states with AI laws in place).  Given the potential for a nasty election season getting even nastier, we thought that we would revisit our warning about broadcasters needing to assess the content of attack ads – particularly those from non-candidate groups.

As we have written before, Section 315 of the Communications Act forbids broadcasters (and local cable companies) from editing the message of a candidate or rejecting an ad based on what it says, except in extreme circumstances where the ad itself would violate a federal criminal law and possibly if it contains a false EAS alert (see, for instance, our articles here, here and here).  Because broadcasters cannot censor candidate ads, the Supreme Court has ruled that broadcasters are immune from any liability for the content of those ads.  (Note that this protection applies only to over-the-air broadcasters and local cable companies – the no-censorship rule does not apply to cable networks or online distribution – see our articles here and here.)  Other protections, such as Section 230, may apply to candidate ads placed on online platforms, but the circumstances in which the ad became part of the program offering need to be considered.

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC’s Media Bureau announced the opening of two filing windows for Class A TV, LPTV, and TV translator stations:

In the Washington Post last weekend, an op-ed article suggested that political candidates should voluntarily renounce the use of artificial intelligence in their campaigns.  The article seemed to be asking candidates to take actions that governments have thus far largely declined to mandate.  As we wrote back in July, despite calls from some for federal regulation of the use of AI-generated content in political ads, little movement in that direction has occurred.

As we noted in July, a bill was introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, in order to disclose that they were artificially generated (see press release here), but there has been little action on that legislation.  The Federal Election Commission released a “Notice of Availability” in August (see our article here) asking for public comment on whether it should start a rulemaking to determine if the use of deepfakes and other synthetic media imitating a candidate violates FEC rules that forbid a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  Comments were filed last month (available here), and several (including those of the Republican National Committee) question the authority of the FEC to adopt any rules in this area, both as a matter of statutory authority and under the First Amendment.  Such comments do not bode well for voluntary limits by candidates, nor for action by an FEC that by law has 3 Republican and 3 Democratic commissioners.