Communications Decency Act

Here are some of the regulatory and legal actions and developments of the last week of significance to broadcasters, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC released its Report and Order on annual regulatory fees for fiscal year 2020 …

The question of what to do with the protections offered by Section 230 of the Communications Decency Act took another turn this week, when Joe Biden suggested that online platforms needed to take responsibility for the content posted on them and correct misinformation in the political ads that run there.  That position is seemingly the opposite of the President’s Executive Order, about which we wrote here and here, which suggests that no censorship should be applied to political speech on these platforms – or certainly no censorship of certain kinds of speech that is not applied equally to speech from all other parties on that platform.  Facebook almost immediately posted this response, defending its decision not to censor candidates’ speech and analogizing it to the position that television and radio broadcasters are forced by Congress to take – by law, broadcasters are not allowed to refuse to run a political ad from a candidate because of its content, and they are shielded from liability because of their inability to censor these candidate ads.  Facebook took the position that, if Congress wants to regulate political speech, it should pass laws to do so, but that Facebook would not itself be a censor.  That position reminded us of an article that we wrote back in January, when there were calls to make Facebook stop running political ads, comparing the regulatory schemes that apply to political ads on different platforms.  Given its new relevance in light of the sudden prominence of the debate over Section 230, we thought that we would rerun our earlier article.  Here it is – and we note how we anticipated the current debate in our last paragraph:

[In January], the New York Times ran an article seemingly critical of Facebook for not rejecting ads from political candidates that contained false statements of fact.  We have already written that this Facebook policy matches the policy that Congress has imposed on broadcast stations and local cable franchisees who sell time to political candidates – they cannot refuse an ad from a candidate’s authorized campaign committee based on its content, even if it is false or even defamatory (see our posts here and here for more on the FCC’s “no censorship” rule that applies to broadcasting and local cable systems).  As this Times article again raises the issue, we thought that we should again provide a brief recap of the rules that apply to broadcast and local cable political ad sales, and contrast these rules with those that currently apply to online advertising.
Continue Reading Facebook Defends Not Censoring Political Ads – Looking at the Differences In Regulation of Political Speech on Different Communications Platforms

We summarized the provisions of Section 230 of the Communications Decency Act on Monday, looking at the application of the law that the President has sought to change through the Executive Order released last week.  Today, it’s time to look at what the Executive Order purports to do and what practical effects it might have on media companies, including broadcasters.  As we noted in our first article, the reach of Section 230 is broad enough that any company with an online presence where content is created and posted by someone other than the site owner is protected by Section 230 – so that would include the online properties of almost every media company.

The Executive Order has four distinct action items directed to different parts of the government.  The first, which has perhaps received the most publicity in the broadcast world, is the President’s direction that the Department of Commerce, acting through its National Telecommunications and Information Administration (NTIA – the Executive Branch office principally responsible for telecommunications policy), file a petition for rulemaking at the FCC.  This petition would ask that the FCC review Section 230 to determine if the protections afforded by the law are really as broad as they have been interpreted by the courts.  The Executive Order suggests that the FCC should review whether the ability granted by the law for an online platform to curate content posted by others – the “Good Samaritan” provisions that we wrote about on Monday – could trigger a loss of protections from civil liability for third-party content if sites exercise those curation rights in a manner that is not deemed to be in “good faith.”  The Executive Order directs this inquiry even though the protections for hosting online content are in a separate subsection of the law from the language granting the ability to curate content, and the protections from liability for third-party content contain no “good faith” language.  The Order suggests that the FCC should find that there would not be “good faith” if the reasons given for the curation actions were “pretextual,” if there was no notice and right to be heard for the party whose content is curated, or if the curation is contrary to the service’s terms of use.  The Order suggests that the FCC should adopt rules to clarify these issues.
Continue Reading Looking at the President’s Executive Order on Online Media – Part 2, What Real Risk Does It Pose for Media Companies?

When the President issues an Executive Order asking for examination of Section 230 of the Communications Decency Act, which permitted the growth of so many Internet companies, broadcasters and other media companies ask what effect the action may have on their operations.  On an initial reading, the impact of the order is very uncertain, as much of it simply calls on other government agencies to review the actions of online platforms.  But, given its focus on “online platforms” subject to the immunity from liability afforded by Section 230, and given the broad reach of Section 230 protections as interpreted by the courts to cover any website or web platform that hosts content produced by others, the ultimate implications of any change in policy affecting these protections could be profound.  A change in policy could affect not only the huge online platforms that it appears to target, but even media companies that allow public comments on their stories, sites that run contests calling for third-party content to be posted and judged for prizes, and content aggregators that post content developed by others (e.g., podcast hosting platforms).

Today, we will look at what Section 230 is and at the practical implications that the loss of its protections would have for online services.  The implications include the potential for even greater censorship by these platforms of what is being posted online – seemingly the opposite of the intent of the Executive Order, which was triggered by the perceived limitations imposed on tweets of the President and on the social media posts of other conservative commentators.  In a later post, we’ll look at some of the other provisions of the Executive Order, and the actions that it asks other government agencies (including the FCC and the FTC) to take.
Continue Reading The President’s Executive Order on Online Media – What Does Section 230 of the Communications Decency Act Provide?

Here are some of the regulatory and legal actions of the last week—and some obligations for the week ahead—of significance to broadcasters, with links to where you can go to find more information as to how these actions may affect your operations.

  • The comment cycle was set in the FCC’s annual regulatory fee proceeding. On or before June 12, the Commission wants to hear from interested parties about the fees that it proposes to impose on the companies that it regulates – including broadcasters.  The FCC proposes to complete the transition to computing fees for television stations based on population served rather than on the market in which they operate, a move it began last year (see our Broadcast Law Blog article here on the FCC decision last year to initiate the change in the way TV fees are allocated).  The FCC also asks for ideas about how it can extend fee relief to stations suffering COVID-19-related financial hardship.  Reply comments are due on or before June 29.  (Notice of Proposed Rulemaking)
  • FCC Chairman Ajit Pai and Chris Krebs, director of the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, wrote to the nation’s governors asking them to, among other things, declare radio and TV broadcasters as essential to COVID-19 response efforts and to afford broadcasters all appropriate resources and access. (News Release)
  • In a good reminder to broadcasters that transactions involving the sale or transfer of control of a broadcast station must be authorized in advance by the FCC, the Media Bureau entered into a consent decree with two companies that sold an FM station and an FM translator without getting approval from the Commission. The parties mistakenly believed that filing license renewal applications reflecting the assignment was sufficient approval.  The consent decree includes an $8,000 penalty.  (Consent Decree).  See this article on past cases where the FCC has warned that even transactions among related companies that change the legal form of ownership of a broadcast station, without changing the ultimate control, need prior FCC approval.
  • The Commission granted approval for Cumulus Media, Inc. to exceed its 25% foreign ownership threshold. The Commission will allow Cumulus to have up to 100% aggregate foreign investment in the company, although additional approvals will be needed if any previously unnamed foreign entity acquires 5% or more of the company or if any foreign entity desires to acquire control.  (Declaratory Ruling).  This decision shows the process that the FCC must go through to approve foreign ownership above the 25% threshold and the analysis needed to issue such approvals.  See our articles here and here about the evolving FCC policy in this area.
  • President Trump signed an executive order that seeks to, among other things, address online censorship and roll back certain protections afforded to online platforms. Those platforms include social media sites like Twitter, Facebook, Instagram, and YouTube, but the protections also extend to any site that hosts content created by users – which could include the Internet platforms of many broadcasters. Under federal law, Section 230 of the Communications Decency Act, these online platforms generally enjoy legal immunity for what users post on their platforms.  The President directed the Department of Commerce to ask the FCC to open a rulemaking to review this immunity and asked the FTC to review whether platforms were adhering to their terms of use when commenting on or limiting third-party content.  Other government entities, including state attorneys general and the Department of Justice, were also asked to review online platforms.  For his part, FCC Chairman Ajit Pai said, “This debate is an important one. The Federal Communications Commission will carefully review any petition for rulemaking filed by the Department of Commerce.”  (Executive Order).  Watch for an article on the Broadcast Law Blog this coming week on the implications of this order for broadcasters and other media companies.
  • Anyone looking to hand-deliver documents to the FCC needs to learn a new address, and it is not, as you might expect, the address of the FCC’s future headquarters. Hand deliveries must now be brought to 9050 Junction Drive, Annapolis Junction, MD 20701.  The address change is intended to enhance security screening and is part of the winding down of operations at the current 12th Street headquarters.  (Order)

Continue Reading This Week at the FCC for Broadcasters: May 23, 2020 to May 29, 2020

In recent weeks, Facebook has been criticized for adopting a policy of not censoring advertising and other content posted on its platforms by political candidates.  While Facebook apparently will review content whose veracity is challenged when it is posted by anyone else, it made an exception for posts by political candidates – and has received much heat from many of those candidates, including some who are currently in Congress.  In some cases, these criticisms have suggested that broadcasters have taken a different position and made content-based decisions on candidate ads.  In fact, Congress itself long ago imposed, in Section 315(a) of the Communications Act, a “no censorship” requirement on broadcasters for ads by federal, state, and local candidates.  Once a candidate is legally qualified and once a station decides to accept advertising for a political race, it cannot reject candidate ads based on their content.  And broadcasters must accept ads from federal candidates once a political campaign has started, under the reasonable access rules that apply only to those candidates.

In fact, as we wrote here, broadcasters are immune from any legal claims that may arise from the content of over-the-air candidate ads, based on Supreme Court decisions. Since broadcasters cannot censor ads placed by candidates, the Court has ruled, broadcasters cannot be held responsible for the content of those ads.  If a candidate’s ad is defamatory, or if it infringes on someone’s copyright, the aggrieved party has a remedy against the candidate who sponsored the ad, but that party has no remedy against the broadcaster.  (In contrast, when a broadcaster receives an ad from a non-candidate group that is claimed to be false, it can reject the ad based on its content, so it has potential liability if it does not pull the ad once it is aware of its falsity – see our article here for more information about what to do when confronted with issues about the truth of a third-party ad.)  This immunity from liability for statements made in candidate ads absolves the broadcaster from having to referee the truth or falsity of political ads, which, as is evident in today’s politically fragmented world, may well be perceived differently by different people.  So, even though Facebook is taking the same position in not censoring candidate ads as Congress has required broadcasters to take, should it be held to a different standard?
Continue Reading Facebook Criticized for Not Censoring Candidate Ads – Even Though Congress Requires No Censorship from Broadcasters

There is nothing new about the FTC bringing enforcement actions based on deceptive advertising practices.  Those cases are the FTC’s bread and butter.  But in recent years the FTC has been pushing forward with cases that address the increasingly complex network of entities involved in marketing, including companies that collect, buy, and sell consumer information and play other behind-the-scenes roles in marketing campaigns.  The FTC has also taken a strong interest in deceptively formatted advertising, including “native” advertising that does not adequately disclose sponsorship connections.  A recent Court of Appeals decision highlights the potential for any internet company to be liable for a deceptive advertising campaign that it had a hand in orchestrating – even if the company itself does not create the advertising material.

The decision in this case, FTC v. LeadClick Media, LLC, comes from the U.S. Court of Appeals for the Second Circuit and is a significant victory for the FTC and its co-plaintiff, the State of Connecticut.  Specifically, the decision holds that online advertising company LeadClick is liable for the deceptive ads that were published as part of an advertising campaign that it coordinated, even though LeadClick itself did not write or publish the ads.  In addition, the Second Circuit rejected LeadClick’s argument that its ad tracking service provided it with immunity from the FTC’s action under Section 230 of the Communications Decency Act (CDA).
Continue Reading Second Circuit Holds Marketing Campaign Organizer Liable Under FTC Act for Deceptive Representations of Its Marketing “Affiliates”

Both the popular press and the media trade press have been full of reports in the last few weeks about musicians and other artists petitioning the Copyright Office to hold YouTube and other online services liable for infringement when the artists’ copyrighted material appears on those services (see, e.g., the articles here and here). The complaints allege that these services are slow to pull infringing content and that, even when that content is pulled from a website, it reappears soon thereafter, re-posted to those services once again. While the news reports all cite the filings of various artists or artist groups, or of copyright holders like the record labels, they don’t usually note the context in which these comments were filed – a review by the Copyright Office of Section 512 of the Copyright Act, which protects internet service providers from copyright liability for the actions taken by users of their services (see the Notice of Inquiry launching the review here). All of these “petitions” mentioned in the press were just comments filed in the Copyright Office proceeding, where comments were due the week before last. The Copyright Office will also be holding two roundtable discussions of the issues raised by this proceeding next month, one in California and one in New York City (see the notice announcing these roundtables here). What is at issue in this inquiry?

Section 512 was adopted to protect differing types of internet service providers from copyright liability for material posted or transmitted by the users of their services. Section 512(a) protects ISPs from liability for material that passes through their systems. That section does not seem to be particularly controversial, as no one seems to question the insulation from liability of the provider of the “pipes” through which content passes – essentially a common carrier-like function of just providing the infrastructure through which messages are conveyed. Section 512(b) shelters providers of system caching – the temporary storage of material sent by third parties on a computer system maintained by a service provider, where the provider essentially offers cloud storage to third parties through an automated system and never reviews the content. That section also does not seem particularly controversial. Where the issues really seem to arise is in the safe harbor provided by Section 512(c), which is titled “Information residing on systems or networks at the direction of users” – what is commonly called “user-generated content.”
Continue Reading Copyright Office Reviews Section 512 Safe Harbor for Online User-Generated Content – The Differing Perceptions of Musicians and Other Copyright Holders and Online Service Providers on the Notice and Take-Down Process

This week, the Chairman of the US House of Representatives Judiciary Committee issued a press release stating that he intends that the Committee do a thorough reexamination of the Copyright Act, noting that new technologies stemming from digital media have upset many settled expectations in copyright law and confused many issues. That this release was issued in the same week as a decision of New York’s Supreme Court, Appellate Division, First Department, on the obscure issue of pre-1972 sound recordings is perhaps appropriate, as the decision demonstrates how an obscure corner of copyright law can have a fundamental effect on the functioning of many online media outlets – including essentially any outlet that allows user-generated content with audio. The Court’s ruling, which conflicts with a federal court’s decision on the same question, would essentially remove the safe harbor protection for sites that allow the posting of user-generated content where that content contains pre-1972 sound recordings, which do not fall within the protections of the Copyright Act. Let’s explore this decision and its ramifications in a little more depth.

As we have written before, an Internet service that allows users to post content to that service is exempt from liability for that content under two statutes. The Digital Millennium Copyright Act insulates the service from claims of copyright infringement arising from any of the user-generated content, if the service has met several standards. These standards include the obligation for the service to take down the infringing material if given proper notice by the copyright holder. The service cannot encourage the infringement or profit directly from the infringement itself, and it must register a contact person with the Copyright Office so that the copyright owner knows whom to contact to provide a takedown notice. While the exact meaning of some of these provisions is subject to some debate (including debate in recent cases, such as the one that Viacom has been prosecuting against YouTube, which we may address in a subsequent post), the general concept is well established.
Continue Reading How a NY State Court Decision on Pre-1972 Sound Recordings Clouds the Safe Harbor Protections of Websites Featuring User Generated Content

Congress last week adopted a bill important to all US media companies that produce content that can be received overseas.  This would include anyone with content on their website (including user-generated content) that could potentially give rise to a legal judgment overseas.  As explained in detail in Davis Wright Tremaine’s memo on the act …