Communications Decency Act

In recent weeks, Facebook has been criticized for adopting a policy of not censoring advertising and other content posted on its platforms by political candidates.  While Facebook apparently will review content whose veracity is challenged when it is posted by anyone else, it made an exception for posts by political candidates – and has taken much heat from many of those candidates, including some who currently serve in Congress.  Some of these criticisms have suggested that broadcasters take a different position and make content-based decisions on candidate ads.  In fact, Congress itself long ago imposed, in Section 315(a) of the Communications Act, a “no censorship” requirement on broadcasters for ads by federal, state, and local candidates.  Once a candidate is legally qualified and a station has decided to accept advertising for a political race, the station cannot reject that candidate’s ads based on their content.  And under the reasonable access rules, which apply only to federal candidates, broadcasters must accept federal candidates’ ads once a political campaign has started.

In fact, as we wrote here, broadcasters are immune from any legal claims that may arise from the content of over-the-air candidate ads, based on Supreme Court precedent.  Because broadcasters cannot censor ads placed by candidates, the Court has ruled, they cannot be held responsible for the content of those ads.  If a candidate’s ad is defamatory, or if it infringes someone’s copyright, the aggrieved party has a remedy against the candidate who sponsored the ad, but no remedy against the broadcaster.  (In contrast, when a broadcaster receives an ad from a non-candidate group that is claimed to be false, it can reject the ad based on its content, so it has potential liability if it does not pull the ad once it is aware of its falsity – see our article here for more information about what to do when confronted with questions about the truth of a third-party ad.)  This immunity for statements made in candidate ads spares the broadcaster from having to referee the truth or falsity of political ads which, as is evident in today’s politically fragmented world, may well be perceived differently by different people.  So, even though Facebook is taking the same position in not censoring candidate ads that Congress has required broadcasters to take, should it be held to a different standard?

There is nothing new about the FTC bringing enforcement actions based on deceptive advertising practices.  Those cases are the FTC’s bread and butter.  But in recent years the FTC has been pushing forward with cases that address the increasingly complex network of entities involved in marketing, including companies that collect, buy, and sell consumer information and play other behind-the-scenes roles in marketing campaigns.  The FTC has also taken a strong interest in deceptively formatted advertising, including “native” advertising that does not adequately disclose sponsorship connections.  A recent Court of Appeals decision highlights the potential for any internet company to be liable for a deceptive advertising campaign that it had a hand in orchestrating – even if the company itself does not create the advertising material.

The decision in this case, FTC v. LeadClick Media, LLC, comes from the U.S. Court of Appeals for the Second Circuit and is a significant victory for the FTC and its co-plaintiff, the State of Connecticut.  Specifically, the decision holds that online advertising company LeadClick is liable for the deceptive ads published as part of an advertising campaign that it coordinated, even though LeadClick itself did not write or publish the ads.  In addition, the Second Circuit rejected LeadClick’s argument that its ad-tracking service provided it with immunity from the FTC’s action under Section 230 of the Communications Decency Act (CDA).

Both the popular press and the media trade press have been full of reports in the last few weeks about musicians and other artists petitioning the Copyright Office to hold YouTube and other online services liable for infringement when the artists’ copyrighted material appears on those services (see, e.g., the articles here and here). The complaints allege that these services are slow to pull infringing content and that, even when content is pulled from a website, it reappears soon thereafter, re-posted to those services once again. While the news reports all cite the filings of various artists or artist groups, or of copyright holders like the record labels, they don’t usually note the context in which these comments were filed – a review by the Copyright Office of Section 512 of the Copyright Act, which protects internet service providers from copyright liability for the actions of users of their services (see the Notice of Inquiry launching the review here). All of the “petitions” mentioned in the press were simply comments filed in that Copyright Office proceeding, where comments were due the week before last. The Copyright Office will also hold two roundtable discussions of the issues raised by this proceeding next month, one in California and one in New York City (see the notice announcing these roundtables here). What is at issue in this inquiry?

Section 512 was adopted to protect differing types of internet service providers from copyright liability for material conveyed through or stored on their services. Section 512(a) protects ISPs from liability for material that merely passes through their systems. That section does not seem particularly controversial, as no one seems to question the insulation from liability of the provider of the “pipes” through which content passes – essentially a common carrier-like function of simply providing the infrastructure through which messages are conveyed. Section 512(b) shelters providers of system caching – the temporary storage of material sent by third parties on a computer system maintained by a service provider, where the provider essentially offers automated cloud storage and never reviews the content. That section also does not seem particularly controversial. Where the issues really arise is in the safe harbor provided by Section 512(c), titled “Information residing on systems or networks at the direction of users” – what is commonly called “user-generated content.”

This week, the Chairman of the US House of Representatives Judiciary Committee issued a press release stating that he intends for the Committee to conduct a thorough reexamination of the Copyright Act, noting that new technologies stemming from digital media have upset many settled expectations in copyright law and confused many issues. That this release was issued in the same week as a decision of New York’s Supreme Court, Appellate Division, First Department, on the obscure issue of pre-1972 sound recordings is perhaps appropriate, as that decision demonstrates how a little-known corner of copyright law can have a fundamental effect on the functioning of many online media outlets – including essentially any outlet that allows user-generated content with audio. The Court’s ruling, which conflicts with a federal court’s decision on the same question, would essentially remove the safe harbor protection for sites that allow the posting of user-generated content where that content contains pre-1972 sound recordings, which do not fall within the protections of the Copyright Act. Let’s explore this decision and its ramifications in a little more depth.

As we have written before, an Internet service that allows users to post content to that service is exempt from liability for that content under two statutes. The Digital Millennium Copyright Act insulates the service from claims of copyright infringement arising from user-generated content, provided the service meets several standards. These standards include the obligation to take down infringing material upon proper notice from the copyright holder. The service cannot encourage the infringement or profit directly from the infringement itself, and it must register a contact person with the Copyright Office so that a copyright owner knows whom to contact with a takedown notice. While the exact meaning of some of these provisions is subject to debate (including in recent cases, such as the one Viacom has been litigating against YouTube, which we may address in a subsequent post), the general concept is well established.

Congress last week adopted a bill important to all US media companies that produce content that can be received overseas.  This would include anyone with content on their website (including user-generated content) that could potentially give rise to a legal judgment overseas.  As explained in detail in Davis Wright Tremaine’s memo on the act …

Website operators who allow the posting of user-generated content on their sites enjoy broad immunity from legal liability.  This includes immunity from copyright violations if the site owner registers with the Copyright Office, does not encourage the copyright violations, and takes down infringing content upon receiving notice from a copyright owner (see our post here for more information).  There is also broad immunity from liability for other legal violations that may occur within user-generated content.  In a recent case involving the website Roommates.com, the US Court of Appeals for the Ninth Circuit determined that this immunity is broad, but not unlimited where a site is set up so as to elicit the improper conduct.  A memo from attorneys in various Davis Wright Tremaine offices, which can be found here, provides details of the Roommates.com case and its implications.

In that case, suit was filed against the company alleging violations of the Fair Housing Act because the site had pull-down menus that asked users to identify their sex, sexual orientation, and whether they had children.  Including any of this information in a housing advertisement can lead to liability under that law.  The Court found that, had this information been volunteered by users acting on their own, the site owner would have had no liability.  But because the site supplied the pull-down menus that prompted the legally prohibited answers, the Court found that the site could be held liable.

Website operators planning to allow visitors to post their own “user-generated content” can, for the most part, take solace that they will not be held liable for third-party posts if they meet certain criteria.  The Communications Decency Act protects website operators from tort liability (including libel, slander, and other forms of defamation) for third-party content posted on their sites.  The Digital Millennium Copyright Act provides protection against copyright infringement claims arising from user-generated content, if the site owner observes certain “safe harbor” provisions set out in the law.  The requirements for protection under these statutes, and other cautions for website operators, are set out in detail in our firm’s First Amendment Law Letter, which can be found here.

As detailed in the Law Letter, the Communications Decency Act has been applied very broadly to protect the operator of a website from liability for the content of postings by third parties.  Only recently have courts begun to chip away at those protections, finding liability in cases where it appeared that the website operator in effect asked for the offending content – as in a case where the owner of a roommate-finder site gave users a questionnaire that specifically prompted them to indicate a racial preference for a roommate, something the Fair Housing Act prohibits.  However, as set forth in the Law Letter, absent such a specific prompt for offending information, the protections afforded by this statute remain quite broad.