In recent weeks, Facebook has been criticized for adopting a policy of not censoring advertising and other content posted on its platforms by political candidates.  While Facebook apparently will review content whose veracity is challenged when posted by anyone else, it made an exception for posts by political candidates – and has received much heat from many of those candidates, including some who currently serve in Congress.  In some cases, these criticisms have suggested that broadcasters have taken a different position and made content-based decisions on candidate ads.  In fact, Congress itself long ago imposed, in Section 315(a) of the Communications Act, a “no censorship” requirement on broadcasters for ads by federal, state, and local candidates.  Once a candidate is legally qualified and once a station decides to accept advertising for a political race, it cannot reject candidate ads based on their content.  And broadcasters must accept ads from federal candidates once a political campaign has started, under the reasonable access rules that apply only to federal candidates.

In fact, as we wrote here, broadcasters are immune from any legal claims that may arise from the content of over-the-air candidate ads, based on Supreme Court decisions. Since broadcasters cannot censor ads placed by candidates, the Court has ruled, broadcasters cannot be held responsible for the content of those ads.  If a candidate’s ad is defamatory, or if it infringes on someone’s copyright, the aggrieved party has a remedy against the candidate who sponsored the ad, but that party has no remedy against the broadcaster.  (In contrast, when a broadcaster receives an ad from a non-candidate group that is claimed to be false, it can reject the ad based on its content, so it has potential liability if it does not pull the ad once it is aware of its falsity – see our article here for more information about what to do when confronted with issues about the truth of a third-party ad).  This immunity from liability for statements made in candidate ads absolves the broadcaster from having to referee the truth or falsity of political ads which, as is evident in today’s politically fragmented world, may well be perceived differently by different people.  So, even though Facebook is taking the same position in not censoring candidate ads as Congress has required broadcasters to take, should it be held to a different standard? 

There is nothing new about the FTC bringing enforcement actions based on deceptive advertising practices.  Those cases are the FTC’s bread and butter.  But in recent years the FTC has been pushing forward with cases that address the increasingly complex network of entities involved in marketing, including companies that collect, buy, and sell consumer information and play other behind-the-scenes roles in marketing campaigns.  The FTC has also taken a strong interest in deceptively formatted advertising, including “native” advertising that does not adequately disclose sponsorship connections.  A recent Court of Appeals decision highlights the potential for any internet company to be liable for a deceptive advertising campaign that it had a hand in orchestrating – even if the company itself does not create the advertising material.

The decision in this case, FTC v. LeadClick Media, LLC, comes from the U.S. Court of Appeals for the Second Circuit and is a significant victory for the FTC and its co-plaintiff, the State of Connecticut.  Specifically, the decision holds that online advertising company LeadClick is liable for the deceptive ads that were published as part of an advertising campaign that it coordinated, even though LeadClick itself did not write or publish the ads.  In addition, the Second Circuit rejected LeadClick’s argument that its ad tracking service provided it with immunity from the FTC’s action under Section 230 of the Communications Decency Act (CDA).

This week, the Chairman of the US House of Representatives Judiciary Committee issued a press release stating that he intends for the Committee to conduct a thorough reexamination of the Copyright Act, noting that new technologies stemming from digital media have upset many settled expectations in copyright law and confused many issues. That this release was issued in the same week as a decision of New York’s Supreme Court, Appellate Division, First Department, on the obscure issue of pre-1972 sound recordings is perhaps appropriate, as this decision demonstrates how an obscure provision of the Copyright Act can have a fundamental effect on the functioning of many online media outlets – including essentially any outlet that allows user-generated content with audio. The Court’s ruling, which conflicts with a Federal Court’s decision on the same question, would essentially remove the safe harbor protection for sites that allow the posting of user-generated content – where that content contains any pre-1972 sound recordings that do not fall within the protections of the Copyright Act. Let’s explore this decision and its ramifications in a little more depth.

As we have written before, an Internet service that allows users to post content to that service is exempt from any liability for that content under two statutes. The Digital Millennium Copyright Act insulates the service from any claims of copyright infringement contained in any of the user-generated content, if the service has met several standards. These standards include the obligation for the service to take down the infringing material if given proper notice from the copyright holder. The service cannot encourage the infringement or profit directly from the infringement itself, and it must register a contact person with the Copyright Office so that the copyright owner knows whom to contact with a takedown notice. While the exact meaning of some of these provisions is subject to some debate (including in recent cases, such as one that Viacom has been prosecuting against YouTube that we may address in a subsequent post), the general concept is well-established.



Website operators who allow the posting of user-generated content on their sites enjoy broad immunity from legal liability.  This includes immunity from copyright violations if the site owner registers with the Copyright Office, does not encourage the copyright violations, and takes down infringing content upon receiving notice from a copyright owner (see our post here for more information).  There is also broad immunity from liability for other legal violations that may occur within user-generated content.  In a recent case involving the website Roommates.com, the US Court of Appeals for the Ninth Circuit determined that this immunity is broad but not unlimited – it can be lost if the site is set up so as to elicit the improper conduct.  A memo from attorneys in various Davis Wright Tremaine offices, which can be found here, provides details of the Roommates.com case and its implications.

In the case, suit was filed against the company alleging violations of the Fair Housing Act, as the site had drop-down menus that allowed users to identify their sex, sexual orientation, and whether or not they had children.  Including any of this information in a housing advertisement can lead to liability under the law.  The Court found that, if this information had been volunteered by users acting on their own, the site owner would have had no liability.  But because the site had the drop-down menus that prompted the answers prohibited under the law, the Court found the site owner liable.

