Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Perhaps the biggest regulatory news of the past week came not from the FCC, but instead from the Federal Trade Commission …

Artificial Intelligence was the talk of the NAB Convention last week.  Seemingly, no session took place without some discussion of the impact of AI.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban such uses entirely, though most allow them if the ad is labeled to disclose to the public that the images or voices being portrayed did not actually occur as depicted.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December when we reported on the legislation adopted in Michigan late last year.

The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to carefully review all of these laws.  Each is unique – no standard legislation has been adopted across multiple states.  Some impose criminal penalties, while others simply impose civil liability.  Media companies need to know the specifics of each of these laws to assess their obligations as we enter an election season in which political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.  Continue Reading 11 States Now Have Laws Limiting Artificial Intelligence, Deep Fakes, and Synthetic Media in Political Advertising – Looking at the Issues

In the Washington Post last weekend, an op-ed suggested that political candidates should voluntarily renounce the use of artificial intelligence in their campaigns.  The article seemed to be looking for candidates to take actions that governments have thus far largely declined to mandate.  As we wrote back in July, despite calls from some for federal regulation of the use of AI-generated content in political ads, there has been little movement in that direction.

As we noted in July, a bill was introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, disclosing that the content was artificially generated (see press release here), but there has been little action on that legislation.  The Federal Election Commission released a “Notification of Availability” in August (see our article here) asking for public comment on whether it should start a rulemaking to determine whether the use of deepfakes and other synthetic media imitating a candidate violates FEC rules that forbid a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  Comments were filed last month (available here), and several (including those of the Republican National Committee) question the authority of the FEC to adopt any rules in this area, both as a matter of statutory authority and under the First Amendment.  Such comments do not bode well for voluntary limits by candidates, nor for action from an FEC that by law has 3 Republican and 3 Democratic commissioners.  Continue Reading Artificial Intelligence in Political Ads – Media Companies Beware

Stories about “deepfakes,” “synthetic media,” and other forms of artificial intelligence being used in political campaigns, including in advertising messages, have abounded in recent weeks.  There were stories about a super PAC running attack ads against Donald Trump in which Trump’s voice was allegedly synthesized to read one of his tweets condemning the Iowa governor for not supporting him in his Presidential campaign.  Similar ads have been run attacking other political figures, prompting calls from some for federal regulation of the use of AI-generated content in political ads.  The Federal Election Commission last month discussed a Petition for Rulemaking filed by the public interest group Public Citizen asking the agency to regulate these ads.  While the FEC staff drafted a “Notification of Availability” to tell the public that the petition was filed and to ask for comments on whether the FEC should start a formal rulemaking on the subject, according to an FEC press release, no action was taken on that Notification.  A bill has also been introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, revealing that the content was artificially generated (see press release here).

These federal efforts to require labeling of political ads using AI have yet to result in any such regulation, but a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Some states, including Texas and California, already provide penalties for deepfakes that do not contain a clear public disclosure when used in political ads within a certain period before an election (Texas, within 30 days; California, within 60 days).  Continue Reading Artificial Intelligence in Political Ads – Legal Issues in Synthetic Media and Deepfakes in Campaign Advertising – Concerns for Broadcasters and Other Media Companies

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Since the February 24 hearing designation order (HDO) from the FCC’s Media Bureau referring questions about Standard General Broadcasting’s proposed acquisition of TEGNA …

Note from David Oxenford: Seth Resler of Jacobs Media yesterday wrote on his Connecting the Dots blog about the ease of synthesizing the voice of a celebrity, and the temptation to use that replicated voice in an on-air broadcast.  Last week, in an article on policy issues raised by AI, we mentioned that some states have adopted laws that limit the use of synthesized media in political advertising.  In his article, Seth quotes Belinda Scrimenti of my law firm, who points out some of the legal issues that arise from using a synthesized voice even in entertainment programming, and especially in commercials.  Belinda has expanded on her thoughts and offers the following observations on the use of synthesized personalities on radio or TV.

The advent of artificial intelligence poses interesting and often challenging legal issues because the law is still “catching up” with the technology. Consider the impact of new AI platforms that can learn a person’s voice, then speak whatever text you submit to it in that person’s voice. If a user submits 60 seconds of Taylor Swift audio to the AI platform, the platform can use this sample to learn to “speak” as Taylor Swift, and the user can then have “her” say whatever the user wants.

While some states are considering or have adopted restrictions on impersonation by AI, many existing legal concepts applied to traditional celebrity impersonation claims are already applicable to this kind of synthesized celebrity impersonation.  Thus, if a broadcaster’s use of Taylor Swift’s voice (either taped and edited, or impersonated by a human) would violate the right of publicity already found in the law of most states, the use of her AI-generated voice would violate those same rights.  Continue Reading Using AI to Replicate the Voice of a Celebrity – Watch Out for Legal Issues Including Violating the Right of Publicity

Artificial intelligence has been the buzzword of the last few months.  Since the public release of ChatGPT, seemingly every tech company has announced either a new AI program or some use for AI that will compete with activities currently performed by real people.  While AI poses all sorts of questions for society and issues for almost every industry, applications for the media industry are particularly interesting.  They range from AI creating music and writing scripts to reporting the news and even playing DJ on Spotify channels.  All these activities raise competitive issues, but a number of policy issues have also begun bubbling to the surface.

The most obvious policy issue is whether artistic works created by AI are entitled to copyright protection – an issue addressed by recent guidance from the Copyright Office.  That guidance suggests that a work created solely by a machine is not entitled to protection, but that there may be circumstances where a person provides sufficient guidance to the artificial intelligence that the AI is seen as more of a tool for the person’s creativity, and in those circumstances the person can claim to be the creator of the work and receive copyright protection.  Continue Reading Looking at Some of the Policy Issues for Media and Music Companies From the Expanding Use of Artificial Intelligence