Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Since the February 24 hearing designation order (HDO) from the FCC’s Media Bureau referring questions about Standard General Broadcasting’s proposed

Note from David Oxenford: Seth Resler of Jacobs Media yesterday wrote on his Connecting the Dots blog about the ease of synthesizing the voice of a celebrity, and the temptation to use that replicated voice in an on-air broadcast.  Last week, in an article on policy issues raised by AI, we mentioned that some states have adopted laws that limit the use of synthesized media in political advertising.  In Seth’s article, he quotes Belinda Scrimenti of my law firm pointing out some of the legal issues that arise from using a synthesized voice even in entertainment programming, and especially in commercials. Belinda has expanded on her thoughts and offers the following observations on the use of synthesized personalities on radio or TV. 

The advent of artificial intelligence poses interesting and often challenging legal issues because the law is still “catching up” with the technology. Consider the impact of new AI platforms that can learn a person’s voice, then speak whatever text you submit to it in that person’s voice. If a user submits 60 seconds of Taylor Swift audio to the AI platform, the platform can use this sample to learn to “speak” as Taylor Swift, and the user can then have “her” say whatever the user wants.
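For readers curious about how little engineering this takes, here is a minimal sketch of the kind of workflow described above, written in Python against a purely hypothetical voice-cloning web API. The endpoint URLs, field names, and API key below are placeholders for illustration, not any particular vendor's service:

```python
import requests

API_BASE = "https://api.example-voice-cloner.com/v1"  # hypothetical service
API_KEY = "YOUR_API_KEY"                               # placeholder credential
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: upload roughly 60 seconds of reference audio so the service
# can build a synthetic model of the speaker's voice.
with open("celebrity_sample.wav", "rb") as sample:
    resp = requests.post(
        f"{API_BASE}/voices",
        headers=HEADERS,
        files={"audio": sample},
        data={"name": "cloned-voice-demo"},
    )
resp.raise_for_status()
voice_id = resp.json()["voice_id"]  # hypothetical response field

# Step 2: ask the service to "speak" arbitrary text in the cloned voice.
resp = requests.post(
    f"{API_BASE}/synthesize",
    headers=HEADERS,
    json={"voice_id": voice_id, "text": "Any script the user submits."},
)
resp.raise_for_status()

# Save the generated audio; this clip could then be dropped into a spot or
# on-air segment -- which is exactly where the legal risk discussed here arises.
with open("synthesized_clip.mp3", "wb") as out:
    out.write(resp.content)
```

The point of the sketch is not the particular service but the shape of the process: a short audio sample in, an arbitrary script out, with no involvement by the person whose voice is being replicated.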

While some states are considering or have adopted restrictions on impersonation by AI, many existing legal concepts applied in traditional celebrity impersonation claims already reach this kind of synthesized celebrity impersonation. Thus, if a broadcaster's use of Taylor Swift's voice (whether taped and edited or impersonated by a human) would violate the right of publicity already found in the law of most states, the use of her AI-generated voice would violate those same rights.  

Continue Reading Using AI to Replicate the Voice of a Celebrity – Watch Out for Legal Issues Including Violating the Right of Publicity

Artificial intelligence has been the buzzword of the last few months.  Since the public release of ChatGPT, seemingly every tech company has announced either a new AI program or some use for AI that will compete with activities currently performed by real people. While AI poses all sorts of questions for society and issues for almost every industry, applications for the media industry are particularly interesting.  They include AI creating music, writing scripts, reporting the news, and even playing DJ on Spotify channels.  All these activities raise competitive issues, but a number of policy issues have also begun to bubble to the surface. 

The most obvious policy issue is whether artistic works created by AI are entitled to copyright protection – an issue addressed by recent guidance from the Copyright Office. That guidance suggests that a work created solely by a machine is not entitled to protection, but that there may be circumstances where a person provides sufficient guidance to the artificial intelligence that the AI is seen as more of a tool for the person's creativity, in which case that person can claim to be the creator of the work and receive copyright protection. 

Continue Reading Looking at Some of the Policy Issues for Media and Music Companies From the Expanding Use of Artificial Intelligence