It seems like virtually every panel at every broadcast and media convention, at some point, turns to a discussion of Artificial Intelligence. Sessions on AI are filled to capacity, and sessions unrelated to the topic seem compelled to mention AI just to appear relevant. Whenever a topic so thoroughly takes over the industry conversation, we lawyers tend to consider its legal implications. We’ve written several times about AI in political ads (see, for instance, our articles here, here and here). We will, no doubt, write more about that subject, including further action in the FCC’s proceeding on the issue (about which we wrote here), the Federal Election Commission’s pending action in its separate AI proceeding (consideration of which was again postponed at its meeting last week), and bills pending in Congress to address AI in political advertising.
We’ve also written about concerns that arise when AI is used to impersonate celebrities or to create music that too closely resembles copyrighted recordings (see, for instance, our articles here and here). A broadcaster looking for new creative ways to entertain its audience may be tempted to use generative AI to have a celebrity “say” something on its station by replicating the celebrity’s voice. As we noted in our previous articles, celebrities have protected interests in their identity in many states, and the advent of easily accessible generative AI that can impersonate anyone has prompted much recent activity to broaden the protections for the voice, image, and other recognizable traits of celebrities. A federal NO FAKES Act has also been introduced to give individuals more rights in their voice and likeness. So being too creative with the use of AI can clearly cause legal concerns.