Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • FCC Chairwoman Rosenworcel announced that two Notices of Proposed Rulemaking (NPRMs) have been drafted, which, if adopted by

On paper, this October appears to be a busy month for regulatory deadlines.  But the lack of congressional action to fund the federal government for the coming year (or to adopt “continuing resolutions” that would allow government agencies to function at their current funding levels) is making a federal government shutdown appear inevitable.  If a government shutdown does occur, the FCC, the FTC, and the Copyright Office may also shut down – which, as with previous shutdowns, may result in many of the regulatory deadlines discussed below being delayed. 

According to the August 2023 FCC Shutdown Plan, if a potential lapse in appropriations is imminent, the FCC will determine whether and for how long prior year funds will be made available to continue all agency operations during a lapse.  To date, however, the FCC has not stated whether it plans to remain open – and if so, for how long – if a government shutdown does occur.  Details from the FCC and other agencies should be released shortly given the shutdown that may well occur this weekend. 

Until we receive such guidance, the tentative October regulatory deadlines for broadcasters are provided below.  Even if the government does shut down, these dates will likely be rescheduled for shortly after the funding issue is resolved.  So, let’s look at the upcoming deadlines. Continue Reading October Regulatory Dates for Broadcasters – Nationwide EAS Test, Annual EEO Public File Reports, Retransmission Consent Elections, Biennial Ownership Reports, and More (If the Government is Open)

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • The FCC’s Media Bureau released a Public Notice reminding commercial and noncommercial broadcasters of their upcoming obligation to file biennial

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • In the last two license renewal cycles, more fines have been issued for full-power stations violating the requirement that they

The Federal Election Commission last week voted to open for public comment the question of whether to start a rulemaking proceeding to declare that “deepfakes” or other AI technology used to generate false images of a candidate doing or saying something – without a disclosure that the image, audio, or video was generated by artificial intelligence and portrays fictitious statements and actions – violates the FEC’s rules.  The FEC rule that is allegedly being violated is one that prohibits a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  In other words, the FEC rule prohibits one candidate or committee from falsely issuing statements in the name of an opposing candidate or committee.  The FEC approved the Draft Notice of Availability to initiate the request for public comment on a second rulemaking petition filed by the group Public Citizen asking that this policy be adopted.  This Notice of Availability was published in the Federal Register today, initiating the comment period, with comments due October 16, 2023.  This is just a preliminary request for comments on the merits of the Public Citizen petition and on whether the FEC should move forward with a more formal proceeding.

As we wrote in an article a few weeks ago, the FEC had a very similar Notice of Availability before it last month and took no action, after apparently expressing concerns that the FEC does not have statutory authority to regulate deliberately deceptive AI-produced content in campaign ads.  Apparently Public Citizen’s second petition adequately addressed that concern.  The Notice published in the Federal Register today at least starts the process, although it may be some time before any formal rules are adopted.  As we noted in our article, a few states have already taken action to require disclosures about AI content used in political ads, particularly those in state and local elections.  Thus far, there is no similar federal requirement. Continue Reading FEC Asks for Public Comment on Petition for Rulemaking on the Use of Artificial Intelligence in Political Ads

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • FEMA and the FCC announced that this year’s Nationwide EAS Test is scheduled for October 4, 2023 (with a back-up

Stories about “deepfakes,” “synthetic media,” and other forms of artificial intelligence being used in political campaigns, including in advertising messages, have abounded in recent weeks.  There were stories about a super PAC running attack ads against Donald Trump in which Trump’s voice was allegedly synthesized to read one of his tweets condemning the Iowa governor for not supporting him in his Presidential campaign.  Similar ads have been run attacking other political figures, prompting calls from some for federal regulation of the use of AI-generated content in political ads.  The Federal Election Commission last month discussed a Petition for Rulemaking filed by the public interest group Public Citizen asking for a rulemaking on the regulation of these ads.  While the FEC staff drafted a “Notification of Availability” to tell the public that the petition was filed and to ask for comments on whether the FEC should start a formal rulemaking on the subject, according to an FEC press release, no action was taken on that Notification.  A bill has also been introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, revealing that the content was artificially generated (see press release here).

These federal efforts to require labeling of political ads using AI have yet to result in any such regulation, but a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Some states, including Texas and California, already provide penalties for deepfakes used in political ads within a certain period before an election that do not contain a clear public disclosure (Texas within 30 days, California within 60 days). Continue Reading Artificial Intelligence in Political Ads – Legal Issues in Synthetic Media and Deepfakes in Campaign Advertising – Concerns for Broadcasters and Other Media Companies

Here are some of the regulatory developments of significance to broadcasters from the past week, with links to where you can go to find more information as to how these actions may affect your operations.

  • Since the February 24 hearing designation order (HDO) from the FCC’s Media Bureau referring questions about Standard General Broadcasting’s proposed

Note from David Oxenford: Seth Resler of Jacobs Media yesterday wrote on his Connecting the Dots blog about the ease of synthesizing the voice of a celebrity, and the temptation to use that replicated voice in an on-air broadcast.  Last week, in an article on policy issues raised by AI, we mentioned that some states have adopted laws that limit the use of synthesized media in political advertising.  In Seth’s article, he quotes Belinda Scrimenti of my law firm pointing out some of the legal issues that arise from using a synthesized voice even in entertainment programming, and especially in commercials. Belinda has expanded on her thoughts and offers the following observations on the use of synthesized personalities on radio or TV. 

The advent of artificial intelligence poses interesting and often challenging legal issues because the law is still “catching up” with the technology. Consider the impact of new AI platforms that can learn a person’s voice, then speak whatever text you submit to it in that person’s voice. If a user submits 60 seconds of Taylor Swift audio to the AI platform, the platform can use this sample to learn to “speak” as Taylor Swift, and the user can then have “her” say whatever the user wants.

While some states are considering or have adopted restrictions on impersonation by AI, many legal concepts already applied to traditional celebrity impersonation claims are equally applicable to this kind of synthesized celebrity impersonation. Thus, if a broadcaster’s use of Taylor Swift’s voice (either taped and edited, or impersonated by a human) would violate the right of publicity already found in the law of most states, the use of her AI-generated voice would violate those same rights.  Continue Reading Using AI to Replicate the Voice of a Celebrity – Watch Out for Legal Issues Including Violating the Right of Publicity

Artificial intelligence has been the buzzword of the last few months.  Since the public release of ChatGPT, seemingly every tech company has announced either a new AI program or some use for AI that will compete with activities currently performed by real people.  While AI poses all sorts of questions for society and issues for almost every industry, applications for the media industry are particularly interesting.  They range from AI creating music and writing scripts to reporting the news and even playing DJ on Spotify channels.  All these activities raise competitive issues, but a number of policy issues have also begun to bubble to the surface. 

The most obvious policy issue is whether artistic works created by AI are entitled to copyright protection.  Recent guidance from the Copyright Office suggests that a work created solely by a machine is not entitled to protection, but that there may be circumstances where a person provides sufficient guidance to the artificial intelligence that the AI is seen as more of a tool for the person’s creativity – in which case that person can claim to be the creator of the work and receive copyright protection. Continue Reading Looking at Some of the Policy Issues for Media and Music Companies From the Expanding Use of Artificial Intelligence