Stories about “deepfakes,” “synthetic media,” and other uses of artificial intelligence in political campaigns, including in advertising messages, have abounded in recent weeks.  There were stories about a super PAC running attack ads against Donald Trump in which Trump’s voice was allegedly synthesized to read one of his tweets condemning the Iowa governor for not supporting him in his presidential campaign.  Similar ads have been run attacking other political figures, prompting calls from some for federal regulation of the use of AI-generated content in political ads.  The Federal Election Commission last month discussed a Petition for Rulemaking filed by the public interest group Public Citizen asking for a rulemaking on the regulation of these ads.  While the FEC staff drafted a “Notification of Availability” to tell the public that the petition was filed and to ask for comments on whether the FEC should start a formal rulemaking on the subject, according to an FEC press release, no action was taken on that Notification.  A bill has also been introduced in both the Senate and the House of Representatives that would require disclaimers on all political ads using images or video generated by artificial intelligence, revealing that they were artificially generated (see press release here).

These federal efforts to require labeling of political ads using AI have yet to result in any such regulation, but a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Some states, including Texas and California, already impose penalties for deepfakes used in political ads that do not contain a clear public disclosure, when run within a certain period before an election (in Texas, within 30 days; in California, within 60 days).

Artificial intelligence has been the buzzword of the last few months.  Since the public release of ChatGPT, seemingly every tech company has announced either a new AI program or some use for AI that will compete with activities currently performed by real people.  While AI poses all sorts of questions for society and issues for almost every industry, its applications for the media industry are particularly interesting.  They range from AI creating music, writing scripts, and reporting the news to even playing DJ on Spotify channels.  All these activities raise competitive issues, but a number of policy issues have also begun bubbling to the surface.

The most obvious policy issue is whether artistic works created by AI are entitled to copyright protection.  Recent guidance from the Copyright Office addresses this question, suggesting that a work created solely by a machine is not entitled to protection, but that there may be circumstances where a person provides sufficient guidance to the artificial intelligence that the AI is seen as more of a tool for the person’s creativity; in those circumstances, the person can claim to be the creator of the work and receive copyright protection.