This past weekend, we saw an ad posted on YouTube attacking Democratic Senatorial candidate James Tallarico, using words apparently taken from his own tweets commenting on a number of social issues. What made the ad notable was that the words from the tweets were not simply displayed on the screen or read by an anonymous announcer. Instead, they were stitched together and read in what seemed to be Tallarico’s own voice, accompanied by a very convincing AI image of Tallarico himself, with interjections in which his AI likeness made approving comments about the tweets such as “I remember this one” and “so true.” The only indication that the ad was not an actual recording of Tallarico delivering the message was a small disclaimer in one corner labeling it “AI Generated.” The ad is a very convincing portrayal of Tallarico, and we expect similar ads to appear during the current election cycle. Broadcasters and all other media companies need to be ready to deal with ads like these and to comply with all legal obligations that apply to such advertising.
We have written before about the efforts during the last administration by the FCC, the Federal Election Commission (see our note here and our article here), and by Congress to regulate the use of AI in political ads on a national level. Those efforts did not lead to national rules on such uses. However, the majority of states have adopted some rules for the use of AI in political ads. For media companies, the biggest issue is that these rules are not uniform but instead impose differing obligations that must be met to avoid legal liability.
We last wrote extensively about the state laws affecting the use of artificial intelligence in political ads about two years ago, when only 11 states had adopted such rules. Since then, more than 20 additional states have adopted rules, and the obligations they impose are all over the board. Some states (like Minnesota) make it illegal to use AI in a political ad to portray a candidate doing something that they did not actually do unless the candidate consents. Most do not go that far, instead requiring some form of disclosure, like the one in the anti-Tallarico ad. In many states, however, the required text for the disclosure is far more extensive, and those disclosure obligations are not uniform; in a few states, the disclosure requirements are even inconsistent between the state’s own criminal and civil statutes.
