This past weekend, we saw an ad posted on YouTube attacking Democratic Senatorial candidate James Tallarico, using words apparently taken from his own tweets commenting on a number of social issues. What made the ad notable was that the words from the tweets were not simply displayed on the screen or read by an anonymous announcer. Instead, they were stitched together and read in what seemed to be Tallarico’s own voice, accompanied by a very convincing AI image of Tallarico himself, with interjections in which his AI likeness said approving things about the tweets like “I remember this one” and “so true.” The only indication that the ad was not an actual recording of Tallarico delivering the message is a small disclaimer in one corner labeling it “AI Generated.” The ad is a very convincing portrayal of Tallarico, and we expect similar ads will show up during the course of the current election cycle. Broadcasters and all other media companies need to be ready to deal with ads like these and to comply with all legal obligations that apply to such advertising.
We have written before about the efforts during the last administration by the FCC, the Federal Election Commission (see our note here and our article here), and Congress to regulate the use of AI in political ads on a national level. Those efforts did not lead to national rules on such uses. However, the majority of states have adopted some rules for the use of AI in political ads. For media companies, the biggest issue is that these rules are not uniform but instead impose differing obligations that must be met to avoid legal liability.
We last wrote extensively about the state laws affecting the use of artificial intelligence in political ads about two years ago, when only 11 states had adopted such rules. Since then, more than 20 other states have adopted rules, and the obligations they impose are all over the board. Some states (like Minnesota) make it illegal to use AI in a political ad to portray a candidate doing something that they did not actually do unless the candidate consents. Most states do not go that far, but instead require some form of disclosure, like the one in the anti-Tallarico ad, except that in many states the required text for the disclosure is far more extensive. Even those disclosure obligations are not uniform, and in a few states the disclosure requirements are inconsistent within the state’s own criminal and civil statutes.
In some states, the obligation to make these disclosures falls clearly on the creator of the AI-generated content. In others, the obligation extends to any distributor of the ad, so media companies can be held liable for distributing an ad that uses AI to impersonate a candidate without the required disclosures. Because media companies may not know whether an ad was created through the use of artificial intelligence (or, as some of these statutes put it, whether it is a “deep fake” or “synthetic media”), this liability is limited in most states. But, again, those limits are not uniform. In some states, there is liability only if the content was distributed with knowledge that the ad (or other content) was a deep fake or constituted synthetic media. In others, if the media company is paid to distribute the content, it is not liable. In the majority of states with such laws, there is an exemption for broadcasters and others subject to the FCC’s “no censorship” rule, under which these companies legally cannot refuse ads from candidates for office based on the content of those ads. Because the broadcaster (or local cable company) cannot censor the ad or refuse it based on its content, these states have recognized that the regulated entities cannot be held liable if unlabeled AI is used in the ad. But that exception is not in every state’s law. And in some states (like New York), some or all of these exemptions apply only if the media company has a policy, disclosed to all advertisers, that the use of AI in political ads must comply with all laws of the state (through disclosure or otherwise).
Obviously, media companies need to comply with all these state laws – so they must take the time necessary to understand what these laws say. But that is not the end of the analysis. Even if an ad complies with the disclosure obligations set out under state law, there may still be claims that the ad defames the candidate being attacked (see our article here). While broadcasters and local cable companies are insulated from liability for the content of ads from legally qualified candidates and their authorized committees (see our article here), they can be exposed to liability for ads from non-candidate groups. Even non-regulated companies, such as streaming services that are not subject to the Communications Act’s prohibition on censoring candidate ads, may have liability for the content of candidate ads.
These companies must assess potential liability under traditional legal theories, including defamation. We regularly warn broadcasters about potential penalties for running non-candidate ads once the broadcaster has been put on notice that such ads are false or defamatory (see, for instance, our article here). The ease of generating political ads with AI will only increase the prevalence of such ads, and the burden on media companies to vet them.
Two recent cases where broadcast companies have been sued for running non-candidate attack ads both involved “old-fashioned” editing techniques: taking the words of a candidate and editing them to make it sound like the candidate said something that they did not actually say. Suits were brought when stations continued to run those ads despite being told that the ads did not accurately portray what the candidate actually said. Certainly, this same question can come up (and no doubt will come up) with images generated by AI technologies. One of the pre-AI cases arose when President Trump filed a lawsuit against a Rhinelander, Wisconsin TV station that had aired an issue ad which edited one of his speeches to assert that he had called COVID a hoax. There, the station argued that the edited message had not materially changed the meaning of Trump’s statements about the virus (see our article here), and the case was ultimately dismissed. In another case, Evan McMullen, who was running as an independent candidate for the US Senate in Utah, sued TV stations that had not pulled ads in which his statements on a CNN program were edited to make it sound as if he said that all Republicans were racist, when his actual statement was only that some elements of the party were racist. While there are many defenses to any defamation claim, no matter how such cases are resolved, the media companies bear the cost and time that go into defending against the claims, even if ultimately no liability is found. AI will likely require that media companies assess these issues even more frequently than they do now.
AI-generated political content will require that media companies carefully review and analyze complaints. One could also imagine AI being used to generate political content that has no basis in fact at all, portraying political figures in all sorts of compromising positions – ads much more likely to give rise to defamation claims. Broadcasters and other media companies will likely face these questions in the months ahead, regardless of the laws specifically targeting AI-generated content. Media companies need to think carefully about these issues now to be prepared for what may come their way in the rest of this election season.
