This past weekend, we saw an ad posted on YouTube attacking Democratic Senatorial candidate James Tallarico, using words apparently taken from his own tweets commenting on a number of social issues.  What made the ad notable was that the words from the tweets were not simply displayed on the screen or read by an anonymous announcer; instead, they were stitched together and read in what was seemingly Tallarico’s own voice, accompanied by a very convincing AI image of Tallarico himself, with interjections in which his AI likeness approvingly commented on the tweets, saying things like “I remember this one” and “so true.”  The only indication that the ad was not an actual recording of Tallarico delivering the message is a small disclaimer in one corner labeling it “AI Generated.”  The ad is a very convincing portrayal of Tallarico, and we expect similar ads to show up during the course of the current election cycle.  Broadcasters and all other media companies need to be ready to deal with ads like these and to comply with all legal obligations that apply to such advertising.

We have written before about the efforts during the last administration by the FCC, the Federal Election Commission (see our note here and our article here), and Congress to regulate the use of AI in political ads at the national level.  Those efforts did not lead to national rules on such uses.  However, a majority of states have adopted some rules for the use of AI in political ads.  For media companies, the biggest issue is that these rules are not uniform; instead, they impose differing obligations that must be met to avoid legal liability.

We last wrote extensively about the state laws affecting the use of artificial intelligence in political ads about two years ago, when only 11 states had adopted such rules.  Since then, more than 20 other states have adopted rules – and the obligations they impose are all over the board.  Some states (like Minnesota) make it illegal to use AI in political ads to portray a candidate doing something that they did not actually do unless the candidate consents.  Most do not go that far, instead requiring some form of disclosure (like that in the anti-Tallarico ad, except that in many states the required text for the disclosure is far more extensive; those disclosure obligations are not uniform and, in a few states, the disclosure requirements in the state’s own criminal and civil statutes are inconsistent with each other).

Washington, DC is not the only place where regulatory or political decisions are made that affect broadcasters and advertising for candidates or political issues.  We’ve written many times about state laws that govern the use of AI in political advertising, with more than 20 states already having such laws on their books and more considering legislation in this year’s legislative sessions (see our articles here and here).  We have also noted that a number of states have laws requiring media companies, including digital media companies, to keep records of political advertising sales and, in some cases, to make those records available to the public (see, for example, our article here).  While there are few federal elections in 2025, there are state and local elections in many states – and most of these laws are targeted at those state and local elections, so broadcast stations and cable systems regulated by the FCC need to be aware of these state laws.  But most of these laws reach far beyond FCC-regulated entities, applying to digital and even print media – so all companies need to be paying attention to their requirements.  A number of recent actions highlight these concerns.

No state has been as active in enforcing such requirements as Washington State.  In a December decision seemingly overlooked by much of the trade press, the Washington State Court of Appeals upheld a decision fining Facebook parent company Meta $24.6 million for its failure to comply with the extensive political disclosure rules adopted by that state.  The decision upheld a state trial court’s grant of summary judgment finding Meta liable for a $24.6 million penalty for violating the state’s public disclosure rules that apply to political advertising (for more on the trial court decision, see our article here).

Artificial Intelligence was the talk of the NAB Convention last week.  Seemingly, no session took place without some discussion of the impact of AI.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban such uses entirely, though most would allow the use if it is labeled to disclose to the public that the images or voices they are experiencing did not actually happen in the way they are portrayed.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December, when we reported on the legislation adopted in Michigan late last year.

The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to carefully review all of these laws.  Each of these laws is unique – there is no standard legislation that has been adopted across multiple states.  Some impose criminal penalties, while others simply impose civil liability.  Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter an election season where political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.