State Laws on AI in Political Ads

Washington, DC is not the only place where regulatory or political decisions are being made that affect broadcasters and advertising for candidates or political issues.  We’ve written many times about state laws that govern the use of AI in political advertising, with more than 20 states already having such laws on their books and more considering similar legislation in this year’s legislative sessions (see our articles here and here).  We have also noted that a number of states have laws requiring media companies, including digital media companies, to keep records of political advertising sales and, in some cases, to make those records available to the public (see, for example, our article here).  While there are few federal elections in 2025, there are state and local elections in many states, and most of these laws are targeted at those state and local elections, so broadcast stations and cable systems regulated by the FCC need to be aware of these state laws.  But most of these laws reach far beyond FCC-regulated entities and apply to digital and even print media, so all companies need to pay attention to their requirements.  A number of recent actions highlight these concerns.

No state has been as active in enforcing such requirements as Washington State.  In a December decision seemingly overlooked by much of the trade press, the Washington State Court of Appeals upheld a decision fining Facebook parent company Meta $24.6 million for failing to comply with the extensive political disclosure rules adopted by that state.  The appellate decision affirmed a state trial court’s summary judgment finding Meta liable for a $24.6 million penalty for violating the state’s public disclosure rules that apply to political advertising (for more on the trial court decision, see our article here). Continue Reading: Washington State Court of Appeals Upholds $24.6 Million Penalty Against Meta for Not Meeting State Political Advertising Disclosure Requirements – A Warning to All Media Companies to Assess and Comply with State Political Disclosure Rules

Artificial Intelligence was the talk of the NAB Convention last week.  Seemingly, no session took place without some discussion of the impact of AI.  One area that we have written about many times is the impact of AI on political advertising.  Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising.  Some of these bills would ban such use entirely, but most would allow it if labeled to disclose to the public that the images or voices being presented did not actually occur in the way they are portrayed.  While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December when we reported on the legislation adopted in Michigan late last year.

The new states that adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin.  These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year.  Broadcasters and other media companies need to carefully review all of these laws.  Each of these laws is unique; there is no standard legislation that has been adopted across multiple states.  Some impose criminal penalties, while others simply impose civil liability.  Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter an election season in which political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures. Continue Reading: 11 States Now Have Laws Limiting Artificial Intelligence, Deep Fakes, and Synthetic Media in Political Advertising – Looking at the Issues