
Artificial Intelligence in Political Ads – Media Companies Beware

November 10, 2023


By: David Oxenford, Wilkinson Barker Knauer

In the Washington Post last weekend, an op-ed article suggested that political candidates should voluntarily renounce the use of artificial intelligence in their campaigns.  The article, in effect, asked candidates to take voluntarily the steps that governments have thus far largely declined to mandate.  As we wrote back in July, despite calls from some for federal regulation of the use of AI-generated content in political ads, little movement in that direction has occurred.

As we noted in July, bills were introduced in both the Senate and the House of Representatives to require disclaimers on all political ads using images or video generated by artificial intelligence, disclosing that the content was artificially generated (see press release here), but there has been little action on that legislation.  The Federal Election Commission released a “Notice of Availability” in August (see our article here) asking for public comment on whether it should start a rulemaking to determine whether the use of deepfakes and other synthetic media imitating a candidate violates FEC rules that forbid a candidate or committee from fraudulently misrepresenting that they are “speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”  Comments were filed last month (available here), and several (including those of the Republican National Committee) question the authority of the FEC to adopt any rules in this area, both as a matter of statutory authority and under the First Amendment.  Such comments do not bode well for voluntary limits by candidates, nor for action by an FEC that by law has three Republican and three Democratic commissioners.

While federal efforts to require labeling of political ads using AI have yet to result in any such regulation, a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Some states, including Texas and California, already provide penalties for deepfakes used in political ads that do not contain a clear public disclosure within a certain period before an election (Texas within 30 days, California within 60 days).

Even without regulation, media companies still need to be wary of AI being used to generate false images of candidates for attack ads.  While broadcasters and local cable companies are insulated from liability for the content of ads from legally qualified candidates and their authorized committees (see our article here), they can have liability for ads from non-candidate groups.  Even non-regulated companies, such as streaming services that are not subject to the Communications Act requirement that candidate ads not be censored, may have liability for the content of candidate ads.

These companies must assess potential liability under traditional legal theories, including defamation.  We regularly warn broadcasters about potential penalties for running non-candidate ads once the broadcaster has been put on notice that such ads are false or defamatory (see, for instance, our article here).  The ease of generating political ads using AI will only increase the risk posed by such ads, and the burden on media companies to vet them.

Two of the most recent cases where broadcast companies have been sued for running non-candidate attack ads both involved “old-fashioned” editing techniques, taking the words of a candidate and editing them to make it sound like the candidate said something that they did not actually say.  Suits were brought when stations continued to run those ads despite being told that the ads did not accurately portray what the candidate had actually said.  Certainly, this same question can come up (and no doubt will come up) with images generated by AI technologies.  One such case was President Trump’s lawsuit against a Rhinelander, Wisconsin, TV station that had aired an issue ad that edited one of his speeches to assert that he had called COVID a hoax.  There, the station argued that the edited message had not materially changed the meaning of Trump’s statements about the virus (see our article here), and the case was ultimately dismissed.  In another case, Evan McMullin, who was running as an independent candidate for the US Senate in Utah, sued TV stations that had not pulled ads in which his statements on a CNN program were edited to make it sound as if he had said that all Republicans were racist, when his actual statement was only that some elements of the party were racist.  While there are many defenses to any defamation claim, no matter how the cases are resolved, the media companies will bear the cost and time that go into defending against such claims, even if ultimately no liability is found.  And AI may require that media companies assess these issues even more frequently than they do now.

AI-generated political content will require that media companies carefully review and analyze complaints.  One could also imagine AI being used to generate political content that has no basis in fact at all and to portray political figures in all sorts of compromising positions – ads much more likely to give rise to defamation claims.  Broadcasters and other media companies will likely face these questions in the months ahead, whether or not there is any further movement on the adoption of specific regulatory requirements for AI-generated content.  Media companies need to be thinking carefully about these issues now to be prepared for what may come their way in 2024.

David Oxenford is MAB’s Washington Legal Counsel and provides members with answers to their legal questions through the MAB Legal Hotline.  Access information here (members-only access).  There is no additional cost for the call; the advice is free as part of your MAB membership.
