
Artificial Intelligence in Political Ads – Legal Issues in Synthetic Media and Deepfakes in Campaign Advertising – Concerns for Broadcasters and Other Media Companies

July 28, 2023


By: David Oxenford, Wilkinson Barker Knauer

Stories about "deepfakes," "synthetic media," and other forms of artificial intelligence being used in political campaigns, including in advertising messages, have abounded in recent weeks.  There were stories about a super PAC running attack ads against Donald Trump in which Trump's voice was allegedly synthesized to read one of his tweets condemning the Iowa governor for not supporting him in his Presidential campaign.  Similar ads have been run attacking other political figures, prompting calls from some for federal regulation of the use of AI-generated content in political ads.  The Federal Election Commission last month discussed a Petition for Rulemaking filed by the public interest group Public Citizen asking the agency to regulate these ads.  The FEC staff drafted a "Notification of Availability" to tell the public that the petition had been filed and to ask for comments on whether the FEC should start a formal rulemaking on the subject, but, according to an FEC press release, no action was taken on that Notification.  A bill has also been introduced in both the Senate and the House of Representatives that would require disclaimers on all political ads using images or video generated by artificial intelligence, revealing that the content was artificially generated (see press release here).

These federal efforts to require labeling of political ads using AI have yet to result in any such regulation, but a few states have stepped into the void and adopted their own requirements.  Washington State recently passed legislation requiring the labeling of AI-generated content in political ads.  Some states, including Texas and California, already impose penalties for deepfakes used in political ads within a certain period before an election unless the ads contain a clear public disclosure (within 30 days of an election in Texas, and within 60 days in California).

Media companies need to be aware of the regulatory limitations already in place and to watch for new ones that may arise in the coming months, as these ads become more frequent and as various regulatory bodies consider imposing new limitations.  Media companies need to assess these regulatory actions and, as with so many other state laws regulating political advertising, determine the extent to which the onus is on the media company to ensure that advertising using AI-generated content is properly labeled.  In some cases, the regulatory burden may fall on the producer of the ad, but in other cases it may not be so clear.

In addition to these regulatory issues, media companies need to assess potential liability under more traditional legal theories.  We regularly warn broadcasters about potential liability for running non-candidate ads once the broadcaster has been put on notice that such ads are false (see, for instance, our article here).  While a broadcaster or local cable company generally cannot censor the message of a legally qualified candidate once it has agreed to air the candidate's ad, and thus has no liability for that ad's content, that is not the case with ads from non-candidate groups.  Stations have potential liability for airing ads from non-candidate groups that are defamatory or could otherwise give rise to liability.  The ability to generate political ads using AI will only increase the risk posed by such ads, and the burden on media companies to vet them.

Two of the most recent cases in which broadcast companies have been sued for running non-candidate attack ads both involved "old-fashioned" editing techniques, taking the words of a candidate and editing them to make it sound like the candidate said something that he did not actually say.  Suits were brought when stations continued to run those ads despite being told that the ads did not accurately portray what the candidate actually said.  Certainly, this same question can come up, and no doubt will come up, with content generated by AI technologies.  There may be arguments about whether there is liability for defamation in some cases.  For example, in the recent case where AI was used to make it sound like Trump was speaking, the aural message was simply the language of one of his own tweets, so an argument could be made that the synthesized message did not materially change the meaning of Trump's statement.  This is somewhat similar to the facts in Trump's lawsuit against a Rhinelander, Wisconsin TV station that had aired an issue ad which edited one of his speeches to assert that he had called COVID a hoax; the station argued that the edited message had not materially changed the meaning of Trump's statements about the virus (see our article here).  In other cases, the editing may arguably have distorted the meaning of the candidate's statement (as in the case of Evan McMullin, who sued TV stations that had not pulled ads in which his statements on a CNN program were edited to make it sound as if he said that all Republicans were racist, when his actual statement was only that some elements of the party were racist), but arguments will be made that the candidate can show no real damages from such statements.  These are all questions that a court would need to weigh in assessing liability.  No matter how they are resolved, media companies will bear the cost and time of defending against such claims, even if ultimately no liability is found.

AI-generated political content will likely increase the frequency with which these issues arise, requiring careful review and analysis by the media companies that receive complaints.  One could also imagine AI being used to generate political content that has no basis in fact at all and portrays political figures in all sorts of compromising positions, ads far more likely to give rise to defamation claims.  Broadcasters and other media companies will likely face these questions in the months ahead, whether or not there is any further movement on the adoption of specific regulatory requirements for AI-generated content.  This is one more issue that media companies need to be thinking about now in preparation for the 2024 elections.

David Oxenford is MAB's Washington Legal Counsel and provides members with answers to their legal questions through the MAB Legal Hotline. Access information here (members-only access). There is no additional cost for the call; the advice is free as part of your MAB membership.
