Former President Donald Trump shared a series of AI-generated images over the weekend, including a fake endorsement from pop star Taylor Swift, to rally support for his presidential bid. These posts highlight the potential for Trump to use generative AI in ways that complicate efforts to police election disinformation, given that legal precedents allow candidates to lie in political ads.
One of the images Trump posted seems to depict Vice President Kamala Harris addressing a crowd in Chicago, with a communist hammer and sickle in the background. Another post featured an AI-generated image of Taylor Swift dressed as Uncle Sam with the caption, “Taylor wants you to vote for Donald Trump.” Alongside these images, Trump wrote, “I accept!”
These posts likely wouldn’t fall under state laws against election deepfakes, which generally prohibit AI-generated images that convincingly depict someone doing or saying something they never actually did or said. According to Robert Weissman, co-president of Public Citizen, there are no federal restrictions on using deepfakes in elections, except for the Federal Communications Commission’s ban on AI-generated robocalls. Public Citizen has been pushing the Federal Election Commission to limit candidates’ ability to misrepresent their opponents with AI, but the current rules likely wouldn’t apply to obviously exaggerated content like the Harris or Swift images.
Swift might have grounds to challenge the use of her likeness under California’s right of publicity law, which protects individuals from unauthorized commercial use of their image, including implied endorsements. Neither Universal Music Group, which represents Swift, nor the Trump campaign responded to requests for comment.
Courts have generally held that the First Amendment protects even deliberate lies by political candidates. Private platforms could regulate misleading AI content, but enforcement has been inconsistent. For instance, X (formerly Twitter) has a policy against synthetic media that could deceive or cause harm, yet enforcement has been selective, as seen when Elon Musk shared an unlabeled deepfake of Harris. Truth Social, Trump’s platform, has minimal content rules.
Weissman warns that Trump’s use of AI-generated disinformation further erodes public trust, making it harder to sustain a democratic society in which people can believe what they see and hear.