Artificial intelligence was widely expected to play a disruptive role in the 2024 U.S. elections. While it did not radically change the outcome, it did sow distrust and erode public confidence in what is true. A notable case involves an AI-generated image of Donald Trump that went viral, misleading tens of thousands of viewers and illustrating the challenges posed by manipulated digital content.
The Viral Trump Flood Image
A digitally altered image of former President Donald Trump walking through floodwaters after Hurricane Helene gained significant traction on social media. Posted on September 30, the image depicted Trump wearing an orange life vest as he waded through a flooded street. The post was shared over 160,000 times on Facebook within two days and circulated widely on Instagram.
The image, however, was confirmed to be AI-generated. Trump did visit Georgia to survey the hurricane damage and meet with affected residents, but no legitimate reports or images show him wading through water. Experts and AI-detection tools exposed the falsity of the viral image, highlighting its inconsistencies and the growing prevalence of AI in misinformation campaigns.
AI Analysis Uncovers Fakery
Walter Scheirer, an engineering professor at the University of Notre Dame, identified the image as a product of a generative AI algorithm. He pointed out several inconsistencies, such as the dry appearance of Trump’s clothing and visible artifacts around splashing water and a nearby truck.
Similarly, James O’Brien, a computer science professor at the University of California, Berkeley, emphasized irregularities in Trump’s appearance. These included a life jacket strap awkwardly crossing Trump’s face, smudged facial features, and other unnatural effects. He explained that the AI software blending Trump’s face into another image had made errors, particularly around the edges.
AI experts also highlighted additional telltale signs of manipulation, such as the distorted text on the life jackets and hats and the fact that both Trump and the man beside him appeared to have only four fingers on each hand. These flaws are common in AI-generated content, as generative models often struggle with rendering hands and legible text accurately.
A detection tool, Hive Moderation, further confirmed the fabrication, determining that the image was 99.9% likely to be AI-generated.
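For readers curious what such a check looks like in code, the sketch below submits an image to a third-party detection service and reads back a confidence score. The endpoint URL, request fields, and response shape are illustrative assumptions, not Hive Moderation's actual API; any real integration should follow the vendor's documentation.

```python
import requests

# NOTE: hypothetical endpoint and response schema, assumed for illustration --
# this is NOT Hive Moderation's real API. Most commercial detectors follow a
# similar pattern: POST the image with an API key, get back a likelihood score.
DETECTION_URL = "https://api.example-detector.com/v1/classify"  # assumed
API_KEY = "YOUR_API_KEY"

def check_image(path: str) -> float:
    """Submit an image file and return the AI-generated likelihood (0.0 to 1.0)."""
    with open(path, "rb") as f:
        response = requests.post(
            DETECTION_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    # Assumed response shape: {"ai_generated_probability": 0.999}
    return response.json()["ai_generated_probability"]

if __name__ == "__main__":
    score = check_image("flood_image.jpg")
    print(f"Likelihood AI-generated: {score:.1%}")  # a fake scores near 99.9%
```

A score like the 99.9% reported for the Trump flood image is exactly this kind of classifier output: a probability, not a proof, which is why experts pair automated scores with the visual tells described above.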
Impact of the Altered Image
While the altered image did not directly influence voters’ decisions, its widespread circulation points to a troubling trend: the erosion of public trust in information. Fake content, particularly when bolstered by the power of generative AI, amplifies polarization and feeds existing partisan divides.
Supporters of Trump might interpret the image as an example of their leader's resilience and dedication, even though it was fabricated. Conversely, critics could view the spread of such content as evidence of dishonesty or manipulation among Trump's base. Regardless of interpretation, the viral image heightened divisions by reinforcing pre-existing biases rather than changing anyone's perspective.
The Trump flood image is just one example of how AI can disrupt public discourse. Deepfakes, AI-generated images, and misleading videos have proliferated in recent years, making it increasingly difficult to discern fact from fiction. While platforms like Facebook and Instagram employ AI tools to detect such content, their efforts often lag behind the speed at which misinformation spreads.
Experts caution that generative AI tools have lowered the barrier to creating convincing fake content, enabling malicious actors to amplify divisive narratives. Because these applications are so easy to use, anyone with basic technical knowledge can create and distribute fabricated media, further complicating the fight against misinformation.
Social media platforms and researchers are working to address the challenges posed by AI-generated content. Detection algorithms, like Hive Moderation, play a critical role in identifying manipulated media, but their effectiveness depends on widespread adoption and timely intervention.
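As a rough illustration of the workflow this implies, the sketch below scores images at upload time and queues anything above a confidence threshold for human review, rather than waiting until a post has already been shared thousands of times. The detect_ai_probability function is a placeholder for whatever model or API a platform actually uses, and the 0.90 threshold is an arbitrary assumption.

```python
from dataclasses import dataclass

def detect_ai_probability(image_path: str) -> float:
    """Placeholder for a real detector (an in-house model, or an API call
    like check_image above); returns the probability the image is AI-made."""
    raise NotImplementedError("plug in a real detection model or API here")

# Arbitrary threshold chosen for illustration; platforms tune this to balance
# false positives (flagging genuine photos) against missed fakes.
REVIEW_THRESHOLD = 0.90

@dataclass
class ModerationDecision:
    image_path: str
    score: float
    needs_review: bool

def triage_upload(image_path: str) -> ModerationDecision:
    """Score an upload at ingestion, before it can spread widely."""
    score = detect_ai_probability(image_path)
    return ModerationDecision(image_path, score, score >= REVIEW_THRESHOLD)
```

Scoring at ingestion is the "timely intervention" mentioned above: detection is only useful if it happens before, not after, an image racks up 160,000 shares.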
Education is also a vital component of combating misinformation. Raising public awareness about the capabilities and limitations of AI can help users critically evaluate content before sharing it. Additionally, policymakers are exploring regulatory frameworks to address the ethical and legal implications of AI-driven misinformation campaigns.
Lessons from the 2024 Election
The 2024 U.S. election demonstrated that while AI might not have directly swayed voters, it contributed to an environment of mistrust and polarization. The Trump flood image exemplifies how even small instances of misinformation can go viral, fueling debates about truth and fairness.
As AI technologies continue to evolve, their role in shaping public opinion and distorting reality will likely grow. Moving forward, society must prioritize transparency, accountability, and education to mitigate the risks of AI-driven misinformation. The stakes are high, as the battle for truth increasingly plays out in a digital world dominated by powerful and often unchecked technologies.
Artificial intelligence didn’t change the outcome of the 2024 election, but it exacerbated divisions and deepened doubts about the reliability of information. The viral Trump flood image underscores the urgency of addressing AI-generated misinformation, highlighting the need for robust detection tools, public education, and thoughtful regulation. Only by confronting these challenges head-on can society safeguard the integrity of its democratic processes in an age of rapid technological change.