As the 2024 U.S. presidential election draws closer, reports have surfaced that foreign governments, specifically Iran, are attempting to use artificial intelligence (AI) tools like ChatGPT to sway American voters. A recent investigation by OpenAI uncovered a series of accounts linked to the Iranian government that allegedly used ChatGPT to generate misleading content across social media, potentially influencing the perceptions of U.S. voters, particularly within the Latino community. The use of AI in these tactics brings into focus the broader concern over how modern technology may affect democracy, especially among targeted groups.
How ChatGPT Allegedly Fueled Election Misinformation
According to OpenAI’s August report, multiple Iranian government-affiliated accounts were found to be using ChatGPT to create and disseminate false or misleading information about the U.S. presidential race. The accounts generated content related to the rights of Latino communities in the U.S. and political developments in Venezuela. Some posts attempted to incite fear through speculative claims, such as assertions that immigration procedure costs would rise if Kamala Harris were elected. These claims were reportedly shared on various social media platforms to provoke uncertainty within certain voter demographics.
This manipulation involved both long-form articles and shorter social media posts, illustrating how AI can produce content across various formats, potentially reaching a broad audience with relative ease. Although the impact of these posts appears to have been limited, the revelation raises concerns over the influence of AI on public opinion and highlights the importance of countering such efforts.
Why Target the Latino Vote?
The Latino community has historically faced issues related to misinformation, with foreign and domestic sources attempting to manipulate how its members perceive political issues. Political and digital researchers such as Cristina Tardaguila believe foreign actors may target the Latino vote as a means to create division and skepticism within this community, hoping to influence voting behaviors or even incite unrest.
Misinformation targeted toward Latino voters is not a new phenomenon. With AI-powered tools like ChatGPT, foreign agents may find it easier to develop persuasive narratives in Spanish and English, potentially exploiting existing concerns within Latino communities. By focusing on sensitive issues like immigration and cultural identity, foreign governments may be able to exploit sentiments for political gain.
Lessons from Past Election Interference
Efforts to influence U.S. elections by foreign governments are well documented. In 2016, Russia’s Internet Research Agency (IRA) was identified as a significant actor in a misinformation campaign designed to exacerbate divisions within American society and influence voters. The IRA utilized “troll” accounts on Twitter (now known as X) to spread and amplify misleading narratives, targeting both sides of the political spectrum to foment discord and confusion.
Following the 2016 election interference reports, the Senate Intelligence Committee urged the federal government to work closely with local agencies to prevent similar cases in future elections. The recommendation was to develop stronger detection methods and deterrence strategies. Despite these efforts, emerging technologies like AI bring new complexities to the task of preventing misinformation, as bots and trolls become more sophisticated.
Combating Misinformation with Awareness and Verification
The rise of AI-generated misinformation has spurred a renewed focus on educating the public on how to identify and avoid manipulated content. Social media users are encouraged to verify sources, cross-check information, and question unusually sensational claims. For example, “Spot the Troll,” an educational tool created by Clemson University, offers insights into recognizing fake accounts by showcasing examples of past troll activities from the 2016 election.
Individuals can also rely on fact-checking platforms and explore tools that help identify bot behavior patterns, such as rapid posting or the use of generic language. According to José Troche, a local resident interviewed by Telemundo 44, checking information sources and analyzing the credibility of claims on social media are essential steps to avoid falling for sensational or false information.
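The bot-behavior patterns mentioned above, rapid posting and generic language, can be approximated with simple heuristics. The sketch below is purely illustrative: the function name, thresholds, and sample data are invented for this example, and real bot-detection systems use far richer signals.

```python
from datetime import datetime
from statistics import median

def looks_bot_like(post_times, post_texts,
                   min_gap_seconds=30, min_vocab_ratio=0.4):
    """Flag an account matching two simple bot heuristics:
    rapid posting and generic, repetitive language.
    Thresholds are illustrative, not calibrated."""
    # Heuristic 1: rapid posting -- the median gap between
    # consecutive posts is suspiciously small.
    times = sorted(post_times)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    rapid = bool(gaps) and median(gaps) < min_gap_seconds

    # Heuristic 2: generic language -- a low ratio of unique words
    # across all posts suggests templated, copy-pasted content.
    words = " ".join(post_texts).lower().split()
    vocab_ratio = len(set(words)) / len(words) if words else 1.0
    generic = vocab_ratio < min_vocab_ratio

    return rapid or generic

# Four identical posts, ten seconds apart: trips both heuristics.
times = [datetime(2024, 8, 1, 12, 0, s) for s in (0, 10, 20, 30)]
texts = ["Vote now!", "Vote now!", "Vote now!", "Vote now!"]
print(looks_bot_like(times, texts))  # prints True
```

In practice, platforms combine many such signals (account age, follower graphs, coordinated timing across accounts) rather than relying on any single heuristic.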
The Role of Cybersecurity Agencies
U.S. cybersecurity agencies are working to maintain the integrity of the 2024 election. According to Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), current election security measures are robust enough to prevent foreign interference from influencing vote counts. Easterly reassured the public that with strong election infrastructure in place, attempts by foreign actors to alter the election outcome directly are unlikely to succeed.
Cybersecurity teams remain vigilant in monitoring and addressing potential vulnerabilities, particularly those that may arise from the increased use of AI technologies in misinformation campaigns. OpenAI, as part of its response, deactivated the Iranian accounts identified in its investigation and will continue working with cybersecurity agencies to identify and shut down similar efforts.
The use of AI tools like ChatGPT in misinformation campaigns highlights a potential shift in how foreign governments or other actors could influence future elections. Although the impact of recent AI-generated misinformation has been limited, this instance serves as an early warning of the challenges that AI-driven tools pose to information integrity. The ease with which AI can generate persuasive, language-targeted content adds complexity to misinformation management and calls for further scrutiny of the ethical and security risks of AI in the public sphere.
In addressing these issues, a coordinated response from tech companies, cybersecurity agencies, and informed voters is essential. Educating communities, particularly those that are frequently targeted, like the Latino community, may mitigate the potential influence of AI-generated misinformation on future elections. By fostering awareness and providing resources to spot disinformation, the U.S. can work toward securing its democratic processes against manipulation in the AI era.