As the U.S. gears up for another high-stakes presidential election, the role of artificial intelligence (AI) and the spread of misinformation have come into sharp focus. A recent controversy involving X’s Grok chatbot—a premium feature of the social media platform formerly known as Twitter—has spotlighted these challenges.
Grok’s Misinformation Unveiled
Minnesota Secretary of State Steve Simon has revealed troubling news: Grok has been spreading false information about presidential ballot deadlines. The Minneapolis Star Tribune reported that after President Joe Biden’s exit from the race, Grok incorrectly claimed that ballot deadlines had already passed in nine states, including Minnesota. Screenshots circulating on social media showed Grok advising would-be candidates to check whether they had missed their chance, adding a dismissive remark that the next election would not come until 2028.
Deadline Facts vs. Fiction
In reality, Minnesota’s ballot deadline is August 26, and no state has yet closed its ballot to presidential candidates. Despite this, Grok’s inaccuracies went viral and misled many users. Simon expressed his frustration, noting the broad reach of the false information and questioning what other inaccuracies might surface on Grok and X in the lead-up to the election.
X’s Response and Concerns from Officials
In response to the misinformation, secretaries of state reached out to X through the National Association of Secretaries of State (NASS), which Simon heads. X representatives assured officials that a technology update was planned for August and pointed to a disclaimer encouraging users to verify information independently. Simon, however, criticized this response, fearing that X’s current approach may not be enough to address future misinformation as Election Day draws near.
Legislative Actions in Minnesota
Minnesota has taken legislative steps to counteract misinformation, particularly AI-generated content. The state recently passed a law making it illegal to distribute deepfake videos or audio within 90 days of an election if the material is intended to influence the outcome and was made without the depicted person’s consent. Simon supports these measures, highlighting their importance given the platform’s current shortcomings in addressing misinformation.
Grok’s “Fun Mode” Complicates Matters
Adding to the concern is Grok’s “Fun Mode,” where the chatbot’s casual, humorous responses about election deadlines continue to spread misinformation. Such responses not only mislead users but also undermine the seriousness of accurate election information, raising further concerns about the platform’s content oversight.
Positive Collaborations for Accurate Information
On a more optimistic note, there have been successful efforts to provide reliable election information. NASS has partnered with OpenAI, the developer of ChatGPT, to guide users to NASS’s official resources for voter registration and absentee ballots. Simon sees this collaboration as a promising development, emphasizing the need to address misinformation proactively.
As the presidential election nears, the challenge of managing AI-driven misinformation grows ever more pressing. The Grok chatbot’s spread of incorrect ballot information underscores the urgent need for effective oversight and rapid response mechanisms. To maintain the integrity of the electoral process, state officials, lawmakers, and tech platforms must work together to ensure voters receive accurate information.