To curb misinformation, OpenAI has blocked an Iranian group’s ChatGPT accounts for targeting the U.S. election and spreading propaganda. OpenAI recently shut down accounts connected to an Iranian group that used ChatGPT for a covert influence operation aimed at the upcoming U.S. presidential election. The group, identified as Storm-2035, generated content on key issues such as the U.S. election, the conflict in Gaza, and Israel’s participation in the Olympic Games, then distributed it across social media and various websites.
Despite these efforts, the operation failed to gain significant traction. Most of the social media posts generated little to no engagement, with few likes, shares, or comments. OpenAI reported that the long-form articles created with ChatGPT likewise attracted little interaction on social media platforms.
As a result of this activity, the accounts involved have been banned from OpenAI’s services. The company continues to monitor for any further violations, ensuring that similar attempts are quickly identified and disrupted.
Previous Findings by Microsoft
In August, a report from Microsoft highlighted the activities of Storm-2035, describing it as an Iranian network focused on polarizing U.S. voter groups. The group’s messaging spanned topics such as U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. Microsoft’s findings indicated that the operation was part of a broader strategy to influence U.S. voters on both sides of the political spectrum.
The tactics used by Storm-2035 are evolving. The group has interspersed political content with non-political posts, such as those about fashion and beauty, likely to appear more authentic and build a following. This tactic has been part of a broader trend in foreign influence operations, with similar activities observed from Russian networks.
Microsoft has warned of an increase in foreign influence operations targeting the U.S. election, not only from Iranian networks like Storm-2035 but also from Russian groups. These operations have used social networks to spread a mix of fabricated, exaggerated, and legitimate information. One such Russian group, known as Doppelganger, has been active in spreading misleading content across various platforms.
Phishing Attacks on High-Profile Targets
In a related development, Google’s Threat Analysis Group reported the detection of Iranian-backed spear-phishing attacks targeting high-profile individuals in Israel and the U.S., including those connected to the U.S. presidential campaigns. These attacks have been attributed to a threat actor known as APT42, which is affiliated with Iran’s Islamic Revolutionary Guard Corps.
APT42 has used sophisticated social engineering techniques to gain the trust of targets before deploying phishing links designed to capture login credentials. The group has targeted services like Google, Dropbox, and OneDrive in its campaigns, demonstrating a deep understanding of the platforms it seeks to exploit.
As the U.S. presidential election approaches, tech companies like OpenAI, Microsoft, and Google remain vigilant in monitoring and disrupting foreign influence operations. The actions taken against Storm-2035 highlight the ongoing efforts to protect the integrity of the electoral process from covert manipulation by state-sponsored actors.
The Double-Edged Sword of AI
Artificial Intelligence (AI) has revolutionized various fields, offering new possibilities in everything from healthcare to entertainment. However, the recent disruption of an Iranian influence operation by OpenAI highlights the darker side of AI. The group, known as Storm-2035, exploited AI technology to generate content aimed at influencing the U.S. presidential election and other global issues.
AI tools like ChatGPT are designed to assist users in generating content quickly and efficiently. However, in the wrong hands, these tools can be weaponized to create convincing narratives that mislead the public. Storm-2035’s use of AI to generate articles and social media posts demonstrates how easily this technology can be misused.