On Wednesday, OpenAI announced that it had disrupted more than 20 deceptive operations worldwide since the start of the year, spanning social media platforms and websites. These networks attempted to misuse OpenAI’s platform for malicious purposes ranging from malware debugging to generating fake profiles and articles, and several were found to have created AI-generated profile pictures and biographies for use on social media platforms such as X (formerly Twitter).
OpenAI emphasized that although threat actors are evolving their techniques, there is no evidence of significant breakthroughs in creating new malware or gaining viral traction through AI-assisted content.
OpenAI disclosed that it had halted attempts to create election-related social media content targeting the U.S., Rwanda, India, and the European Union. None of these efforts achieved viral engagement. One case involved an Israeli company, STOIC (also known as Zero Zeno), which had been generating AI-based social media commentary on Indian elections.
Cybersecurity Threats and Misuse of AI
Several cyber operations were also exposed, highlighting the growing misuse of AI tools. One, SweetSpecter, a China-based actor, used AI for reconnaissance, vulnerability research, and anomaly detection. The group also made unsuccessful phishing attempts against OpenAI employees in a bid to install the SugarGh0st malware.
Another group, Cyber Av3ngers, associated with Iran’s Islamic Revolutionary Guard Corps (IRGC), was found researching programmable logic controllers. Similarly, the Iranian group Storm-0817 used AI to debug Android malware and scrape Instagram profiles for data.
Additional threat actors using OpenAI’s models were identified as part of influence operations. Two such networks, codenamed A2Z and Stop News, were producing content in both English and French to post across multiple platforms. Stop News, in particular, was noted for its frequent use of AI-generated images, often in cartoonish styles with bold colors.
AI-Generated Misinformation and Fraud
OpenAI also blocked the Bet Bot and Corrupt Comment networks. Bet Bot used AI to engage with users on X, directing them to gambling sites, while Corrupt Comment manufactured fake comments to drive traffic to certain profiles.
This crackdown comes two months after OpenAI banned accounts linked to Storm-2035, an Iranian covert influence operation that had been using ChatGPT to generate content related to the upcoming U.S. presidential election.
Despite these disruptions, concerns remain about AI’s potential to spread misinformation. In a recent report, cybersecurity firm Sophos warned that AI could be abused to spread microtargeted misinformation via tailored emails, and highlighted how it could be used to generate misleading political campaign content, including AI-generated personas designed to manipulate voters.
Researchers argue that AI tools can be repurposed to spread disinformation at scale, linking false narratives to political movements or candidates and sowing public confusion.
Collaboration Between AI Companies and Governments
At the Predict cybersecurity conference on Wednesday, senior U.S. officials discussed the global impact of AI from a cybersecurity perspective. Lisa Einstein, the Chief AI Officer at the Cybersecurity and Infrastructure Security Agency (CISA), urged AI companies to collaborate with government agencies like CISA to address AI-related threats. She emphasized the importance of forming strong relationships and trust between the private and public sectors before any crisis occurs.
Einstein expressed concern that the rush to develop AI technologies may lead to security being overlooked. She warned that this could replicate past mistakes made during the introduction of the internet and social media, complicating the cybersecurity threat landscape.
Jennifer Bachus, Principal Deputy Assistant Secretary at the U.S. State Department, raised concerns about AI being exploited for surveillance, especially by adversarial states.