OpenAI has banned several accounts linked to a suspected social media surveillance operation based in China, citing concerns over misuse of its tools by authoritarian regimes. The banned accounts reportedly used ChatGPT to draft sales pitches and debug code for an AI assistant designed to monitor anti-China protests in Western nations. The findings were detailed in a report released by OpenAI.
The AI assistant, known as “Qianyue Overseas Public Opinion AI Assistant,” allegedly collected real-time data on protests in countries like the US and the UK. Reports generated by the tool were sent to Chinese authorities, intelligence agents, and embassy officials.
Concerns Over AI Misuse
OpenAI emphasized that authoritarian regimes are attempting to exploit US-built AI tools for their own agendas. The company aims to shed light on how AI technologies can be weaponized against democratic nations and civil liberties.
Ben Nimmo, a principal investigator at OpenAI, described the findings as troubling, pointing out that an actor tied to a non-democratic regime had attempted to use AI models built in democratic countries for purposes contrary to democratic values.
Meta, whose Llama AI model was also reportedly used, responded by highlighting that AI models are becoming widely accessible. The company noted that China is heavily investing in AI development and releasing its own models at the same pace as US companies.
Additional Malicious Networks Disrupted
Beyond the surveillance operation, OpenAI also shut down several other networks that misused ChatGPT for illicit activities. These included:
- North Korean Fraudulent Employment Scheme: Accounts linked to North Korea were found generating fake résumés, job profiles, and responses to avoid suspicion in remote job applications. Some applications were reportedly posted on LinkedIn.
- Sponsored Discontent: A network suspected to be of Chinese origin created anti-US content in English and Spanish. These articles were published on Latin American news websites in Peru, Mexico, and Ecuador.
- Romance Scams: A Cambodia-linked network used AI to translate and generate social media comments in multiple languages to aid in romance and investment scams.
- Iranian Influence Operations: Five accounts were identified for creating content supporting pro-Palestinian, pro-Hamas, and anti-Israel narratives. These posts were shared on platforms associated with Iran’s propaganda network.
- North Korean Cyber Threats: Accounts operated by North Korean actors gathered intelligence on cyber intrusion tools and cryptocurrency. They also debugged code for Remote Desktop Protocol (RDP) brute-force attacks.
- Election Influence Campaign: A covert operation targeted the Ghanaian presidential election through English-language articles and social media content.
- Online Task Scam: A Cambodian-origin scam used AI to translate comments between Urdu and English to lure victims into fraudulent online tasks.
Growing AI Threats in Cybersecurity
The bans underscore the risks AI poses in geopolitical conflicts: the misuse of AI in cyber-enabled disinformation and malicious activities is becoming a global concern. Google’s Threat Intelligence Group recently reported that over 57 threat actors from China, Iran, North Korea, and Russia had used AI tools to refine their attack strategies.
OpenAI emphasized that AI companies play a critical role in identifying and countering such threats. The company urged collaboration between AI providers, hosting platforms, social media firms, and cybersecurity researchers to strengthen detection and enforcement mechanisms.
The rapid growth of artificial intelligence has brought both benefits and risks. While AI is transforming industries by improving efficiency and innovation, its misuse raises serious ethical and security concerns. OpenAI’s report highlights how AI can be exploited for surveillance, disinformation, and cyberattacks.