OpenAI has blocked a phishing attempt allegedly carried out by a group with links to China, heightening concerns over cybersecurity threats from Beijing targeting U.S. artificial intelligence (AI) firms. In its latest threat intelligence report, the company disclosed that the attackers, known as “SweetSpecter,” attempted to exploit its staff earlier this year by posing as ChatGPT users.
The phishing attempt reportedly involved emails sent to OpenAI employees disguised as customer support messages. If opened, the malware attachments they carried were designed to steal sensitive information and capture screenshots. OpenAI’s security team detected and stopped the attack before the emails reached employee inboxes, and immediately contacted the employees who had been targeted.
Rising Cybersecurity Threats for AI Companies
This incident has reignited concerns over the vulnerability of leading AI companies, especially as competition between the U.S. and China over AI development intensifies. Although OpenAI thwarted the attack, the attempt itself highlights the growing risks these firms face. The revelation comes amid a broader wave of cybersecurity issues involving U.S. infrastructure.
Earlier this year, another case involving a former Google engineer charged with stealing AI-related trade secrets for a Chinese firm raised similar concerns. These incidents reflect a growing pattern of cybersecurity threats from state-affiliated actors.
OpenAI’s latest threat intelligence report reveals additional cases of AI misuse in phishing attempts and cybercrime. The report underscores the global cybersecurity challenges faced by the tech industry as AI adoption grows. OpenAI has been proactive in tackling threats, including shutting down accounts linked to groups in China and Iran, which were using AI for various illicit activities such as coding assistance and research.
Tackling Election Disinformation
OpenAI also says it has disrupted China-backed election-related activity designed to manipulate social media narratives. In 2024 alone, the company addressed more than 20 cases in which its AI models were used in attempts to spread election disinformation. Notable examples include accounts producing fake content related to U.S. elections and AI-generated election activity on social media in Rwanda. OpenAI, backed by Microsoft, is ramping up efforts to mitigate such misuse, underscoring the urgency of protecting the integrity of AI technologies in the digital space.
The disclosure of the phishing attempt comes just days after the National Security Agency (NSA) announced it was part of a larger investigation into whether Chinese hackers had targeted U.S. telecommunications companies. China’s embassy in Washington has denied these claims.
This year has seen multiple reports of attacks on U.S. critical infrastructure. In one recent case, a hacking campaign dubbed “Salt Typhoon” breached American broadband networks, giving the attackers access to sensitive data and further underlining the scale of cyber threats from state-backed actors.
Effective Response but Rising Threats
OpenAI’s quick and effective response to the phishing attempt reflects a strong cybersecurity posture. By blocking the malware-laden emails before they reached employee inboxes, the company demonstrated the value of having proactive defenses in place.
The security team not only stopped the attack but also promptly notified the targeted employees, showing a commendable level of preparedness and vigilance. That such a targeted attack was attempted at all, however, indicates that AI companies like OpenAI are increasingly becoming prime targets for cybercriminals and state-backed actors.
As AI grows more central to both commercial and governmental operations, its perceived value makes it more attractive to attackers seeking to steal intellectual property or disrupt operations. According to OpenAI’s report, this phishing attempt is not an isolated incident but part of a larger pattern of cyber threats, as seen in cases involving North Korea, Iran, and Russia.