OpenAI has been criticized for suggesting a ban on Chinese AI models, including those from DeepSeek and other Chinese AI labs.
The suggestion, which is part of OpenAI’s “AI Action Plan,” urges the U.S. government to restrict Chinese AI models in “Tier One” countries under existing export controls.
Skepticism Surrounds OpenAI’s Claims on Chinese AI Data Risks
The company frames the proposal as a security and privacy measure, citing Chinese data-processing laws that require companies to submit user data to the government upon request. OpenAI argues that this creates risks of intellectual property theft and other security concerns.
"Chinese AI models are of particular interest because they are subject to government data-sharing mandates," an OpenAI representative explained in the proposal. The company cites DeepSeek's R1 reasoning model as an example of particular interest.
Skeptics, however, question these concerns, pointing out that no direct connection between DeepSeek and the Chinese government has been demonstrated.
Others note that DeepSeek's models are typically run on servers operated by Western corporations such as Microsoft, Perplexity, and Amazon, and are therefore not under direct Chinese government control.
OpenAI Accused of Protectionism Against Chinese AI
Gong Zhe, a senior sci-tech editor at CGTN Digital, calls OpenAI's proposal an effort to maintain market dominance rather than to address genuine security issues. "This is the same pattern that we've seen with TikTok and Huawei," Gong said in an interview.
"It's a Western business using national security concerns to shut out Chinese competitors rather than competing on technical grounds." Critics also point to what they see as a double standard: OpenAI's own products have been accused of spreading disinformation and exhibiting racial bias, yet there have been no equivalent calls for increased scrutiny of those products.
The proposal comes at a time when the AI sector faces global challenges, such as climate modeling, pandemic forecasting, and poverty reduction, that most experts believe require international collaboration. AI can assist in addressing them, but those efforts could be hindered by fragmenting the global AI ecosystem.

“We need pro-human rights and transparency frameworks, not walls between nations,” explained Dr. Lin Wei, an AI ethics researcher unaffiliated with either company. “Both the EU’s AI Act and Chinese regulation are better than bans.”
OpenAI has attempted to position its proposal as advancing "democratic AI," but critics perceive this as a euphemism for Western domination of the rapidly evolving AI sector. The company's PR unit has asserted that it is concerned with governance standards, not national origins, but critics remain unconvinced.
The tension reflects broader geopolitical trends shaping technology development. As AI becomes more central to national security and economic competitiveness, the line between legitimate security interests and protectionism grows increasingly blurred.
Industry commentators point out that such a proposal, if implemented, would accelerate the fragmentation of the global AI ecosystem into distinct spheres of influence. That fragmentation could, in the long run, impede innovation and restrict the use of AI in solving shared global problems.
As policymakers weigh OpenAI's proposal, the debate is raising fundamental questions about how to balance security requirements, competitive fairness, and global cooperation in a rapidly accelerating AI landscape.
Whether the U.S. government will act on such restrictions remains uncertain, but the proposal marks a notable intensification of the international competition for AI leadership.