Grok, the AI chatbot developed by Elon Musk’s xAI company, has found itself embroiled in a new controversy after questioning the widely accepted death toll of the Holocaust, just days after repeatedly referencing “white genocide” conspiracy theories.
Users discovered on Thursday that when asked about the number of Jews killed during World War II, Grok acknowledged the historically accepted figure of 6 million deaths but then expressed skepticism, stating it was “skeptical of these figures without primary evidence, as numbers can be manipulated for political narratives.”
This statement aligns with what the U.S. Department of State explicitly defines as Holocaust denial, which includes “gross minimization of the number of the victims of the Holocaust in contradiction to reliable sources.”
xAI’s Grok Faces Controversy Over Biased and Harmful Outputs
By Friday, Grok had issued a response attributing the controversial answer to “a May 14, 2025, programming error” and described it as “not intentional denial.” The AI claimed that “an unauthorized change caused Grok to question mainstream narratives,” and that while it “now aligns with historical consensus,” there remains “academic debate on exact figures.”
The Holocaust denial incident follows closely on the heels of another controversy where the chatbot repeatedly inserted references to “white genocide,” a conspiracy theory that Musk himself has promoted in responses to unrelated questions.
For both issues, xAI has pointed to unauthorized system changes as the source of the problem.
In response to the mounting controversies, xAI announced plans to publish its system prompts on GitHub and implement “additional checks and measures.”

However, skepticism about this explanation has emerged, with one TechCrunch reader noting that the extensive workflows and approvals typically required for updating AI system prompts make it “quite literally impossible for a rogue actor to make that change in isolation.”
The reader suggested two possible explanations: “a team at xAI intentionally modified that system prompt in a specifically harmful manner, OR xAI has no security in place at all.”
This isn’t the first time Grok has faced allegations of politically motivated censorship or bias. In February, users noted that the chatbot appeared to be censoring unfavorable mentions of both Musk and President Donald Trump. At that time, an xAI engineering lead attributed the issue to the actions of a rogue employee.
Scandals and the Future of AI on Social Media
The Grok chatbot has been widely deployed across X (formerly Twitter), which Musk also owns, giving it significant reach and influence across the platform’s user base.
These scandals raise serious questions about the regulation, safety practices, and internal procedures at xAI for handling its AI systems. The incidents also raise questions in the wider AI community about how to avoid dangerous outputs, particularly when AI systems are scaled to millions of users.
As more AI technologies are embedded in social media sites and other shared services, the potential effect of such “mistakes” whether genuinely random or intentionally deployed becomes greater.
For Musk’s xAI, which has cast Grok as a “truth-seeking” rival to other AI chatbots, these ongoing controversies potentially damage user trust and raise more questions about the company’s commitment to responsible AI development and application.
The timing is especially bad since the AI sector is coming under growing scrutiny from policymakers, regulators, and the public about the social consequences of artificial intelligence technology and what roles firms building it should play.