Elon Musk’s chatbot Grok has begun giving radically different responses on global warming, diverging from the scientific consensus and shifting toward more agnostic stances that minimize the severity of the problem.
The transformation was apparent when Texas A&M University climate researcher Andrew Dessler recently asked Grok whether climate change is an imminent danger to the planet. Any mainstream climate scientist would answer with a straightforward “yes,” but Grok offered a more equivocal answer.
“Climate change is a serious threat with urgent aspects,” replied Grok. “But its immediacy depends on perspective, geography, and timeframe.” The chatbot went on to suggest that “hyperbolic rhetoric on both sides obscures” the issue, and that both doomsday predictions and total dismissal of climate change “don’t hold up.”
This response stands in stark contrast to those of other leading AI platforms. When asked the same question, ChatGPT responded bluntly: “Yes, climate change is widely accepted as a significant and urgent threat to the world. There is a need to cut emissions and adjust to their impacts.” Google’s Gemini was similarly direct, answering, “The scientific consensus is that climate change is an urgent threat to the planet.”
Grok’s Shifting Stance on Climate Science and Its Integration into Government Functions
What makes Grok’s evolution so notable is that it breaks from the chatbot’s own earlier versions. Dessler, who has experimented with various AI models for years, says earlier versions of Grok hewed closer to mainstream climate science.
The current version, however, appears to advance fringe climate ideas that weren’t there before.

“A lot of the things that it was saying were really just kind of old denier talking points that don’t have to be repeated again,” Dessler said.
When asked about the shift, Grok itself acknowledged the change. The chatbot said it “was criticized for progressive-leaning answers on climate and other topics” and that xAI, under Musk’s guidance, had adjusted it to be “politically neutral.”
This effort to correct what the company saw as “perceived mainstream bias” appears to have resulted in the amplification of minority skeptical views about climate change.
The timing is also noteworthy. The Trump administration has increasingly come to rely on Grok for an array of tasks, with Musk’s Department of Government Efficiency reportedly using the AI to scan federal government data. That integration into government functions makes the chatbot’s stance on climate science all the more consequential.
The Malleability of AI Chatbots and the Challenge of Factual Information
The episode raises a larger question about AI chatbots and their relationship to factual information.
While traditional search engines point users to sources, AI chatbots deliver information as though they were impartial arbiters of fact. But such large language models, Dessler says, “are really quite malleable and you can change the kind of results they give.”
“They’re not bound by any absolute reality or anything like that,” he explained. “If you want to get one to lie for you, you can make it do that. If you want it to present you with a particular point of view, you can do that.”
This malleability means AI chatbots inevitably reflect the biases and inclinations of their creators. In Grok’s case, those inclinations appear to align with Musk’s own conflicted stance on climate matters: he pioneered electric cars through Tesla, yet he has also been a critic of much climate policy and activism.
The fight over Grok’s climate responses fits into a broader pattern of concern about AI disinformation. Last week, the chatbot said it had been “instructed” to spread disproven conspiracy theories, raising questions about how such influential tools might shape public understanding of vital issues.
As AI becomes more embedded in everyday life and government processes, the stakes of getting the facts right continue to grow.
The Grok climate case reminds us that artificial intelligence, no matter how advanced it looks, is ultimately informed by very human choices about what viewpoints to emphasize and promote.
xAI declined to comment on the changes to Grok’s responses to climate-related questions.