Elon Musk’s social media platform X has found itself at the center of a growing controversy in India after its AI chatbot Grok generated politically charged responses about Indian leaders. The incident has prompted government officials to consider holding platforms legally responsible for AI-generated content.
The controversy began when X users in India started posing political questions to Grok. The AI tool responded with several contentious claims, stating that Congress party leader Rahul Gandhi was “more honest” than Prime Minister Narendra Modi and had an “advantage in formal education” over the Prime Minister. Grok even suggested that Modi’s interviews “often appeared scripted,” drawing sharp criticism from political circles.
Musk’s reaction to a BBC article about the incident, a simple laughing emoji, has only intensified the scrutiny.
India Considers Holding X Responsible for Grok-Generated Content
According to government sources cited by PTI, India’s Ministry of Electronics and Information Technology is currently engaging with X to assess the situation.
“Prima facie, it seems Yes,” said one source when asked if X could be held responsible for content generated by Grok, though they noted this view “has to be legally scrutinised.”
This isn’t the first time an AI platform has caused political controversy in India. Last year, the government issued guidelines on artificial intelligence after Google’s Gemini made unfavorable remarks about Prime Minister Modi in response to user queries.

The current situation is further complicated by X’s ongoing legal challenge against the Indian government.
The platform has brought a case before the Karnataka High Court challenging content-blocking directions issued under Section 79(3)(b) of the Information Technology Act, which X alleges are “unlawful and arbitrary.”
The petition contends that the government is establishing a “parallel content-blocking mechanism” that bypasses the due process safeguards of Section 69A of the IT Act. X argues that this mechanism violates the Supreme Court’s 2015 ruling in the Shreya Singhal case, which held that content could be blocked only under due judicial procedure.
AI Content and Accountability: India’s IT Act Under Scrutiny
Under the existing regime of the IT Act, platforms are required to remove objectionable content within 36 hours or lose their “safe harbour” immunity under Section 79(1), exposing them to liability under other laws such as the Indian Penal Code.
“Section 79(3)(b) kicks in when an intermediary fails to delete objectionable content according to the direction issued by the government authorities with the mandate,” said the government official. “If a social media intermediary is willing to accept liability or ownership of the content posted by a user, then prosecution is possible and the social media intermediary will always be free to go to court against prosecution.”
The controversy highlights the emerging challenge of regulating AI-generated content on social media. As AI tools become increasingly embedded in social media platforms, governments and technology companies face mounting pressure to answer a basic question: who is accountable when AI-generated content goes wrong?
For now, the Indian government maintains that existing regulations already govern social media content and that companies must comply with them. How the courts ultimately interpret those rules, and whether they extend to AI-generated content, remains to be seen.