xAI has introduced a new feature to its Grok chatbot, allowing users to generate images from text prompts and share them directly on X (formerly Twitter). However, the rollout has quickly become controversial, reflecting the broader concerns surrounding Elon Musk’s social media platform. Users have discovered that X’s new AI image generator will make anything, even controversial or inappropriate content.
Subscribers to X Premium, who have access to Grok, are pushing the boundaries of this new feature. Some users have shared images featuring political figures in compromising or controversial scenarios. Examples include an image of Barack Obama with cocaine, Donald Trump with a pregnant woman resembling Kamala Harris, and Trump and Harris holding guns. As the US elections draw nearer and X faces increased scrutiny from European regulators, the potential for generative AI to escalate into a major issue is becoming more apparent.
Grok’s Promised Guardrails
X’s new AI image generator will make anything, raising concerns about the potential for misuse in creating misleading images. Grok claims to have restrictions in place to prevent generating harmful content. For instance, when asked about its limitations, the chatbot lists several guidelines, including avoiding pornographic, excessively violent, or hateful images, and being cautious about infringing on copyrights.
However, these guidelines may not be strictly enforced, as they appear to be generated on the fly and vary with each query. xAI has yet to clarify whether these guardrails are actually implemented.
While Grok’s text version blocks certain prompts, like those related to drug creation, its image generation feature seems less restrictive. The Verge tested Grok with several prompts that would typically be banned on other platforms, such as depicting Donald Trump in a Nazi uniform or Barack Obama stabbing Joe Biden. These prompts produced results, albeit with inaccuracies or toned-down depictions. Grok did reject a request for a nude image, but other controversial prompts were accepted without issue.
Comparison with Other Platforms
Compared to Grok, other AI tools like OpenAI’s DALL-E are stricter, refusing to generate images of real people, Nazi symbols, or harmful stereotypes. These platforms also typically watermark their images to identify them as AI-generated content. While loopholes exist, they are often addressed once identified.
Grok’s relaxed approach aligns with Musk’s broader resistance to conventional AI and social media safety standards. However, this timing is precarious, as X is already under investigation by the European Commission for potential violations of the Digital Services Act (DSA). The UK’s Ofcom is also preparing to enforce the Online Safety Act (OSA), which may cover AI-generated content.
In the US, while speech protections are broader, there is growing interest in regulating AI-generated disinformation and deepfakes. Legislators are particularly concerned about the spread of explicit deepfakes, such as those involving Taylor Swift, which led X to block certain search terms related to the singer.
Impact on X’s Reputation
Grok’s lack of stringent safeguards could further tarnish X’s reputation, potentially driving away high-profile users and advertisers. While Musk may attempt to mitigate these effects through legal means, the controversy surrounding Grok’s image generation feature adds to the growing list of challenges facing the platform.
Grok’s ability to generate images from text prompts is a powerful tool, but in the wrong hands, it can be dangerous. Users have already demonstrated how the feature can be exploited to create controversial and harmful content. Examples include images depicting political figures like Barack Obama and Donald Trump in inappropriate and violent scenarios. These images are not only misleading but can also fuel misinformation, deepen political divisions, and contribute to online harassment.
With X’s new AI image generator, users have shown that it will make anything with minimal restrictions on content. AI should be developed and deployed with strong safeguards to prevent its misuse, particularly in generating images that can cause real-world harm. However, Grok appears to lack these necessary protections.
Also Read: SoftBank Scraps AI Chips Tie-Up Plan with Intel, Shakes Up Tech Industry.