Google’s latest AI video generator, Veo 3, has become an unlikely tool for spreading racist and antisemitic content online, particularly on TikTok. The model, launched in May and praised for its video quality, is now being exploited to create harmful content targeting Black people, immigrants, and Jewish communities.
The last few weeks have seen a shocking rise in AI-generated videos trafficking in racist stereotypes on TikTok. According to a report by Media Matters, numerous accounts have been posting brief videos that perpetuate negative stereotypes of Black people, portraying them as criminals or absent fathers and depicting them in dehumanizing ways.
Why Is Google’s Veo 3 a New Frontier for Hate Speech?
The eight-second clips carry the signature “Veo” watermark, confirming they originated from Google’s advanced AI platform.
The content does not stop there. AI-generated videos of this kind also target Jewish people and immigrants, using antisemitic symbols and stereotypes. What makes this especially disturbing is the near-photorealistic quality of Veo 3’s output, which makes the content look authentic and potentially more credible than material produced by earlier AI video tools.
Testing demonstrates that producing such content with Veo 3 is surprisingly easy. Basic prompts can replicate videos similar to the racist clips circulating online, indicating that the AI’s safety guardrails are not as strong as they ought to be. The model appears more permissive than Google’s previous models, making it easier for malicious actors to circumvent content restrictions.
Part of the issue is that racist content can be subtle. When users rely on coded language or imagery, such as depicting monkeys in place of people in certain contexts, the AI may fail to recognize the racist intent behind the prompt. That ambiguity creates loopholes that let users get around the rules while continuing to produce harmful content.

Both Google and TikTok have explicit policies against such material. TikTok’s community guidelines expressly ban hate speech and violence against protected groups, and Google’s Prohibited Use Policy forbids using its services to facilitate harassment, bullying, and abuse.
But enforcement has been patchy. TikTok relies on both automated detection software and human moderators to identify rule-breaking content, yet the sheer volume of videos being posted makes timely moderation virtually impossible.
Why Content Moderation Can’t Keep Up
Although a TikTok spokesperson said that more than half of the accounts on the Media Matters list had been suspended before the report was published, the videos had already racked up a large number of views.
The issue isn’t specific to TikTok. X (formerly Twitter) has also been criticized for lax content moderation, providing fertile ground for hateful AI content. The problem could worsen as Google prepares to integrate Veo 3 into YouTube Shorts, giving the same kind of content yet another giant platform.
This isn’t the first instance of generative AI being misused to produce inflammatory content. Ever since these technologies appeared, people have found ways to create racist and harmful material despite safeguards. But Veo 3’s unparalleled realism makes it all the more appealing to those who wish to spread hateful stereotypes.
The episode points to the underlying dilemma of AI development: capability versus safety. Google has emphasized safety features in its AI releases, yet in practice determined users tend to find workarounds. The company’s guardrails, however robust on paper, appear too weak to prevent the generation of patently harmful content.
AI, Platforms, and the Proliferation of Racist Content: A Crisis of Responsibility
The case raises fundamental questions about platform responsibility and the regulation of AI. As AI video creation grows more sophisticated and accessible, the potential for abuse grows with it. Social media’s viral nature means offending content can reach millions of viewers before platforms manage to take action.
This crisis demonstrates that content policies alone are not sufficient; platforms and AI developers need to do more to prevent the creation and dissemination of racist content. Until then, the combination of powerful AI tools and substandard content moderation will keep producing these unsettling outcomes.
The deeper challenge isn’t merely technical; it’s building systems that can understand context, intent, and the real-world impact of the content they help create.