The world’s biggest social media firms, including Meta, Google, TikTok, and X, have committed to stepping up efforts to block illegal hate speech online under a new voluntary agreement with EU regulators.
The move comes as companies look to demonstrate compliance with the EU’s sweeping digital regulation package, the Digital Services Act (DSA).
Tech Companies Agree to Tougher Hate Speech Measures in New EU Code
On Monday, the European Commission integrated a new package of obligations into the DSA through the “Code of Conduct on Countering Illegal Hate Speech Online Plus,” an updated version of a code of conduct first introduced in 2016.
The agreement brings together twelve big digital services: Facebook, Instagram, TikTok, Twitch, X (formerly Twitter), YouTube, Snapchat, LinkedIn, Dailymotion, Jeuxvideo.com, Rakuten Viber, and consumer services from Microsoft.
The participating platforms have accepted several key commitments under the new code to make the fight against hate speech more open and effective. One is a specific requirement to review “at least two-thirds of hate speech notices” within 24 hours of receipt.

Another is that third-party monitors will be authorized to assess the processes the platforms use to review hate speech, and greater transparency will be provided around methods of detection and reduction.
EU Commissioner Michael McGrath emphasized the importance of these measures, stating, “Hatred and polarization are threats to EU values and fundamental rights and undermine the stability of our democracies.”
He added that “the internet is amplifying the negative effects of hate speech” and expressed confidence that the enhanced code would “ensure a robust response.”
These codes of practice are certainly a step forward in content moderation, but the fact remains that they are only voluntary. Platforms face no formal penalty for withdrawing from these commitments, as Elon Musk demonstrated while at the helm of X.
Challenges in Combating Online Hate Speech
Indeed, in 2023, when the company was still called Twitter, Musk withdrew it from the EU’s Code of Practice on Disinformation.
The revised code’s integration with the DSA is particularly significant as it provides platforms with a framework to demonstrate their compliance with the Act’s broader obligations regarding illegal content moderation. This alignment suggests an evolving approach to content regulation in the EU, combining voluntary industry commitments with formal regulatory requirements.
The enhanced code reflects the EU’s ongoing efforts to address online hate speech while balancing platform autonomy with regulatory oversight.
As social media continues to play an increasingly central role in public discourse, the effectiveness of these voluntary commitments in conjunction with formal regulations like the DSA will be closely watched by policymakers, industry observers, and digital rights advocates.
With twelve major platforms participating, the initiative covers a significant share of the social media ecosystem and could set a new standard for content moderation practices worldwide.
The agreement’s voluntary nature, however, coupled with past examples of platform withdrawal, underlines ongoing challenges in achieving consistent, industry-wide approaches to combating online hate speech.