Elon Musk’s social media platform X, formerly known as Twitter, has come under intense scrutiny for its alleged role in disseminating Russian propaganda about the ongoing war in Ukraine. A recent study commissioned by the European Commission, the executive arm of the European Union, sheds light on this concern. The findings, unveiled this week, reveal a troubling trend: pro-Kremlin propaganda about Kyiv is reaching a substantially larger audience than before the conflict escalated.
The report serves as a stark reminder of the proliferation of disinformation campaigns targeting Ukraine, which risk exacerbating an already volatile geopolitical situation. It also recalls that, as tensions reached a critical point, several major social media companies, including Meta (the parent company of Facebook and Instagram), pledged to implement stringent measures to combat the spread of Russian propaganda. That commitment, notably reported by the Washington Post, was seen as a proactive step toward mitigating disinformation’s damage to international relations.
Moreover, the European Commission’s report draws a pointed conclusion: had the EU’s social media law, the Digital Services Act, been in force the previous year, the unchecked dissemination of disinformation and hate speech documented in the study would have constituted a violation. This underscores the urgency of regulatory measures designed to counteract the harmful effects of misinformation in the digital age.
European Union’s Robust Response to Kremlin-Linked Social Media Influence
In 2022, researchers noted a significant expansion in the reach and influence of Kremlin-linked social media accounts across Europe. The trend persisted and intensified in the first half of 2023, attributed primarily to the rollback of safety standards at Twitter, the platform since rebranded as “X.”
Unlike the United States, the European Union has taken a more proactive stance on regulating government-backed propaganda. The Digital Services Act, which officially took effect for major social media platforms on August 25, contains provisions that compel these platforms to assess the risk of disseminating false information, implement measures to prevent the algorithmic amplification of the most harmful content, and undergo regular audits.
Moreover, European regulations have led to bans on outlets such as RT (formerly Russia Today) on platforms like YouTube. RT, once among the platform’s most popular channels, has faced restrictions because of its ties to the Russian state. This regulatory approach represents a concerted effort by the European Union to safeguard its information space and shield its citizens from manipulation by foreign actors. It underscores the challenge of balancing freedom of expression with the need to combat disinformation in an increasingly interconnected digital world.
Ineffectiveness of Content Moderation and User Protection Across Meta and Twitter Platforms
The research was conducted by Reset, a non-profit analysis group that advocates for increased oversight of digital platforms. Beyond its findings on Musk’s X, the report also criticized Meta-owned Instagram and Facebook, as well as Telegram.
In terms of sheer numbers, pro-Kremlin accounts continue to amass the largest audiences on Meta’s platforms. The research group also found that since Russia’s invasion in February 2022, the audience for Kremlin-backed accounts on Telegram has more than tripled.
A disconcerting finding of the report is that no platform consistently enforced its own terms of service, a conclusion drawn from repeated tests of user notification systems across various Central and Eastern European languages. This inconsistency raises significant concerns about the effectiveness of content moderation and user protection on these platforms.