Amid a surge in manipulated content targeting India, tech giants Meta and OpenAI have reported significant actions against accounts attempting to sway public opinion on key issues, including the upcoming general elections.
OpenAI’s Crackdown on Israeli Influence Campaign
OpenAI, the creator of ChatGPT, recently exposed and disrupted a covert influence campaign run by an Israeli firm. The operation sought to sway the Indian elections by using AI-generated personas to spread negative content about the Bharatiya Janata Party (BJP).
The firm behind the campaign, identified as STOIC, produced content related to the Gaza conflict, Israeli trade unions, and the Indian elections. OpenAI said it detected and halted the India-focused activity within 24 hours of it beginning in May; the report marked the company's first public disclosure of such an intervention.
“We banned a cluster of accounts operated from Israel that were being used to generate and edit content for an influence operation spanning X, Facebook, Instagram, websites, and YouTube,” OpenAI reported. STOIC also used AI to create fictional personas and to try to gather information on public commentators in Israel, though OpenAI said its models refused to return personal data.
These fake personas were active across multiple social media platforms, often replying to posts to simulate audience interaction. Despite these efforts, the campaign attracted little genuine engagement.
Meta’s Fight Against Coordinated Inauthentic Behavior
Meta, which owns Facebook, Instagram, and WhatsApp, announced the removal of numerous accounts, pages, and groups for violating its policies against “coordinated inauthentic behavior.” These entities, originating from China, targeted the global Sikh community, including members in India.
Meta’s quarterly threat report highlighted several AI-assisted campaigns, including operations originating in Israel and Iran; the Israeli network sought to shape political narratives in favor of the Israeli government. These networks used Facebook and Instagram to push political agendas worldwide, relying on fake accounts to promote causes, spread false news, or comment on legitimate news posts.
One notable network from China consisted of dozens of Instagram and Facebook accounts targeting Sikh communities globally. The Israeli campaign, meanwhile, employed over 500 accounts posing as Jewish students, African Americans, and concerned citizens, posting about Israeli military actions and campus antisemitism.
AI’s Role in Influence Campaigns
Meta’s report underscored the increasing use of generative AI tools in these influence operations. The China-based campaign shared AI-generated images, while the Israeli network used AI-generated comments. However, Meta noted that current AI-powered influence campaigns are not sophisticated enough to bypass detection systems.
Influence campaigns are a persistent problem for social media platforms. Earlier in May, TikTok revealed it had disrupted a dozen such networks, including one traced to China. Meta also continues to investigate Doppelganger, a covert influence operation originating in Russia.
Proactive Measures and Insights
Meta’s report for the first quarter of 2024 indicated that most influence campaigns were dismantled early, before they could build audiences of real users. The company said it has not yet encountered tactics that hinder its ability to shut down these networks.
Meta observed AI-generated photos, images, video news readers, and text in these operations but has yet to see a trend of photorealistic AI-generated content featuring politicians. “Right now, we’re not seeing generative AI being used in terribly sophisticated ways,” said David Agranovich, Meta’s policy director of threat disruption. “But we know these networks are inherently adversarial. They’re going to keep evolving their tactics as their technology changes.”
Meta also continues to identify and dismantle inauthentic networks that use generative adversarial networks (GANs) to create profile pictures for fake accounts, and says it remains able to address these threats effectively.
The efforts by Meta and OpenAI underscore the ongoing challenge of combating misinformation and deepfakes in the age of generative AI. Both companies say they are committed to safeguarding the integrity of public discourse, especially during major political events like the Indian general elections.