In a significant move aimed at curbing the potential spread of election misinformation, Meta, the owner of Facebook, has announced restrictions on the use of its new generative AI advertising products by political campaigns and by advertisers in regulated industries. The decision, disclosed on Monday, follows growing concern that the use of AI in advertising could amplify the dissemination of false or misleading information.
A New Policy for Sensitive Topics
Meta unveiled the decision in an update posted to its Help Center, limiting access to its generative AI advertising tools for political campaigns and regulated industries. The restriction applies to advertisers running campaigns in the categories of Housing, Employment, Credit, Social Issues, Elections, Politics, Health, Pharmaceuticals, and Financial Services. Meta said the approach is intended to help it assess potential risks and build appropriate safeguards before generative AI is used in ads touching on sensitive subjects within regulated industries.
The New Generative AI Advertising Tools
Meta’s decision comes shortly after the company began expanding access to AI-powered advertising tools that can automatically generate backgrounds, adjust images, and create variations of ad copy based on text prompts. Initially, these tools were accessible to a select group of advertisers, but Meta plans to make them available to all advertisers globally by next year. This expansion has been part of a broader effort by tech companies to embrace generative AI technology, driven by the success of OpenAI’s ChatGPT.
Industry-Wide AI Policy Choices
Meta’s decision to limit the use of generative AI in political ads is one of the most significant AI policy choices the industry has made to date. Other tech giants, including Google, are also entering the generative AI ad space. Google recently launched generative AI tools for customizing ad images and blocks a set of “political keywords” from being used as prompts, to prevent the tools from producing political content. Google will also soon require election-related ads to include disclosures if they contain synthetic content that inauthentically depicts actual or realistic-looking people or events.
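Google has not published the keyword list or the enforcement mechanism behind this filter, but conceptually it amounts to a denylist applied to prompts before any image is generated. The following Python sketch is purely illustrative; the keywords and function name are hypothetical, not drawn from any actual product.

```python
import re

# Hypothetical examples only; Google's actual blocked "political keywords"
# have not been published.
BLOCKED_KEYWORDS = {"election", "ballot", "candidate", "vote"}

def is_prompt_allowed(prompt: str) -> bool:
    """Reject a prompt if it contains any blocked keyword as a whole word."""
    tokens = set(re.findall(r"[a-z']+", prompt.lower()))
    return tokens.isdisjoint(BLOCKED_KEYWORDS)

print(is_prompt_allowed("a sunny beach background for a shoe ad"))  # True
print(is_prompt_allowed("crowd cheering at an election rally"))     # False
```

A real system would likely go beyond whole-word matching, since simple denylists are easy to evade with paraphrases or misspellings, but the prompt-level gate shown here captures the basic idea of blocking generation before it happens rather than moderating the output afterward.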
Meanwhile, the social media platforms TikTok and Snapchat prohibit political advertising altogether, and X (formerly Twitter) has not introduced generative AI advertising tools.
Meta’s Focus on AI Safety
Nick Clegg, Meta’s top policy executive, emphasized the need to update the rules governing generative AI in political advertising. He warned that the technology could be exploited to interfere in the elections taking place in 2024 and urged governments and tech companies to prepare for the challenge of election-related content that moves across different platforms.
Earlier this year, Clegg disclosed that Meta was blocking its Meta AI virtual assistant from generating photo-realistic images of public figures. As part of its commitment to AI safety, the company is also working on a system to “watermark” content created by AI. Meta already prohibits misleading AI-generated videos in all content, including unpaid posts, with exceptions for parody and satire. Meta’s independent Oversight Board is reevaluating that policy, however, prompted by a doctored video of U.S. President Joe Biden that was manipulated without the use of AI and therefore falls outside the rule’s scope.
Meta’s decision to bar political campaigns and regulated industries from its generative AI advertising products reflects a growing awareness of the risks AI poses in advertising and the need for safeguards against the spread of false information, particularly during elections. The move sets a noteworthy precedent for the tech industry’s approach to AI policy and advertising practices, and other major players in the field will be watching its impact closely.