Tech giants ban political advertisers from using AI tools over election misinformation concerns

Meta and other tech giants are barring political campaigns and advertisers in regulated industries from using their generative AI advertising tools amid concerns about election misinformation.

Meta, the parent company of Facebook, has announced that political campaigns and advertisers in regulated industries are barred from using its new generative AI advertising products. The decision was made public by a company spokesperson on Monday (6 Nov). TikTok, Google, and Snapchat plan to implement similar restrictions.

The ban responds to concerns raised by lawmakers that these AI-powered tools could accelerate the spread of misinformation during elections. According to Reuters, Meta disclosed the decision in an update posted to its help center on Monday (6 Nov) evening.

While Meta’s advertising standards already prohibit content that has been debunked by the company’s fact-checking partners, there were previously no rules specific to the use of AI in advertising.

The company stated, “As we continue to test our new generative AI advertising tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Jobs, or Credit or Social Issues, General Elections, or Politics, or related to Health, Pharmaceuticals, or Financial Services are currently not allowed to use these generative AI features.”

The company added, “We believe this approach will enable us to better understand the potential risks and develop appropriate protections for the use of generative AI in advertising related to potentially sensitive topics in regulated industries.”

This policy update comes a month after Meta, the world’s second-largest digital advertising platform, announced plans to give advertisers broader access to AI-powered advertising tools that can instantly generate backgrounds, image adjustments, and variations of ad text in response to simple text prompts. The tools were initially available only to a small group of advertisers beginning in the spring and were on track to roll out to all advertisers globally next year, the company said at the time.

Meta and other tech companies have been racing to launch generative AI advertising products and virtual assistants in recent months in response to the buzz around last year’s debut of OpenAI’s ChatGPT chatbot, which can provide human-like written responses to questions and requests.

So far, these companies have released limited information about the safeguards they plan to implement on these systems, making Meta’s decision regarding political advertising one of the most significant AI policy choices in the industry to date.

Meta’s top policy executive, Nick Clegg, said last month that the use of generative AI in political advertising was an area where the company needed to update its rules.

Clegg also informed Reuters that Meta was blocking the use of its AI virtual assistant to create photo-realistic images of public figures and was developing a system to label AI-generated content. The company has explicitly banned misleading AI-generated videos in all content, including unpaid organic uploads, except for parodies or satire.

Google, a subsidiary of Alphabet, has likewise decided to keep political content out of its image-based generative AI advertising tools, which launched just last week. A Google spokesperson stated, “We will block ‘political keywords’ from being used as cues.”

Additionally, Google plans to update its policy in mid-November to require election-related ads to disclose the use of AI. Such ads will have to carry a written disclosure when they contain synthetic content that inauthentically depicts real or realistic-looking people or events. The owners of TikTok and Snapchat have also banned AI-generated political advertising.
