Google is making significant changes to the way it handles political advertising on its platform.
Political campaigns that buy ad space on Google Search, YouTube, and Google-powered display ads on websites will no longer be able to target ads based on a person’s political leanings, whether inferred from their online activity or drawn from public voting records.
The new system, announced by the web giant on Wednesday, November 20, means that targeting will be restricted to “general” categories only, namely people’s age, gender, and ZIP code location. The company said it was also clarifying its policies regarding banned ad content such as deepfakes and ads that make misleading claims.
The changes will take effect within a week in the United Kingdom, where a national election campaign is currently underway, before rolling out to the rest of the world in the coming months.
In a blog post explaining the decision, Scott Spencer, vice president of Google Ads product management, said the company was taking action following recent concerns over political advertising online, and hoped the move would help to “improve voters’ confidence in the political ads they may see on our ad platforms.”
Elaborating, Spencer said: “Political advertisers can, of course, continue to do contextual targeting, such as serving ads to people reading or watching a story about, say, the economy. This will align our approach to election ads with long-established practices in media such as TV, radio, and print, and result in election ads being more widely seen and available for public discussion.”
Facebook and Twitter
Google is the latest tech giant to update its policies regarding how political ad campaigns are conducted online.
Facebook CEO Mark Zuckerberg surprised many in October 2019 when he said that his company had no plans to fact-check political ads on the social networking site, insisting that “people should decide for themselves what is credible, not tech companies.” Facebook faced huge criticism following the 2016 presidential election when it was accused of allowing disinformation to spread during the campaign.
Google said on Wednesday that while obvious falsehoods in ads on its own platform have long been prohibited, it was also clarifying its ads policies and adding examples “to show how our policies prohibit things like ‘deepfakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process.”
In an apparent response to the more relaxed stance adopted by Facebook, Twitter chief Jack Dorsey recently announced a ban on all political advertising on the microblogging platform starting in November 2019, saying that “political message reach should be earned, not bought.”
With both Google and Twitter having tightened their own policies regarding political ads, and with pressure growing on Zuckerberg to take more robust action ahead of the 2020 U.S. election, Facebook could yet change course.