Google changes ad policies again


Damage control: Google has allocated more of its AI tools to deciphering YouTube’s enormous library since the recent hateful videos incident. – Reuters

SAN FRANCISCO: Two weeks into a YouTube advertising boycott over hateful videos, Google is taking more steps to curb a crisis that escalated further than the company anticipated.

Alphabet Inc’s main division is introducing a new system that lets outside firms verify ad quality standards on its video service, while expanding its definitions of offensive content.

A slew of major marketers halted spending on YouTube and Google’s digital ad network after ads were highlighted running alongside videos promoting hate, violence and racism. Google’s initial response, a promise of new controls for marketers, failed to stem the boycott.

The crisis ignited a simmering debate in digital advertising over quality assurance, or “brand safety,” standards online.

Google has since improved its ability to flag offending videos and immediately disable ads, chief business officer Philipp Schindler told Bloomberg News in a recent interview. Johnson & Johnson, one of the largest advertisers to pull spending, said it is reversing its position in most major markets.

Since the boycott began, Google has allocated more of its artificial intelligence (AI) tools to deciphering YouTube’s enormous video library. The company is a pioneer in the field and has used machine learning, a powerful type of AI, to improve many of its products and services, including video recommendation on YouTube and ad-serving.

Automatically classifying entire videos, then flagging and filtering content is a more difficult, expensive research endeavor – one that Google hasn’t focused on much, until now.

“We switched to a completely new generation of our latest and greatest machine-learning models,” said Schindler.

“We had not deployed it to this problem, because it was a tiny, tiny problem. We have limited resources.”

In talks with big advertising clients, Google discovered the toxic YouTube videos flagged in recent media reports represented about one one-thousandth of a per cent of total ads shown, Schindler said.

Still, with YouTube’s size, that can add up quickly. And the attention on the issue coincided with mounting industry pressure on Google, the world’s largest digital ad-seller, for more rigid measurement standards. A frequent demand has been for Google to let other companies verify standards on YouTube.

Google is allowing this now, creating a “brand safety” reporting channel that lets YouTube ads be monitored by external partners like comScore Inc and Integral Ad Science Inc, according to a company spokeswoman.

Google has made quick progress on its own, he said. Using the new machine-learning tools, and “a lot more people,” the company in the last two weeks flagged five times as many videos as “non-safe,” or disabled from ads, than before.

“But it’s five (times) on the smallest denominator you can imagine,” Schindler said. “Although historically it has been a very small, small problem, we can make it an even smaller, smaller, smaller problem.”

Vocal critics suggest Google has ignored this problem. Some publishers and ad agencies have called on Google and rival Facebook Inc to more actively police the content they host online. In a speech last week, Robert Thomson, chief executive officer of News Corp, a frequent Google critic, said the two digital companies “have prospered mightily by peddling a flat earth philosophy that doesn’t wish to distinguish between the fake and real because they make copious amounts of money from both.”

The YouTube ad boycott has pushed Google to beef up its policing. In its initial response, Google expanded its definition of hate speech to include marginalised groups. Now it’s adding a new filter to disable ads on “dangerous and derogatory content,” the company said. That includes language that promotes negative stereotypes about targeted groups or denies “sensitive historical events” such as the Holocaust.

Some researchers argue digital platforms should rely on humans to make these editorial decisions. Schindler said he has devoted more manpower to overseeing brand safety issues, but stressed that only machine intelligence could contend with YouTube’s size.

“The problem cannot be solved by humans and it shouldn’t be solved by humans,” he said. – Bloomberg
