YouTube has tried to keep violent and hateful videos off its service for years. The Google unit hired thousands of human moderators and put some of the best minds in artificial intelligence on the problem.
On March 14, that was no match for a gunman who used social media to broadcast his killing spree at a New Zealand mosque, or for the legions of online posters who tricked YouTube’s software into spreading the attacker’s video.
When the rampage was streamed live on Facebook, police alerted the social network, which took the video down. But by then it had been captured by others, who re-posted it on YouTube.
Google said it’s “working vigilantly to remove any violent footage” and had deleted the video thousands of times by the afternoon of March 15. Yet many hours after the attack, copies could still be found, an unnerving reminder of how far giant Internet companies have to go to understand and control the information shared on their services.
“Once content has been determined to be illegal, extremist or a violation of their terms of service, there is absolutely no reason why, within a relatively short period of time, this content can’t be eliminated automatically at the point of upload,” said Hany Farid, a computer science professor at the University of California at Berkeley’s School of Information and a senior adviser to the Counter Extremism Project. “We’ve had the technology to do this for years.”
YouTube has worked to block certain videos from ever showing up on its site for years. One tool, called Content ID, has been around for more than a decade. It gives copyright owners such as film studios the ability to claim content as their own, get paid for it, and have bootlegged copies deleted. Similar technology has been used to blacklist other illegal or undesirable content, including child pornography and terrorist propaganda videos.
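Systems like Content ID are generally understood to fingerprint reference files claimed by rights holders and compare every upload against that index; the exact design is proprietary. A minimal sketch of the idea, in which the frame representation, signature scheme and class names are illustrative assumptions rather than YouTube’s actual implementation:

```python
# Sketch of the fingerprint-and-match idea behind tools like Content ID.
# Frames are modelled as 2D lists of grayscale pixel values.

def frame_signature(frame, grid=4):
    """Reduce a frame to a coarse grid of average-brightness values,
    a tiny perceptual signature that survives small pixel noise."""
    h, w = len(frame), len(frame[0])
    cell_h, cell_w = h // grid, w // grid
    sig = []
    for gy in range(grid):
        for gx in range(grid):
            total = count = 0
            for y in range(gy * cell_h, (gy + 1) * cell_h):
                for x in range(gx * cell_w, (gx + 1) * cell_w):
                    total += frame[y][x]
                    count += 1
            sig.append(total // count)
    return tuple(sig)

class FingerprintIndex:
    """Reference videos are fingerprinted once; uploads are then
    checked against the index before (or as) they go live."""
    def __init__(self):
        self.index = {}  # frame signature -> reference video id

    def register(self, video_id, frames):
        for f in frames:
            self.index[frame_signature(f)] = video_id

    def match(self, frames):
        """Return the reference id most of the upload's frames hit, if any."""
        hits = [self.index[s] for f in frames
                if (s := frame_signature(f)) in self.index]
        return max(set(hits), key=hits.count) if hits else None
```

Once a claimed video is registered, any re-upload built from the same frames maps to the same signatures and is caught by a dictionary lookup, which is what makes exact copies cheap to block at scale.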
About five years ago, Google revealed it was using AI techniques such as machine learning and image recognition to improve many of its services, including YouTube. In early 2017, 8% of videos flagged and removed for violent extremism were taken down with fewer than 10 views. After YouTube introduced a flagging system powered by machine learning in June 2017, more than half of the videos pulled for violent extremism had fewer than 10 views, the company reported in a blog post.
Google executives have testified multiple times in front of the US Congress on the topic of violent and extremist videos being spread through YouTube. The repeated message: YouTube is getting better, sharpening its algorithms and hiring more people to deal with the problem. Google is widely seen as the best-equipped company to deal with this problem because of its AI prowess.
So why couldn’t Google stop a single video, one that is clearly extreme and violent, from being reposted on YouTube?
“There are so many ways to trick computers,” said Rasty Turek, chief executive officer of Pex, a startup that builds a competing technology to YouTube’s Content ID. “It’s whack-a-mole.”
Making minor changes to a video, such as putting a frame around it or flipping it on its side, can throw off software that’s been trained to identify troubling images, Turek said.
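The fragility Turek describes is easy to demonstrate with a standard perceptual-hashing technique such as a difference hash (dHash), a textbook method and not YouTube’s actual algorithm: each bit records whether one pixel is brighter than its neighbour, so a mirrored frame produces a very different bit pattern and an exact-match lookup misses it.

```python
# Why a sideways flip defeats exact fingerprint lookup: the bits of a
# simple difference hash depend on pixel positions, so mirroring the
# frame changes the hash. Frames are 2D lists of grayscale values.

def dhash_row(row):
    """One bit per adjacent pixel pair: is the left pixel brighter?"""
    return tuple(int(row[i] > row[i + 1]) for i in range(len(row) - 1))

def dhash(frame):
    return tuple(bit for row in frame for bit in dhash_row(row))

def hflip(frame):
    return [list(reversed(row)) for row in frame]

# A frame with a left-to-right brightness gradient.
frame = [[i * 10 + j for j in range(9)] for i in range(8)]
print(dhash(frame) == dhash(hflip(frame)))  # prints False
```

On this gradient frame the flip inverts every bit of the hash, so a blacklist keyed on the original fingerprint never fires; defeating such tricks requires matching that is deliberately robust to flips, borders and re-encoding.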
The other major problem is live streaming, which by its very nature doesn’t allow AI software to analyse a whole video before the clip is uploaded. Clever posters can take an existing video they know YouTube will block and stream it live second by second – essentially rebroadcasting it online to get around Google’s software. By the time YouTube recognises what’s going on, the video has already been playing for 30 seconds or a minute, regardless of how good the algorithm is, Turek said.
“Live stream slows this down to a human level,” he said. It’s a problem YouTube, Facebook, Pex and other companies working in the space are struggling with, he added.
This rebroadcasting trick is a particular problem for YouTube’s approach to blacklisting videos that break its rules. Once the company identifies a problematic video, it puts the clip on a blacklist. Its AI-powered software is then trained to automatically recognise the clip and block it if someone else tries to upload it to the site again.
It still takes a while for the AI software to be trained before it can spot other copies. And by definition, the video has to exist online before YouTube can set this machine-learning process in motion. And that’s before people start slicing the offending content into short live-streamed clips.
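The latency Turek describes can be sketched as a toy detector that needs a run of matching frames before it will confirm a blacklisted clip; by the time it fires, every earlier frame of the live stream has already reached viewers. The confirmation window, the stand-in signatures and the 30-frames-per-second figure are all illustrative assumptions, not any platform’s real parameters.

```python
# Sketch of why live streaming "slows this down to a human level":
# a matcher that needs K consecutive blacklisted frames cannot fire
# until K frames have already been broadcast.

K = 30  # confirmation window, roughly one second at 30 fps (assumed)

def detect_latency(stream_sigs, banned_sigs, k=K):
    """Return the frame index at which a match is confirmed, or None.
    Every frame before that index has already aired."""
    banned = set(banned_sigs)
    run = 0
    for i, sig in enumerate(stream_sigs):
        run = run + 1 if sig in banned else 0
        if run >= k:
            return i
    return None

banned = list(range(1000))          # stand-in per-frame signatures
stream = [-1] * 10 + banned[:100]   # rebroadcast starts 10 frames in
print(detect_latency(stream, banned))  # prints 39
```

Here detection lands at frame 39: the 10 innocuous opening frames plus the 30-frame confirmation run, all of which played out live before the software could act, however good the underlying algorithm.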
Another complicating factor is that edited clips of the shooting video are also being posted by reputable news organisations as part of their coverage of the event. If YouTube were to take down a news report simply because it included a screenshot of the video, press freedom advocates would object.
The New Zealand shooter used social media to gain maximum exposure. He posted on Internet forums used by right-wing and anti-Muslim groups, tweeted about his plans and then began the Facebook live stream on his way to carry out the attack.
He posted a manifesto filled with references to Internet and alt-right culture, most likely designed to give journalists more material to work with and therefore spread his notoriety further, said Jonas Kaiser, a researcher affiliated with Harvard’s Berkman Klein Centre for Internet and Society.
“The patterns seem to be very similar to prior events,” Kaiser said. – Bloomberg