Facebook enlists AI, human experts in new push against terrorism


TECH | Saturday, 17 Jun 2017

Facebook's logo is seen through a magnifier in front of a displayed PC motherboard, in this illustration taken April 11, 2016. REUTERS/Dado Ruvic/Illustration

Facebook Inc has built a team of more than 150 counterterrorism experts and is increasingly using artificial intelligence that can understand language and analyse images to try to keep terrorists from using the social network for recruiting and propaganda.

Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, outlined aspects of Facebook’s latest efforts in a post to a new blog the company debuted on June 15. 

The blog, called “Hard Questions,” will address philosophical debates about the role of social media in society, from what should happen to a person’s digital history after they die to whether social media is good for democracy. The first post addresses how the company responds to the spread of terrorism online. 

“We agree with those who say that social media should not be a place where terrorists have a voice,” Bickert and Fishman write. 

The move comes as Facebook is being hounded by governments to do more to combat terrorism. Following attacks in London and Manchester in the past four months, UK Prime Minister Theresa May pressed other leaders from the Group of Seven nations to consider further regulation of social media companies to compel them to take additional steps against extremist content. 

Bickert and Fishman acknowledge this pressure in their post, writing that “in the wake of recent terrorist attacks, people have questioned the role of tech companies in fighting terrorism online.” 

Positive force 

Mark Zuckerberg, Facebook’s co-founder and chief executive officer, has also been trying to position the company as a positive force for building communities both online and off. This new emphasis from Zuckerberg followed the uproar over Facebook’s role in the proliferation of false news stories during the US election campaign last year, as well as the spread of extreme content, such as videos of murder, posted to Facebook.

“Although academic research finds that the radicalisation of members of groups like ISIS and al-Qaeda primarily occurs offline, we know that the internet does play a role – and we don’t want Facebook to be used for any terrorist activity whatsoever,” Bickert and Fishman write. 

Over the past year, Facebook has expanded its team of counterterrorism experts and now has more than 150 people primarily dedicated to that role. Many of them have backgrounds in law enforcement, and collectively they speak almost 30 languages. In addition, Facebook has thousands of employees and contractors around the world who respond to reports of violations of its terms of service, whether that is online bullying, pornography or hate speech.

“We want to find terrorist content immediately, before people in our community have seen it,” Bickert and Fishman write. “Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology.” 

AI tools 

Facebook is also making greater use of artificial intelligence to find terrorist content that users attempt to post to the social network, Bickert and Fishman said.

The company has deployed a system that stores a digital fingerprint of any video or photograph already removed for promoting terrorism and automatically flags the content to human reviewers if someone else tries to repost it. It is also working with other social media companies to create a shared database of these digital fingerprints – known as hashes – to ensure that people cannot simply post the same content to Twitter or YouTube.
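
The post does not spell out the mechanics, but the core idea of hash matching can be sketched in a few lines of Python. Everything below is an illustrative assumption: the fingerprint function, the removed_hashes set and the exact-match lookup stand in for the perceptual-hashing systems large platforms actually use, which tolerate re-encoding and cropping.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Exact cryptographic hash, used only for illustration; production
    # systems use perceptual hashes that survive resizing and re-encoding.
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database of fingerprints of media already removed
# for promoting terrorism.
removed_hashes = {fingerprint(b"<bytes of a previously removed video>")}

def should_flag(upload: bytes) -> bool:
    """Route an upload to human review if it matches known content."""
    return fingerprint(upload) in removed_hashes

# A byte-identical repost is caught before anyone sees it.
assert should_flag(b"<bytes of a previously removed video>")
```

Sharing the hash set rather than the media itself is what lets companies cooperate without redistributing the offending content.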

Facebook is also experimenting with software that can parse the meaning of written language, using it to analyse text posted to the site, the two executives said. They said they are currently training this system on text the company has previously removed for promoting terrorism.
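
As a rough stand-in for that kind of classifier, the sketch below trains on placeholder examples with scikit-learn; the TF-IDF-plus-logistic-regression pipeline and the two-post corpus are assumptions, far simpler than whatever Facebook actually deploys, but they show the train-on-removed-posts idea the executives describe.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled corpus: posts previously removed for promoting
# terrorism (label 1) versus ordinary posts (label 0).
texts = [
    "<text of a post removed for promoting terrorism>",
    "<text of an ordinary post about football>",
]
labels = [1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# Score a new post; a high score would route it to human reviewers
# rather than trigger automatic removal.
score = classifier.predict_proba(["<text of a new post>"])[0][1]
print(f"probability of terrorist content: {score:.2f}")
```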

Once an account has been removed for posting terrorist content, Facebook uses algorithms to search the network of connections around that account – including Pages and groups it has joined or liked – to identify other accounts that may also be promoting terrorism.
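
In graph terms this is a fan-out from the removed account through its shared Pages and groups. The one-hop sketch below is hypothetical: the memberships and members mappings, and the review-only output, are assumptions about how such a search might be organised.

```python
# Hypothetical adjacency data: account -> Pages/groups it joined or liked,
# and Page/group -> accounts that belong to it.
memberships = {"removed_account": {"page_a", "group_b"}}
members = {
    "page_a": {"account_1", "account_2"},
    "group_b": {"account_2", "account_3"},
}

def related_accounts(seed: str) -> set:
    """Fan out one hop from a removed account, yielding candidate
    accounts for human review rather than automatic removal."""
    candidates = set()
    for group in memberships.get(seed, ()):
        candidates |= members.get(group, set())
    candidates.discard(seed)
    return candidates

print(related_accounts("removed_account"))  # account_1, account_2, account_3
```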

Across platforms 

The company is working on using artificial intelligence to try to prevent users who have had one account removed for posting terrorist content from creating new accounts with different identities, according to the post. 
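
The post gives no detail on how returning users are recognised. One plausible, entirely hypothetical approach is to compare signals from a new sign-up against signals recorded when an account was removed; the signal names and the overlap threshold below are invented for illustration.

```python
# Signals recorded, hypothetically, when an account is removed for
# posting terrorist content.
banned_signals = [
    {"device_id": "dev-123", "photo_hash": "ab12cd", "ip_prefix": "203.0.113"},
]

def looks_like_returning_actor(new_account: dict, threshold: int = 2) -> bool:
    """Flag a new account that shares enough signals with a banned one.
    A production system would weight signals and use learned models
    rather than a raw overlap count."""
    for old in banned_signals:
        overlap = sum(new_account.get(key) == value for key, value in old.items())
        if overlap >= threshold:
            return True
    return False

newcomer = {"device_id": "dev-123", "photo_hash": "ab12cd", "ip_prefix": "198.51.100"}
print(looks_like_returning_actor(newcomer))  # True: two of three signals match
```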

Facebook is also trying to use these techniques to work across all its platforms, including WhatsApp and Instagram. 

WhatsApp’s end-to-end encryption, however, means that Facebook has no access to the content of most messages, so it cannot deploy the same image and text analysis tools there. Some government agencies, including the US Federal Bureau of Investigation and the UK Home Office, have called on tech companies to ensure that law enforcement can access encrypted messages. In their post, Bickert and Fishman said encryption was essential for journalists, aid workers and human rights campaigners, as well as for keeping banking details and personal photos secure from hackers.
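
To see why end-to-end encryption puts message content out of the platform's reach, consider a minimal public-key sketch using the PyNaCl library. This is an assumption-laden toy: WhatsApp's actual protocol, the Signal protocol, is far more elaborate, with per-message ratcheting keys.

```python
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; only the public
# keys ever leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello, bob")

# A relaying server handles only this ciphertext; without a private key
# it cannot read the message, so image and text analysis tools have
# nothing to analyse.
print(ciphertext.hex())

# Bob decrypts with his private key and Alice's public key.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"hello, bob"
```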

The blog post also highlighted Facebook’s efforts to fund and train anti-extremist groups to produce counternarratives, or online content designed to undercut terrorist propaganda and dissuade people from joining terrorist groups. — Bloomberg
