Facebook told to face up to extremism after New Zealand attack


TECH | Tuesday, 19 Mar 2019

Facebook, photographed on Friday, Dec. 11, 2015 in Menlo Park, Calif. (John Green/Bay Area News Group/TNS)

Pressure is building on Facebook Inc and other social media platforms to stop hosting extremist propaganda, including footage of terrorist attacks, after the deadly March 15 attacks on two mosques in New Zealand were live-streamed.

Australia’s prime minister has urged the Group of 20 nations to use a meeting in June to discuss a crackdown, while New Zealand media reported that the nation’s biggest banks have pulled their advertising from Facebook and Google.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” New Zealand Prime Minister Jacinda Ardern told parliament on March 19. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.” 

Facebook said it had been working directly with New Zealand police and across the technology industry to “help counter hate speech and the threat of terrorism”. 

The lone shooter accused of killing 50 people in the New Zealand city of Christchurch live-streamed the murders, and the video remained widely available on a range of platforms for hours after the attack. The suspect, an Australian, uploaded his hate-filled manifesto online shortly before launching the assault.

Offensive content 

It’s the latest example of social media companies struggling to keep offensive content off sites that generate billions of dollars in advertising revenue – a problem that has seen Facebook founder Mark Zuckerberg grilled by the US Congress.

The shooting video was viewed fewer than 200 times during its live broadcast, and no users reported the video during that time, Facebook vice-president and deputy general counsel Chris Sonderby said in a blog post. It was reported to the company 29 minutes after the video started and viewed 4,000 times before being removed, he said. 

The G-20 should discuss the issue at its Osaka summit in June, Australian Prime Minister Scott Morrison said on Tuesday in an open letter to this year’s host, his Japanese counterpart Shinzo Abe. The group should work to ensure technology firms implement appropriate filtering, remove terrorist-linked content and show transparency in meeting those requirements, he said.

“It is unacceptable to treat the Internet as an ungoverned space,” Morrison said. “It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit.” 

Ardern’s government will look at the role social media played and what steps it can take, including on the international stage. She had previously vowed to seek talks with Facebook, which said it blocked 1.2 million attempts to upload the video and removed another 300,000 copies within 24 hours.

The New Zealand business community is becoming increasingly vocal that the social media companies should be held to account through their bottom line.

The Association of New Zealand Advertisers is encouraging advertisers to recognise that they have a choice about where their advertising dollars are spent, and to consider carefully where their ads appear.

“We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online,” the association said in a statement. 

Meanwhile, New Zealand’s three biggest broadband providers called on Facebook, Twitter and Google to join an urgent discussion at an industry and government level to find a solution to the live-streaming and hosting of video footage such as that produced in Christchurch. 

“The discussion must start somewhere,” the chief executives of the companies said in an open letter on their websites Tuesday. “Social media companies and hosting platforms that enable the sharing of user-generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.” 

Artificial intelligence techniques could be deployed, and more onerous requirements should apply to the most serious types of content, including removal of the material within a specified period, proactive monitoring and fines for failure to comply, they said.

“Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution.” – Bloomberg 
