The accused gunman behind 10 deaths and three injuries in Buffalo, New York, over the weekend had, since at least last December, spoken explicitly on the popular chat app Discord about his plans to commit a terrorist act, according to logs of his posts reviewed by Bloomberg.
The record of his conversations indicates that the alleged shooter, identified by authorities as Payton S. Gendron, 18, had for months typed out plans on a private Discord server to carry out a rampage fuelled by his White supremacist beliefs.
On Dec 2, he wrote, “I will carry out an attack against the replacers, and will even livestream the attack via discord and twitch,” referring to a popular White supremacist belief that the White race is on the verge of extinction at the hands of non-Whites who are controlled and manipulated by Jewish people.
The suspect wrote on Dec 5 that he initially planned the attack for March 15, three years after the shooting at two mosques in Christchurch, New Zealand, which left dozens dead. He then delayed his attack, carried out at a Tops grocery store, until May 14.
The alleged shooter shared these logs with several public Discord groups as part of an effort to draw attention to his Twitch stream, where he broadcast the attack live. In December alone, he referenced his plans to commit the attack at least 17 times, according to the logs. Between November and May 14, the shooter referenced the Christchurch terrorist’s name 31 times, the word “gun” 200 times, the word “shoot” 119 times, and the word “attack” over 200 times.
He also abundantly used racist and anti-Semitic language, including extremist phrases identified by multiple anti-hate research centers.
The numerous references to guns and attacks, and the specific mentions of Discord and Twitch, highlight the challenges social media companies face in rooting out violence and hate speech before events unfold, even when, in hindsight, the warning signs appear plainly evident for anyone to see.
Discord hosts more than 150 million monthly users and is enormously popular among young gamers, who use the chat rooms to communicate via voice, video and text while playing video games. As its popularity has grown, the site has expanded to encompass everything from study groups to art communities.
Since its 2015 launch Discord has become “the de facto place for social interactions online”, said Alex Newhouse, the deputy director of Middlebury Institute’s Center on Terrorism, Extremism and Counterterrorism. “We also know extremists have recognised that Discord has issues with large-scale content monitoring and enforcement.”
Perpetrators of both the 2017 Unite the Right rally, in Charlottesville, Virginia, and the 2021 Capitol riot mobilised in part over Discord.
The shooter followed patterns widely visible across similar online White supremacist communities, including in his use of Discord, according to Newhouse. “Discord has become a haven for these particular types of small-cell, individual-focused mobilisation pathways,” he said.
In a statement, Discord said, “We extend our deepest sympathies to the victims and their families. Hate and violence have no place on Discord.” The company said it’s cooperating with law enforcement on the investigation.
Discord, a San Francisco-based startup that was recently valued at US$15 billion, is not as experienced as some of its larger tech rivals in policing its online content. Social media giants like Meta Platforms Inc’s Facebook and Alphabet Inc’s YouTube have hired tens of thousands of moderators and invested billions of dollars in trying to spot violent content and remove it before it leads to a deadly act or proliferates, and even they have had mixed results.
To amass an audience on Twitch, the shooter sent invitations to an unknown number of individuals linking to his Twitch livestream and Discord logs. The logs don’t cover his full Discord activity; they primarily document his White supremacist views and attack plan.
Experts have applauded Twitch for its speed in taking down the livestream less than two minutes after the violence began. While the attacker was live for a total of 25 minutes, most of that footage was of him driving, according to StreamsCharts. Gendron chose to livestream his attack on Twitch because it was free and easy for anybody to watch, he stated in his manifesto. Facebook Live was less appealing because it’s more challenging for people to watch without their own account, Gendron said. ByteDance Ltd’s TikTok and YouTube both have follower or subscriber requirements before permitting users to livestream. A 2019 shooting at Germany’s Halle synagogue was also livestreamed on Twitch.
Twitch, owned by Amazon.com Inc, said in a statement to Bloomberg that it uses both proactive detection of content that violates its terms of service and user reports. A spokesperson said it has doubled the size of its safety operations team in recent years. Twitch and Discord both work with law enforcement agencies and the Global Internet Forum to Counter Terrorism (GIFCT), a nonprofit coalition of social media sites formed in 2017 by Facebook, Microsoft Corp, Twitter Inc and YouTube, to monitor and moderate harmful content.
“Twitch has issues, but overall has taken a stricter route in general toward content moderation,” said Middlebury’s Newhouse.
While the attack appears to have been planned at least in part on Discord and broadcast on Twitch, videos of the livestream and the attacker’s manifesto proliferated widely across the Internet. Facebook, Twitter and YouTube said they designated the Buffalo shooting as a so-called violating terrorist attack, meaning copies of the shooter’s video, copies of his manifesto and links to either would be banned from the platforms. Video and other posts praising the shooter would also be removed, the companies said.
The major tech platforms said they were also working with GIFCT, the nonprofit coalition of social media sites, to prevent the spread of the video. Social media services often use a hash – a digital fingerprint of a video or image – as a signal to mark inappropriate content for automatic takedown by the companies’ algorithms.
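The mechanics can be sketched in a few lines. This is a minimal illustration, not any platform’s actual implementation: the blocklist entry and function names are hypothetical, and it uses an exact SHA-256 digest, whereas shared databases such as GIFCT’s rely on perceptual hashes (for example, Meta’s open-source PDQ) that still match after re-encoding or cropping.

```python
import hashlib

# Hypothetical shared blocklist of hashes (illustrative only). The entry
# below is the SHA-256 digest of the bytes b"foo".
BLOCKED_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(data: bytes) -> str:
    """Return a hex digest serving as the file's 'digital fingerprint'."""
    return hashlib.sha256(data).hexdigest()

def should_take_down(upload: bytes) -> bool:
    """Flag an upload for automatic removal if its hash is blocklisted."""
    return fingerprint(upload) in BLOCKED_HASHES

# A byte-identical copy of a blocked file is caught...
print(should_take_down(b"foo"))            # True
# ...but any altered copy produces a different hash and slips through.
print(should_take_down(b"foo, re-encoded"))  # False
```

The second call illustrates why exact hashing alone is weak against re-uploads: any trivial edit to the file defeats the match, which is why real hash-sharing systems favour perceptual fingerprints – and why, as described below, copies of the video still spread despite the takedown effort.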
But that system still didn’t effectively curb the spread of the shooter’s manifesto and the video of his attack. In the first 24 hours after the shooting occurred on May 14, a Google Drive link to the manifesto was shared more than 1,100 times on Twitter, according to an analysis by the social media threat intelligence firm Memetica. Facebook posts that linked to a video copy of the attack on Streamable, a video-sharing site, collected 43,500 likes, comments and shares on the platform, according to an analysis on the social media web tool Buzzsumo. The Streamable link was also shared hundreds of times on Twitter and Reddit, the analysis showed.
On YouTube, portions of the attack video that didn’t show the explicit violence were uploaded to the site, raising questions about loopholes in the companies’ moderation policies.
And on far-right message boards such as Patriots.win and GreatAwakening.win, a forum affiliated with the QAnon conspiracy movement, copies of both the manifesto and the video of the shooter’s attack continued to spread. Other copies were widely shared on platforms with little to no content moderation, such as 4chan, Telegram, Gab and KiwiFarms, according to Memetica.
Discord has been more actively moderating content since the Charlottesville event in 2017, but it still has a long way to go. The company removed more than 24,000 accounts and 2,000 servers associated with violent extremism in the second half of last year, according to its latest transparency report, 10% more than in the previous reporting period. Fewer than half of those servers were removed as a result of the company’s proactive moderation efforts.
A Discord spokesperson said Gendron’s was a private server, so only members had access to the content. “As soon as we became aware of it we took action against it and removed the server in accordance with our policies against violent extremism,” the spokesperson said. Discord has a dedicated counter-extremism sub-team that tracks hateful networks and removes servers where users organise around hateful ideologies.
Last year, New York State Police investigated a 17-year-old suspect after he made threatening statements involving his high school; the investigation resulted in a short hospital stay. In Gendron’s Discord logs, he refers to a hospital stay and said it “only helped to prove my belief that people, even certified doctors are not concerned about helping you”.
Months later, Gendron openly discussed White supremacist views, purchasing guns, choosing an attack location, and attack strategy over Discord. A channel titled “to-do-list”, with messages dating back to March 5, contained a detailed explanation of his plans. On at least six separate days, the suspect referenced on Discord his plans to livestream the attack on Twitch. Gendron was arrested moments after the shooting and was charged with first-degree murder. He has pleaded not guilty. – Bloomberg