Social media manipulation affects even US senators


  • TECH
  • Monday, 21 Dec 2020

Researchers paid three Russian companies to buy 337,768 fake likes, views and shares of posts on social media including content from verified accounts of Senators Chuck Grassley, seen here in Washington DC on Oct 12, and Chris Murphy. Both senators consented to participate. — Erin Schaff/The New York Times/Pool/AP

BRUSSELS: The conversation taking place on the verified social media accounts of two US senators remained vulnerable to manipulation, even amid heightened scrutiny in the run-up to the US presidential election, an investigation by the NATO Strategic Communications Centre of Excellence found.

Researchers from the centre, a NATO-accredited research group based in Riga, Latvia, paid three Russian companies €300 (RM1,479.87) to buy 337,768 fake likes, views and shares of posts on Facebook, Instagram, Twitter, YouTube and TikTok, including content from verified accounts of Senators Chuck Grassley and Chris Murphy.

Grassley’s office confirmed that the Republican from Iowa participated in the experiment. Murphy, a Connecticut Democrat, said in a statement that he agreed to participate because it’s important to understand how vulnerable even verified accounts are.

“We’ve seen how easy it is for foreign adversaries to use social media as a tool to manipulate election campaigns and stoke political unrest,” Murphy said. “It’s clear that social media companies are not doing enough to combat misinformation and paid manipulation on their own platforms and more needs to be done to prevent abuse.”

In an age when much public debate has moved online, widespread social media manipulation not only distorts commercial markets, it is also a threat to national security, NATO StratCom director Janis Sarts told The Associated Press.

“These kinds of inauthentic accounts are being hired to trick the algorithm into thinking this is very popular information and thus make divisive things seem more popular and get them to more people. That in turn deepens divisions and thus weakens us as a society,” he explained.

More than 98% of the fake engagements remained active after four weeks, researchers found, and 97% of the accounts they reported for inauthentic activity were still active five days later.

NATO StratCom conducted a similar exercise in 2019 with the accounts of European officials. It found that Twitter is now taking down inauthentic content faster and that Facebook has made it harder to create fake accounts, pushing manipulators to use real people instead of bots, an approach that is more costly and less scalable.

“We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm,” a Facebook company spokesperson said in an email.

But YouTube and Facebook-owned Instagram remain vulnerable, researchers said, and TikTok appeared “defenseless”.

“The level of resources they spend matters a lot to how vulnerable they are,” said Sebastian Bay, the lead author of the report. “It means you are unequally protected across social media platforms. It makes the case for regulation stronger. It’s as if you had cars with and without seatbelts.”

Researchers said that for the purposes of this experiment they promoted apolitical content, including pictures of dogs and food, to avoid actual impact during the US election season.

Ben Scott, executive director of Reset.tech, a London-based initiative that works to combat digital threats to democracy, said the investigation showed how easy it is to manipulate political communication and how little platforms have done to fix long-standing problems.

“What’s most galling is the simplicity of manipulation,” he said. “Basic democratic principles of how societies make decisions get corrupted if you have organised manipulation that is this widespread and this easy to do.”

Twitter said it proactively tackles platform manipulation and works to mitigate it at scale.

“This is an evolving challenge and this study reflects the immense effort that Twitter has made to improve the health of the public conversation,” Yoel Roth, Twitter’s head of site integrity, said in an email.

YouTube said it has put safeguards in place to root out inauthentic activity on its site, noting that more than 2 million videos were removed in the third quarter of 2020 for violating its spam policies.

“We’ll continue to deal with attempts to abuse our systems and share relevant information with industry partners,” the company said in a statement.

TikTok said it has zero tolerance for inauthentic behaviour on its platform and that it removes content or accounts that promote spam or fake engagement, impersonation or misleading information that may cause harm.

“We’re also investing in third-party testing, automated technology, and comprehensive policies to get ahead of the ever-evolving tactics of people and organizations who aim to mislead others,” a company spokesperson said in an email.

Associated Press writer David Klepper in Providence, Rhode Island, contributed to this report. – AP

