US standards body says ByteDance researcher wrongly added to AI safety group chat


FILE PHOTO: A person arrives at the offices of TikTok after the U.S. House of Representatives overwhelmingly passed a bill that would give TikTok's Chinese owner ByteDance about six months to divest the U.S. assets of the short-video app or face a ban, in Culver City, California, U.S., March 13, 2024. REUTERS/Mike Blake/File Photo

WASHINGTON (Reuters) - A researcher from TikTok's Chinese owner ByteDance was wrongly added to a group chat for American artificial intelligence safety experts last week, the U.S. National Institute of Standards and Technology (NIST) said on Monday.

The researcher was added to a Slack instance for discussions between members of NIST's U.S. Artificial Intelligence Safety Institute Consortium, according to a person familiar with the matter.

In an email, NIST said the researcher was added by a member of the consortium as a volunteer.

"Once NIST became aware that the individual was an employee of ByteDance, they were swiftly removed for violating the consortium's code of conduct on misrepresentation," the email said.

The researcher, whose LinkedIn profile says she is based in California, did not return messages; ByteDance did not respond to emails seeking comment.

The person familiar with the matter said the appearance of a ByteDance researcher raised eyebrows in the consortium because the company is not a member and TikTok is at the center of a national debate over whether the popular app has opened a backdoor for the Chinese government to spy on or manipulate Americans at scale. Last week, the U.S. House of Representatives passed a bill to force ByteDance to divest itself of TikTok or face a nationwide ban; the ultimatum faces an uncertain path in the Senate.

The AI Safety Institute is intended to evaluate the risks of cutting-edge artificial intelligence programs. Announced last year, the institute was set up under NIST, and the founding members of its consortium include hundreds of major American tech companies, universities, AI startups, nongovernmental organizations and others, including Reuters' parent company Thomson Reuters.

Among other things, the consortium works to develop guidelines for the safe deployment of AI programs and to help AI researchers find and fix security vulnerabilities in their models. NIST said the Slack instance for the consortium includes about 850 users.

(This story has been refiled to add the dropped word 'Consortium' to the name of the AI body in paragraph 2)

(Reporting by Raphael Satter; Editing by Sharon Singleton)
