US begins study of possible rules to regulate AI like ChatGPT


FILE PHOTO: ChatGPT logo is seen in this illustration taken February 3, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

WASHINGTON (Reuters) - The Biden administration said Tuesday it is seeking public comments on potential accountability measures for artificial intelligence (AI) systems as questions loom about the technology's impact on national security and education.

ChatGPT, an AI program that recently grabbed public attention for its ability to write quick answers to a wide range of queries, has drawn particular notice from U.S. lawmakers after becoming the fastest-growing consumer application in history, with more than 100 million monthly active users.

The National Telecommunications and Information Administration, a Commerce Department agency that advises the White House on telecommunications and information policy, wants input as there is "growing regulatory interest" in an AI "accountability mechanism."

The agency wants to know if there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said NTIA Administrator Alan Davidson.

President Joe Biden last week said it remained to be seen whether AI is dangerous. "Tech companies have a responsibility, in my view, to make sure their products are safe before making them public," he said.

ChatGPT, which has wowed some users with quick responses to questions and caused distress for others with inaccuracies, is made by California-based OpenAI and backed by Microsoft Corp.

NTIA plans to draft a report as it looks at "efforts to ensure AI systems work as claimed – and without causing harm" and said the effort will inform the Biden Administration's ongoing work to "ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities."

A tech ethics group, the Center for Artificial Intelligence and Digital Policy, asked the U.S. Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, saying it was "biased, deceptive, and a risk to privacy and public safety."

(Reporting by David Shepardson and Diane Bartz; Editing by Nick Zieminski)
