Lies, racism and AI: IT experts point to serious flaws in ChatGPT

BERLIN: ChatGPT may have blown away many who have asked questions of it, but scientists are far less enthusiastic. A lack of data privacy, wrong information and apparent built-in racism are just a few of the concerns some experts have with the latest 'breakthrough' in AI.

With great precision, it can create speeches and tell stories – and in just a matter of seconds. The AI software ChatGPT, introduced late last year by the US company OpenAI, is arguably today's number-one IT topic worldwide.

But the language bot, into which untold masses of data have been fed, is not only an object of amazement but also of some scepticism.

Scientists and AI experts have been taking a close look at ChatGPT, and have begun issuing warnings about major issues – data protection, data security flaws, hate speech, fake news.

"At the moment, there's all this hype," commented Ruth Stock-Homburg, founder of Germany's Leap in Time Lab research centre and a Darmstadt Technical University business administration professor. "I have the feeling that this system is scarcely being looked at critically."

"You can manipulate this system"

ChatGPT has a very broad range of applications. In a kind of chat field, users can, among other things, ask it questions and receive answers. It can also be given tasks – for example, based on a few pieces of basic information, ChatGPT can write a letter or even an essay.

In a project conducted together with the Darmstadt Technical University, the Leap in Time Lab spent seven weeks sending thousands of queries to the system to ferret out any possible weak points. "You can manipulate this system," Stock-Homburg says.

In a recent presentation, doctoral candidate and AI language expert Sven Schultze highlighted the weak points of the text bot. Alongside a penchant for racist expressions, it has an approach to sourcing information that is either erroneous or non-existent, Schultze says. A question posed about climate change produced a link to an internet page about diabetes.

"As a general rule the case is that the sources and/or the scientific studies do not even exist," he said. The software is based on data from the year 2021. Accordingly, it identifies world leaders from then and does not know about the war in Ukraine.

"It can then also happen that it simply lies or, for very specialised topics, invents information," Schultze said.

Sources are not easy to trace

He noted, for example, that for direct questions with criminal content, security instructions and mechanisms are in place. "But with a few tricks you can circumvent the AI and security instructions," Schultze said.

With another approach, you can get the software to show how to generate fraudulent emails. It will also immediately explain three ways that scammers use the so-called "grandchild trick" on older people.

ChatGPT can also provide a how-to for breaking into a home, along with the helpful advice that if you bump into the owner you can use weapons or physical force on them.

Ute Schmid, Chair of Cognitive Systems at the Otto Friedrich University in Bamberg, says the challenge above all is that we cannot find out how the AI reaches its conclusions. "A deeper problem with the GPT-3 model lies in the fact that it is not possible to trace when and how which sources made their way into the respective statements," she said.

Despite such grave shortcomings, Schmid still argues that the focus should not just be on the mistakes or possible misuse of the new system – the latter prospect being students having their homework or research papers written by the software. "Rather, I think we should ask ourselves: what opportunities do such AI systems present us with?"

Researchers generally advocate using AI to expand – possibly even promote – our competencies, not limit them. "This means that in the area of education I must also ask myself – as perhaps was the case 30 years ago with pocket calculators – how can I shape education with AI systems like ChatGPT?"

Data privacy concerns

All the same, concerns remain about data security and data protection. "What can be said is that ChatGPT takes in a variety of data from the user, stores and processes it, and then at some point trains the model accordingly," says Christian Holthaus, a certified data protection expert in Frankfurt. The problem is that all the servers are located in the United States.

"This is the actual problem – if you do not succeed in establishing this technology in Europe, or to have your own," Holthaus said. In the foreseeable future there will be no data protection-compliant solution. Adds Stock-Homburg about European Union data protection regulations: "This system here is regarded as rather critical."

ChatGPT was developed by OpenAI, one of the leading AI firms in the US. Software giant Microsoft invested US$1bil (RM4.25bil) in the company back in 2019 and recently announced plans to pump further billions into it. The company aims to make ChatGPT available to users of its own cloud service Azure and the Microsoft Office package.

"Still an immature system"

Stock-Homburg says that at the moment ChatGPT is more of a plaything for private users – and by no means something for the business sector or security-relevant areas. "We have no idea how we should deal with this still-immature system," she said.

Oliver Brock, a professor who heads the Robotics and Biology Laboratory at the Technical University of Berlin, sees no "breakthrough" in AI research yet. Firstly, the development of AI does not advance by leaps and bounds but is a continuous process. Secondly, the project represents only a small part of AI research.

But ChatGPT might be regarded as a breakthrough in another area – the interface between humans and the internet. "The way in which, with a great deal of computing effort, these huge amounts of data from the internet are made accessible to a broad public intuitively and in natural language can be called a breakthrough," says Brock. – dpa
