AI-enhanced scams are coming: How chatbots will help cybercriminals


  • AI
  • Friday, 01 Sep 2023

The abuse of AI chatbots in online scams is inevitable, and cybercrime experts believe the next wave of phishing attacks will be more sophisticated with the help of AI. For you, this means tell-tale signs of a dodgy email, like spelling mistakes, may soon disappear. — Photo: Karl-Josef Hildenbrand/dpa

BERLIN: AI chatbots can generate text of astonishingly high quality: letters, summaries, essays, stories in a particular writing style, even functioning software code.

But for all the benefits that the technology offers, there’s also the risk that it can be abused by cybercriminals.

The technology "poses novel IT security risks and increases the threat potential of some known IT security threats," Germany's Federal Office for Information Security (BSI) has concluded.

Behind every AI chatbot there’s a language model that can process natural language in written form in an automated manner. Well-known models include OpenAI's GPT and Google's PaLM. PaLM is used by Google for its chatbot Bard, while GPT is used in ChatGPT and Microsoft's Bing Chat.

The known threats that AI language models can further amplify, according to the BSI, include the creation and enhancement of malware, and the creation of spam and phishing emails that exploit human characteristics such as helpfulness, trust and fear (known as social engineering).

Language models can also adapt the writing style of a text to resemble that of a particular organisation or person, thereby making fraudulent emails more convincing.

What's more, the tell-tale spelling and grammatical errors that used to be common in spam and phishing emails are hardly ever found in AI-generated text.

Entirely new problems and threats posed by AI language models that the BSI has identified include the risk that attackers may redirect users' input into a language model of their own in order to manipulate chats and extract information from potential victims.

Beyond the realm of phishing and hacking attacks, cybersecurity experts also fear that language models will be misused to produce fake news, propaganda or hate messages.

The ability to imitate writing styles poses a particular danger here: false information could be spread in a style that mimics specific individuals or organisations. Meanwhile, machine-generated reviews could be used to promote (or discredit) services or products.

The data used to train a language model could also cause problems, the BSI warns. Questionable content such as disinformation, propaganda or hate speech used in the training set of the language model could be incorporated into the AI-generated text in a linguistically similar manner.

For anyone using an AI chatbot, perhaps the most important point is that it’s never certain that AI-generated content is factually correct. This is in part because a language model can only derive information from the texts it was trained on, meaning it's not always up-to-date. At the same time, the aim of the AI is to string together words that are statistically likely to appear beside each other, not to state facts.

For all these reasons, users should be cautious with AI-generated content. Because the text they generate is often error-free, AI language models give the impression of human-like capability and thus create trust in the content they produce, even though it may be inappropriate, factually incorrect or manipulated. – dpa
