The other side of the AI coin


As students and workers harness generative artificial intelligence (AI) tools for their studies and work, so, too, do crooks.

Underground forums are selling modified versions of ChatGPT that circumvent safety filters to generate scam content, the Cyber Security Agency of Singapore (CSA) said in its Singapore Cyber Landscape report for 2023, published yesterday.

FraudGPT and WormGPT, two such tools, have reportedly been sold to more than 3,000 customers globally since July 2023, raising fears that generative AI could herald a wave of cyber attacks, scams and falsehoods.

FraudGPT is marketed on the Dark Web as a tool for learning how to hack and for writing malware and malicious code.

WormGPT was developed to circumvent ChatGPT’s guard rails, such as its prohibitions against generating phishing e-mails and writing malware code.

Roughly 13% of the phishing scams analysed by CSA in 2023 showed signs that they were likely generated with AI.

Since OpenAI launched ChatGPT in late 2022, cyber security firms have reported a growing trend of hackers using the AI tool to gather critical details about software and find exploitable vulnerabilities in companies’ systems.

Microsoft, for example, disclosed that bad actors had used AI to study technical protocols for military-related equipment such as radars and satellites, illustrating how AI can be used in reconnaissance before an attack is staged.

Cracked chatbots continue to emerge despite efforts to clamp down on them. The Telegram channel promoting WormGPT was shut down, only for other similar tools to appear elsewhere.

AI-powered password-cracking tools like PassGAN, which can be deployed at scale, can also crack more than half of common passwords in under a minute.
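PassGAN itself uses a generative adversarial network trained on corpora of leaked passwords; the sketch below substitutes a much simpler character-level bigram model for the neural network, purely to illustrate the automated guess-and-check loop such tools run at scale. The corpus, the target hashes and the model here are all hypothetical.

```python
import hashlib
import random
from collections import defaultdict

# Toy corpus standing in for a leaked-password dataset (hypothetical).
CORPUS = ["password1", "letmein", "dragon123", "qwerty", "sunshine",
          "password123", "iloveyou", "monkey12", "football1"]

# Train a character-level bigram model: which character tends to follow which.
transitions = defaultdict(list)
for pw in CORPUS:
    padded = "^" + pw + "$"            # start/end markers
    for a, b in zip(padded, padded[1:]):
        transitions[a].append(b)

def generate_candidate(max_len=16):
    """Sample one password guess from the bigram model."""
    out, ch = [], "^"
    while len(out) < max_len:
        ch = random.choice(transitions[ch])
        if ch == "$":
            break
        out.append(ch)
    return "".join(out)

# Stolen unsalted SHA-256 hashes the attacker wants to crack (toy example).
targets = {hashlib.sha256(p.encode()).hexdigest() for p in ["letmein", "qwerty"]}

cracked = set()
for _ in range(100_000):
    guess = generate_candidate()
    if hashlib.sha256(guess.encode()).hexdigest() in targets:
        cracked.add(guess)

print("cracked:", cracked)
```

Because the model concentrates its guesses on patterns that real people actually use, it cracks common passwords far faster than blind brute force could.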

Another way the technology is deployed maliciously is in the generation of deepfake images to bypass biometric authentication.

For example, to beat the use of facial recognition as a security feature, fraudsters turn to face-swapping apps.

There is a growing underground market of criminal developers peddling impersonation services, said CSA. These employ deepfakes, fake social media accounts and AI-generated spam content that can bypass the anti-phishing controls of popular e-mail services and be used to deploy scam campaigns.

Despite concerns about AI, the very same technology is being used by the cyber security sector to combat scams.

“Through machine learning and algorithms, AI can be trained to detect deepfakes, phishing e-mails and suspicious activities,” said CSA.
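CSA does not describe specific detectors, but a common baseline for phishing e-mail detection is a text classifier over word features. The sketch below, using an invented toy corpus, shows the idea with scikit-learn's TF-IDF vectoriser and logistic regression; a production system would train on millions of labelled messages.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus (hypothetical).
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid account closure",
    "Click here to claim your prize before it expires",
    "Meeting moved to 3pm, see attached agenda",
    "Quarterly report draft for your review",
    "Lunch on Friday to celebrate the project launch?",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF word/bigram features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your password to keep your account active"
print("phishing probability:", model.predict_proba([incoming])[0][1])
```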

Algorithms can be trained to spot unnatural facial movements, lighting discrepancies and irregularities in eye reflections, which are all tell-tale signs of deepfakes, said CSA.
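As a rough illustration of how those cues could be combined, the sketch below scores synthetic per-frame measurements against the three tell-tale signs CSA lists. The cue functions, weights and data are invented for illustration; a real detector would extract such measurements with computer-vision models.

```python
import numpy as np

rng = np.random.default_rng(0)

def deepfake_score(video):
    """Combine three hypothetical cues into one anomaly score.
    Weights are illustrative, not from any published detector."""
    # Natural blinking varies; eerily regular intervals are suspicious.
    blink_cue = 1.0 / (1.0 + np.var(video["blink_intervals"]))
    # Low lighting consistency across the face suggests compositing.
    light_cue = 1.0 - np.mean(video["lighting_scores"])
    # Mismatched reflections between the two eyes are a known artefact.
    eye_cue = 1.0 - np.mean(video["reflection_symmetry"])
    return 0.4 * blink_cue + 0.3 * light_cue + 0.3 * eye_cue

# Synthetic stand-ins for per-frame measurements (hypothetical).
real = {"blink_intervals": rng.normal(4.0, 1.5, 20),
        "lighting_scores": rng.uniform(0.8, 1.0, 20),
        "reflection_symmetry": rng.uniform(0.8, 1.0, 20)}
fake = {"blink_intervals": rng.normal(4.0, 0.1, 20),   # unnaturally regular
        "lighting_scores": rng.uniform(0.4, 0.7, 20),  # inconsistent lighting
        "reflection_symmetry": rng.uniform(0.3, 0.6, 20)}

print("real video score:", round(deepfake_score(real), 3))
print("fake video score:", round(deepfake_score(fake), 3))
```

On this synthetic data the fake video scores markedly higher, which is the separation a trained classifier would learn to exploit.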

But developers face challenges in using AI for cyber security, such as false positives, false negatives, and an arms race against cyber criminals that may be unsustainable in the long run for many organisations.

“As law enforcement (agencies train) their AI systems, cyber criminals can also actively develop methods to evade and fool AI detection systems,” CSA said.

“This can result in a fast-paced, resource-intensive arms race in which AI systems constantly adapt.” — The Straits Times/ANN
