AI deepfakes mean online communication can't be trusted, bank says


Instances of such fraud have climbed nearly fifty-fold in the past two years in areas such as recruitment and finance, the researchers said, calling for current trust in communication to be questioned and for employees to be trained to "never trust, always verify". — Pixabay

NEW YORK: The growing sophistication of so-called deepfake videos that can mimic a specific person is adding to the arsenal of cybercriminals, with almost 5% of fraud attempts last year making use of the artificial intelligence-driven technology.

Deepfakes are now able to imitate real people in real time, meaning that tech-based con artistry has moved from "simple manipulation to full-scale infiltration," according to the Citi Institute, part of US investment bank Citigroup.
