Fooling deepfake detectors is possible, American scientists say


The uncompressed videos were more than 99% successful in deceiving the deepfake detectors in cases where the attacker had access to all the parameters, the scientists reported. — AFP Relaxnews

Deepfakes still have a bright future ahead of them, it would seem. It is still possible to thwart even the most highly developed deepfake detectors, according to scientists at the University of California San Diego. By inserting "adversarial examples" into each frame of a video, the artificial intelligence behind the detector can be fooled. It is an alarming observation for the scientists, who are pushing to improve detection systems so they can better flag these faked videos.

At WACV 2021 (Winter Conference on Applications of Computer Vision), held from Jan 5 to 9, scientists from the University of California San Diego demonstrated that deepfake detectors have a weak point.

According to the researchers, inserting "adversarial examples" into each frame of a video can cause the artificial intelligence to make a mistake and classify a deepfake video as genuine.

These "adversarial examples" are in fact slightly manipulated inputs that can cause the artificial intelligence to make mistakes. To recognise deepfakes, the detectors focus on facial features, especially eye movement like blinking, which is usually poorly reproduced in these fake videos.

The method even works on videos that have been compressed, a process that until now was thought to strip out such adversarial modifications, the American scientists said.

Even without access to the detector model, deepfake creators who used these "adversarial examples" were able to thwart the vigilance of the most sophisticated detectors. It is the first time such attacks have been shown to defeat deepfake detectors, the scientists said.

The scientists are sounding the alarm and recommending that detection software be trained to spot these specific modifications, and thus these new deepfakes: "To use these deepfake detectors in practice, we argue that it is essential to evaluate them against an adaptive adversary who is aware of these defences and is intentionally trying to foil these defences. We show that the current state of the art methods for deepfake detection can be easily bypassed if the adversary has complete or even partial knowledge of the detector," the researchers wrote. – AFP Relaxnews
