Fooling deepfake detectors is possible, according to American scientists


The uncompressed videos were more than 99% successful in deceiving the deepfake detectors in cases where the attacker had access to all the parameters, the scientists reported. — AFP Relaxnews

Deepfakes still have a bright future ahead of them, it would seem. Even the most highly developed detectors can still be thwarted, according to scientists at the University of San Diego: by inserting "adversarial examples" into each frame of a video, an attacker can fool the artificial intelligence behind them. It is an alarming finding for researchers who are pushing to improve detection systems so they can better flag these faked videos.

At WACV 2021 (Winter Conference on Applications of Computer Vision), held from Jan 5 to 9, scientists from the University of San Diego demonstrated that deepfake detectors have a weak point.

According to these researchers, by inserting "adversarial examples" into each frame of a video, an attacker can cause the artificial intelligence to err and label a deepfake video as real.

These "adversarial examples" are in fact slightly manipulated inputs that can cause the artificial intelligence to make mistakes. To recognise deepfakes, the detectors focus on facial features, especially eye movements such as blinking, which are usually poorly reproduced in these fake videos.
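The idea of a "slightly manipulated input" can be illustrated with a toy sketch. This is not the WACV authors' actual attack or a real deepfake detector: the classifier below is a made-up logistic model, and all weights and numbers are illustrative assumptions. It only shows the general mechanism, nudging the input a small amount in the direction that most lowers the "fake" score (in the spirit of gradient-sign attacks):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detector(x, w, b):
    """Toy 'detector': probability that input x is fake (assumed model)."""
    return sigmoid(w @ x + b)

def adversarial_example(x, w, b, eps=0.5):
    """Shift each input value by eps against the gradient of the
    'fake' score: x' = x - eps * sign(dP/dx)."""
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w          # gradient of sigmoid(w.x + b) wrt x
    return x - eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=16)               # toy detector weights (assumption)
b = 0.0
x = rng.normal(size=16)               # stand-in for one video frame
if detector(x, w, b) < 0.5:           # make sure the clean frame reads as fake
    x = -x

x_adv = adversarial_example(x, w, b)
print(f"clean score: {detector(x, w, b):.3f}")
print(f"perturbed score: {detector(x_adv, w, b):.3f}")
```

With a small enough `eps`, the perturbed frame looks nearly identical to a human viewer, yet the toy detector's "fake" score drops; the attack described in the article applies the same principle to every frame of a video.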
