Like analysing a crime scene: Experts explain how to expose deepfakes


Deepfakes, manipulated images, videos or audio clips created by artificial intelligence, are becoming more sophisticated every day. Experts share tips on how to identify them. — dpa

BERLIN: Deepfakes can serve all kinds of purposes and not all of them are bad.

Deceptively real-looking images, videos or audio clips manipulated or created by artificial intelligence are often a source of innocent entertainment, but obviously we are not here to talk about harmless content.

Deepfakes are increasingly used to create pornographic material of real people who have never given consent, while they are also deployed to influence elections and public opinion.

People, often women, have spoken out around the world after falling victim to deepfake image abuse, calling for laws to be updated to be able to better protect them online.

Following a massive controversy sparked by Elon Musk's chatbot Grok late last year, which temporarily enabled users to create non-consensual sexual deepfakes of women and children, the European Union opened an investigation into Musk's X, while other countries also took legal action.

In February, it became illegal in the UK to create sexually explicit deepfakes without consent of the person depicted.

In Germany, the issue rocketed to the forefront of public attention again earlier this month after a famous actor and TV presenter publicly accused her ex-husband of having disseminated pornographic deepfakes of her without her consent for years.

Collien Fernandes has chosen to take legal action against her former partner, fellow actor Christian Ulmen, in Spain – not only because the couple lived there for many years, but because it is still difficult to prosecute those behind deepfakes under German law.

The accusations have sparked nationwide demonstrations, with protesters calling for legal reforms to better protect women from online abuse.

While the German government has vowed to take action, forensic experts note that deepfakes are becoming more sophisticated every day. That could mean facing up to a new reality, perhaps as soon as next year, in which nearly all content online is created or in some way influenced by artificial intelligence.

"When we talk about AI deepfakes that went online a year ago: since then, there have already been 23 better models," says Jens Kramosch from German-based Leak.Red, which offers AI-based software to help you find deepfakes of you online and take them down.

"At some point, there will be almost nothing but deepfakes, or only artificially generated content," Kramosch says, adding that we will need ways to identify originals, like a blue tick mark used to verify social media accounts.

"It might sound a bit dramatic, but: we'll be there next year," he predicts.

Leak.Red's tool, which scans for leaks, content privacy and deepfakes for €99 (RM460.50) a month, is one of many software tools that use AI to combat AI-generated content.

These rate content on a scale from 0 to 100, explains Nicholas Müller from Germany's Fraunhofer Institute for Applied and Integrated Security.

Zero means genuine, 100 means fake. "A deepfake usually scores around 95," says Müller.
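
Such a 0-to-100 scale is easy to picture in code. Here is a minimal sketch in Python, assuming a detector that outputs a fake-probability between 0 and 1; the threshold of 90 is an arbitrary illustration, not a value from Fraunhofer or Leak.Red:

```python
def fakeness_score(p_fake: float) -> int:
    """Map a detector's fake-probability (0.0 to 1.0) onto the 0-100
    scale Mueller describes: 0 means genuine, 100 means fake."""
    return round(max(0.0, min(1.0, p_fake)) * 100)

def verdict(score: int, threshold: int = 90) -> str:
    """Flag content whose score crosses a (hypothetical) decision threshold."""
    return "likely deepfake" if score >= threshold else "no strong evidence"

# A typical deepfake, per Mueller, lands around 95:
score = fakeness_score(0.95)
print(score, verdict(score))  # 95 likely deepfake
```

The important caveat, as the experts below note, is that such a score is a probability, not a proof.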

But he also notes that the AI models used to create deepfakes are constantly improving.

"The models are converging to produce images that are almost indistinguishable from real footage," he says. However, experts like him are also getting better and better at exposing deepfakes, he notes.

"It's just like in IT security: The attacker gets better and the defence has to keep up."

Check the edges

Kramosch from Leak.Red compares exposing a deepfake to working a crime scene.

"It's best to start by taking in the overall picture. For example, I look at the hair, the hairline, the eyelashes and the texture of the skin."

Human skin depicted in deepfakes is often too smooth, he notes.

"It’s also important not just to look at the centre of the image, but at the edges: do the lines match up? Do the shadows match up? There are various AI models that focus on the object in the centre and neglect what's around it."

Deepfakes are most successful at authentically depicting one person in front of a diffuse background, according to Kramosch.

Another way to check whether an image or a video is fake is to check the metadata. Does the geolocation make sense? Is it likely that the photo was taken in 1970?

The metadata of deepfakes created with current versions of bots like Gemini or ChatGPT labels the content as AI-generated. But that data can also be stripped out, and missing metadata is itself a potential sign that something is up, notes Müller.
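
Part of that metadata check can be automated. The following is a hypothetical Python sketch, not part of any tool mentioned here, that only tests whether a JPEG file still carries an EXIF block; the byte layout follows the JPEG standard, and the sample buffers are fabricated for illustration, not real photos:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an EXIF APP1 segment.
    Missing metadata proves nothing on its own, but stripped EXIF
    is one more reason to look closer."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):  # JPEG start-of-image marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True   # APP1 segment holding EXIF data
        if marker == 0xDA:
            break         # start of scan: no more metadata segments follow
        i += 2 + length   # skip this segment (length includes its own 2 bytes)
    return False

# Fabricated examples: one file with an EXIF block, one with it stripped.
with_exif = b"\xff\xd8" + b"\xff\xe1" + (14).to_bytes(2, "big") + b"Exif\x00\x00" + b"\x00" * 6
stripped = b"\xff\xd8\xff\xdb" + (4).to_bytes(2, "big") + b"\x00\x00"
print(has_exif(with_exif), has_exif(stripped))  # True False
```

Reading the actual tags, such as the geolocation or the 1970 timestamp mentioned above, would need a fuller EXIF parser, but even this presence check flags files whose metadata has been wiped.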

He recommends trusting your own eyes and ears when taking a first look at a potential deepfake.

Classic signs that an image has been created by artificial intelligence are mismatching skin tones on the neck and upper body, a hand with six fingers or an object that merges with the hand or hovers above it.

If an interview is supposed to have been done in a particular city, check the weather forecast and see if it matches up, he says.

Müller also likes to zoom in on the shadows and edges.

When dealing with outdoor footage with just one source of light, use a still image and draw a line from each shadow back to the light source. Check whether all the lines, when extended, converge at a single point.

"If they don’t, then it is very likely a deepfake," Müller says.

Leak.Red uses its software to secure "a chain of evidence" aimed at exposing deepfakes.

"If a deepfake is put on Instagram, for example, we can freeze it as evidence, and it can no longer be altered. This means it is forensically secured," Kramosch explains.

But such AI-based tools have limits when it comes to prosecuting perpetrators, according to Tobias Wirth from the German Research Centre for Artificial Intelligence.

"AI detectors are often black-box systems. They determine with a degree of probability whether something is a deepfake, but they do not necessarily provide an explanation as to why," he warned back in February.

While AI is capable of identifying patterns and indicators that are imperceptible or difficult for the human eye to detect, including subtle discrepancies at the pixel level, this poses a problem in court, where comprehensible statements are required for the assessment of evidence, Wirth said.

Despite the rapid changes taking place in the field of artificial intelligence, and the threats that come with it, Kramosch says he remains optimistic that it will continue to be possible to identify deepfakes with the help of AI.

After all, any AI model used to create them follows a certain pattern, he notes.

"But the last few months have shown that things are moving very, very quickly." – dpa
