Experts advise Internet users to be on the lookout for several tell-tale signs of AI deepfakes, including unnatural micro-expressions, logical inconsistencies, audio-visual sync errors, and lapses in contextual logic.
An awkward-looking celebrity endorses a sketchy investment plan. Another odd clip shows snowy weather in central Kuala Lumpur. Do those videos come off as plausible, or are they plain preposterous?
With the current state of artificial intelligence (AI), it looks like it’s getting a lot harder to tell.
Back in July this year, it was widely reported that an elderly couple had been misled by an AI-generated news report about a tourist spot and resort in Perak, featuring scenic views and even a cable car ride.
After a long three-hour drive, they found it was all fake. The interviewer, the interviewees, their voices, expressions, and even the purported resort had been completely fabricated.
Last month, a video appeared to show an Ampang line LRT train carriage on fire at a station. This, too, was fake.
Then there is the onslaught of scam videos made using deepfakes (AI-generated videos showing people saying or doing things they never actually said or did), which puppeteer politicians and celebrities alike to promote fake investment schemes, among other cons.
According to a report from the Malaysian Communications and Multimedia Commission (MCMC), approximately 1,005 deepfake investment scams were removed from social media between Jan 1, 2022 and Aug 15, 2025.
Prof Dr Ho Chin Kuan, vice-chancellor at the Asia Pacific University of Technology and Innovation (APU), says there has been a noticeable increase in cases of Malaysians being duped by such videos as the technology grows more sophisticated, and warns that this could evolve into a larger crisis if left unaddressed.
Seeing isn’t believing
Prof Ho believes that with the technology’s advancement, sorting between what’s real and what’s fake will only get harder.
“For the casual viewer scrolling on a smartphone, high-quality deepfakes are effectively indistinguishable from reality.
“However, if you pause and scrutinise the footage on a larger screen, there are still some glitches or signs to look for – though these are disappearing fast as technology advances,” Prof Ho says.
Similarly, Dr Azree Nazri, president of the Malaysian Artificial Intelligence Society, says that the world is moving rapidly to a point where the fakes become all too convincing.
“Today, most deepfakes still fall into the uncanny valley: tiny inconsistencies in facial movement, lighting, emotion, or lip-sync let careful viewers spot something ‘off’.
“However, newer models and underground tools are shrinking this uncanny valley. Some AI-generated videos are already so smooth and realistic that ordinary viewers – especially on WhatsApp or TikTok – can’t reliably tell real from fake.
“Within a few years, deepfakes will likely cross the uncanny valley completely, making human detection nearly impossible. At that stage, trust will depend on verification tools, digital watermarks, and official confirmations – not the naked eye,” he says.
Azree further warns that “underground” AI tools and models circulated on the dark web without any safety filters in place also allow criminals to create fake calls, celebrity ads and voice clones easily, escalating fraud and misinformation risks.
From the perspective of Joseph Chin, founder of the IT community AI Tinkerers Kuala Lumpur, this is not an issue limited to just Malaysia.
“A lot of cases never get reported because people rarely admit they were fooled, so the true number is definitely higher than what we see in the news,” he says.
Chin is also the founder of AI document assistant DocuAsk, with further background in the AI video generation startup space.
He adds that the availability of tools like Sora and Veo 3 makes it much easier and cheaper to create videos that look “studio quality” using something as accessible as a laptop.
“When video is cheap to fake and hard to verify, trust on the Internet gets weaker. It becomes much harder for ordinary people to know what to believe,” he says.
Prof Ho echoes these thoughts, saying that “the problem is the speed of its deployment versus our human ability to adapt”.
“When reality-grade forgery becomes affordable and accessible, the trust we place in video evidence – historically the gold standard of truth – collapses,” he says.
A similar point was made by Azree on the potential wider impact of such videos becoming more pervasive.
“Deepfakes can trigger financial scams, political manipulation, reputational damage, and social panic.
“These videos also create the ‘liar’s dividend’, where real evidence is dismissed as fake, causing unintentional harm and eroding public trust,” he says.
Prof Ho further highlights that the democratisation of advanced AI tools has effectively become a double-edged sword, especially with their huge leap in capabilities.
“Veo 3 now integrates synchronised audio and dialogue generation directly with video, eliminating the mismatched lips and robotic voices that once made deepfakes easy to detect.
“Sora’s ability to understand complex physics means shadows, reflections, and movement now look very real,” he says.
With the capabilities of the technology in mind, Prof Ho says that this poses a significant risk to those with low digital literacy, particularly since digital literacy in Malaysia is mainly centred on “how to use” technology rather than “how to question” it.
“We generally have a culture of high trust in content shared within private circles, such as WhatsApp family groups. This ‘trust transfer’ – where we believe a fake video because someone familiar sent it – is a critical weakness. We are not yet equipped for a Zero Trust information environment,” he says.
“Zero Trust” refers to a cybersecurity practice that assumes that no user should be trusted by default.
Spotting fakes
While the technology is growing in complexity, there are still some weaknesses that the public can use to identify both AI-generated videos and deepfakes, with Prof Ho advising viewers to be on the lookout for several tell-tale signs. These include unnatural micro-expressions, logical inconsistencies, audio-visual sync errors, and lapses in contextual logic.
“AI still struggles with the subtle, involuntary twitching of eyes or the natural micro-expressions of a human face during emotional speech. Look for a ‘vacant’ stare or blinking that feels too regular or non-existent.
“Look at the background, where give-aways include shadows falling in the wrong direction, or text on a street sign or book cover appearing as gibberish.
“Even with advanced tools, there can be a millisecond delay between the sound of a plosive (like a ‘P’ or ‘B’ sound) and the lips closing, for example.
“If a video shows a prominent politician saying something completely out of character, the video is likely fake, regardless of how real it looks,” he says.
Azree advises users to watch for three common traits of videos generated entirely by AI. The first is low resolution: footage is often intentionally pixelated or grainy to hide errors such as unnatural skin or impossible object movements.
AI creators also tend to compress videos heavily, blurring edges and removing details to make automated detection more difficult.
These clips are typically short, ranging from about six to 10 seconds long, as longer videos cost more to generate and are more likely to reveal mistakes.
Some videos may splice shorter AI-generated clips together, so quick cuts every few seconds can be another giveaway.
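For those comfortable with a command line, some of these traits can be checked directly from a clip's file properties. The snippet below is a minimal sketch that calls ffprobe (part of the free FFmpeg toolkit) to flag very short, low-resolution, or heavily compressed clips; the thresholds are illustrative assumptions only, and none of them is proof of AI generation on its own.

```python
# Rough triage of a video file's basic properties using ffprobe (part of FFmpeg).
# The thresholds below are illustrative assumptions -- short, low-resolution,
# heavily compressed clips are not proof of AI generation, just reasons to look closer.
import json
import subprocess
import sys

def probe(path):
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def triage(path):
    info = probe(path)
    fmt = info["format"]
    video = next(s for s in info["streams"] if s["codec_type"] == "video")

    duration = float(fmt.get("duration", 0))
    bitrate = int(fmt.get("bit_rate", 0))
    width, height = int(video["width"]), int(video["height"])

    flags = []
    if duration < 12:                    # very short clip (assumed threshold)
        flags.append(f"short clip: {duration:.1f}s")
    if height < 720:                     # low resolution (assumed threshold)
        flags.append(f"low resolution: {width}x{height}")
    if bitrate and bitrate < 1_000_000:  # heavy compression (assumed threshold)
        flags.append(f"low bitrate: {bitrate // 1000} kbps")
    return flags

if __name__ == "__main__":
    for flag in triage(sys.argv[1]) or ["no obvious red flags"]:
        print(flag)
```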
Azree also offers a longer checklist for spotting deepfakes, recommending that viewers start with the face, since that is usually the first feature to be altered.
Viewers should look for unnaturally smooth or “airbrushed” skin and check whether the texture matches the person’s age. Cheeks and the forehead are particularly prone to errors, with skin that is too smooth or too wrinkled compared with hair and eyes serving as potential red flags.
The lighting around the eyes and eyebrows is another area where AI often struggles, with shadows missing or cast in the wrong place. Eyes may look glassy or overly sharp, and eyebrows may be too perfectly shaped.
Glasses can also give things away if reflections are missing, too strong, or fail to change naturally when the person moves.
Facial hair is another common weakness, with beards looking painted on, moustaches having blurry edges, or sideburns not blending naturally with the skin tone. New hair appearing or disappearing between frames is also a warning sign.
Small details such as moles or skin marks can be inconsistent in deepfakes, shifting slightly, appearing only in some frames, or looking too flat or too sharp.
Blink patterns may also be off, with too few blinks, rapid blinking, or awkward, incomplete blinks standing out.
Lighting and shadows are often unrealistic in AI-generated videos, with faces glowing unnaturally compared with the background or shadows falling in inconsistent directions.
Slowing down the video and reviewing it frame by frame can reveal micro-glitches such as flickering hairlines, jittery eyes, or shifting facial edges, which almost never occur in genuine footage.
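Extracting frames for this kind of inspection does not require specialist software. As a rough illustration, the short Python sketch below uses the OpenCV library to save every fifth frame of a clip as an image for closer viewing; the file names and interval are arbitrary choices made for the example.

```python
# Minimal sketch: dump every Nth frame of a clip to image files so hairlines,
# eyes, and facial edges can be inspected frame by frame.
# Assumes OpenCV is installed (pip install opencv-python); names are arbitrary.
import cv2

def extract_frames(video_path, out_prefix="frame", every_n=5):
    cap = cv2.VideoCapture(video_path)
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                  # end of video or read error
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_prefix}_{index:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

if __name__ == "__main__":
    print(extract_frames("suspect_clip.mp4"), "frames saved for inspection")
```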
However, it is important to take into account that all this advice may become less effective as the technology gets better – a point that both Prof Ho and Azree acknowledge.
Chin, on the other hand, says that people should pay more attention to the circumstances surrounding a video, rather than just what is shown in the clip itself.
“There are still some signs you can sometimes see, but these clues are disappearing as the tech improves. Some real videos now ‘look AI’, and some AI videos look completely believable.
“The safer approach is to look at context, not just pixels: Who posted this? Is it from a trusted source? Is the behaviour or statement very out of character or designed to make you angry or shocked?
“If something feels off, slow down and verify before you share it,” he says.
What the future holds
From the perspective of intellectual property and information technology lawyer Foong Cheng Leong, platforms need to take more responsibility for what users post, including stricter enforcement of clear labelling of AI-generated content.
He says this labelling should apply regardless of whether it is done by the users uploading the content, by the platform's automated detection systems, or by the provider of the AI tool that generated it, and whether it takes the form of a visible watermark or data embedded within the file.
Foong adds that platforms should also implement filters to block content that could cause harm, in much the same way that commonly available AI tools disallow the generation of pornographic material.
Sharing similar thoughts, Prof Ho says that platforms should act as a first line of defence when it comes to such fake videos.
He says there needs to be a shift from voluntary labelling to mandatory cryptographic watermarking of AI outputs, and calls for platforms to maintain rapid-response teams dedicated to dealing with high-risk deepfakes.
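The difference between a visible label and a cryptographic watermark is that the latter can be checked by software rather than taken on trust. The toy Python sketch below illustrates only the bare idea, using a keyed hash over the file; real provenance schemes, such as C2PA Content Credentials, embed signed metadata inside the media itself, and the key and function names here are purely hypothetical.

```python
# Conceptual sketch only -- real provenance schemes (e.g. C2PA Content Credentials)
# embed signed metadata inside the media file itself. This toy example just shows
# the core idea: a mark that can be mathematically verified, unlike a text label
# that can be stripped or ignored. The key and names below are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"tool-vendor-signing-key"   # hypothetical key held by the AI tool vendor

def sign_video(video_bytes: bytes) -> str:
    """Produce a tag the generator would attach when the video is created."""
    return hmac.new(SECRET_KEY, hashlib.sha256(video_bytes).digest(),
                    hashlib.sha256).hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """A platform re-computes the tag; any edit to the file breaks the match."""
    return hmac.compare_digest(sign_video(video_bytes), tag)

if __name__ == "__main__":
    clip = b"...raw video bytes..."
    tag = sign_video(clip)
    print(verify_video(clip, tag))          # True: untouched file
    print(verify_video(clip + b"x", tag))   # False: file was altered
```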
Azree, however, says that while labels help in theory, they have limited real-world effectiveness, as people often do not notice, understand, or trust AI-related labels.
He believes that most users scroll through their feeds quickly without paying attention to labelling, meaning that labels alone cannot protect the public from misleading AI content.
From his perspective, this is where the government should step in and establish a shared-responsibility model to counter deepfakes, combining platform safeguards, government regulation, and public awareness.