A tiger in the Bandhavgarh Tiger Reserve in India's central state of Madhya Pradesh, on Jan 2, 2026. Fake videos of attacks by big cats risk undoing years of painstaking work by conservationists. - The New York Times
NEW DELHI: A man is sitting on a porch outside a house in the fading evening light. Suddenly, a growling tiger emerges from behind a fence and lunges at him, grabbing him by the neck and pulling him into the wild.
This frightful video – purportedly a CCTV recording from a forested area in Chandrapur in the western Indian state of Maharashtra – went viral in November 2025. Many shared it on X and WhatsApp, helping it notch up millions of views.
But this attack never happened and the video turned out to have been generated by artificial intelligence (AI).
The local authorities debunked the video on Nov 7, describing it as an attempt by “anti-social elements” to create unrest in a region that had recently seen a rise in human-tiger conflict. They urged the public not to treat such videos as depictions of real events.
Few viewers were careful enough to spot the giveaways in the video. One was how the tiger’s movement kicks up dust yet fails to stir the dead leaves lying about.
This video is among many AI-generated wildlife videos that blurred the line between reality and fiction at an unprecedented scale in India in 2025.
In one of them, a leopard is seen attacking a moving train and pulling out a passenger standing next to a coach door. In another, a tiger chases tourists on a safari at a national park. And a third video shows a leopard scampering through a mall in Mumbai, sparking panic among the visitors.
Created in a seemingly harmless pursuit of clicks and likes for their uploaders, such AI-generated videos are causing real-world harm and panic. Not only do they misinform and spread fear, but experts warn they also distort public understanding of true animal behaviour and aggravate the threat of man-animal conflict in India.
Fake videos of big cats preying on humans risk fomenting public anger against these animals, undoing years of painstaking work that conservationists have put in to ensure animal-human co-existence.
“These fake situations are misrepresenting that there are animals out and hurting people in the community,” said Dana Wilson, director of marketing and communications at Wildlife SOS, a wildlife rescue and protection organisation in India.
“So I could see how a fake event could completely stoke the flames of real retaliatory killings,” he told The Straits Times.
Human-animal conflicts are a recurring reality in India.
Rapid urban expansion has eaten into forested habitats, pitting humans against animals. Tigers killed 73 people in 2024 and many others were killed in leopard attacks in rural and urban areas across the country. Wild animals, including tigers and leopards, have also been beaten to death.
In such a context, engineered videos of big cats in areas where they are indeed sighted are no longer seen as harmless hoaxes but as actual threats. On several occasions, they have sparked unnecessary fear and even forced the authorities to launch rescue operations, wasting time and resources.
In September 2025, a 22-year-old journalism student in Lucknow’s Ruchi Khand area used AI to add a leopard to a selfie taken on his balcony, and circulated the photo on WhatsApp, suggesting the animal was on the prowl in his neighbourhood.
The prank went viral and unleashed chaos.
The local forest department deployed nine teams to track down the leopard but found no trace of the animal despite hours of patrolling and trawling through CCTV footage.
Eventually, the authorities traced the images back to the student, who was detained and let off with a warning.
Circulating AI-generated or doctored wildlife videos that cause panic or mislead people can attract legal action, the authorities have warned. However, dealing legally with fake videos has proved to be challenging.
India does not have a single, dedicated law specifically for fake AI videos, with rapidly advancing technology and limited platform liability further complicating efforts.
In October 2025, the Indian government proposed an amendment to its Information Technology Rules, requiring mandatory labelling of AI-generated content by social media platforms to address rising concerns about deepfakes and misinformation.
But these rules have yet to be officially notified as law.
Many fear that being unable to distinguish between actual threats and fictionalised narratives diminishes the urgency to deal with conservation challenges.
“If all of a sudden, there are a million fake videos of leopards getting in the city, it’s much less impactful when a real event happens,” said Wilson. “It’s basically desensitising people to an actual situation.”
And in the worst case, he added, it could make people believe everything they see online, including fake videos, or cause them to disbelieve actual footage, potentially impacting engagement and fund-raising for wildlife conservation.
Another fake video that caught attention in October 2025 was that of a drunk man near the Pench Tiger Reserve in Madhya Pradesh.
He is seen petting a tiger and even offering it a drink from his bottle, dangerously suggesting that wild animals such as big cats can be approached and petted.
Reacting to the surfeit of fake tiger videos, Pench and other tiger reserves in Madhya Pradesh put out a joint statement on Nov 7, 2025, saying that such misleading videos “not only distort the image of wildlife but also disrespect the sincere work of those who protect it”.
Rajnish Kumar Singh, the deputy director of Pench Tiger Reserve, told ST that such videos do not risk aggravating human-animal conflict at Pench for now, because few people living in or around the tiger reserve have access to mobile Internet.
And by the time these fake videos, which usually originate in urban environments and are first circulated among urban consumers, trickle down to remote areas, they have mostly been dismissed as fake, he noted.
Pench and its surroundings are home to various marginalised indigenous tribal groups, including the Gond and Baiga communities.
But Rajnish Kumar fears this could change in the next few years, as AI technology becomes more sophisticated – making it more difficult for most people to tell fiction from fact – and locals in remote forested areas acquire smartphones and become first-time users of the Internet.
“It may so happen that before anyone can realise a video is fake… it may create chaos in the village,” he said.
Experts reckon the only effective way to deal with this challenge is to impart greater media literacy to the public, enabling people to recognise fake videos, especially in an increasingly digital world where law enforcement has struggled to deal with such threats.
Tips include watching out for inconsistent shadows or blurry figures and cross-checking videos with trusted sources. “Education is the only way out,” said Rajnish Kumar. “People need to apply their brains that this is fake.” - The Straits Times/ANN
