How AI-generated videos are distorting your child’s YouTube feed


Radesky and others raised concerns about hyperrealistic AI content, especially for children who are too young to be able to distinguish fantasy from reality. — Photo by Kelly Sikkema on Unsplash

Four seconds into one version of Old MacDonald Had a Farm, an animated horse with two arms and four legs hatches from an egg.

In another video, a pink elephant, an orange flamingo and other animals appear next to letters of the alphabet, performing complicated gymnastic maneuvers on tightropes.

The New York Times reviewed these clips, along with more than 1,000 other videos recommended to young children on YouTube, and found that the algorithm pushes bizarre, often nonsensical, artificial intelligence-generated videos from channels claiming to teach “toddlers” and “preschoolers” about the alphabet and animals.

Many of the YouTube accounts producing AI-generated videos reviewed by the Times specifically target the youngest of viewers and their parents, marketing their channels as “educational” as opposed to entertainment. Creators are profiting off this content with little oversight from YouTube.

“To me, the meaninglessness of these videos is a huge problem because they’re just attention capture,” said Dr. Jenny Radesky, a developmental behavioural pediatrician and associate professor of pediatrics at the University of Michigan Medical School. “And then the worst case is that it’s so fantastical and full of attention capture that it is going to be cognitively overloading to the child.”

Radesky and others raised concerns about hyperrealistic AI content, especially for children who are too young to be able to distinguish fantasy from reality.

McCall Booth, a developmental psychologist and researcher at Georgetown University, said children “may have a harder time in the future identifying fake content because their mental schema had already adapted to include improbable but aesthetically realistic character actions.”

The American Academy of Pediatrics issued a guide for parents on how to select media content for their young children, telling parents to avoid content that is either AI-generated or highly sensationalised. The guidance also cautioned against consuming short-form videos.

While there aren’t many studies yet on how short-form media affects young children, Rachel Barr, a developmental psychologist and director of the Georgetown University Early Learning Project, said that for children under the age of 5 whose attention systems are still developing, the videos move too rapidly, and usually aren’t long enough to include any meaningful context or story plot.

The Times focused primarily on YouTube Shorts in its analysis of AI videos. Over the course of several weeks, the Times watched videos from popular children’s channels on YouTube such as CoComelon, Bluey and Ms Rachel, then scrolled through the platform’s recommended Shorts in 15-minute sessions to better understand how the algorithm floods the feed with this content.

In one 15-minute session, after watching CoComelon’s Wheels on the Bus video, more than 40% of the videos watched appeared to contain AI-generated visuals. The Times manually reviewed each of the videos, some of which clearly featured YouTube’s label for “altered or synthetic content,” while others displayed visual errors or other distortions in the background.

The AI-generated content wasn’t always obviously flawed, and some videos were sufficiently seamless to evade casual detection by the human eye. To further vet the videos, the Times used an AI detector to determine with high probability that the videos, and in some cases the music and voices, were AI-generated.

Some platforms have begun to tighten their rules around the use of these tools. Pinterest has features that allow users to select how much of this kind of content they want to see. TikTok also said it was testing ways that would enable people to reduce the amount of AI content in their feeds. Last month, YouTube announced new controls that allow parents to set time limits on YouTube Shorts.

The Times requested comment from YouTube on its policy around AI videos for children, and shared five channels as examples. In response, YouTube suspended all five accounts from the YouTube Partner Program, meaning they are ineligible to earn ad revenue on YouTube and are blocked from appearing on YouTube Kids. The Times also sent three examples of hyperrealistic AI videos on YouTube Kids, which YouTube then removed from the app.

“We require creators to disclose when they’ve used AI to create realistic content, meaning things a viewer could easily mistake for a real person, place or event,” Boot Bullwinkle, a YouTube spokesperson, said in an email to the Times.

But the Times’ review found that creators do not consistently disclose when videos contain synthetic visuals made to look realistic. And when it comes to animated AI videos for children, YouTube does not require them to be labeled at all.

This means that much of the burden of identifying AI content is falling to parents – a task that is daunting even for experts.

Allison Sims, 34, often turns on her own YouTube account to keep her 2-year-old occupied while she’s making dinner. Her daughter watches Ms Rachel, The Wiggles and other channels that play nursery rhymes. But it wasn’t long before her daughter figured out how to scroll through YouTube Shorts.

After coming across several shorts that she found disturbing in her daughter’s watch history, Sims said she removed the app from the iPad.

“Because AI is so new and as a parent, I wouldn’t know what to look out for except for when they’re very obvious that I stop and look at it,” Sims said. “But I feel like it’s something that as parents we should kind of know and be aware of.”

Many of the YouTube accounts uploading AI content for children are largely anonymous, with no contact information or identifiable details about who is behind them.

But one creator, Syeda Jaria Hassan, spoke to the Times and said that creating AI content for children has become her full-time job.

Hassan, who lives in Sargodha, Pakistan, said she decided to focus on making content for children after teaching at a Montessori school for children between the ages of 4 and 8. Her account, Suno Kids TV, described as a channel to educate and entertain children, features animated AI videos of animals and sing-along songs.

When asked about how children can be distracted by these kinds of effects, Hassan responded that TV channels and other YouTube channels for children also rely heavily on visual effects and that she’s just following a model of children’s programming that has been around for years.

However, when it comes to learning, experts say children benefit most from watching media that has a clear narrative with a beginning, middle and end, along with characters that children can attach to and scenes that relate to their real life.

Simple language and short phrases also aid cognitive development. Programming that teaches children about concepts like problem-solving, or that features intentional repetition, can help with memory recall.

One example is PBS Kids’ Daniel Tiger’s Neighborhood, a modern spinoff of Mister Rogers’ Neighborhood, which follows a young animated tiger who teaches life skills and social strategies. The show works with child development experts when crafting stories.

Ellen Doherty, chief creative officer at Fred Rogers Productions, explained that the team developed a structural pattern for the show: two separate short stories in every episode, with songs that strategically reinforce each episode’s themes so that parents and children can both sing and remember them. The music also helps move the story along, but at a controlled pace.

In one story, Daniel Tiger teaches children about brushing their teeth through song, making sure to interact with young viewers and taking long pauses.

“That spark of human connection is everything,” Doherty said. – ©2026 The New York Times Company

This article originally appeared in The New York Times.

