Melanskia is not your typical Amish woman. She boasts more than 300,000 followers on Instagram and warns them about the perils of store-bought foods. She touts the benefits of removing “industrial waste” from the liver with a drink mix her followers can buy on Amazon.
With her modest white hair-covering and wire-rim spectacles, Melanskia is earnest, charming and quite convincing. She is also not real.
She is one of a handful of synthetic influencers created with artificial intelligence who are promoting an untested dietary supplement, called Modern Antidote, which sells for just under US$50 (RM196) a jar. There is no disclosure on her account that everything about her is AI-generated.
Behind Melanskia is a genuine human being, Josemaria Silvestrini, who is part of a growing vanguard of entrepreneurs taking advantage of rapid advances in AI to promote their brands using people who don’t actually exist.
It is, in many respects, a marketer’s dream. Now, virtually anyone can produce highly realistic videos featuring ersatz personalities carefully calibrated to appeal to any target audience – and do it for a fraction of the cost of a flesh-and-blood pitch person.
“AI is a game changer,” said Silvestrini, 28, who is running the company from Shanghai while he completes a master’s program. “Every piece of the business is being AI-ified.”
But the technological leap is also raising alarm that consumers could be duped by deepfakes. A February study in the British Journal of Psychology found that people overestimated their ability to recognise AI-generated faces, leaving them vulnerable to “fraud and deception.” That risk is intensifying as the technology improves.
While AI once had obvious giveaways, like hands with extra fingers, the newest videos look confoundingly authentic – and often, viewers are not told otherwise. In videos, Melanskia visits what appears to be a fully stocked Costco store, milks cows and bakes bread. Her wrinkles look realistic; shadows fall where they should.
“I thought Amish weren’t allowed to use electricity,” one confused Instagram user commented.
Timothy Caulfield, research director at the Health Law Institute at the University of Alberta, said the use of AI has grown in the wellness space, a crowded but lucrative market where consumers rely on perceptions of authenticity and identity to make buying decisions. One Instagram account with 125,000 followers, run by a self-described “hustler” in Miami, features an AI-generated Buddhist monk with an English accent who says he lives in Tibet promoting fibre supplements and soursop bitters. That content, too, is not labelled as being AI-generated.
With AI, Caulfield said, brands can experiment inexpensively with a huge variety of avatars until one works.
“It’s so tremendously efficient,” he said. “You can curate an image that perfectly fits the vibe you’re trying to produce.”
Multiple states have passed laws requiring disclosure of AI content, including one in California that requires AI companies to watermark such material and another that requires social media companies to detect and label it. In December, Gov Kathy Hochul of New York signed the nation’s first legislation explicitly requiring the disclosure of “synthetic performers” in advertisements.
Unlike other legislation, that law places a burden on the creators of deepfake content, not just social media and AI companies. But it doesn’t take effect until June, and it’s not yet clear whether a December executive order from President Donald Trump proposing a regulatory framework for the technology would preempt it and other state-level regulations.
Silvestrini said he was “aware” of the New York regulation and was working with his legal team to ensure his brand would be compliant with the law when it goes into effect.
At a time when people are increasingly uneasy about the technology’s ubiquity, Silvestrini has proved unusually open about his use of synthetic avatars.
Another brand, Rosabella, has used a wide array of AI avatars on TikTok to promote its moringa supplement. Some of the videos were tagged by the platform as AI-generated. But other videos probably generated by AI, like one featuring an older woman promoting moringa’s “age reversing secrets,” are not labelled. Nor are posts in Spanish featuring naturopaths and nutritionists who have different faces but share the same voice, all lauding the benefits of moringa for gut health.
Last month, the Food and Drug Administration and Centers for Disease Control and Prevention issued recall notices for Rosabella’s moringa powder capsules after a drug-resistant Salmonella outbreak that led to multiple hospitalisations was linked to the product. Ambrosia Brands, the parent company of Rosabella, did not respond to requests for comment.
“Early adopters of AI have realised that there really is a lot of money to be made in different ways,” said Cameron Wilson, who runs The Diigitals, one of a handful of modelling agencies representing only virtual talent. “The problem is that most of them seem to be deceptive ways.”
For Silvestrini, the appeal of AI avatars was the opportunity to inject new ideas into marketing his brand. Rather than make the videos himself, he relies on more than three dozen independent creators to coax people to buy his product using what appear to be personal accounts.
A chemistry major at Williams College, Silvestrini said he developed a recipe centred on sulforaphane, an antioxidant found in broccoli and kale, and hired a lab in California to help manufacture it at scale.
Silvestrini used AI to design the supplement’s logo, packaging and website, saving him tens of thousands of dollars and months of development time compared with his first entrepreneurial endeavour, a wellness drink.
As soon as he can afford to, he said, he plans to conduct a clinical study of his product to see if it has any effect on microplastics in the body, as he claims. “I want to put my money where my mouth is,” he said.
In reviews on Amazon, some Modern Antidote customers said the supplement helped them feel better, with one claiming it resulted in a “clearer mind.” But others raised concerns about the way it was marketed; one labelled it an “AI SCAM” and another worried that the Farmer Honest account was fearmongering to generate sales.
“We take it seriously and are always thinking about how to evolve as norms develop around this,” Silvestrini said of consumers who react negatively to AI avatars. – ©2026 The New York Times Company
This article originally appeared in The New York Times.
