Dubious AI detectors drive 'pay-to-humanise' scam


Photo by Worshae on Unsplash

WASHINGTON: Feed an Iranian news dispatch or a literary classic into some text detectors, and they return the same verdict: AI-generated. Then comes the pitch: pay to "humanise" the writing, a pattern experts say bears the hallmarks of a scam.

As AI falsehoods explode across social media, often outpacing the capacity of professional fact-checkers, bogus detectors risk adding another layer of deception to an already fractured information ecosystem.

While even reliable AI detectors can produce false results, researchers say a crop of fraudulent tools has emerged online, easily weaponised to discredit authentic content and tarnish reputations.

AFP's fact-checkers identified three such text detectors that claim to estimate what percentage of a given text is AI-generated. The tools – prompted in four languages – not only misidentified authentic text as AI-generated but also attempted to monetise those errors.

One detector, JustDone AI, processed a human-written report about the US-Iran war and wrongly concluded it contained "88% AI content." It then offered to scrub any trace of AI for a fee.

"Your AI text is humanising," the site claimed, leading to a page where "100% unique text" was locked behind a paywall charging up to US$9.99 (RM39.81).

Two other tools – TextGuard and Refinely – produced similar false positives and sought to monetise them.

'Scams'

AFP presented its findings to all three detectors.

"Our system operates using modern AI models, and the results it provides are considered accurate within our technology," TextGuard's support team told AFP.

"At the same time, we cannot guarantee or compare results with other systems."

JustDone also reiterated that "no AI detector can guarantee 100 percent accuracy."

It acknowledged the free version of its AI detector "may provide less precise results" due to "high demand and the use of a lighter model designed for quick access."

Echoing AFP's findings, one user on a review platform complained that "even with 100% human-written material, JustDone still flags it as AI."

AFP fed the tools multiple human-written samples – in Dutch, Greek, Hungarian, and English. All were wrongly flagged as having high AI content, including passages from an acclaimed 1916 Hungarian classic.

The tools returned AI flags regardless of input – even for nonsensical text.

JustDone and Refinely appeared to operate even without an internet connection, suggesting their results may be scripted rather than genuine technical analysis.

"These are not AI detectors but scams to sell a 'humanising' tool that will often return what we call 'tortured phrases'," Debora Weber-Wulff, a Germany-based academic who has researched detection tools, told AFP, referring to unrelated jargon or nonsensical substitutions.

'Liar's dividend'

Illustrating how such tools can be used to discredit individuals, pro-government influencers in Hungary claimed earlier this year that a document outlining the opposition's election campaign had been entirely created by AI.

To support the unfounded allegation, they circulated screenshots on social media showing results from JustDone.

The tools tested by AFP sought to lure students and academics as clients, with two of them claiming their users came from top institutions such as Cornell University.

Cornell University told AFP it "does not have any established relations with AI detector companies."

"Generative AI does provide an increased risk that students may use it to submit work that is not their own," the university said.

"Unfortunately, it is unlikely that detection technologies will provide a workable solution to this problem. It can be very difficult to accurately detect AI-generated content."

Fact-checkers, including those from AFP, often rely on AI visual detection tools developed by experts, which typically look for hidden watermarks and other digital clues.

However, they too can sometimes produce errors, making it necessary to supplement their findings with additional evidence such as open-source data.

The stakes are high as false readings from unreliable detectors threaten to erode trust in AI verification broadly – and feed a disinformation tactic researchers have dubbed the "liar's dividend": dismissing authentic content as AI fabrications.

"We often report on misinformers and other hoaxers using AI to fabricate false images and videos," said Waqar Rizvi from the misinformation tracker NewsGuard.

"Now, (we are) monitoring the opposite, but no less insidious phenomenon: claims that a visual was created by AI when in fact, it's authentic." – AFP
