News Analysis: Concerns rise as AI blurs line between real, fake content


By Chu Yi

BERLIN, March 19 (Xinhua) -- A video on social platform X, purportedly showing Israeli soldiers weeping behind a wall, drew more than 1.6 million views before investigators found it was generated using artificial intelligence (AI).

The incident highlights how AI advances are making it increasingly difficult to distinguish between real and fabricated content, raising concerns about the reliability of digital information.

FAKING PUBLIC REALITY

The example underscores how the ongoing war in Iran is no longer fought solely on the battlefield, but also online.

An investigation by German public broadcaster ZDF found multiple inconsistencies in the above-mentioned footage. In one scene, a soldier wears a patch bearing the Israeli flag; in another, the patch is missing. The text on the patch is also nonsensical -- a common flaw in AI-generated imagery.

Comments under the post highlighted inconsistencies in the uniforms, sound effects and firearms, further casting doubt on the video's authenticity.

The spread of AI-generated content has been testing the credibility of professional news organizations.

In February, ZDF recalled one of its New York-based correspondents and later dismissed her over a report on a U.S. Immigration and Customs Enforcement (ICE) operation that was found to contain AI-generated footage. A watermark of the Sora video-generation tool was visible in the corner of the frame.

"The damage caused by disregarding journalistic standards is significant. At its core, the issue is the credibility of our reporting," said ZDF Editor-in-Chief Bettina Schausten.

Beyond individual cases of fabricated content, experts warn that AI may also reshape how public opinion itself is formed.

MANUFACTURING SOCIAL CONSENSUS

Even more troubling than fabricated content is AI's ability to manufacture the illusion of public opinion. In a recent Policy Forum article in the journal Science, an international team of researchers warned that digital manipulation is entering a new phase driven by so-called AI swarms.

In this phase, misinformation is no longer spread only by isolated bot accounts. Increasingly, it is amplified by coordinated clusters of AI-generated personas posing as real users.

"We are talking about a system of AI agents that can be controlled by an individual or an organization," said David Garcia, a social data researcher at the University of Konstanz.

"They have persistent identities and memory, and they can mimic human behavior. They can be coordinated toward a common goal while still varying their language and tone, and they are able to respond in real time to events and to human reactions," he said.

Garcia said that the risk lies in the possibility that manipulators could use large language models to construct an "alternative social reality."

"Through a gradual yet persistent process, AI swarms can create the impression that a particular view is widely shared," he said. "That can shape public opinion and even alter social norms. When many seemingly independent voices repeatedly express the same stance, the illusion of a majority can emerge, even when no such majority exists."

As these risks grow, researchers and institutions are stepping up efforts to detect and contain them.

FROM DEBUNKING TO DETECTION

"Content should be screened for patterns of coordinated behavior to help identify AI swarms more quickly," Garcia said.

Researchers have also called for stronger oversight to track how AI-generated accounts influence public debate and to identify coordinated manipulation at an early stage.

Garcia said the earlier such networks are uncovered, the less able they will be to erode public trust or to present a diverse range of social views as a single consensus.
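The screening Garcia describes can take many forms. As a purely illustrative sketch (not drawn from the article or from any named detection system), one simple coordination signal is many distinct accounts posting near-identical text within a short time window; the account names, thresholds and data layout below are all assumptions for the example:

```python
# Illustrative sketch: flag groups of accounts that post near-identical
# text within a short time window -- one crude signal of coordination.
# Real detection systems combine many richer signals than this.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how alike two post texts are."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordinated(posts, sim_threshold=0.85, window_s=600, min_accounts=3):
    """posts: list of (account, timestamp_seconds, text) tuples.
    Returns clusters (sets of accounts) whose near-identical posts
    fall within window_s seconds of each other."""
    clusters = []
    for (acc1, t1, txt1), (acc2, t2, txt2) in combinations(posts, 2):
        if (acc1 != acc2
                and abs(t1 - t2) <= window_s
                and similarity(txt1, txt2) >= sim_threshold):
            # Merge the pair into an existing cluster, or start a new one.
            for cluster in clusters:
                if acc1 in cluster or acc2 in cluster:
                    cluster.update((acc1, acc2))
                    break
            else:
                clusters.append({acc1, acc2})
    return [c for c in clusters if len(c) >= min_accounts]

# Hypothetical example data: three accounts repeat the same claim,
# one account posts something unrelated.
posts = [
    ("bot_a", 0,   "This view is clearly the majority opinion!"),
    ("bot_b", 120, "this view is clearly the majority opinion"),
    ("bot_c", 300, "This view is clearly the majority opinion."),
    ("human", 400, "I went hiking today, great weather."),
]
print(flag_coordinated(posts))  # one cluster containing the three bot accounts
```

The point of the sketch is the shape of the approach: pairwise similarity plus temporal proximity, merged into clusters, mirrors how "many seemingly independent voices repeatedly expressing the same stance" becomes detectable as a pattern rather than judged post by post.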

Some of these efforts are beginning to take shape. On Monday, the German Research Center for Artificial Intelligence (DFKI) said it had launched a new deepfake detection project, "Check First. Vote Smart," together with the DFKI spin-off Gretchen AI and the Rhineland-Palatinate State Agency for Civic Education.

The tool allows users to forward suspicious Instagram images in two steps, after which the system assesses whether the material is AI-generated or manipulated and estimates the likelihood it is fake.

Bernhard Kukatzki, director of the Rhineland-Palatinate State Agency for Civic Education, said disinformation had become one of the most pressing challenges in today's information environment.

At a time when citizens are confronted with manipulated content on a daily basis, the ability to critically assess information is an essential skill, Kukatzki said.
