BEIJING: A team of Chinese scientists has found more evidence that most people cannot tell the difference between real speech and fakes generated by artificial intelligence, even after some training designed to help.
Based at Tianjin University and the Chinese University of Hong Kong, the researchers hooked 30 people up to brain scanners while they listened to voice recordings and tried to figure out which were AI-generated and which came from real people.
For the most part, the answer was that they could not, with the team describing the group as “bad at discriminating between the two types.”
The team then sought to train the seemingly hapless study group – efforts they said “helped only minimally.”
But the tips did seem to sow seeds of potential progress: “On a neural level, training made the brain’s responses more distinct for human versus AI speech,” the researchers said, ahead of having their findings published by the Society for Neuroscience.
"The auditory brain system seems to start picking up subtle acoustic differences, even if people can’t reliably turn that into a behavioural decision yet,” said Xiangbin Teng, the team leader, who said the faint signals of recognition were "encouraging.”
The tests followed Queen Mary University of London research, published in September last year, warning that “deepfake” voices created using widely available software are “now indistinguishable from real human voices.”
People fare only marginally better when it comes to AI-generated imagery, it seems, with a University of New South Wales and Australian National University study published last month finding that most people are too confident in their ability to spot a fake face.
Last year, Citibank published a warning that such increasingly hard-to-detect audio and visual AI fakes “are spreading across recruitment, financial operations and executive impersonation.” – dpa
