A bipartisan group of lawmakers asked for a US intelligence assessment of the threat posed by technology that lets anyone make fake, but realistic, videos of real people saying things they’ve never said.
The technology’s rising capabilities are fueling concerns that it could be used to make a bogus video, for example, of an American politician accepting a bribe, or of a US leader or an adversarial foreign leader warning of an impending disaster.
Three lawmakers wrote a letter to Director of National Intelligence Dan Coats asking his office to assess how these bogus, high-tech videos – known as deepfakes – could threaten US national security.
“By blurring the line between fact and fiction, deepfake technology could undermine public trust in recorded images and videos as objective depictions of reality,” wrote Reps. Adam Smith, Stephanie Murphy, and Carlos Curbelo.
“We are deeply concerned that deepfake technology could soon be deployed by malicious foreign actors.”
Deepfakes are not crude lip-syncing spoofs that are obviously fake. The technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies. Republicans and Democrats predict this high-tech way of putting words in someone’s mouth will become the latest weapon in disinformation wars against the United States and other Western democracies.
The lawmakers asked the intelligence agencies to submit a report to Congress by mid-December describing the threat and possible countermeasures the US can develop or employ to protect the nation.
Realising the implications of the technology, the US Defense Advanced Research Projects Agency is already two years into a four-year programme to develop technologies that can detect fake images and videos. Right now it takes extensive analysis to separate phony videos from the real thing. It’s unclear if new ways to weed out the fakes will keep pace with technology used to make them.
Deepfakes are so named because they utilise deep learning, a form of artificial intelligence. They are made by feeding an algorithm – a set of instructions for a computer – lots of images and audio of a certain person. The computer program learns how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with fake audio and get them to say anything you want. – AP
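The training process described above – feed a program many examples, and it learns to mimic the pattern behind them – can be illustrated with a toy sketch. This is not a deepfake pipeline; it is a minimal stand-in showing the same basic principle (learning from examples by gradient descent) that deep-learning systems scale up to faces and voices. The function name, data, and parameters are all illustrative assumptions.

```python
# Toy analogy only, not real deepfake code: the "pattern" to mimic here is
# the rule y = 2x + 1, and the program learns it purely from example pairs,
# the same learn-from-examples principle the article describes.

def train(examples, steps=5000, lr=0.01):
    """Fit y ~ w*x + b to (x, y) example pairs by gradient descent."""
    w, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        grad_w = grad_b = 0.0
        for x, y in examples:
            err = (w * x + b) - y      # how far off the current guess is
            grad_w += 2 * err * x / n  # average gradient w.r.t. w
            grad_b += 2 * err / n      # average gradient w.r.t. b
        w -= lr * grad_w               # nudge parameters toward the data
        b -= lr * grad_b
    return w, b

# "Lots of examples" of the hidden pattern y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)  # w and b converge near 2 and 1
```

A deepfake system works on the same loop at vastly larger scale: instead of two numbers, millions of parameters are nudged until the model’s output video matches the target person’s face and voice.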