Digital assistants are the latest trend in the world of technology. From Google Assistant and Amazon's Alexa to Apple's Siri, technology firms are working aggressively in the field of artificial intelligence and smart digital assistants.
A common approach has been to make digital assistants more personalised and tailored to different individuals. A smart assistant, however, relies on your personal data to deliver a better experience. This also makes it, and its users, highly vulnerable to cyber criminals.
A team of Chinese researchers has discovered a new flaw in popular voice assistants such as Siri and Alexa that could allow strangers to issue voice commands to your smartphone without your knowledge. These assistants can be controlled by inaudible ultrasonic commands, the researchers said.
This essentially means a hacker with the capability to issue ultrasonic commands can access your smartphone without your consent, gain access to your private data or open websites to install malicious applications. The researchers have named this method the "Dolphin" attack.
How does the Dolphin attack work?
The researchers first created a program that translates normal human voice commands into frequencies too high for humans to hear. They used frequencies above 20,000Hz, which are inaudible to humans but can still be picked up by smartphone microphones. To play these frequencies, the set-up needed only simple hardware: a smartphone, an amplifier, an ultrasonic transducer and a battery.
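The core idea — shifting an audible command up into the ultrasonic band — can be sketched in a few lines of Python. The numbers below are illustrative assumptions, not the researchers' actual parameters: a stand-in 400Hz tone plays the role of a recorded voice command, and it is amplitude-modulated onto an assumed 25kHz carrier so that the signal's energy sits entirely above the roughly 20kHz limit of human hearing.

```python
import numpy as np

FS = 192_000         # sample rate high enough to represent ultrasound
CARRIER_HZ = 25_000  # assumed ultrasonic carrier, above ~20kHz human hearing
DURATION = 0.5       # seconds of signal to synthesise

t = np.arange(int(FS * DURATION)) / FS

# Stand-in for a recorded voice command: a simple 400 Hz tone.
voice = np.sin(2 * np.pi * 400 * t)

# Amplitude-modulate the "voice" onto the ultrasonic carrier.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + 0.8 * voice) * carrier

# Verify the energy now sits around 25 kHz, outside the audible band.
spectrum = np.abs(np.fft.rfft(modulated))
freqs = np.fft.rfftfreq(len(modulated), 1 / FS)
peak_hz = freqs[np.argmax(spectrum)]
```

Played through an ultrasonic transducer, a signal like this is silent to a bystander, yet a phone's microphone can demodulate it back into something the voice assistant recognises as speech.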
The researchers successfully issued a variety of voice commands to activate the voice assistants. They could launch Siri to make a FaceTime video call on an iPhone. On Android, they managed to command Google Assistant to turn on airplane mode. The program also worked on in-car interfaces that use voice controls: the researchers could manipulate the navigation system in an Audi car.
Should you be worried?
The Dolphin attack does sound remarkable, and potentially destructive, but it is not really that scary. Certain pre-conditions must be met for the attack to work. First, the transmitter has to be within five to six feet of the phone and needs a relatively quiet environment. However, previous research has shown that such attacks can be conducted from larger distances.
Also, the smartphone has to be unlocked. For digital payments or anything that could cause real damage, the process would require two-step verification. It is also highly unlikely that users would fail to notice voice commands performing actions on their phone; in any case, digital assistants respond audibly to every voice command sent to them.
"Given that this methodology is complex and has not-so-common prerequisites, there is not much danger to users as of now. If though, in any case, an attacker does attack using this technique, the user will be able to notice it since the digital assistants typically respond audio-visually when a voice command is being executed," said Ankush Johar, Director at HumanFirewall.io, a cybersecurity company.
"The only case where this scenario becomes dangerous is if you have given complete access to your digital assistant to do tasks even when the phone is locked or, you leave your phone unlocked far from your sight. Both of these cases though are already dangerous even if an attacker does not have access to ultrasonic sound attacks. Disallow your voice assistant to access your phone when locked, and don't leave your phone unattended. This should keep you safe," he added.
While this loophole could be fixed relatively easily by filtering out ultrasonic frequencies in smartphone hardware and tweaking the software, the researchers have succeeded in exposing new vulnerabilities in the latest technology trends. — The Hindustan Times/Tribune News Service