DolphinAttack: Alexa, Siri and more are vulnerable to 'silent' hacking
RESEARCHERS FROM A CHINESE UNIVERSITY have worked out a method of sending commands to voice recognition systems without making a sound that human ears can pick up.
The researchers come from Zhejiang University, and the attack takes its name from the way that dolphins communicate without forming words like "alright mate?" or "watch out, there's a tuna fishing boat." Dolphins, in case you missed it, make a funny croaky, throaty, clicky noise, much of it up in ultrasonic frequencies, but still manage to get their message across.
They have called the attack method "DolphinAttack" and are boasting that it is effective against popular speech recognition systems including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa. Pretty much all of them, then.
Do not worry, though, as the researchers have some recommendations on how to improve the situation at the hardware end.
"This paper aims at examining the feasibility of the attacks that are difficult to detect, and the paper is driven by the following key questions: Can voice commands be inaudible to human while still being audible to devices and intelligible to speech recognition systems? Can injecting a sequence of inaudible voice commands lead to unnoticed security breaches to the voice controllable systems? To answer these questions, we designed DolphinAttack," they write in their paper.
"By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on iPhone, activating Google Now to switch the phone to the airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions," they add.
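The trick, as the researchers describe it, is to take an ordinary spoken command and amplitude-modulate it onto an ultrasonic carrier above 20kHz: far too high for human ears, but the non-linearity of a phone's microphone hardware demodulates it back into the audible band, where the speech recogniser reads it as normal. Here is a rough Python sketch of the modulation half of that idea; the input filename and the 25kHz carrier are our own illustrative choices, not the paper's exact parameters.

```python
# Rough sketch of the DolphinAttack modulation idea: amplitude-modulate
# a recorded voice command onto an ultrasonic carrier, so humans hear
# nothing while a microphone's non-linearity recovers the command.
# "hey_siri_command.wav" and the 25 kHz carrier are illustrative only.
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000      # ultrasonic carrier, above human hearing
OUT_RATE = 96_000        # output rate must exceed twice the carrier

rate, voice = wavfile.read("hey_siri_command.wav")  # hypothetical input
voice = voice.astype(np.float64)
if voice.ndim > 1:
    voice = voice.mean(axis=1)                      # fold stereo to mono
voice /= np.max(np.abs(voice))                      # normalise to [-1, 1]

# Crudely resample the baseband command up to the output rate.
t_in = np.arange(len(voice)) / rate
t_out = np.arange(0.0, t_in[-1], 1.0 / OUT_RATE)
baseband = np.interp(t_out, t_in, voice)

# Classic amplitude modulation with full modulation depth.
carrier = np.cos(2.0 * np.pi * CARRIER_HZ * t_out)
modulated = carrier * (1.0 + baseband) / 2.0

wavfile.write("inaudible_command.wav", OUT_RATE,
              (modulated * 32767).astype(np.int16))
```

Played through a speaker and amplifier capable of reproducing ultrasound, the output file is silent to anyone standing nearby, which is rather the point.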
"We validate that it is feasible to detect DolphinAttack by classifying the audios using support vector machine (SVM), and suggest to re-design voice controllable systems to be resilient to inaudible voice command attacks."
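The detection idea rests on the fact that a demodulated ultrasonic command leaves tell-tale energy in higher frequency bands that genuine speech lacks, which a classifier can learn to spot. Below is a minimal, self-contained sketch of that kind of SVM check using scikit-learn; the band features and the synthetic stand-in clips are our assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of SVM-based DolphinAttack detection: summarise each
# audio clip as average power in a few frequency bands, then train a
# support vector machine to separate genuine commands from demodulated
# ones. The features and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

RATE = 48_000  # assumed recording sample rate

def spectral_features(signal, rate):
    """Average power in a handful of bands, as a crude spectral fingerprint."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    bands = [(0, 1_000), (1_000, 3_000), (3_000, 6_000), (6_000, 12_000)]
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

# Synthetic stand-ins so the sketch runs end to end: "genuine" clips are
# noise plus a voice-ish low tone, "attack" clips carry extra energy above
# 6 kHz, mimicking the residue demodulation leaves behind. Real training
# data would be recordings of both kinds of command.
rng = np.random.default_rng(42)

def fake_clip(attack):
    t = np.arange(RATE) / RATE
    clip = rng.normal(scale=0.3, size=RATE)
    clip += np.sin(2 * np.pi * 300 * t)               # voice-ish tone
    if attack:
        clip += 0.5 * np.sin(2 * np.pi * 8_000 * t)   # high-band residue
    return clip

clips = [fake_clip(attack=i % 2 == 1) for i in range(80)]
labels = [i % 2 for i in range(80)]                   # 0 genuine, 1 attack

X = np.array([spectral_features(c, RATE) for c in clips])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

In other words, the fix may not even need new microphones, just software that listens for frequencies no human voice should be producing. µ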