Voice assistants are more popular than ever and are supposed to make our lives easier, not spy on us. And yet, the possibility of hacking is a stark reality. Here’s why.
Over 56 million voice assistant products will be sold in 2018, according to a report by Canalys. With smart speakers like the Apple HomePod, Google Home and Amazon Echo, Siri, Google Assistant, Alexa and Cortana are set to become as omnipresent in our homes as they are on our smartphones. They are a great tool for making our lives easier and saving time, but they also pose an information-security risk, given the hyperconnected nature of these devices and the poor levels of protection on offer.
In an article on the Motherboard website, Israeli researchers Tal Be’ery and Amichai Shulman, both cybersecurity specialists, made the following unsettling conclusion: “We still have a bad habit of introducing new interfaces in machines without fully analysing the implications for security.”
From subvertising to ultrasonic intrusion
Recent news has proven them right, with revelations of security vulnerabilities and voice assistant hacking.
In April 2017, Burger King added to the debate with an ad just under 15 seconds long, in which an employee of the fast-food chain simply said: “OK Google, what’s a Whopper?” The line activated viewers’ voice assistants, which duly read out the Wikipedia description of the famous burger.
For some, it was a stroke of advertising genius, for others, a clear example of the risks posed by voice assistants. Tal Be’ery and Amichai Shulman also demonstrated that hacks could be a lot more sinister. The researchers found a way to bypass the password screen on a Windows computer by using Cortana, the Windows 10 virtual assistant. How did they do it? By taking advantage of the fact that this assistant is never ‘off’, meaning that it responds to certain voice commands even when devices are locked.
A similar but more insidious example came when Chinese researchers developed a technique called DolphinAttack: voice commands modulated onto ultrasonic frequencies inaudible to humans, yet still picked up by a device’s microphone. Using it, most voice assistants can be activated remotely without their owner hearing a thing.
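The core trick behind DolphinAttack is amplitude-modulating an ordinary voice command onto an ultrasonic carrier; the nonlinearity of a device’s microphone then demodulates it back into an audible-band signal the assistant recognises. Below is a minimal sketch of the modulation step only. The sample rate, carrier frequency and the 400 Hz tone standing in for a recorded voice command are illustrative assumptions, not the researchers’ actual parameters.

```python
import math

FS = 96_000          # illustrative sample rate, high enough to represent ultrasound
CARRIER_HZ = 25_000  # above the ~20 kHz ceiling of human hearing
N = FS // 10         # 100 ms of audio

def ultrasonic_modulate(voice, fs=FS, carrier_hz=CARRIER_HZ):
    """Amplitude-modulate a baseband 'voice' signal onto an ultrasonic carrier."""
    return [(1 + 0.5 * v) * math.cos(2 * math.pi * carrier_hz * n / fs)
            for n, v in enumerate(voice)]

def power_at(signal, freq_hz, fs=FS):
    """Single-bin DFT: spectral power of 'signal' at 'freq_hz'."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / fs) for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / fs) for n, s in enumerate(signal))
    return re * re + im * im

# Stand-in for a recorded voice command: a plain 400 Hz tone
voice = [math.sin(2 * math.pi * 400 * n / FS) for n in range(N)]
signal = ultrasonic_modulate(voice)

# After modulation, the signal's energy sits around 25 kHz (carrier plus
# sidebands), with essentially nothing left in the audible band
audible = power_at(signal, 400)
ultrasonic = power_at(signal, CARRIER_HZ)
```

The broadcast signal is therefore silent to a human bystander; it only becomes a command again once a microphone’s imperfect electronics demodulate it.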
Another vulnerability left a bad taste in Apple’s mouth: the privacy feature that hides the content of messages on the lock screen of its devices can be bypassed, effectively turning Siri (Apple’s proprietary voice assistant) into a Trojan horse. Brazilian website Mac Magazine revealed that anyone could access these hidden messages simply by asking Siri to read them out loud.
Raising awareness and encryption are the only immediate countermeasures available
“By their nature, voice assistants are always on. This means that the microphones on these voice assistants, particularly through connected speakers, pose real confidentiality problems,” confirms Paul Fariello, Technical Leader at Stormshield. He goes on to soften the blow: “But the risk is low when you consider that other more traditional techniques like ransomware and phishing are much easier for cybercriminals to use. Encrypting sensitive data to prevent its use in the event of theft reduces the risk.”
Today, developers are content to apply corrections on a case-by-case basis: “technological countermeasures for this type of problem haven’t been created yet. All you can do is raise awareness amongst users, which has its limits,” he adds. It is still not possible to install additional security solutions on connected objects or smart speakers because of the closed design of these devices. Faced with the boundless imagination of hackers, the best option appears to be Security by Design, which puts the responsibility on developers. Following on from fingerprints, could voiceprints become the next biometric password, authenticating who is allowed to use a voice assistant?