A team of researchers from US and China has discovered a new way to attack smart assistants like Amazon Alexa and Google Home. They named this method "Voice Squatting".
These academics talked about this new way of attacking smart assistants in a recently published research paper. The entire method relies on the fact that smart assistants can react to similar voice triggers and activate malicious apps.
The researchers are exploiting the fact that they can register voice assistant malicious apps to be triggered by the user. The way they are able to do that is by making these apps' voice triggers similar to the voice triggers of the legitimate apps.
For example, if a user wants to access Capital One home banking app, they will use the phrase "open capital one". Now, the only thing the attacker needs to do is make their malicious app activate on the similar voice trigger like "open capital won".
This access method probably won't work very well on native speakers, but at some point even they can sound a bit unclear and therefore activate this malicious app.
Another way to do this is by adding one word to the regular app's trigger voice. In the mentioned example it would be "open capital one please". In this case, even native English speakers would be vulnerable to the attacks.
In the same research paper, the team also published another attack method called "Voice Masquerading". This method is not new, and it was already explored in a different research.
The main goal of this attack method is to prolong the interaction time of a running app without the user's knowledge. While the user thinks that the malicious app has stopped working, they are communicating with the smart assistant in order to launch a legitimate app while the malicious app is still listening and taking in the information from the user.
For all the mentioned attack methods, you can check video demos on the dedicated research website. And if you are interested in reading the entire research paper, you can check it out here.