Researchers from the University of Bohong (rub) and the Horst g ü RTZ Institute of IT security at North Carolina State University found these problems and presented their work at the network and Distributed System Security Seminar (NDSS) held on February 24 local time.
Analyze more than 90000 skills and find major safety defects
In their study, Dr. Christopher lentzsch and Dr. Martin degeling’s team studied the ecosystem of Alexa skills for the first time. These voice commands are not only developed by Amazon itself, but also by external suppliers. Users can download them from stores directly operated by Amazon, and in some cases, they will be automatically activated by Amazon.
The researchers acquired and analyzed 90194 skills from store platforms in seven countries, and found significant flaws in safe use. Dr. Martin degeling, chairman of rub system security, introduced some problems existing in “Alexa skills.” the first problem is that Amazon has partially automatically activated skills since 2017. Previously, users had to agree to use each skill. Now they can hardly generalize where Alexa’s answers came from and who programmed them in the first place. ” Dr. Martin degeling further explains that, more unfortunately, users often don’t know which skill is activated when. For example, if you ask Alexa to praise you, you can get responses from 31 different suppliers, but you can’t immediately know which one is automatically selected. The data needed for the technical implementation of the command may be inadvertently forwarded to an external provider.
Release new skills with false identity
Martin degeling said, “we can prove that we can publish skills in the wrong identity. For example, famous car companies provide voice commands for their intelligent systems. Users download the content because they believe the company has provided the skills. But that’s not always the case. ” While Amazon checks all the skills provided during the certification process, this so-called skill race (i.e., using existing vendor names and features) is usually not noticeable.
“Through experiments, we were able to publish skills in the name of a large company,” the researchers explained. It can mine valuable information provided by users. ” So, for example, if a car supplier has not yet developed skills for its intelligent systems in the car to turn up or down music in the car, then an attacker will be able to do so in the name of the supplier.
“They can use users’ trust in well-known names and Amazon to get personal information, such as location data or user behavior,” Martin degeling said However, in this process, criminals can not directly eavesdrop on the encrypted data or change the command with malicious intent to operate the smart car, such as opening the door.
Bypass Amazon’s security checks
The researchers also found another security risk: these skills may change as suppliers change. Christopher lentzsch, chairman of rub information and technology management, believes that this vulnerability puts the security of Amazon’s previous authentication process in another perspective. “Attackers can rewrite their voice commands after a period of time and ask users to provide credit card data,” he explained. Amazon’s tests usually catch such a hint, but it’s not allowed to do so. Skills after changing the program can bypass this control. “
In addition to these security risks, the research team also found that there are serious deficiencies in the general data protection statement of these skills. For example, only 24 . 2% of the skills have so-called privacy policies, and even less in particularly sensitive areas such as “kids” and “health and fitness.”. Martin degeling emphasized that this point should be greatly improved.
Amazon has identified some problems with the research team and said it is working on countermeasures.