3rd party Alexa smart devices risk users' privacy

NEW DELHI: In yet another privacy setback, a team of researchers who obtained and analysed 90,194 "Alexa Skills" developed by external providers in seven countries has found significant deficiencies that compromise the safe use of Amazon Alexa-enabled third-party smart devices.

One of the security loopholes they found was that Alexa Skills can be changed by their third-party providers after certification, putting users at risk of data leaks.

In addition to these security risks, the research team also identified significant gaps in the general data protection declarations for the Alexa Skills from third-party providers.

For example, only 24.2 per cent of the Skills have a privacy policy at all, and even fewer in the particularly sensitive categories of "Kids" and "Health and Fitness."

"Furthermore, we were able to prove that Skills can be published under a false identity. Well-known automotive companies, for example, make voice commands available for their smart systems. Users download these believing that the company itself has provided these Skills. But that is not always the case," explained Martin Degeling from Ruhr-Universitat Bochum (RUB) in Germany. Amazon has confirmed some of the problems to the research team, saying it was is working on countermeasures.

Although Amazon checks all Skills offered in a certification process, this so-called Skill squatting - the adoption of already existing provider names and functions - often goes unnoticed. With the voice commands known as "Alexa Skills," users can load numerous extra functions onto their Amazon voice assistant.

However, these Skills can often have security gaps and data protection problems. In their study, researchers from the Horst Görtz Institute for IT Security at RUB and North Carolina State University in the US carried out the first large-scale analysis of the ecosystem of Alexa Skills. These voice commands are developed not only by the tech giant Amazon itself but also by external providers.

Users can download them directly from a store operated by Amazon, and in some cases they are also activated automatically by Amazon.

The researchers obtained and analysed 90,194 Skills from the stores of seven country-specific platforms. "A first problem is that Amazon has partially activated Skills automatically since 2017. Previously, users had to agree to the use of each Skill. Now they hardly have an overview of where the answer Alexa gives them comes from and who programmed it in the first place," said Degeling.

Unfortunately, it is often unclear which Skill is activated at what time. "For example, if you ask Alexa for a compliment, you can get a response from 31 different providers, but it's not immediately clear which one is automatically selected," the researchers said.

Data that is needed for the technical implementation of the commands can be unintentionally forwarded to external providers, the researchers warned.

"In an experiment, we were able to publish Skills in the name of a large company," the researchers said.

According to Christopher Lentzsch from the RUB Chair of Information and Technology Management, attackers could reprogramme their voice command after a while to ask for users' credit card data.

"Amazon's testing usually catches such prompts and does not allow them - the trick of changing the program afterward can bypass this control. By trusting the abused provider name and Amazon, numerous users could be fooled by this trick," he said. The team presented their work at the "Network and Distributed System Security Symposium (NDSS)" virtual conference last week. (IANS)
