Privacy problems are widespread among voice apps: Google Assistant actions and Amazon Alexa skills are often “problematic” and violate baseline requirements, according to researchers. A team at the Clemson University School of Computing analyzed tens of thousands of Alexa skills and Google Assistant actions to gauge the effectiveness of their data practice disclosures. The researchers characterize the current state of affairs as “worrisome,” claiming that Google and Amazon run afoul of their own developer rules.
Hundreds of millions of people around the world use Google Assistant and Alexa to order products, manage bank accounts, catch up on news, and control smart home devices. Voice apps (referred to as “skills” by Amazon and “actions” by Google) extend the platforms’ capabilities, in some cases by tapping into third-party tools. But despite app store regulations and legislation mandating data transparency, developers are inconsistent when it comes to disclosure, the co-authors of the Clemson study found.
To determine which Google Assistant and Alexa app developers’ privacy policies were sufficiently “informative” and “meaningful,” the co-authors scraped the content of skill and action listings from the web storefronts Google and Amazon maintain for their voice platforms, then analyzed the data practices disclosed in policies and descriptions. They developed a keyword-based approach: drawing on Amazon’s skill permission list and developer services agreement, they compiled a dictionary of nouns related to data practices.
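The study does not publish its keyword dictionary, so the following is only a minimal sketch of what such a keyword-based check might look like; the abbreviated `DATA_PRACTICE_NOUNS` set and the `mentions_data_practices` helper are illustrative assumptions, not the researchers’ actual code.

```python
import re

# Hypothetical, abbreviated dictionary of data-practice nouns; the study
# compiled its (much larger) list from Amazon's skill permission list and
# developer services agreement.
DATA_PRACTICE_NOUNS = {
    "email", "address", "name", "birthday", "location",
    "phone", "password", "health", "gender",
}

def mentions_data_practices(text: str) -> set:
    """Return the data-practice nouns that appear in a listing's text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return DATA_PRACTICE_NOUNS & words

# Example: a skill description that mentions two kinds of personal data.
description = "This skill uses your email and location to send reminders."
print(sorted(mentions_data_practices(description)))  # -> ['email', 'location']
```

Running the same check over both the description and the linked policy is what lets the study spot mismatches, e.g. a description that mentions collection while the policy says nothing about it.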
Across a total of 64,720 unique Alexa skills and 2,201 Google Assistant actions (every skill and action scrapeable via the study’s approach), the researchers sought to identify three types of problematic policies:
- Those that don’t outline data practices.
- Those with incomplete policies (i.e., apps that mention data collection in their descriptions but whose policies don’t elaborate).
- Those whose policies are missing entirely.
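Under those definitions, the triage reduces to a few comparisons. A hypothetical sketch (the `classify_policy` helper, its inputs, and the stand-in keyword list are assumptions, not the study’s code):

```python
def classify_policy(policy_text, description_mentions_collection):
    """Flag a voice app's privacy policy as one of the three problematic
    types described in the study, or return None if it looks acceptable.

    policy_text: the linked policy's text, or None if no working policy
                 link was provided.
    description_mentions_collection: True if the app's store description
                 mentions data collection.
    """
    if policy_text is None:
        return "missing"
    lowered = policy_text.lower()
    # Crude stand-in for the study's keyword dictionary.
    outlines_practices = any(
        kw in lowered for kw in ("collect", "share", "store", "use your")
    )
    if not outlines_practices:
        return "no data practices outlined"
    # Description promises collection, but the policy never elaborates on it.
    if description_mentions_collection and "collect" not in lowered:
        return "incomplete"
    return None
```

For example, `classify_policy(None, False)` returns `"missing"`, while a policy that never touches on data practices returns `"no data practices outlined"`.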
The researchers report that 46,768 (72%) of the Alexa skills and 234 (11%) of the Google Assistant actions don’t include links to policies, and that 1,755 skills and 80 actions have broken policy links. (Nearly 700 links lead to unrelated webpages with advertisements, and 17 lead to Google Docs documents that aren’t publicly viewable.) The disparity is partially attributable to Amazon’s more lenient policy, which, unlike Google’s, doesn’t require developers to provide a policy if their skills don’t collect personal information. But the researchers point out that skills that collect information often bypass the requirement by choosing not to declare it during Amazon’s automated certification process.
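Detecting missing and broken links like those counted above amounts to fetching each policy URL and inspecting the result. A simplified stand-in using Python’s standard library (the study’s “unrelated page” and private-Google-Docs checks would additionally require inspecting page content, which this sketch omits):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_policy_link(url: str, timeout: float = 10.0) -> str:
    """Classify a listing's policy URL as 'missing', 'broken', or 'ok'."""
    if not url:
        return "missing"
    try:
        req = Request(url, headers={"User-Agent": "policy-checker"})
        with urlopen(req, timeout=timeout) as resp:
            return "ok" if resp.status == 200 else "broken"
    except (HTTPError, URLError, ValueError):
        # HTTP errors, unreachable hosts, and malformed URLs all count
        # as broken links for this purpose.
        return "broken"

print(check_policy_link(""))  # -> missing
```

Aggregating these labels over all scraped listings yields the kind of counts the study reports (missing links, broken links, and so on).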
A Google spokesperson denied that Google’s actions fail to abide by its policies and said third-party actions with broken policies have been removed as the company “continually” enhances its processes and technologies. “We’ve been in touch with a researcher from Clemson University and appreciate their commitment to protecting consumers. All actions are required to follow our developer policies, and we enforce against any action that violates these policies.”
More troubling still, the researchers identified 50 Alexa skills that don’t inform users of what happens to information like email addresses, account passwords, names, birthdays, locations, phone numbers, health data, and gender, or who that information is shared with. Other skills potentially violate regulations including the Children’s Online Privacy Protection Act (COPPA), the Health Insurance Portability and Accountability Act (HIPAA), and the California Online Privacy Protection Act (CalOPPA) by collecting personal information without providing a policy.
Beyond the absence of policies, the researchers take issue with the length and format of the policies that are linked. More than half (58%) of skill and action policies are longer than 1,500 words, and none are available through Alexa or Google Assistant themselves; instead, they must be viewed through a store webpage or a smartphone companion app.
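The 1,500-word threshold is straightforward to reproduce with a whitespace word count; a check along these lines (the helper name is illustrative) is presumably how such a figure is computed:

```python
def is_overlong(policy_text: str, threshold: int = 1500) -> bool:
    """Flag a policy longer than the study's 1,500-word threshold."""
    return len(policy_text.split()) > threshold

short_policy = "We collect nothing."
print(is_overlong(short_policy))  # -> False
```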
The researchers propose a solution to this: a built-in intent that takes a voice app’s interaction model, scans it for data collection capabilities, and generates a response notifying users that the skill has those specific capabilities. The intent could be invoked when the app is first enabled, they say, so the brief privacy notice could be read aloud to users. The intent could also direct users to the detailed policy provided by the developers.
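Neither platform ships such an intent today, so the following is purely illustrative: a sketch of how a skill’s declared permissions (Alexa-style scope names here) might be mapped to a spoken privacy notice on first enablement. The scope-to-phrase mapping, notice wording, and helper name are all assumptions, not the researchers’ proposed implementation.

```python
# Map Alexa-style permission scopes to plain-language phrases.
# Both the scopes listed and the notice wording are illustrative.
PERMISSION_PHRASES = {
    "alexa::profile:email:read": "your email address",
    "alexa::profile:mobile_number:read": "your phone number",
    "alexa::devices:all:address:full:read": "your device's address",
}

def build_privacy_notice(manifest: dict) -> str:
    """Generate a brief spoken notice from a skill manifest's declared
    permissions, suitable for reading aloud when the skill is enabled."""
    scopes = [p.get("name") for p in manifest.get("permissions", [])]
    phrases = [PERMISSION_PHRASES[s] for s in scopes if s in PERMISSION_PHRASES]
    if not phrases:
        return "This skill declares no access to personal information."
    return (
        "Privacy notice: this skill can access "
        + ", ".join(phrases)
        + ". See the developer's privacy policy for details."
    )

manifest = {"permissions": [{"name": "alexa::profile:email:read"}]}
print(build_privacy_notice(manifest))
```

Because the notice is derived from the declared interaction model rather than from developer-written text, it would sidestep the missing- and broken-policy problems the study documents, though only for capabilities the developer actually declares.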