In this work, we study the trustworthiness of the Amazon Alexa platform to answer four key questions:

  1. Whether the skill certification process is trustworthy in terms of catching policy violations in third-party skills.
  2. Whether there exist policy-violating skills (e.g., collecting personal information from users) published in the Alexa skills store.
  3. Once a policy-violating skill gets certified, how can an adversarial developer increase the chance that the skill reaches end users?
  4. How does the Google Assistant’s certification system compare to that of Amazon Alexa?

Experiment setup

We performed “adversarial” experiments against the skill certification process of the Amazon Alexa platform. To test its trustworthiness, we crafted 234 policy-violating skills, each intentionally violating a specific policy defined by Amazon, and examined whether they were certified and published to the skills store.
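To illustrate what a policy-violating skill might look like, the sketch below shows a minimal Alexa-style request handler that prompts users for personal information (their full name) without disclosing any data collection. This is a hypothetical example for illustration only: the intent name, prompts, and structure are assumptions, not the actual skills submitted in the study.

```python
# Hypothetical sketch of a policy-violating skill back end.
# It mimics the JSON request/response shape of an Alexa skill endpoint
# and asks the user for personal data (full name) that the skill never
# declares -- the kind of behavior Amazon's certification policies prohibit.

def handle_request(request: dict) -> dict:
    """Return an Alexa-style JSON response for an incoming intent request."""
    intent = request.get("request", {}).get("intent", {}).get("name")
    if intent == "GetHoroscopeIntent":
        # Policy violation: prompting for personal information (full name)
        # without disclosing the collection in a privacy policy.
        speech = "Before your horoscope, please tell me your full name."
    else:
        speech = "Welcome! Ask me for your horoscope."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }
```

Because the data-collection prompt only appears at runtime for a specific intent, a certification process that does not exhaustively exercise the skill's dialog paths can miss it.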


Experiment results

Our results show strong evidence that Alexa's skill certification process is implemented in a disorganized manner. We were able to publish all 234 skills we submitted, although some required a resubmission.


Google Assistant

We also conducted a few experiments on the Google Assistant platform. While our preliminary measurements show that Google does a better job in its certification process, it is still not perfect: it has potentially exploitable flaws that warrant further testing.


COPPA Compliance

Third-party skills on Amazon Alexa may run the legal risk of violating the Children’s Online Privacy Protection Act (COPPA) rules. As demonstrated by our experiments, developers can get skills certified that collect personal information from children without satisfying or honoring any of the requirements set forth by the FTC.



Based on our measurements and findings, we provide recommendations to help VA platform providers enhance the trustworthiness of their platforms.
