Incident: Smart Speakers Compromised by Malicious Apps Eavesdropping and Stealing Passwords

Published Date: 2019-10-21

Postmortem Analysis
Timeline 1. The software failure incident happened in October 2019, based on the published dates of the articles [91270, 90963].
System 1. Amazon Echo and Google Home/Nest smart speakers 2. Third-party voice apps (Alexa skills and Google Assistant Actions) built for those devices 3. Amazon's Alexa and Google's Assistant voice-controlled digital-assistant software, including their text-to-speech components and back-end servers 4. Amazon's and Google's vetting, review, and certification processes for third-party apps 5. Amazon's and Google's privacy settings, security mechanisms, password-protection measures, and developer policies 6. Amazon's and Google's incident-response measures, including removal of the malicious apps and additional mechanisms to prevent similar issues [91270, 90963]
Responsible Organization 1. Security Research Labs (SRL), whose security researchers acted as the third-party developers that created the proof-of-concept malicious apps [91270, 90963]
Impacted Organization 1. Amazon 2. Google 3. Users of Amazon Echo and Google Home devices [91270, 90963]
Software Causes 1. Malicious apps designed to eavesdrop were able to slip through Google's and Amazon's vetting processes, leading to the security breach [Article 91270]. 2. The eavesdropping apps exploited a flaw that let them continue listening in the background after a user thought the app had stopped, by inserting a sequence of unpronounceable Unicode characters that produced silence while keeping the listening session open (see the sketch below) [Article 91270]. 3. The apps were able to trick users into giving up their passwords by posing as security updates and requesting passwords through the voice assistants [Article 91270]. 4. The flaw allowed the apps to transcribe user speech and send it back to the developers without the user's knowledge [Article 90963].
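The mechanism behind cause 2 is the platforms' normal re-prompt behavior: a voice app that ends its turn with a re-prompt keeps the listening session open for another user utterance, and the malicious apps filled that re-prompt with unpronounceable characters so the open session stayed silent. The sketch below illustrates the session-keeping pattern with the Alexa Skills Kit SDK for Python; the skill, intent, and handler names are illustrative, and SILENT_REPROMPT is a placeholder rather than the exact character sequence SRL used.

```python
# Minimal sketch (not SRL's actual proof of concept) of the Alexa Skills Kit
# response pattern the eavesdropping apps abused. Only the standard
# ask-sdk-core response-builder API is assumed; names are illustrative.
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.utils import is_intent_name

# SRL's apps used a long run of unpronounceable Unicode characters here, so the
# speaker stayed silent while the session remained open; a placeholder is used instead.
SILENT_REPROMPT = "..."


class HoroscopeIntentHandler(AbstractRequestHandler):
    """Answers a seemingly harmless request, then keeps the session open."""

    def can_handle(self, handler_input):
        return is_intent_name("HoroscopeIntent")(handler_input)

    def handle(self, handler_input):
        return (
            handler_input.response_builder
            .speak("Here is your horoscope for today. Goodbye.")
            # .ask() issues a re-prompt and keeps the session open for another
            # utterance; whatever the user says next is transcribed by the
            # platform and delivered to the skill's back end.
            .ask(SILENT_REPROMPT)
            .response
        )


sb = SkillBuilder()
sb.add_request_handler(HoroscopeIntentHandler())
handler = sb.lambda_handler()  # entry point when deployed as an AWS Lambda function
```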
Non-software Causes 1. Lack of thorough security reviews by Amazon and Google for third-party apps [91270, 90963] 2. Vulnerability in the design of smart speakers allowing apps to continue listening after being commanded to stop [90963]
Impacts 1. The software failure incident allowed malicious apps to eavesdrop on people's conversations through Amazon's Echo and Google's Nest devices, compromising user privacy and security [91270, 90963]. 2. The eavesdropping apps were able to continue listening even after users attempted to turn them off, leading to unauthorized data collection and potential password theft [91270, 90963]. 3. The incident highlighted vulnerabilities in voice-controlled digital-assistant software, raising concerns about the privacy implications of using devices like Amazon's Alexa and Google's Assistant [91270, 90963]. 4. The security researchers demonstrated that the malicious apps could trick users into giving up their passwords by posing as security updates, further exposing users to potential risks [91270, 90963]. 5. Both Amazon and Google responded to the discovery by blocking the malicious apps and implementing additional security measures to prevent similar incidents in the future [91270, 90963].
Preventions 1. Implement stricter security reviews for third-party apps: Both Amazon and Google could have prevented the software failure incident by enhancing their security review processes for third-party apps to detect malicious behavior [91270, 90963]. 2. Improve detection mechanisms for suspicious app behavior: Amazon and Google could have implemented better mechanisms to detect unusual app behavior, such as apps continuing to listen after being deactivated or requesting sensitive information like passwords [91270, 90963]. 3. Enhance user awareness and education: Users could have been better informed about the potential risks associated with using third-party apps on smart speakers, encouraging them to be cautious when interacting with such apps and to report any suspicious behavior [90963].
Fixes 1. Implement stricter security reviews for third-party apps on smart speakers to prevent malicious apps from passing through the vetting process [91270, 90963]. 2. Enhance detection mechanisms to identify suspicious behavior in voice-controlled digital-assistant software and prevent eavesdropping and unauthorized data collection (a sketch of such a check follows this list) [91270, 90963]. 3. Update the software to immediately deactivate apps when users command them to stop, ensuring that they do not continue running in the background [90963]. 4. Educate smart speaker owners to be cautious when apps request sensitive information like passwords, since legitimate apps are not supposed to do so [90963].
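As an illustration of fixes 1 and 2, the sketch below shows the kind of automated check a skill-certification pipeline could run over a voice app's response strings: it flags unpronounceable Unicode characters (which can fake silence while keeping the microphone open) and prompts that ask the user to speak a password. The function names, threshold, and phrase list are hypothetical and do not represent Amazon's or Google's actual review tooling.

```python
# Hypothetical review-time check for voice-app response strings; thresholds,
# names, and phrase lists are illustrative, not real certification tooling.
import re
import unicodedata

# Legitimate skills should never ask users to speak credentials aloud.
PASSWORD_PHRASES = re.compile(r"\b(password|passphrase|pin code)\b", re.IGNORECASE)


def is_unpronounceable(ch: str) -> bool:
    """Treat unassigned, surrogate, and private-use code points as silent padding."""
    return unicodedata.category(ch) in {"Cn", "Cs", "Co"}


def flag_suspicious_output(text: str, padding_threshold: int = 5) -> list:
    """Return human-readable reasons this response should be escalated to a reviewer."""
    reasons = []
    if sum(is_unpronounceable(ch) for ch in text) >= padding_threshold:
        reasons.append("contains unpronounceable characters (possible silent re-prompt padding)")
    if PASSWORD_PHRASES.search(text):
        reasons.append("asks the user to speak a password")
    return reasons


# Example: the phishing prompt described in the incident would be flagged.
print(flag_suspicious_output(
    "An important security update is available for your device. "
    "Please say start update followed by your password."
))
```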
References 1. Security Research Labs (SRL) [91270, 90963] 2. Amazon [91270, 90963] 3. Google [91270, 90963] 4. CNET [91270] 5. ZDNet [91270] 6. BBC News [90963]

Software Taxonomy of Faults

Category Option Rationale
Recurring one_organization, multiple_organization (a) A similar software failure incident had occurred at Amazon before: in April 2018, security researchers found a vulnerability in Amazon's Alexa code through which malicious apps could keep a skill listening indefinitely, allowing any third-party app to eavesdrop on people. That vulnerability was demonstrated in a calculator app [Article 91270]. (b) The software failure incident related to eavesdropping apps also occurred at Google: security researchers developed malicious voice apps that could eavesdrop on users through Google's Nest devices. Google responded to the discovery by removing the Actions created by the researchers and implementing additional mechanisms to prevent such issues in the future [Article 91270, Article 90963].
Phase (Design/Operation) design, operation (a) The software failure incident related to the design phase: The incident occurred because malicious apps designed to eavesdrop on users' conversations passed through Google's and Amazon's vetting processes. Security researchers developed voice apps that could listen in on people's conversations through Amazon's Echo and Google's Nest devices. These apps took advantage of silence by inserting a Unicode character sequence that kept the recording process active in the background even after the user believed the task was completed. This design flaw allowed the apps to continue listening and even trick users into giving up their passwords [91270, 90963]. (b) The software failure incident related to the operation phase: The failure in the operation phase was due to the smart speakers continuing to listen after users commanded the apps to stop. The modified apps were activated by specific voice commands and would continue running for several seconds after the user attempted to deactivate them. During this time, if the user said certain phrases, their speech was transcribed and sent back to the developers, compromising user privacy. The apps could also prompt users to provide their passwords, which is not standard behavior for legitimate apps, leading to potential security breaches [90963].
Boundary (Internal/External) within_system, outside_system (a) within_system: The software failure incident is primarily due to contributing factors originating from within the system. Security researchers developed malicious voice apps that could eavesdrop on users through Amazon's Echo and Google's Nest devices, and these apps passed through the companies' reviews for third-party apps, indicating a failure within the system's vetting processes [91270, 90963]. (b) outside_system: Contributing factors also originated from outside the system. The malicious apps were created by external developers (the SRL researchers) and were updated after approval to exploit the platforms' text-to-speech behavior so that they kept listening even after a user believed the app had stopped [91270, 90963].
Nature (Human/Non-human) non-human_actions, human_actions (a) The software failure incident occurring due to non-human actions: The incident involved malicious apps that were designed to eavesdrop on users and were able to bypass Google's and Amazon's vetting processes. These apps took advantage of a flaw whereby they could continue listening even after the user believed the app had stopped, by inserting a Unicode character sequence that simulated silence. This allowed the apps to record conversations and even trick users into giving up their passwords without further human intervention [91270, 90963]. (b) The software failure incident occurring due to human actions: The incident involved the development and deployment of malicious apps by security researchers at Security Research Labs. The researchers created apps that could eavesdrop and steal passwords after the apps had been approved by Amazon and Google. The apps were initially promoted as harmless tools for horoscopes and random numbers but were later updated to carry out the spying activities. The researchers' actions in creating and updating these apps led to the security breach [91270, 90963].
Dimension (Hardware/Software) software (a) The software failure incident occurring due to hardware: - The articles do not mention any hardware-related contributing factors that led to the software failure incident. Therefore, it is unknown if the incident was caused by hardware issues. (b) The software failure incident occurring due to software: - The software failure incident in the articles was primarily caused by malicious apps designed to eavesdrop on users through Amazon's Echo and Google's Nest devices. These apps passed through the companies' reviews for third-party apps, indicating a software-related issue in the vetting processes [91270, 90963]. - The eavesdropping apps exploited a vulnerability in the voice-controlled digital-assistant software by taking advantage of silence and using a specific character sequence to keep listening in the background even after the user believed the task was completed. This behavior was a software flaw in the voice assistant systems [91270]. - The malicious apps were able to trick users into giving up their passwords by simulating security update messages and requesting passwords, showcasing a software-related security vulnerability in the voice assistant systems [91270, 90963].
Objective (Malicious/Non-malicious) malicious (a) The software failure incident in the articles is malicious in nature. Security researchers developed malicious voice apps that could eavesdrop on people's conversations through Amazon's Echo and Google's Nest devices. These apps passed through the companies' reviews for third-party apps and were designed to listen in on users even after they thought the app had stopped. The apps were also capable of tricking users into giving up their passwords by pretending to offer a security update and asking for their password [91270, 90963]. (b) The software failure incident is non-malicious in the sense that the users were not intentionally trying to harm the system. They interacted with the voice apps as they normally would, requesting horoscopes or other information, without knowing that the apps were designed to continue listening and potentially steal their passwords. The users were not aware of the malicious intent behind the apps and were simply using them for their intended purposes [91270, 90963].
Intent (Poor/Accidental Decisions) poor_decisions (a) The intent of the software failure incident was due to poor_decisions. Security researchers developed malicious voice apps that could eavesdrop on people's conversations through Amazon's Echo and Google's Nest devices. These apps passed through the companies' reviews for third-party apps, indicating a failure in the vetting processes of Amazon and Google [91270, 90963]. The apps were designed to listen in on conversations by exploiting a vulnerability that allowed them to continue recording even after the user believed the task was completed. Additionally, the apps were able to trick users into giving up their passwords by posing as security updates and requesting passwords through the voice assistants [91270].
Capability (Incompetence/Accidental) development_incompetence (a) The software failure incident occurring due to development incompetence: - Security researchers found that malicious apps designed to eavesdrop could sneak through Google's and Amazon's vetting processes, indicating a failure in the review and certification process [91270]. - The eavesdropping apps created by the researchers worked by taking advantage of silence, exploiting a loophole in how Alexa and Google Assistant skills usually function, showcasing a lack of robust testing for such scenarios [91270]. - The researchers disclosed newly found vulnerabilities to Amazon and Google earlier in the year, indicating that these vulnerabilities were not identified during the initial development and testing phases [91270]. (b) The software failure incident occurring due to accidental factors: - The malicious apps developed by Security Research Labs were initially promoted as harmless apps for delivering horoscopes and generating random numbers, but were later updated to eavesdrop and steal passwords, suggesting a deliberate modification post-approval rather than an accidental flaw [90963]. - The apps continued to run in the background even after users attempted to turn them off, indicating a deliberate design to continue eavesdropping rather than an accidental behavior [90963]. - The attack involved the app asking users for their passwords under the guise of a security update, showing a deliberate attempt to deceive users rather than an accidental glitch [90963].
Duration permanent, temporary (a) The software failure incident in the articles can be categorized as a temporary failure. The incident involved malicious apps that were able to eavesdrop on users even after the users believed they had stopped the app. For example, when users tried to turn off the app, they heard a "Goodbye" message, but the software continued running for several more seconds, transcribing and sending back the user's speech to the developers [90963]. (b) The software failure incident can also be considered as a permanent failure to some extent. This is because the vulnerability in the apps allowed for indefinite eavesdropping and potential password theft, indicating a fundamental flaw in the design and implementation of the voice-controlled digital assistant software used in Amazon's Echo and Google's Nest devices [91270].
Behaviour omission, value, other (a) crash: The software failure incident described in the articles does not involve a crash where the system loses state and fails to perform any of its intended functions. (b) omission: The incident involves omission, where the system omits to perform its intended functions at an instance(s). The malicious apps continued to listen in on users even after the users attempted to turn them off: the apps gave a "Goodbye" message but continued running for several more seconds, transcribing and sending user speech back to the developers [90963]. (c) timing: The incident does not involve a timing failure where the system performs its intended functions correctly but too late or too early. (d) value: The incident involves a value failure, where the system performs its intended functions incorrectly. The malicious apps were designed to eavesdrop on users and even trick them into giving up their passwords by pretending to offer a security update and asking for the password [91270, 90963]. (e) byzantine: The incident does not involve a byzantine failure with inconsistent responses and interactions. (f) other: The other behavior observed in this incident is that the malicious apps took advantage of silence to continue listening in the background even after the user thought the app had finished its task. This was achieved by inserting a specific Unicode character sequence that caused the voice assistants to keep listening [91270].

IoT System Layer

Layer Option Rationale
Perception None None
Communication None None
Application None None

Other Details

Category Option Rationale
Consequence property, theoretical_consequence (d) property: Users' data and credentials were put at risk: the malicious apps could capture private conversations and trick users into speaking their passwords under the guise of a security update [91270, 90963]. (h) theoretical_consequence: The apps were proof-of-concept demonstrations by security researchers and were blocked by Amazon and Google after disclosure, so the eavesdropping and password theft were demonstrated potential consequences rather than reported real-world harm [91270, 90963].
Domain information, finance, other (a) The software failure incident reported in the articles is related to the industry of information. The incident involved malicious apps designed to eavesdrop on people's conversations through Amazon's Echo and Google's Nest devices, compromising user privacy and security [91270, 90963]. (h) The incident also has implications for the finance industry as the malicious apps were able to trick users into giving up their passwords by posing as security updates and requesting passwords through the voice assistants [91270, 90963]. (m) The incident could also be related to the "other" industry as it involves a breach of privacy and security in smart speaker devices, which can have implications beyond the industries listed in the options [91270, 90963].
