Recurring |
one_organization, multiple_organization |
(a) The software failure incident related to smart speakers being used by hackers to decipher passwords or PINs has happened before within the same organization: researchers from the University of Cambridge had previously found that a gaming app could steal a banking PIN by using the phone's microphone to pick up screen vibrations as a user entered the code [108779].
(b) The software failure incident has also happened at other organizations or with their products and services: the attack applies to commercially available smart speakers such as Amazon's Echo and Google's Home, and the same researchers found that their algorithm could correctly guess login PINs from recordings made of participants as they typed, indicating a broader vulnerability in smartphone security [108779]. |
Phase (Design/Operation) |
design |
(a) The software failure incident related to the design phase can be seen in the article, where researchers from the University of Cambridge built their own version of a smart speaker to closely resemble commercially available ones. They conducted an experiment to analyze sound recordings from the gadget and investigate whether the sound and vibrations caused by typing on a smartphone screen could be used to guess a passcode (a minimal sketch of this kind of analysis appears below). This design flaw allowed the computer to guess the code with high accuracy when the phone was placed within a certain distance of the custom-built device, highlighting a vulnerability introduced during the design phase [108779].
(b) The software failure incident related to the operation phase is evident in the article, which notes that smart speakers like the Echo and Home are always listening for a wake word that triggers them and activates their functionality. The audio is recorded and interpreted by artificial intelligence. Access to these recordings is heavily restricted, with only the Alexa account holder able to review them in the app. Criminals seeking to hijack a smart speaker to steal device passwords would therefore have to tamper with the device physically or hack into the server to access the recordings, indicating a failure introduced during the operation phase [108779]. |
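The following is a minimal, illustrative sketch of the kind of acoustic analysis described in (a): detecting tap-like transients (screen touches) in a microphone recording by short-time energy thresholding. The file name, frame length, and threshold factor are assumptions for illustration, not values from the study, and a real attack would additionally need a trained model to map each tap to a likely digit.

```python
# Hypothetical sketch: locate candidate screen taps in a recording by
# short-time energy thresholding. Parameters are illustrative assumptions.
import numpy as np
from scipy.io import wavfile

def detect_tap_onsets(path, frame_ms=10, threshold_factor=4.0):
    rate, samples = wavfile.read(path)            # mono 16-bit WAV assumed
    samples = samples.astype(np.float64)
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)

    energy = (frames ** 2).mean(axis=1)           # short-time energy per frame
    threshold = threshold_factor * np.median(energy)

    onsets = []
    above = energy > threshold
    for i in range(1, n_frames):
        if above[i] and not above[i - 1]:         # rising edge = candidate tap
            onsets.append(i * frame_ms / 1000.0)  # onset time in seconds
    return onsets

if __name__ == "__main__":
    taps = detect_tap_onsets("speaker_recording.wav")  # hypothetical file
    print(f"{len(taps)} candidate taps at {taps} s")
```

In such an analysis, the timing and spectral content of each detected tap would feed a classifier that ranks likely digits; the code above only recovers when taps occur, which is a plausible first step for the approach the researchers describe.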
Boundary (Internal/External) |
within_system |
(a) within_system: The software failure incident described in the article is related to a vulnerability within the system itself. Researchers from the University of Cambridge conducted an experiment where they built a custom smart speaker to mimic commercially available ones. They were able to analyze sound recordings from the speaker to decipher passcodes and text messages from smartphones placed nearby. This vulnerability was exploited by using the sound and vibrations caused by typing on a smartphone screen to guess passcodes with a high level of accuracy [108779].
(b) outside_system: The software failure incident does not involve contributing factors originating from outside the system. The vulnerability exploited in this case was related to the design and functionality of the smart speaker and its interaction with nearby smartphones, rather than external factors beyond the control of the system itself [108779]. |
Nature (Human/Non-human) |
human_actions |
(a) The software failure incident occurring due to non-human actions:
The software failure incident in the article was not directly caused by non-human actions. The research conducted by the University of Cambridge focused on demonstrating how smart speakers like Amazon Echo and Google Home could potentially be used by hackers to listen to and decipher passwords or PINs being typed on nearby phones. This was achieved by analyzing sound recordings and vibrations to guess passcodes, showcasing a potential security vulnerability in smart speaker technology [108779].
(b) The software failure incident occurring due to human actions:
The software failure incident in the article can be attributed to human actions. The researchers from the University of Cambridge conducted experiments to demonstrate the vulnerability of smart speakers to potential attacks aimed at stealing passwords or PINs. The study involved human actions such as inputting passcodes on smartphones, which were then captured by the custom-built smart speaker for analysis. Additionally, the researchers highlighted the importance of users taking precautions such as limiting microphone access to trusted apps to enhance smartphone security, emphasizing the role of human actions in mitigating potential security risks [108779]. |
Dimension (Hardware/Software) |
hardware, software |
(a) The software failure incident related to hardware can be seen in the article where researchers from the University of Cambridge built their own version of a smart speaker to mimic commercially available ones. They used this custom-built device to analyze sound recordings and vibrations caused by typing on a smartphone screen to guess passcodes. The accuracy of the code-breaking was affected by the distance between the phone and the microphone, indicating a hardware-related issue [108779].
(b) The software failure incident related to software can be inferred from the fact that the researchers were able to extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This suggests a vulnerability in the software of the smart speakers that allowed for the extraction of sensitive information from audio recordings [108779]. |
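To put the distance dependence noted above in perspective, a back-of-envelope calculation (not from the article) shows how much quieter the typing sounds would be at the two distances reported: under an idealised free-field point-source model, sound pressure falls off as 1/r, so moving the phone from 20cm to 50cm costs roughly 8 dB of signal at the microphone.

```python
# Illustrative only: free-field attenuation between the two reported distances,
# assuming an idealised point source (20 * log10(r2 / r1) dB).
import math

near_cm, far_cm = 20, 50
attenuation_db = 20 * math.log10(far_cm / near_cm)
print(f"~{attenuation_db:.1f} dB less signal at {far_cm} cm than at {near_cm} cm")
```

This simple model is consistent with the reported drop in guessing accuracy (76% at 20cm versus 20% at 50cm) as the phone moves away from the microphone, although real-room acoustics and background noise would complicate the picture.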
Objective (Malicious/Non-malicious) |
malicious |
(a) The objective of the software failure incident was malicious, as researchers from the University of Cambridge demonstrated how smart speakers like Google Home and Amazon Alexa could be used by hackers to listen to and decipher passwords or PINs being typed on nearby phones. The researchers built a custom smart speaker to mimic commercial devices and were able to guess a five-digit passcode with 76% accuracy in three attempts when the phone was placed within 20cm of the speaker (a worked comparison with blind guessing appears below). The attack was aimed at stealing device passwords and extracting sensitive information from recordings collected by voice assistants [108779].
(b) The software failure incident was non-malicious in the sense that the researchers conducted these experiments to highlight potential security vulnerabilities and raise awareness about the risks associated with sound and vibration analysis for deciphering passcodes. The study was more of an exercise in acoustic application than cybersecurity, and the researchers emphasized that while the attack was demonstrated, it was unlikely to be used currently due to restrictions on accessing recordings from smart speakers [108779]. |
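The significance of the 76% figure cited in (a) can be illustrated with a short worked comparison (not from the article): blind guessing of a five-digit passcode succeeds in three attempts with probability 3/100,000, so the reported acoustic attack is orders of magnitude more effective than chance.

```python
# Illustrative comparison: reported three-attempt success rate vs. blind
# guessing of a five-digit passcode. The 76% figure is as reported in the study.
keyspace = 10 ** 5                    # 100,000 possible five-digit codes
attempts = 3

p_blind = attempts / keyspace         # success probability by pure guessing
p_attack = 0.76                       # reported success rate at 20 cm

print(f"blind guessing:  {p_blind:.3%} success in {attempts} attempts")
print(f"acoustic attack: {p_attack:.0%} success in {attempts} attempts")
print(f"advantage over chance: ~{p_attack / p_blind:,.0f}x")
```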
Intent (Poor/Accidental Decisions) |
accidental_decisions |
(a) The software failure incident was attributable to accidental rather than poor decisions. The incident involved researchers from the University of Cambridge conducting experiments to demonstrate how smart speakers like Google Home and Amazon Alexa could potentially be used by hackers to decipher passwords or PINs being typed on nearby phones. The researchers built their own version of a smart speaker to mimic commercially available ones and analyzed sound recordings to investigate whether vibrations and sounds from typing on a smartphone screen could be used to guess passcodes. The study was more of an exercise in acoustic application than cybersecurity, and the researchers highlighted that such attacks are unlikely to be used currently, but the world changes quickly and sensors improve [108779]. |
Capability (Incompetence/Accidental) |
accidental |
(a) The software failure incident related to development incompetence is not explicitly mentioned in the provided article, so it is unknown whether the incident involved contributing factors introduced through a lack of professional competence on the part of humans or the development organization.
(b) The software failure incident related to accidental factors is highlighted in the article. The researchers from the University of Cambridge found that smart speakers like Google Home and Amazon Alexa could potentially be used by hackers to listen to and decipher passwords or PINs typed on a nearby phone, an unintended side effect of the devices' always-listening microphones rather than a deliberately introduced flaw. The sound and vibrations caused by typing on a smartphone screen could be used to guess passcodes with a level of accuracy that depended on the proximity of the phone to the custom-built device [108779]. |
Duration |
temporary |
The software failure incident described in the article is temporary rather than permanent. The incident involved a specific scenario in which researchers demonstrated how a custom-built smart speaker could potentially decipher a passcode being typed on a nearby phone by analyzing sound and vibrations [108779]. This was a result of the specific circumstances created by the experiment and the capabilities of the custom-built device, rather than a permanent flaw in the smart speaker technology itself. |
Behaviour |
omission, value, other |
(a) crash: The articles do not mention any software failure incident related to a crash.
(b) omission: The software failure incident mentioned in the articles is related to omission, in the sense that the system omits to keep user input confidential: the researchers demonstrated that an attacker could extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away [108779].
(c) timing: The articles do not mention any software failure incident related to timing.
(d) value: The software failure incident mentioned in the articles is related to value. The researchers were able to guess a five-digit passcode with 76% accuracy in three attempts when the phone was placed within 20cm of the custom-built device, but the accuracy plummeted to just 20% when the phone was positioned 50cm away [108779].
(e) byzantine: The articles do not mention any software failure incident related to a byzantine behavior.
(f) other: The software failure incident mentioned in the articles also exhibits a different behavior. The study conducted by the researchers was more useful as an exercise in acoustic application than in cybersecurity, indicating that the incident was more about the potential security implications of using smart speakers for eavesdropping than about a traditional software malfunction [108779]. |