Recurring |
one_organization, multiple_organization |
(a) The software failure incident related to facial recognition technology has recurred within the Metropolitan Police. Article [71212] reports that the Met used facial recognition at the 2017 Notting Hill carnival, where the system was wrong 98% of the time, falsely identifying innocent individuals as suspects. This indicates a recurring issue with the software within the same organization.
(b) The software failure incident related to facial recognition technology has also occurred at other organizations. Article [71279] reports that the software used by South Wales Police has returned more than 2,400 false positives since June 2017, showing that multiple organizations have faced similar issues when deploying facial recognition technology. |
Phase (Design/Operation) |
design, operation |
(a) The software failure incident related to the design phase can be seen in the articles. The facial recognition technology used by the UK's Metropolitan Police and South Wales Police has shown significant inaccuracies. The technology, which links computer databases of faces to CCTV and other cameras, produced incorrect matches in 98% of cases during the Met Police's trial at the Notting Hill carnival [71212]. Similarly, the software used by South Wales Police has returned more than 2,400 false positives since June 2017 [71279]. These inaccuracies point to failures in the design and development of the facial recognition software, highlighting issues with the system's algorithms, data processing, and matching mechanisms.
(b) The software failure incident related to the operation phase is evident in the articles as well. The facial recognition technology, despite being deployed by the police forces, has shown high error rates during actual operation. For example, the Met Police's system was wrong 98% of the time at the Notting Hill carnival, falsely identifying innocent individuals as suspects on 102 occasions [71212]. Similarly, South Wales Police's technology got it wrong 91% of the time, and officers following up on false identifications stopped innocent people [71212]. These operational failures indicate issues with the deployment, configuration, and utilization of the facial recognition software in real-world scenarios (a worked check of the quoted error rates follows below). |
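As a quick illustration of how the 98% figure quoted above is derived, the sketch below recomputes the false-match rate from the alert counts reported in the articles (102 false identifications and only 2 accurate matches out of 104 alerts [71212, 71279]); the function name and structure are illustrative, not taken from any source.

```python
# Back-of-envelope check of the Met Police error rate quoted in the articles:
# 104 alerts in total, of which only 2 were accurate matches [71212, 71279].

def false_match_rate(false_alerts: int, true_alerts: int) -> float:
    """Share of alerts that were incorrect identifications."""
    return false_alerts / (false_alerts + true_alerts)

met_rate = false_match_rate(false_alerts=102, true_alerts=2)
print(f"Met Police: {met_rate:.0%} of alerts were false")  # prints ~98%
```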
Boundary (Internal/External) |
within_system |
(a) within_system: The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police is primarily due to factors originating from within the system itself. The articles [71212, 71279] highlight how the facial recognition software returned incorrect results in a large share of cases: a 98% incorrect-match rate for the Met Police and more than 2,400 false positives for South Wales Police. This indicates a fundamental flaw or inaccuracy within the software algorithms or implementation, leading to misidentifications and false alerts.
(b) outside_system: The software failure incident does not seem to be primarily attributed to factors originating from outside the system. The articles do not mention external factors such as cyberattacks, external interference, or environmental conditions as the main cause of the facial recognition technology's failure. Instead, the focus is on the inherent limitations and inaccuracies within the software itself, suggesting that the failure is more related to internal system issues. |
Nature (Human/Non-human) |
non-human_actions |
(a) The software failure incident occurring due to non-human actions:
- The articles report that facial recognition technology used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases, with the software failing to accurately identify individuals. This failure is attributed to the technology itself, as it struggled to match faces with the database of known faces, resulting in false positives and inaccurate identifications [71212, 71279].
(b) The software failure incident occurring due to human actions:
- The articles do not specifically mention any human actions contributing to the software failure incident. The focus is primarily on the limitations and inaccuracies of the facial recognition technology itself, rather than any direct human errors or actions leading to the failures. |
Dimension (Hardware/Software) |
software |
(a) The software failure incident occurring due to hardware:
- The articles do not mention any hardware-related issues contributing to the software failure incident, so a hardware-related cause cannot be established from the reporting [71212, 71279].
(b) The software failure incident occurring due to software:
- The software failure incident in the articles is primarily attributed to issues with the facial recognition technology itself. The facial recognition software used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases, indicating a failure in the software's functionality [71212, 71279]. |
Objective (Malicious/Non-malicious) |
non-malicious |
(a) The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police can be categorized as non-malicious. The failure was not due to any malicious intent but rather due to technical limitations and inaccuracies in the software itself.
The articles [71212, 71279] highlight how the facial recognition software used by the police returned incorrect matches in a high percentage of cases (98% for the Met Police and 91% for South Wales Police). These inaccuracies led to false positives and misidentifications of innocent individuals, indicating a non-malicious failure of the technology rather than a deliberate attempt to harm the system.
Additionally, the articles mention concerns raised by privacy campaigners and experts about the lack of effectiveness and accuracy of the facial recognition technology, emphasizing the non-malicious nature of the software failure incident. |
Intent (Poor/Accidental Decisions) |
poor_decisions |
(a) The intent of the software failure incident related to poor_decisions:
- The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police was due to poor decisions made in deploying the technology despite its high rate of inaccuracy. The technology was found to be wrong 98% of the time during the Met's trial at the Notting Hill carnival [71212]. The Independent likewise reported that the facial recognition software used by the Met Police returned incorrect matches in 98% of cases, with only two out of 104 alerts being accurate matches [71279].
(b) The intent of the software failure incident related to accidental_decisions:
- The software failure incident was not primarily due to accidental decisions or unintended mistakes. Instead, it was a result of deliberate decisions to deploy facial recognition technology despite its significant inaccuracies and flaws. The Biometrics Commissioner, Paul Wiles, said the technology was not yet fit for use and highlighted the need for a legislative framework to govern its deployment, indicating a lack of proper decision-making processes [71279]. |
Capability (Incompetence/Accidental) |
development_incompetence, accidental |
(a) The software failure incident related to development incompetence is evident in the articles. The facial recognition technology used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases. The software was wrong 98% of the time during the 2017 Notting Hill carnival, falsely identifying innocent individuals as suspects [71212]. The Independent likewise reported that the facial recognition software used by the Met Police returned incorrect matches in 98% of cases, with only two out of 104 alerts being accurate matches [71279]. These high error rates indicate a failure in the development and implementation of the facial recognition technology, highlighting a lack of professional competence in ensuring its accuracy and effectiveness.
(b) The software failure incident related to accidental factors is also apparent in the articles. The high number of false positives generated by the facial recognition software used by South Wales Police, totaling more than 2,400 since June 2017, indicates accidental failures in the system's operation [71279]. Additionally, the technology's inability to accurately identify minority ethnic women, as highlighted in the Big Brother Watch report, points to accidental failures in the software's design and algorithm that lead to biased or inaccurate results [71212]. These accidental factors contribute to the overall failure of the facial recognition technology in correctly identifying individuals. |
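As a rough illustration of what the South Wales figures imply, the sketch below combines the reported count of more than 2,400 false positives with the reported 91% error rate to estimate the total number of alerts. The assumption that the 91% figure equals false positives divided by total alerts is mine, not something the articles state, and the derived totals are estimates only.

```python
# Rough estimate of the implied number of alerts for South Wales Police,
# assuming the reported 91% figure is false positives divided by total alerts.
# The 2,400 false-positive count comes from the reporting [71279]; the derived
# totals below are estimates, not figures reported anywhere.

false_positives = 2400      # "more than 2,400 false positives since June 2017"
error_rate = 0.91           # "gets it wrong 91% of the time"

implied_total_alerts = false_positives / error_rate
implied_true_matches = implied_total_alerts - false_positives

print(f"Implied total alerts: ~{implied_total_alerts:.0f}")    # ~2637
print(f"Implied correct matches: ~{implied_true_matches:.0f}")  # ~237
```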
Duration |
temporary |
The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police can be categorized as a temporary failure. The articles [71212, 71279] highlight that the facial recognition software returned incorrect matches in a high percentage of cases (98% for the Met Police and 91% for South Wales Police). This indicates that the failure arose from contributing factors introduced by certain circumstances, such as the technical limitations and inaccuracies of the software as deployed, rather than from factors present under all circumstances, so it is not treated as a permanent failure. |
Behaviour |
crash, value, other |
(a) crash: The software failure incident described in the articles can be categorized as a crash. The facial recognition technology used by the Metropolitan Police and South Wales Police failed to accurately identify individuals, with incorrect matches occurring in a high percentage of cases. For example, the Met's facial recognition system was wrong 98% of the time at the Notting Hill carnival, and South Wales Police's system got it wrong 91% of the time [71212, 71279].
(b) omission: There is no specific mention of the software failure incident being related to omission in the articles.
(c) timing: The software failure incident is not related to timing issues where the system performs its intended functions but at the wrong time.
(d) value: The software failure incident is related to the system performing its intended functions incorrectly, as the facial recognition technology incorrectly identified individuals, leading to false positives and inaccurate matches [71212, 71279].
(e) byzantine: The software failure incident does not exhibit characteristics of a byzantine failure where the system behaves erroneously with inconsistent responses and interactions.
(f) other: The behavior of the software failure incident can be characterized as a failure caused by inaccuracies in the system's recognition capabilities, leading to false identifications and incorrect matches, which can be considered a specific type of value failure [71212, 71279]. |