Incident: Facial Recognition Software Failure in UK Law Enforcement Operations

Published Date: 2018-05-13

Postmortem Analysis
Timeline 1. The software failure incident involving the facial recognition technology used by the UK's Metropolitan Police and South Wales Police occurred in 2017 and 2018, including the Met's trial at the 2017 Notting Hill carnival and South Wales Police deployments from June 2017 onwards [71212, 71279]. (Note: the timeline is estimated from the articles' publication dates and the deployments described in them.)
System 1. Facial recognition technology used by the UK's Metropolitan Police and South Wales Police [Article 71212, Article 71279]
Responsible Organization 1. The Metropolitan Police [Article 71212, Article 71279] 2. South Wales Police [Article 71212, Article 71279]
Impacted Organization 1. Innocent British citizens were impacted by the software failure incident as they were wrongly identified by the facial recognition technology [Article 71212, Article 71279].
Software Causes 1. Inaccurate facial recognition technology leading to incorrect matches in 98% of cases, as reported by The Independent [Article 71279]. 2. Facial recognition software used by police returning more than 2,400 false positives since June 2017, indicating a high rate of errors [Article 71279]. 3. The technology's inability to accurately identify minority ethnic women, as highlighted in the Big Brother Watch report [Article 71212].
Non-software Causes 1. The decision to deploy the technology in live policing operations despite its known inaccuracy, which the Biometrics Commissioner said made it not yet fit for use [Article 71212, Article 71279] 2. Inadequate governance and the lack of a legislative framework for the deployment of facial recognition technology [Article 71279]
Impacts 1. The facial recognition technology used by the Metropolitan Police and South Wales Police failed to accurately identify individuals, with incorrect matches in 98% of cases for the Met Police and 91% of cases for South Wales Police [Article 71212, Article 71279]. 2. The inaccurate identification led to innocent individuals being wrongly flagged as suspects, causing police to stop and question them, potentially infringing on their privacy and civil liberties [Article 71212]. 3. The software failure incident resulted in wasted public money, as millions were spent on a technology that was almost entirely inaccurate and posed a major risk to freedoms [Article 71212]. 4. The incident raised concerns about the authoritarian surveillance potential of real-time facial recognition technology, with fears that ordinary people could be tracked, located, and misidentified everywhere they go [Article 71212]. 5. The software failure incident highlighted the need for a legislative framework to govern the development and deployment of new biometric technologies like facial recognition, as existing legislation for DNA and fingerprints was deemed insufficient [Article 71279].
Preventions 1. Proper Testing and Validation: Conducting thorough testing and validation of the facial recognition software before deploying it in real-world scenarios could have helped identify and rectify the inaccuracies and false positives; an illustrative calculation of the reported error rates is sketched after the References below [Article 71279]. 2. Legislative Framework: Implementing a legislative framework specifically for biometric technologies like facial recognition could have ensured that their deployment met legal and ethical standards, preventing misuse and inaccuracies [Article 71279]. 3. Improved Training and Oversight: Training law enforcement officers in the use of facial recognition technology and establishing robust oversight mechanisms to monitor its deployment could have minimised errors and false identifications [Article 71212, Article 71279].
Fixes 1. Implementing stricter regulations and legislation governing the use of facial recognition technology to ensure proper oversight and accountability [Article 71279]. 2. Conducting thorough testing and validation of the facial recognition software to improve its accuracy and reduce false positives [Article 71279]. 3. Investing in research and development to improve the technology's accuracy, particularly for minority ethnic women, whom the Big Brother Watch report found it failed to identify accurately [Article 71212]. 4. Training law enforcement officers on the limitations and potential biases of facial recognition technology to prevent wrongful identifications and misuse [Article 71212].
References 1. Big Brother Watch report [Article 71212] 2. The Independent [Article 71279]
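
As an illustration of the testing and validation gap noted in the Preventions above, the short Python sketch below recomputes the kind of error rates reported in the articles from raw alert counts (for example, only 2 accurate matches out of 104 alerts for the Met Police [Article 71279]). It is a minimal, hypothetical example; the function and variable names are illustrative and do not correspond to any real police or vendor system.

# Illustrative only: recompute the quoted false-positive rates from raw alert
# counts. All names are hypothetical.

def false_positive_rate(total_alerts, true_matches):
    """Share of alerts that flagged the wrong person."""
    if total_alerts == 0:
        return 0.0
    return (total_alerts - true_matches) / total_alerts

# Met Police figures reported by The Independent: 104 alerts, 2 accurate matches.
met_rate = false_positive_rate(total_alerts=104, true_matches=2)
print(f"Met Police: {met_rate:.0%} of alerts were incorrect")  # ~98%

# South Wales Police reported more than 2,400 false positives since June 2017;
# the articles summarise its overall error rate as roughly 91%, but the total
# number of alerts would be needed to recompute it here.

A validation exercise of this kind, run on trial data before live deployment, would have surfaced the 98% and 91% error rates discussed above.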

Software Taxonomy of Faults

Category Option Rationale
Recurring one_organization, multiple_organization (a) The software failure incident related to facial recognition technology has happened again at the Metropolitan Police. The article [71212] mentions that the Met used facial recognition at the 2017 Notting Hill carnival, where the system was wrong 98% of the time, falsely identifying innocent individuals as suspects. This indicates a recurring issue with the software within the same organization. (b) The software failure incident related to facial recognition technology has also occurred at other organizations. Article [71279] reports that the software used by South Wales Police returned more than 2,400 false positives since June 2017, highlighting similar issues faced by multiple organizations in the implementation of facial recognition technology.
Phase (Design/Operation) design, operation (a) The software failure incident related to the design phase can be seen in the articles. The facial recognition technology used by the UK's Metropolitan Police and South Wales Police has shown significant inaccuracies. The technology, which links computer databases of faces to CCTV and other cameras, has resulted in incorrect matches in 98% of cases during the Met Police's trial at the Notting Hill carnival [71212]. Similarly, the software used by South Wales Police has returned more than 2,400 false positives since June 2017 [71279]. These inaccuracies point to failures in the design and development of the facial recognition software, highlighting issues with the system's algorithms, data processing, and matching mechanisms. (b) The software failure incident related to the operation phase is evident in the articles as well. The facial recognition technology, despite being deployed by the police forces, has shown high error rates during actual operation. For example, the Met Police's system was wrong 98% of the time at the Notting Hill carnival, falsely identifying innocent individuals as suspects on 102 occasions [71212]. Similarly, South Wales Police's technology gets it wrong 91% of the time and has led to instances where police followed up on false identifications, stopping innocent people [71212]. These operational failures indicate issues with the deployment, configuration, and utilization of the facial recognition software in real-world scenarios.
Boundary (Internal/External) within_system (a) within_system: The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police is primarily due to factors originating from within the system itself. The articles [71212, 71279] highlight how the facial recognition software returned incorrect matches in a high percentage of cases (98% for the Met Police and more than 2,400 false positives for South Wales Police). This indicates a fundamental flaw or inaccuracy within the software algorithms or implementation, leading to misidentifications and false alerts. (b) outside_system: The software failure incident does not seem to be primarily attributed to factors originating from outside the system. The articles do not mention external factors such as cyberattacks, external interference, or environmental conditions as the main cause of the facial recognition technology's failure. Instead, the focus is on the inherent limitations and inaccuracies within the software itself, suggesting that the failure is more related to internal system issues.
Nature (Human/Non-human) non-human_actions (a) The software failure incident occurring due to non-human actions: - The articles report that facial recognition technology used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases, with the software failing to accurately identify individuals. This failure is attributed to the technology itself, as it struggled to match faces with the database of known faces, resulting in false positives and inaccurate identifications [71212, 71279]. (b) The software failure incident occurring due to human actions: - The articles do not specifically mention any human actions contributing to the software failure incident. The focus is primarily on the limitations and inaccuracies of the facial recognition technology itself, rather than any direct human errors or actions leading to the failures.
Dimension (Hardware/Software) software (a) The software failure incident occurring due to hardware: - The articles do not mention any hardware-related issues contributing to the software failure incident. Therefore, it is unknown if the failure was due to hardware-related factors [71212, 71279]. (b) The software failure incident occurring due to software: - The software failure incident in the articles is primarily attributed to issues with the facial recognition technology itself. The facial recognition software used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases, indicating a failure in the software's functionality [71212, 71279].
Objective (Malicious/Non-malicious) non-malicious (a) The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police can be categorized as non-malicious. The failure was not due to any malicious intent but rather due to technical limitations and inaccuracies in the software itself. The articles [71212, 71279] highlight how the facial recognition software used by the police returned incorrect matches in a high percentage of cases (98% for the Met Police and 91% for South Wales Police). These inaccuracies led to false positives and misidentifications of innocent individuals, indicating a non-malicious failure of the technology rather than a deliberate attempt to harm the system. Additionally, the articles mention concerns raised by privacy campaigners and experts about the lack of effectiveness and accuracy of the facial recognition technology, emphasizing the non-malicious nature of the software failure incident.
Intent (Poor/Accidental Decisions) poor_decisions (a) The intent of the software failure incident related to poor_decisions: - The software failure incident related to facial recognition technology used by the UK's Metropolitan Police and South Wales Police was due to poor decisions made in deploying the technology despite its high rate of inaccuracy. The technology was found to be wrong 98% of the time during the Met's trial at the Notting Hill carnival [71212]. Similarly, The Independent reported that the facial-recognition software used by the Met Police returned incorrect matches in 98 percent of cases, with only two out of 104 alerts being accurate matches [71279]. (b) The intent of the software failure incident related to accidental_decisions: - The software failure incident was not primarily due to accidental decisions or unintended mistakes. Instead, it was a result of deliberate decisions to deploy facial recognition technology despite its significant inaccuracies and flaws. The Biometrics Commissioner Paul Wiles mentioned that the technology is not yet fit for use and highlighted the need for a legislative framework to govern its deployment, indicating a lack of proper decision-making processes [71279].
Capability (Incompetence/Accidental) development_incompetence, accidental (a) The software failure incident related to development incompetence is evident in the articles. The facial recognition technology used by the UK's Metropolitan Police and South Wales Police returned incorrect matches in a high percentage of cases. The software was wrong 98% of the time during the 2017 Notting Hill carnival, falsely identifying innocent individuals as suspects [71212]. Similarly, The Independent reported that the facial recognition software used by the Met Police returned incorrect matches in 98 percent of cases, with only two out of 104 alerts being accurate matches [71279]. These high error rates indicate a failure in the development and implementation of the facial recognition technology, highlighting a lack of professional competence in ensuring its accuracy and effectiveness. (b) The software failure incident related to accidental factors is also apparent in the articles. The high number of false positives generated by the facial recognition software used by South Wales Police, totaling more than 2,400 since June 2017, indicates accidental failures in the system's operation [71279]. Additionally, the technology's inability to accurately identify minority ethnic women, as highlighted in the Big Brother Watch report, points to accidental failures in the software's design and algorithm that lead to biased or inaccurate results [71212]. These accidental factors contribute to the overall failure of the facial recognition technology in correctly identifying individuals.
Duration temporary The software failure incident related to the facial recognition technology used by the UK's Metropolitan Police and South Wales Police can be categorised as a temporary failure. The articles [71212, 71279] report that the software returned incorrect matches in a high percentage of cases (98% for the Met Police and 91% for South Wales Police) during specific live deployments, such as the 2017 Notting Hill carnival and South Wales Police events. The misidentifications arose under the circumstances of those deployments, matching faces in crowds against watch lists, rather than being a permanent failure inherent to every use of the system.
Behaviour value (a) crash: There is no indication in the articles that the facial recognition systems stopped functioning or lost state; the software kept running and continued to generate alerts, so the incident is not a crash failure [71212, 71279]. (b) omission: There is no specific mention of the system omitting to perform its intended functions. (c) timing: The incident is not related to the system performing its intended functions too early or too late. (d) value: The incident is a value failure: the system performed its intended function of matching faces against watch lists but produced incorrect results, with wrong matches in 98% of cases for the Met Police at the Notting Hill carnival and 91% of cases for South Wales Police [71212, 71279]. (e) byzantine: The incident does not exhibit the inconsistent responses and interactions characteristic of a byzantine failure. (f) other: Not applicable; the inaccurate identifications and false positives are best characterised as a value failure (a short illustrative contrast between a crash and a value failure follows this table) [71212, 71279].
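
The Behaviour classification above distinguishes a value failure from a crash. The toy Python sketch below illustrates the difference under stated assumptions: the matcher, its scores, and its threshold are entirely hypothetical and are not drawn from the systems described in the articles.

# Illustrative only: a toy watch-list matcher. A crash failure would be this
# function raising an exception and stopping the service; the incidents above
# are value failures, where the call completes normally but returns the wrong
# identity.

def toy_match(scores_by_identity, threshold=0.6):
    """Return the highest-scoring watch-list identity if it clears the threshold."""
    best_id, best_score = max(scores_by_identity.items(), key=lambda kv: kv[1])
    return best_id if best_score >= threshold else None

# A lookalike member of the public scores just above the threshold and is
# wrongly reported as a watch-list match (a false positive).
alert = toy_match({"watchlist_person_A": 0.62, "watchlist_person_B": 0.31})
print(alert)  # "watchlist_person_A" -- a plausible wrong answer, not a crash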

IoT System Layer

Layer Option Rationale
Perception None None
Communication None None
Application None None

Other Details

Category Option Rationale
Consequence theoretical_consequence The articles do not report observed consequences such as death, physical harm, loss of access to basic needs, loss of property or data, delays, or harm to non-human entities resulting from the facial recognition failures. The consequences discussed are potential ones: threats to privacy and civil liberties, the risk of innocent individuals being misidentified, stopped, and questioned, and the possible misuse of real-time surveillance by law enforcement agencies [71212, 71279]. The most relevant option is therefore theoretical_consequence.
Domain information, government (a) The failed system was related to the information industry: the facial recognition technology links computer databases of faces to CCTV and other cameras so that police can spot people on watch lists and identify suspects at events such as music festivals and protests [71212, 71279]. (l) The failed system was also related to the government industry, as it was deployed by law enforcement agencies, the Metropolitan Police and South Wales Police, for public safety and crime-fighting purposes [71212, 71279].
