Incident: Facial Recognition Software Failure in Policing: Accuracy and Human Rights Concerns

Published Date: 2019-07-04

Postmortem Analysis
Timeline 1. The software failure incident involving the facial recognition software used by the police occurred between June 2018 and February 2019 [87963].
System 1. Neoface system used by the Metropolitan police and South Wales police [87963]
Responsible Organization 1. The Metropolitan police and South Wales police were responsible for the software failure incident, having deployed the Neoface facial recognition system that misidentified people and raised concerns about human rights violations [Article 87963].
Impacted Organization 1. Police (Metropolitan police, South Wales police) [Article 87963]
Software Causes 1. The software failure incident in this case was caused by the facial recognition software misidentifying people, leading to wrongful stops and potential miscarriages of justice [87963].
Non-software Causes 1. Lack of transparency about the use of facial recognition technology [87963] 2. Outdated watchlists that included individuals wanted by the courts as well as those considered "at risk or vulnerable" [87963] 3. Hasty actions by police to stop individuals before matches could be properly checked, leading to mistakes [87963] 4. Potential bias in facial recognition software, particularly in wrongly identifying dark-skinned women [87963]
Impacts 1. The facial recognition software used by the police produced matches that were correct in only a fifth of cases, leading to potential misidentifications and wrongful stops of individuals [Article 87963]. 2. The system was likely to break human rights laws, particularly in terms of privacy, freedom of expression, and the right to protest [Article 87963]. 3. The software's use led to concerns about potential bias, as it was more likely to wrongly identify dark-skinned women and more likely to correctly identify light-skinned men [Article 87963]. 4. The research highlighted issues with outdated watchlists, including individuals wanted by the courts as well as those considered "at risk or vulnerable," leading to mistakes in identifying and stopping people [Article 87963]. 5. The software failure incident raised questions about the need for laws and regulations to govern the use of facial recognition technology in policing and the private sector, as well as the lack of transparency surrounding its deployment [Article 87963].
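The "fifth of cases" figure in impact 1 corresponds to the trial numbers reported by the University of Essex researchers and cited in the taxonomy rationale below: 42 people were flagged by the system, of whom only 8 were actually being sought by police [87963]. A minimal Python check of that arithmetic:

```python
# Precision of the Neoface alerts during the 2018-19 trials, as reported by
# the University of Essex researchers [87963].
flagged = 42          # people flagged by the facial recognition system
actually_sought = 8   # of those, people genuinely wanted by the police

precision = actually_sought / flagged
print(f"Share of correct matches: {precision:.0%}")  # ~19%, roughly one in five
```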
Preventions 1. Implementing stricter regulations and laws governing the use of facial recognition technology in public spaces to ensure compliance with human rights laws and protect privacy rights [Article 87963]. 2. Conducting thorough testing and validation of the facial recognition software to improve accuracy and reduce misidentifications before deploying it in live trials [Article 87963]. 3. Keeping watchlists up to date and ensuring they only include individuals who are actively wanted by the courts, rather than including individuals considered "at risk or vulnerable" [Article 87963]. 4. Providing proper training to law enforcement officers on how to use the facial recognition technology effectively and ensuring they do not act hastily based on the software's matches without proper verification [Article 87963]. 5. Addressing potential biases in facial recognition software by conducting regular audits and assessments to ensure fair and accurate identification across different demographics [Article 87963].
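Prevention 3 (keeping watchlists limited to people actively wanted by the courts) could be partly automated. The sketch below is a minimal illustration, assuming a hypothetical watchlist record with a court-warrant flag and a last-reviewed date; the real Neoface watchlist schema is not described in the source article, and the 30-day review interval is an assumption.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative record layout; the actual watchlist fields are assumptions.
@dataclass
class WatchlistEntry:
    name: str
    wanted_by_court: bool   # True only if an active court warrant exists
    last_reviewed: date     # when the entry was last confirmed as current

MAX_REVIEW_AGE = timedelta(days=30)   # assumed review interval

def active_entries(watchlist: list[WatchlistEntry], today: date) -> list[WatchlistEntry]:
    """Keep only entries that are both court-wanted and recently reviewed.

    Entries flagged merely as "at risk or vulnerable", or whose review has
    lapsed, are dropped before the list is loaded into the matching system.
    """
    return [
        e for e in watchlist
        if e.wanted_by_court and (today - e.last_reviewed) <= MAX_REVIEW_AGE
    ]
```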
Fixes 1. Implement stricter regulations and laws governing the use of facial recognition technology in public spaces to protect privacy, freedom of expression, and the right to protest [87963]. 2. Conduct thorough testing and validation of the facial recognition software to improve accuracy and reduce misidentifications [87963]. 3. Update and maintain watchlists with accurate and up-to-date information to prevent wrongful stops and arrests [87963]. 4. Address potential biases in the software by ensuring it is equally accurate across different demographics, especially in identifying dark-skinned individuals [87963]. 5. Enhance transparency and oversight in the deployment of facial recognition technology by involving regulatory bodies and ensuring public awareness and consent [87963].
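Fix 4 (checking that accuracy does not differ across demographic groups) implies a recurring audit over logged match outcomes. The sketch below shows one possible form of such an audit, assuming a hypothetical alert record that carries a demographic label and the verified outcome of each match; it is not a description of the deployed system, and a real audit would also need statistically sound sample sizes.

```python
from collections import defaultdict

# Each logged alert is assumed to carry a demographic label assigned during
# evaluation and whether the alert turned out to be a correct match.
alerts = [
    {"group": "light-skinned men", "correct": True},
    {"group": "dark-skinned women", "correct": False},
    # ... further logged trial outcomes ...
]

def precision_by_group(alerts):
    """Return the share of correct alerts per demographic group."""
    totals, correct = defaultdict(int), defaultdict(int)
    for a in alerts:
        totals[a["group"]] += 1
        correct[a["group"]] += a["correct"]
    return {g: correct[g] / totals[g] for g in totals}

# Large gaps between groups would indicate the kind of bias the researchers
# warned about and should trigger a pause in deployment.
print(precision_by_group(alerts))
```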
References 1. Academics from the University of Essex [Article 87963] 2. David Davis MP, former shadow home secretary [Article 87963] 3. Deputy assistant commissioner Duncan Ball from Scotland Yard [Article 87963] 4. The Home Office [Article 87963] 5. Information commissioner, Elizabeth Denham [Article 87963] 6. Tony Porter, the surveillance camera commissioner [Article 87963] 7. Liberty, the civil rights campaign group [Article 87963]

Software Taxonomy of Faults

Category Option Rationale
Recurring one_organization, multiple_organization (a) The software failure incident related to the misuse of facial recognition technology has happened again within the same organization, specifically with the Metropolitan police in the UK. The incident involved the use of facial recognition software that misidentified individuals, leading to wrongful stops and potential violations of human rights laws [87963]. (b) Additionally, similar incidents involving the misuse of facial recognition technology have occurred at other organizations as well. The article mentions that similar facial scanning software is being used in shopping centers and by other police forces in Manchester, Leicester, and South Wales. These organizations are also using facial recognition technology to identify individuals, including "persons of interest" and wanted criminals, raising concerns about privacy and potential biases in the technology [87963].
Phase (Design/Operation) design, operation (a) The software failure incident related to the design phase is evident in the case of the facial recognition software used by the police. The article mentions that the system regularly misidentified people, leading to wrongful stops, and warned of "surveillance creep" where the technology was used to find individuals not wanted by the courts [87963]. Additionally, the research found that police were too hasty to stop people before matches could be properly checked, leading to mistakes, and that watchlists were sometimes out of date and included individuals considered "at risk or vulnerable" [87963]. (b) The software failure incident related to the operation phase is highlighted by the fact that during the trials of the facial recognition system, of the 42 people flagged up, only 8 were actually being sought by the police. Some individuals were stopped for crimes that had already been dealt with by the courts but were arrested for more minor offenses that would not typically be considered serious enough to be tackled using facial recognition [87963]. This indicates a failure in the operation of the system, leading to unnecessary stops and potential wrongful arrests.
Boundary (Internal/External) within_system (a) within_system: The software failure incident related to the use of facial recognition software by the police was primarily due to contributing factors that originated from within the system itself. The article mentions that the system regularly misidentified people, leading to wrongful stops, and that the watchlists used were sometimes out of date and included individuals wanted by the courts as well as those considered "at risk or vulnerable" [87963]. Additionally, the research highlighted that police were too hasty to stop people before matches could be properly checked, leading to mistakes, and that officers viewed the technology as a way of detecting and deterring crime, which could have been achieved without biometric technology [87963]. These issues point to failures within the system itself, such as inaccuracies in identification and outdated or flawed watchlists, contributing to the software failure incident.
Nature (Human/Non-human) non-human_actions (a) The software failure incident in the news articles is primarily related to non-human actions. The failure is attributed to the facial recognition software's inability to correctly identify individuals, leading to mistaken stops and potential violations of human rights laws. The system's misidentification of people, outdated watchlists, and the potential for bias in identifying individuals are key factors contributing to the failure [87963]. Additionally, the software's deployment in various public spaces and its use for surveillance purposes without proper oversight and regulations further exacerbate the risks associated with non-human actions leading to software failure.
Dimension (Hardware/Software) software (a) The software failure incident related to hardware: - The article does not mention any specific hardware-related failures contributing to the software failure incident. It primarily focuses on the issues with facial recognition software misidentifying individuals and the concerns related to human rights violations and potential biases in the technology [87963]. (b) The software failure incident related to software: - The software failure incident in this case is primarily attributed to the facial recognition software misidentifying individuals, leading to wrongful stops and potential miscarriages of justice. The software was found to regularly misidentify people, have out-of-date watchlists, and be used in ways that could potentially violate human rights laws. The article highlights concerns about the software's accuracy, potential biases, and the lack of transparency surrounding its use [87963].
Objective (Malicious/Non-malicious) non-malicious (a) The software failure incident related to the use of facial recognition software by the police can be categorized as non-malicious. The incident involved failures in the accuracy and effectiveness of the facial recognition system, leading to misidentifications, wrongful stops, and potential biases in identifying individuals. The failure was attributed to factors such as the technology being prone to false positives and false negatives, especially with noisy imagery, and the system's tendency to wrongly identify dark-skinned women more often than light-skinned men [87963]. The concerns raised by academics and civil rights groups focused on the potential harm caused by the technology's inaccuracies and intrusiveness, rather than any malicious intent behind the failures.
Intent (Poor/Accidental Decisions) poor_decisions (a) The intent of the software failure incident: The software failure incident related to the use of facial recognition software by police was primarily due to poor decisions made in implementing the technology. The article highlights that the system regularly misidentified people, leading to wrongful stops, and warned of "surveillance creep" where the technology was used to find individuals not wanted by the courts. Additionally, concerns were raised about potential bias in the software, particularly in wrongly identifying dark-skinned women. The research by academics from the University of Essex emphasized that the police were too hasty to stop people before matches could be properly checked, leading to mistakes, and that watchlists were sometimes out of date and included individuals considered "at risk or vulnerable" along with those wanted by the courts [87963]. These issues point to failures resulting from poor decisions made in the implementation and use of the facial recognition software.
Capability (Incompetence/Accidental) development_incompetence, accidental (a) The software failure incident related to development incompetence is evident in the case of the facial recognition software used by the police. The independent analysis conducted by academics from the University of Essex revealed that the system had a low accuracy rate, with matches being correct only in a fifth of cases. The researchers found that the system regularly misidentified people, leading to wrongful stops and potential miscarriages of justice [87963]. (b) The software failure incident related to accidental factors is demonstrated by the mistakes made by the police in stopping individuals based on faulty identifications by the facial recognition software. The hasty actions taken by officers before matches could be properly checked resulted in errors, with some individuals being stopped for crimes that had already been dealt with by the courts or for minor offenses not serious enough to warrant such actions. This indicates accidental failures in the implementation and use of the technology [87963].
Duration temporary The software failure incident related to the use of facial recognition software by the police can be categorized as a temporary failure. The incident involved issues such as misidentifying individuals, wrongly stopping people, outdated watchlists, potential bias in identification, and hasty actions leading to mistakes [87963]. These issues were specific to the circumstances surrounding the deployment and operation of the facial recognition software by the police. The failure was not inherent to the software itself but rather to the implementation and usage of the technology in real-world scenarios.
Behaviour omission, value, other (a) crash: The software failure incident related to the facial recognition system used by the police did not involve a crash where the system lost state and did not perform any of its intended functions. The system was operational and actively scanning faces against a watchlist, although with significant issues in accuracy and potential human rights violations [87963]. (b) omission: The failure of the facial recognition software incident can be attributed to omission as the system was misidentifying people, leading to wrongful stops and potential miscarriages of justice. The system was omitting to perform its intended function of accurately matching faces against the watchlist, resulting in mistakes and negative consequences [87963]. (c) timing: The software failure incident was not primarily related to timing issues where the system performed its intended functions correctly but too late or too early. Instead, the focus was on the accuracy and potential misuse of the facial recognition technology, raising concerns about privacy, human rights violations, and wrongful arrests [87963]. (d) value: The failure of the facial recognition software incident can be categorized under the value option, where the system was performing its intended functions incorrectly. The system was misidentifying individuals, leading to wrongful stops and arrests, as well as potential biases in the identification process [87963]. (e) byzantine: The software failure incident did not exhibit characteristics of a byzantine failure where the system behaves erroneously with inconsistent responses and interactions. The main issues were related to accuracy, privacy concerns, potential biases, and human rights violations associated with the facial recognition technology [87963]. (f) other: The software failure incident can be described as a failure due to ethical and legal concerns surrounding the use of the facial recognition system. The system's behavior raised questions about privacy, freedom of expression, potential biases, and the risk of wrongful arrests, highlighting broader societal implications beyond technical malfunctions [87963].

IoT System Layer

Layer Option Rationale
Perception None None
Communication None None
Application None None

Other Details

Category Option Rationale
Consequence theoretical_consequence, other (a) death: People lost their lives due to the software failure - There is no mention of any deaths resulting from the software failure incident reported in the articles [87963]. (b) harm: People were physically harmed due to the software failure - The articles do not mention any physical harm caused to individuals due to the software failure incident [87963]. (c) basic: People's access to food or shelter was impacted because of the software failure - The articles do not discuss any impact on people's access to food or shelter as a consequence of the software failure incident [87963]. (d) property: People's material goods, money, or data was impacted due to the software failure - The software failure incident did not directly impact people's material goods, money, or data as mentioned in the articles [87963]. (e) delay: People had to postpone an activity due to the software failure - The software failure incident did not lead to any activities being postponed as per the articles [87963]. (f) non-human: Non-human entities were impacted due to the software failure - The software failure incident primarily focused on the implications of facial recognition technology on human rights and privacy, with no specific mention of non-human entities being impacted [87963]. (g) no_consequence: There were no real observed consequences of the software failure - This option does not apply, as the articles clearly outline consequences of the incident relating to accuracy, privacy implications, potential wrongful arrests, and human rights violations associated with the use of facial recognition technology by the police [87963]. (h) theoretical_consequence: There were potential consequences discussed of the software failure that did not occur - The articles discuss potential consequences such as miscarriages of justice, wrongful arrests, privacy violations, and undermining of democracy due to the software failure incident [87963]. (i) other: Was there consequence(s) of the software failure not described in the (a to h) options? What is the other consequence(s)? - The articles highlight the potential consequences of the software failure incident, including the misidentification of individuals leading to wrongful stops, outdated watchlists, potential bias in the technology, and the lack of transparency in its use, all of which raise significant concerns about the impact on individuals' rights and freedoms [87963].
Domain government The failed system in the reported software failure incident was related to the government domain. The facial recognition software was being used by police forces, including the Metropolitan police in London and South Wales police, for identifying suspected criminals in public places [87963]. The system was intended to assist in law enforcement activities by scanning faces of passersby against a "watchlist" of individuals [87963]. The software was deployed in various locations such as Soho, Romford, Westfield shopping center in Stratford, Manchester, Leicester, and South Wales [87963]. Additionally, the article mentions that the use of facial recognition technology by the police raised concerns about potential human rights violations, including privacy infringement, freedom of expression, and the right to protest [87963]. There were also warnings about the system's potential for misidentifying individuals, leading to wrongful stops and arrests, which could result in miscarriages of justice and wrongful arrests [87963]. The research conducted by academics from the University of Essex highlighted issues with the accuracy of the system and its potential negative impact on democracy [87963].

Sources
