Incident: IBM Abandons Biased Facial Recognition Software for Police Use

Published Date: 2020-06-09

Postmortem Analysis
Timeline 1. The software failure incident occurred in June 2020, when IBM announced it was abandoning facial recognition technology for "mass surveillance or racial profiling" [101462].
System [IBM facial recognition software] [101462]
Responsible Organization 1. IBM [101462]
Impacted Organization 1. African-American individuals - The software failure incident affected African-American individuals, as facial recognition algorithms were found to be less accurate at identifying their faces [101462]. 2. IBM - IBM announced it would stop offering facial recognition software for "mass surveillance or racial profiling", a change in its business strategy prompted by the software failure incident [101462].
Software Causes 1. The software cause of the failure incident was the biased facial recognition algorithms used by tech companies such as Microsoft, Amazon, and IBM, as highlighted in a study by the Massachusetts Institute of Technology [101462]. 2. The facial recognition tools from these companies were found to be inaccurate at recognizing men and women with dark skin, indicating a flaw in the algorithms [101462]. 3. A study by the US National Institute of Standards and Technology also found that facial recognition algorithms were significantly less accurate at identifying African-American and Asian faces than Caucasian ones, pointing to a software fault [101462].
Non-software Causes 1. Ethical concerns and potential biases in facial recognition technology, leading to its misuse for mass surveillance and racial profiling [101462]. 2. Lack of transparency in the use of technology by law enforcement agencies, particularly in the context of facial recognition [101462]. 3. Development and promotion of smart policing platforms by tech companies like IBM, which relied on footage from CCTV cameras and data from sensors processed by police forces, contributing to the misuse of the technology [101462].
Impacts 1. IBM's decision to abandon facial recognition software for mass surveillance or racial profiling had a significant impact on the technology industry and law enforcement practices [101462]. 2. The move by IBM highlighted concerns about bias in facial recognition algorithms and the need for testing AI systems for bias, particularly in law enforcement applications [101462]. 3. The decision sparked discussions about the responsible use of technology, the fight against racism, and the need for a national dialogue on the use of facial recognition technology by domestic law enforcement agencies [101462]. 4. The incident raised awareness about the potential ethical risks associated with facial recognition technology, including the enhancement of existing bias and discrimination [101462].
Preventions 1. Implementing rigorous testing for bias in AI systems used in law enforcement, as suggested by IBM in their letter to Congress [101462]. 2. Engaging in a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies, as proposed by IBM [101462]. 3. Using technology that brings greater transparency, such as body cameras on police officers and data analytics, instead of relying solely on potentially biased facial recognition technology, as recommended by IBM [101462].
Fixes 1. Implementing rigorous testing for bias in AI systems used in law enforcement, as suggested by IBM in their letter to Congress (see the sketch below) [101462]. 2. Engaging in a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies, as proposed by IBM [101462]. 3. Shifting focus towards technology that brings greater transparency, such as body cameras on police officers and data analytics, rather than relying solely on potentially biased facial recognition technology [101462].
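The "rigorous testing for bias" that IBM recommends can be made concrete by comparing error rates across demographic groups, for example the false match rate (FMR) and false non-match rate (FNMR) that the US National Institute of Standards and Technology reports in its demographic evaluations. The Python sketch below is illustrative only: the similarity function, threshold, group labels, and toy data are hypothetical stand-ins, not IBM's or any vendor's actual matcher.

# Minimal sketch of a per-group bias check for a face matching system.
# The matcher, threshold, and data below are hypothetical placeholders;
# a real audit would use a vetted benchmark and a production matcher.

from collections import defaultdict

def matches(probe_embedding, gallery_embedding, threshold=0.6):
    # Hypothetical similarity check standing in for a real face matcher.
    similarity = sum(p * g for p, g in zip(probe_embedding, gallery_embedding))
    return similarity >= threshold

def per_group_error_rates(pairs):
    # pairs: iterable of (group, probe, gallery, same_person) tuples.
    stats = defaultdict(lambda: {"false_non_match": 0, "genuine": 0,
                                 "false_match": 0, "impostor": 0})
    for group, probe, gallery, same_person in pairs:
        predicted_match = matches(probe, gallery)
        s = stats[group]
        if same_person:
            s["genuine"] += 1
            if not predicted_match:
                s["false_non_match"] += 1
        else:
            s["impostor"] += 1
            if predicted_match:
                s["false_match"] += 1
    return {
        group: {
            "FNMR": s["false_non_match"] / s["genuine"] if s["genuine"] else None,
            "FMR": s["false_match"] / s["impostor"] if s["impostor"] else None,
        }
        for group, s in stats.items()
    }

if __name__ == "__main__":
    # Toy pairs with made-up embeddings, grouped by a demographic label.
    toy_pairs = [
        ("group_a", [0.9, 0.1], [0.8, 0.2], True),
        ("group_a", [0.9, 0.1], [0.1, 0.9], False),
        ("group_b", [0.5, 0.5], [0.4, 0.4], True),
        ("group_b", [0.5, 0.5], [0.1, 0.9], False),
    ]
    for group, rates in per_group_error_rates(toy_pairs).items():
        print(group, rates)

Run on a large, consented benchmark, a marked gap in FNMR or FMR between groups would be the kind of disparity the MIT and NIST studies cited above reported, and a signal that the system should not be deployed for law enforcement use.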
References 1. US government study 2. IBM chief executive Arvind Krishna 3. Privacy International's Eva Blum-Dumontet 4. Algorithmic Justice League 5. Massachusetts Institute of Technology study 6. US National Institute of Standards and Technology 7. Amazon 8. Maria Axente, AI ethics expert at consultancy firm PwC 9. Clearview AI 10. Facewatch

Software Taxonomy of Faults

Category Option Rationale
Recurring unknown The articles do not indicate whether a similar software failure incident has occurred again, either at IBM or at other organizations. Therefore, recurrence within the same organization or across multiple organizations is unknown.
Phase (Design/Operation) design, operation (a) The article discusses the failure related to the design phase of facial recognition software. IBM decided to stop offering facial recognition software for "mass surveillance or racial profiling" due to concerns about bias in AI systems used in law enforcement. IBM's chief executive mentioned the need for testing "for bias" in AI systems and emphasized the importance of responsible use of technology [101462]. (b) The article also touches upon the failure related to the operation phase of facial recognition technology. It mentions concerns about the misuse of facial recognition technology for mass surveillance, racial profiling, and violations of basic human rights and freedoms. IBM urged Congress to consider using technology that would bring greater transparency, such as body cameras on police officers and data analytics, instead of relying on potentially biased facial recognition technology [101462].
Boundary (Internal/External) within_system (a) within_system: The software failure incident related to IBM's facial recognition technology can be categorized as within_system. The bias and ethical risks that led IBM to stop offering facial recognition software for "mass surveillance or racial profiling" originated in the algorithms themselves, which were found to be less accurate on darker-skinned faces [101462]. Because the contributing factors lie within the software rather than in its external environment, the failure originated from within the system itself.
Nature (Human/Non-human) non-human_actions, human_actions (a) The software failure incident related to non-human actions can be seen in the case of IBM abandoning its facial recognition technology due to concerns about bias and misuse in mass surveillance or racial profiling. This decision was driven by the recognition that the technology itself could introduce biases and ethical risks, particularly in enhancing existing bias and discrimination [101462]. (b) On the other hand, the software failure incident related to human actions is evident in the development and deployment of facial recognition technology by companies like IBM, Microsoft, Amazon, and others. These companies have faced criticism for the inaccuracies and biases in their facial recognition algorithms, especially in identifying individuals with dark skin. The human actions involved in creating and using these technologies have led to concerns about racial biases and violations of human rights and freedoms [101462].
Dimension (Hardware/Software) software (a) The articles do not mention any software failure incident related to hardware issues [101462]. (b) The software failure incident mentioned in the articles is related to bias and inaccuracies in facial recognition algorithms developed by tech giants like Microsoft, Amazon, and IBM. These failures originate in the software itself, leading to issues in accurately identifying individuals, especially those with darker skin tones. The software failures in this case are due to contributing factors that originate in the software algorithms used for facial recognition [101462].
Objective (Malicious/Non-malicious) non-malicious (a) The objective of the software failure incident was non-malicious: the bias in the facial recognition algorithms was an unintended flaw rather than harm introduced deliberately. IBM's decision to stop offering facial recognition software for "mass surveillance or racial profiling" was a response to these concerns about bias and ethical risk, made amid calls for police reform following the killing of George Floyd [101462].
Intent (Poor/Accidental Decisions) poor_decisions [101462] The software failure incident stemmed from poor decisions to develop and deploy facial recognition technology for law enforcement use without adequately addressing its biases and ethical risks. IBM's chief executive stated in the letter to Congress that the firm firmly opposes and will not condone the use of any technology, including facial recognition technology, for mass surveillance, racial profiling, or violations of basic human rights and freedoms. The decision to stop offering facial recognition software for such purposes acknowledged those biases and ethical risks, signalling a shift towards more responsible use of technology and a call for police reform.
Capability (Incompetence/Accidental) development_incompetence, unknown (a) The software failure incident related to development incompetence is evident in the article about IBM abandoning its facial recognition technology. IBM's decision to stop offering facial recognition software for "mass surveillance or racial profiling" highlights a recognition of bias and potential ethical issues in their technology. The move was seen as a response to calls for police reform following the killing of George Floyd and a recognition of the urgent need to address racism. IBM's CEO emphasized the importance of testing AI systems for bias and expressed opposition to the use of technology for mass surveillance and racial profiling [101462]. (b) The software failure incident related to accidental factors is not explicitly mentioned in the provided article.
Duration permanent The software failure incident related to IBM abandoning facial recognition technology for mass surveillance or racial profiling can be considered a permanent failure. This decision was driven by ethical concerns and the acknowledgment of biases in the technology, leading IBM to permanently discontinue offering facial recognition software for such purposes [101462].
Behaviour omission, value, other (a) crash: The articles do not mention any specific software crash incident. (b) omission: IBM's decision to stop offering facial recognition software for "mass surveillance or racial profiling" can be seen as a form of omission, in that the software will no longer perform its intended functions in those specific areas [101462]. (c) timing: There is no indication of a timing-related failure in the articles. (d) value: The articles highlight the issue of bias in facial recognition technology, indicating a failure of the system to perform its intended functions correctly, particularly in accurately identifying individuals of different races [101462]. (e) byzantine: The articles do not mention any behavior related to a byzantine failure. (f) other: The behaviour of the software failure incident in this case could be categorized as a failure due to ethical concerns and potential biases in the technology rather than a technical malfunction [101462].

IoT System Layer

Layer Option Rationale
Perception None None
Communication None None
Application None None

Other Details

Category Option Rationale
Consequence unknown (a) death: There is no mention of people losing their lives due to the software failure incident in the provided article [101462].
Domain government The software failure incident reported in the articles relates to the government industry. IBM's decision to abandon facial recognition technology for mass surveillance or racial profiling specifically affects law enforcement agencies and government use of such technology. The article mentions IBM's involvement in developing technology for police forces and smart policing platforms, highlighting its connection to the government sector [101462]. The use of facial recognition technology by domestic law enforcement agencies, and the implications for racial profiling and violations of human rights, are central to this software failure incident in the government industry.

Sources
