| Recurring |
one_organization, multiple_organization |
(a) The software failure incident involving inappropriate translations using racial slurs has happened before at WeChat. In a prior incident, an error in WeChat's AI translation engine rendered a neutral Chinese phrase as the n-word; the translation was reported by Shanghai-based theatre producer and actor Ann James, a black American [63905].
(b) Incidents of this kind are not unique to WeChat. Google's products have faced similar issues: its translation product made sexist assumptions when translating gender-neutral Turkish sentences, and its photo service had previously labeled a photo of two black people as "gorillas" [63905]. |
| Phase (Design/Operation) |
design, operation |
(a) The software failure incident can be attributed in part to the design phase. WeChat's translation error, in which a neutral Chinese phrase was rendered as a racial slur, resulted from the AI translation engine being trained on data sources containing racial slurs and stereotypical descriptions of black people. This bias in the training data produced the inappropriate translations, pointing to a flaw in the design of the translation system [63905, 64493].
(b) The incident also involved operation-related factors: the erroneous translation was triggered during use whenever the phrase "hei laowai" appeared in sentences with negative contexts. That the offensive output depended on how the feature was used in practice ties the failure to the operation or usage of the translation feature as well [63905, 64493]; see the toy sketch below for an illustration of this context-dependent mechanism. |
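The context-dependence described in (b) can be made concrete with a small toy model. The sketch below is purely illustrative, not WeChat's actual engine: the parallel corpus, the sentiment labels, and the translate helper are all hypothetical, and the offensive term is replaced with the placeholder <slur>. It shows how a model that picks the most frequent target phrase for a (source, context) pair will reproduce whatever bias dominates its training data.

```python
from collections import Counter, defaultdict

# Hypothetical toy parallel corpus: (source phrase, sentiment of the
# surrounding sentence, target phrase observed in training). The
# pairings are invented to illustrate the mechanism; "<slur>" stands
# in for the offensive term.
corpus = [
    ("hei laowai", "neutral",  "black foreigner"),
    ("hei laowai", "neutral",  "black foreigner"),
    ("hei laowai", "negative", "<slur>"),
    ("hei laowai", "negative", "<slur>"),
    ("hei laowai", "negative", "black foreigner"),
]

# Count how often each target phrase co-occurs with (source, context).
table = defaultdict(Counter)
for source, context, target in corpus:
    table[(source, context)][target] += 1

def translate(source: str, context: str) -> str:
    """Return the most frequent target phrase for (source, context).

    A context-sensitive statistical or neural model behaves analogously:
    whatever pairing dominates the training data dominates the output.
    """
    counts = table[(source, context)]
    return counts.most_common(1)[0][0] if counts else source

print(translate("hei laowai", "neutral"))   # -> black foreigner
print(translate("hei laowai", "negative"))  # -> <slur>
```

A large neural translation engine is far more sophisticated, but the underlying failure mode is the same: if slur-laden pairings dominate the negative-context portion of the training corpus, the model learns to emit them there.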
| Boundary (Internal/External) |
within_system |
(a) within_system:
- The software failure incident involving the translation error in WeChat was attributed primarily to the machine translation system itself. The neural network-based translation engine incorporated biases and errors from the data sources on which it was trained, producing inappropriate translations such as rendering a neutral phrase as the n-word [63905, 64493].
- WeChat acknowledged the issue and said that its automated translation engine was still undergoing learning and optimization to improve translation quality [63905].
- The company uses AI and machine learning to train the translation system, but the lack of human oversight contributed to incorrect and offensive translations [64493].
(b) outside_system:
- The articles did not link the failure to factors originating outside the system; their focus was on internal issues within the machine translation engine that produced the inappropriate translations [63905, 64493].
- While the incident raised questions of cultural sensitivity and racial bias, particularly around Chinese perceptions of race, no external factor was identified as a cause of the software failure [64493]. |
| Nature (Human/Non-human) |
non-human_actions, human_actions |
(a) The software failure incident occurring due to non-human actions:
- The software failure incident with WeChat's translation engine was attributed to a neural network-based service that incorporated biases and errors from the data sources on which it was trained [63905].
- WeChat said that its automated translation engine was still undergoing learning and optimization, indicating that the issue stemmed from the system's training and optimization rather than from direct human actions [63905].
- The article noted that the translation software had been retooled and no longer produced racial slurs after the incident, suggesting the correction was made through adjustments to the system's training rather than through ongoing human moderation [64493].
(b) The software failure incident occurring due to human actions:
- The incident was first noticed by an American living in Shanghai, Ann James, who used WeChat's built-in translation feature and discovered the inappropriate translation [64493].
- WeChat's spokesperson mentioned that they immediately fixed the problem after receiving users' feedback, indicating that human intervention was required to address the issue [63905].
- The article noted that machine translation removes human oversight from the process, which can allow incorrect and offensive words to slip through, implying that the absence of human review in the system's design and implementation contributed to the incident [64493]; a sketch of the kind of output safeguard such oversight would provide follows below. |
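The articles do not describe how WeChat implemented its fix. As an illustration of the kind of human-curated safeguard whose absence is noted above, the sketch below applies a reviewer-maintained blocklist to model output before it is shown to a user; the blocklist contents and the safe_translate function are assumptions for illustration only.

```python
# Hypothetical post-processing safeguard: a human-curated blocklist
# checked before any machine translation is shown to a user. This
# illustrates the missing oversight; it is not WeChat's actual fix.
BLOCKLIST = {"<slur>", "<other-offensive-term>"}  # maintained by human reviewers

def safe_translate(raw_translation: str,
                   fallback: str = "[translation withheld for review]") -> str:
    """Suppress a translation that contains any blocklisted term."""
    lowered = raw_translation.lower()
    if any(term in lowered for term in BLOCKLIST):
        # Withhold the offensive output and surface a neutral fallback;
        # a production system might also queue it for human review.
        return fallback
    return raw_translation

print(safe_translate("The black foreigner is late"))  # shown unchanged
print(safe_translate("The <slur> is late"))           # suppressed
```

Such a filter does not remove the bias from the model itself; it reintroduces a point of human oversight between the model's raw output and the user.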
| Dimension (Hardware/Software) |
software |
(a) The software failure incident reported in the news articles is attributable to software-related factors. WeChat's translation error, in which a neutral Chinese phrase was rendered as a racial slur, stemmed from the artificial intelligence software used for translation: the machine learning system had been trained on data sources containing biases and errors, which led to the inappropriate translations [63905, 64493]. WeChat acknowledged the issue and said the translation engine was still undergoing learning and optimization to improve accuracy [63905].
(b) The incident is not linked to hardware-related factors. The problem lay in the translation algorithm and the data on which it was trained, not in any hardware malfunction or failure [63905, 64493]. |
| Objective (Malicious/Non-malicious) |
non-malicious |
(a) The software failure incident involving the translation error in WeChat's AI system was non-malicious. It was caused by biases and errors in the data sources on which the system was trained: the AI absorbed racial slurs and stereotypical descriptions of black people, producing inappropriate outputs such as rendering a neutral Chinese phrase as the n-word [63905, 64493]. The company promptly apologized, fixed the issue, and stated that the automated translation engine was still undergoing learning and optimization to improve translation quality [63905].
(b) There is no indication of intent to harm by anyone involved in the system's development or operation; the offensive output was an artifact of flawed training data rather than deliberate action. |
| Intent (Poor/Accidental Decisions) |
poor_decisions |
(a) The software failure incident involving WeChat's translation error can be attributed to poor decisions in the training of the machine translation system. The system was trained on a corpus of Chinese and English text that contained racial slurs and stereotypical descriptions of black people [63905, 64493]. This poor choice of training data led to a neutral Chinese phrase being translated as a racial slur, causing significant backlash and prompting immediate corrective action by WeChat. |
| Capability (Incompetence/Accidental) |
development_incompetence, accidental |
(a) The software failure incident related to development incompetence is evident in the articles. WeChat's translation error, which resulted in translating a neutral Chinese phrase into a racial slur, was attributed to the biases and errors in the data sources on which the neural network-based translation engine was trained [63905, 64493]. This indicates a lack of professional competence in ensuring that the training data used for the translation engine was free from racial slurs and biases.
(b) The software failure incident related to accidental factors is also apparent in the articles. WeChat apologized for the inappropriate translation and mentioned that the issue was due to an error in the artificial intelligence software that translates between Chinese and English [63905, 64493]. This suggests that the use of the racial slur was unintentional and not a deliberate action by the developers or the organization. |
| Duration |
temporary |
(a) The software failure incident in the articles can be categorized as temporary. WeChat's translation software rendered a Chinese phrase meaning "black foreigner" as the n-word due to an error in the artificial intelligence software [63905, 64493]. After receiving user feedback, WeChat immediately fixed the problem, indicating that the issue was promptly rectified rather than permanent [63905, 64493]. |
| Behaviour |
omission, value, other |
(a) crash: The software failure incident in the articles does not involve a crash where the system loses state and does not perform any of its intended functions [63905, 64493].
(b) omission: The software failure incident involves the system omitting to perform its intended function at specific instances: it failed to deliver an accurate, neutral translation of the phrase and substituted a racial slur, the n-word, instead [63905, 64493].
(c) timing: The software failure incident does not involve timing issues where the system performs its intended functions too late or too early [63905, 64493].
(d) value: The software failure incident involves the system performing its intended functions incorrectly by translating a neutral Chinese phrase into a racially offensive term [63905, 64493].
(e) byzantine: The software failure incident does not exhibit byzantine behavior where the system behaves erroneously with inconsistent responses and interactions [63905, 64493].
(f) other: The software failure incident involves the system making inappropriate translations due to biases and errors in the data sources on which it was trained, leading to offensive outputs [63905, 64493]. |