| Recurring |
one_organization |
(a) The software failure incident happened again at one_organization:
The article mentions that in May 2012, a software fault caused kidney damage results to be calculated incorrectly at GSTS's St Thomas' labs. This incident was highlighted as a "near miss" and appropriate action was taken to learn from it [14650].
(b) The software failure incident happened again at multiple_organization:
The article does not provide specific information about similar incidents happening at other organizations or with their products and services. |
| Phase (Design/Operation) |
design, operation |
(a) The software failure incident related to the design phase can be seen in the article where it mentions that in May 2012, kidney damage results were calculated incorrectly after a software fault, which GSTS highlighted as a "near miss" and took appropriate action to learn from it. This incident points to a failure introduced during the design or development phase of the software system [14650].
(b) The software failure incident related to the operation phase is evident in the article where it describes an incident in January 2012 where a patient received inappropriate blood due to patient history not being flagged. This incident occurred due to an operational issue or misuse of the system, indicating a failure introduced during the operation of the software system [14650]. |
| Boundary (Internal/External) |
within_system |
(a) within_system: The software failure incident at GSTS's St Thomas' labs in 2012 was caused by a computer system that led to various issues. For example, a patient received inappropriate blood due to patient history not being flagged, kidney damage results were calculated incorrectly after a software fault, and the lab's blood group analysers had to be shut for four days after being infected by a computer virus [14650]. These incidents highlight failures within the system's software that directly impacted patient care and laboratory operations. |
| Nature (Human/Non-human) |
non-human_actions, human_actions |
(a) The software failure incident occurring due to non-human actions:
- The Corporate Watch investigation revealed that a computer system caused problems, leading to incidents such as a patient receiving inappropriate blood and incorrect kidney damage results due to a software fault [14650].
- The lab's blood group analysers had to be shut for four days after being infected by a computer virus [14650].
(b) The software failure incident occurring due to human actions:
- The article mentions that GSTS management admitted the venture "did not get off to a great start" and "the corporate functions have not always provided a joined-up service" [14650].
- It is highlighted that senior managers underestimated the challenges of running the service and acknowledged clinicians' frustrations, in part due to lack of investment in new technologies [14650]. |
| Dimension (Hardware/Software) |
hardware, software |
(a) The software failure incident occurring due to hardware:
- In May 2012, the lab's blood group analysers had to be shut for four days after being infected by a computer virus, taking the analyser hardware out of operation [14650].
(b) The software failure incident occurring due to software:
- The incident in May 2012 where kidney damage results were calculated incorrectly was attributed to a software fault [14650].
- The incident in January 2012 where a patient received inappropriate blood was attributed to patient history not being flagged by the computer system [14650]. |
| Objective (Malicious/Non-malicious) |
non-malicious |
(a) The articles do not mention any software failure incident where the contributing factors were introduced by humans with the intent to harm this system in particular, although the computer virus that shut the lab's blood group analysers for four days was malicious software in origin [14650].
(b) The articles do mention non-malicious software failure incidents. For example, in May 2012, kidney damage results were calculated incorrectly after a software fault, which was highlighted as a "near miss" [14650]. Additionally, in January 2012, a patient received inappropriate blood due to patient history not being flagged, which was also attributed to a software issue [14650]. |
| Intent (Poor/Accidental Decisions) |
poor_decisions, accidental_decisions |
(a) The software failure incident at GSTS's St Thomas' labs in May 2012, where kidney damage results were calculated incorrectly after a software fault, can be attributed to poor decisions in the management of the pathology services: senior managers underestimated the challenges of running the service and under-invested in new technologies. The incident was highlighted as a "near miss," indicating that the software issue could have had serious consequences for patients had it not been caught in time [14650].
(b) The software failure incident in January 2012, where a patient received inappropriate blood because patient history was not flagged, can be linked to accidental decisions or mistakes in the implementation or use of the software system. The company took this incident very seriously, suggesting it was an unintended consequence of the software not functioning as intended [14650]. |
| Capability (Incompetence/Accidental) |
development_incompetence, accidental |
(a) The software failure incident related to development incompetence is evident in the article. The Corporate Watch investigation revealed that a computer system caused problems leading to various clinical incidents at GSTS's St Thomas' labs, including lost and mislabelled samples, breaches of agreed monthly turnaround times for tests, and critical risk levels being exceeded [14650]. In particular, in May 2012, kidney damage results were calculated incorrectly after a software fault, which GSTS highlighted as a "near miss" [14650].
(b) The software failure incident related to accidental factors is also present in the articles. For instance, in January 2012, a patient received inappropriate blood due to patient history not being flagged, an incident the company took very seriously [14650]. Furthermore, the lab's blood group analysers had to be shut down for four days after being infected by a computer virus, indicating an accidental software failure incident [14650]. |
| Duration |
temporary |
The software failure incidents described in the articles caused temporary disruptions rather than permanent failures. The January 2012 incident, where a patient received inappropriate blood due to patient history not being flagged, was treated as a serious incident by the company [14650], and the May 2012 incident, where kidney damage results were calculated incorrectly after a software fault, was categorised as a "near miss" from which appropriate lessons were drawn [14650]. In both cases the faults were addressed to prevent recurrence, indicating the failures were temporary in nature. |
| Behaviour |
omission, value |
(a) crash: There is no specific mention of the system crashing and losing its state in the articles.
(b) omission: The article discusses incidents where the system lost and mislabelled samples, and exceeded agreed monthly turnaround times for tests, with critical risk levels breached multiple times [14650].
(c) timing: There is no specific mention of a timing-related failure in the articles.
(d) value: The incident in January 2012 where a patient received inappropriate blood due to patient history not being flagged, and the May 2012 software fault that caused kidney damage results to be calculated incorrectly, both indicate the system performing its intended functions but producing incorrect outputs [14650].
(e) byzantine: The article does not provide information about the system behaving erroneously with inconsistent responses and interactions.
(f) other: The article does not describe a behavior that falls under the "other" category. |