Published Date: 2016-01-12
Postmortem Analysis | |
---|---|
Timeline | 1. The software failure incidents occurred between September 2014 and November 2015 [39434]. 2. The incidents took place in the 14 months leading up to November 2015 [40023]. |
System | 1. Overall stability of the autonomous driving system, including its communication and system components [39434]. 2. Autonomous driving technology installed in Google's self-driving cars, which suffered software and perception errors [40023]. 3. Sensors, communication links, and safety-critical subsystems such as steering and braking, which produced failures or strange readings [39570]. 4. Google's self-driving cars were involved in 11 minor traffic accidents, though the self-driving system was not at fault in any of them [36077]. |
Responsible Organization | 1. Google, whose self-driving car project developed and operated the autonomous vehicles [39434, 39570, 40023] 2. Delphi Automotive [36077] |
Impacted Organization | 1. Google's self-driving car project, whose vehicles required human intervention to prevent crashes after software failures [39434, 39570, 40023]. 2. The California Department of Motor Vehicles, which received reports of disengagements and potential crashes due to software failures in Google's autonomous vehicles [39570, 40023]. 3. Consumers and the public, who were affected by the continued need for human intervention in Google's self-driving cars [36077]. |
Software Causes | 1. Software or perception errors leading to failures in the autonomous technology installed in Google's self-driving cars [40023] 2. Technology failures such as communication breakdowns, strange sensor readings, or problems in safety-critical systems like steering or braking, resulting in immediate manual control disengagements [39570] 3. Failures related to the overall stability of the autonomous driving system, including communication and system failures [39434] |
Non-software Causes | 1. Hardware failures such as communication breakdowns and system failures [39434, 39570] 2. Perception errors leading to disengagements by human drivers [40023] |
Impacts | 1. Between September 2014 and November 2015, Google engineers had to take control of the autonomous vehicles 341 times; most of these instances related to the overall stability of the autonomous driving system, including communication and system failures [39434]. 2. Of the 341 takeovers, 272 stemmed from technology failures such as communication breakdowns, strange sensor readings, or problems in safety-critical systems, and the test drivers received warnings to take manual control in these cases [39570]. 3. In the remaining 69 instances, the human drivers took control on their own initiative, typically in response to perceived hazards or other road users; Google's simulator program was used to replay these incidents and determine whether the autonomous car would have made a bad decision [39434]. 4. The simulator analysis identified 13 incidents, classified as "simulated contacts", in which the self-driving cars would have crashed had human intervention not occurred [39570]. 5. The software failures also raised concerns about the potential for crashes caused by human error and inattention, as highlighted by the accidents involving Google's self-driving cars over the years [36077]. (A bookkeeping sketch of this disengagement classification follows this table.) |
Preventions | 1. Improving the communication and overall stability of the autonomous driving system [39434]. 2. Enhancing the decision-making algorithms so the autonomous car avoids bad decisions [39434]. 3. More rigorous testing and refinement of the software to address behavioural flaws such as misperceiving traffic lights, failing to yield to pedestrians and cyclists, or violating traffic laws [39570]. 4. Continuous monitoring and analysis of simulated driving scenarios to identify and address potential software failures before they occur on the road [39570]. 5. Validating software fixes in the simulator and then on the road before rolling them out to the entire fleet [40023]. |
Fixes | 1. Improving the autonomous driving system's overall stability to address communication and system failures [39434]. 2. Enhancing the software so the autonomous car makes safe decisions and avoids potential crashes [39434]. 3. Refining the software based on simulator data to prevent incidents in which the autonomous car might make a bad decision [39434]. 4. Addressing the software and perception errors that caused failures in the autonomous technology installed in the self-driving cars [40023]. 5. Testing software fixes against simulated driving scenarios and then on the road to ensure safety [40023]. |
References | 1. Google's self-driving car project team, including Chris Urmson [39434, 39570, 40023] 2. California Department of Motor Vehicles [39570, 40023] 3. Ron Medford, director of safety for Google's self-driving car project [39570] 4. Bryant Walker Smith, assistant professor at the University of South Carolina [39570] 5. John M Simpson, privacy project director of Consumer Watchdog [39570, 36077] 6. Kristen Kinley, spokeswoman for Delphi Automotive [36077] 7. National Highway Traffic Safety Administration [36077] 8. Raj Rajkumar, pioneer of self-driving car technology at Carnegie Mellon University [36077] |
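The Impacts and Fixes rows above describe a two-way split of the 341 takeovers: immediate manual control disengagements, where the car detects a technology failure and warns the driver, and driver-initiated takeovers, with a simulator replay flagging the 13 "simulated contacts". The following is a minimal bookkeeping sketch of that classification, assuming a hypothetical record layout; the field names (`cause`, `driver_initiated`, `simulated_contact`) are illustrative and not the DMV's or Google's actual schema.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical structure for a single disengagement record.
@dataclass
class Disengagement:
    vehicle_id: str
    cause: str                 # e.g. "communication_breakdown", "strange_sensor_reading"
    driver_initiated: bool     # True if the test driver took over unprompted
    simulated_contact: bool    # True if a simulator replay shows contact without the takeover

def summarise(records):
    """Tally records the way the reports describe: immediate manual control
    (technology failure, car warns the driver) versus driver-initiated
    takeovers, plus the subset flagged as simulated contacts."""
    counts = Counter()
    for r in records:
        counts["total"] += 1
        if r.driver_initiated:
            counts["driver_initiated"] += 1
        else:
            counts["immediate_manual_control"] += 1
        if r.simulated_contact:
            counts["simulated_contacts"] += 1
    return counts

if __name__ == "__main__":
    sample = [
        Disengagement("AV-01", "communication_breakdown", False, False),
        Disengagement("AV-02", "strange_sensor_reading", False, True),
        Disengagement("AV-03", "other_road_user", True, False),
    ]
    print(summarise(sample))
```

Applied to the full September 2014 to November 2015 dataset, the same tallying would reproduce the 341 / 272 / 69 / 13 split cited in the Impacts row.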
Category | Option | Rationale |
---|---|---|
Recurring | one_organization, multiple_organization | (a) Recurrence at one_organization: - Google's self-driving cars experienced 272 disengagements caused by technology failures between September 2014 and November 2015 [39570]. - Google's self-driving cars had been involved in 11 minor traffic accidents since the company began experimenting with the technology six years earlier, none of them caused by the self-driving car itself [36077]. (b) Recurrence at multiple_organization: - Other companies testing autonomous technology in California, including Nissan, VW, and Mercedes-Benz, reported higher rates of disengagements than Google [39434]. - Volkswagen/Audi, Mercedes-Benz, Delphi, Tesla, Bosch, and Nissan have all filed disengagement reports with the California DMV, indicating instances where human drivers had to take control of the vehicles during testing [39570]. |
Phase (Design/Operation) | design, operation | (a) Software failure incidents related to the design phase: 1. Google's self-driving cars recorded 341 disengagements between September 2014 and November 2015, of which 272 stemmed from technology failures such as communication breakdowns or system issues [39434, 39570]. 2. These failures were mostly due to software or perception errors and resulted in immediate manual control disengagements, in which the test driver had to take over after a warning from the car [40023]. (b) Software failure incidents related to the operation phase: 1. In 69 disengagements the human driver took control of the car on their own initiative, typically in response to hazardous situations or other road users [39570]. 2. Google's cars have been rear-ended seven times, often when stopped, or sideswiped, indicating failures encountered during operation [36077]. |
Boundary (Internal/External) | within_system | (a) within_system: The software failure incidents in Google's self-driving cars originated primarily within the system. The failures were attributed to technology failures such as communication breakdowns, strange sensor readings, or problems in safety-critical systems like steering or braking [39570]. The cars experienced disengagements in which the autonomous technology failed and a human had to take control of the vehicle, and these failures were recorded and later replayed in a simulator to analyse what went wrong [40023]. (b) outside_system: The failures were not primarily due to factors originating from outside the system; the need for human intervention in the face of potential crashes resulted from internal failures of the autonomous technology and its software rather than from external factors [39570, 40023]. |
Nature (Human/Non-human) | non-human_actions, human_actions | (a) The software failure incident occurring due to non-human actions: - Google's self-driving cars experienced failures and potential crashes due to technology failures such as communication breakdowns, strange sensor readings, or safety-critical system problems [39570]. - In 272 of the disengagements, the failures related to the overall stability of the autonomous driving system, indicating issues like communication and system failures [39434]. - Google's cars detected technology failures leading to immediate manual control disengagements, where the test driver had to take over due to software or perception errors [40023]. (b) The software failure incident occurring due to human actions: - Human drivers intervened in 69 disengagements by taking control of the car on their own initiative, indicating instances where the car might have made a bad decision [39434]. - Google reported that in 56 of the 69 driver disengagements the car would probably not have come into contact with another object, but identified potential causes of contact in other environments or situations due to improper behaviour such as not perceiving traffic lights correctly or not yielding properly [39570]. - Google's self-driving cars were rear-ended seven times because of human error and inattention, and other collisions involved human factors such as another car rolling through a stop sign [36077]. |
Dimension (Hardware/Software) | hardware, software | The articles describe failure incidents with both hardware-related and software-related contributing factors: (a) Hardware-related failure: - Article 39434 notes that of the 341 instances where Google engineers took control of the autonomous vehicles, 272 stemmed from the "overall stability of the autonomous driving system", which included hardware-related issues like communication and system failures [39434]. - Article 40023 reports that in 272 of the disengagements the car detected technology failures such as communications breakdowns, strange sensor readings, or problems in safety-critical systems like steering or braking, indicating hardware-related issues [40023]. (b) Software-related failure: - Article 39434 highlights that the remaining 69 takeovers related to the safe operation of the vehicle, i.e. instances where the autonomous car might have made a bad decision, suggesting software-related issues [39434]. - Article 40023 mentions that the failures mostly consisted of software or perception errors, and that in only 13 of the recorded incidents would Google's self-driving cars have crashed without human intervention, indicating software-related issues [40023]. |
Objective (Malicious/Non-malicious) | non-malicious | (a) malicious: There is no information in the provided articles indicating a software failure incident caused by malicious intent to harm the system. (b) non-malicious: The articles discuss instances where Google's self-driving cars experienced software failures that led to disengagements or potential crashes. These failures were mainly due to technology failures such as communication breakdowns, sensor errors, or safety-critical system problems, prompting human intervention to take control of the vehicle to prevent accidents ([39434], [39570], [40023]). |
Intent (Poor/Accidental Decisions) | poor_decisions, accidental_decisions | (a) The intent of the software failure incident related to poor decisions: - Google's self-driving cars experienced failures and potential crashes because the autonomous driving system made bad decisions, requiring human intervention to prevent accidents [39434, 39570]. - In some instances the autonomous cars would have made decisions that could have resulted in crashes if the human driver had not taken over control [39434, 39570]. - Google's simulator program replays incidents where the AI might have made a bad decision, allowing engineers to analyse what the car would have done without human intervention and highlighting potential poor decisions by the software (a minimal replay sketch follows this table) [39434]. (b) The intent of the software failure incident related to accidental decisions: - Some failures in Google's self-driving cars were due to software or perception errors, leading human drivers to take manual control to prevent crashes [40023]. - Drivers also took control on their own initiative, by grabbing the steering wheel or pressing the pedals, in response to behaviour from the software that prompted intervention [39570]. - The company uses the simulator to determine whether the human driver's intervention was actually necessary for safety, suggesting that some interventions responded to accidental decisions by the software [39570]. |
Capability (Incompetence/Accidental) | development_incompetence | (a) The articles provide information about software failure incidents related to development incompetence. For example, in Article 39570, it is mentioned that Google's self-driving cars experienced failures due to technology failures such as communication breakdowns, strange sensor readings, or problems in safety-critical systems [39570]. These failures were categorized as "immediate manual control" disengagements, where the test driver had to take over due to software or perception errors [40023]. Additionally, the article highlights that Google's cars would have hit an object on at least 13 occasions if human drivers had not intervened, indicating software-related issues [40023]. (b) The articles also mention software failure incidents that occurred accidentally. For instance, in Article 36077, it is reported that Google's self-driving cars were involved in minor traffic accidents, but the self-driving car was not the cause of the accidents [36077]. The accidents were described as minor fender-benders caused by human error and inattention [36077]. This indicates that the accidents were accidental and not directly caused by the software itself. |
Duration | permanent, temporary | (a) permanent: Article 39570 reports that Google's self-driving cars experienced 272 technology failures between September 2014 and November 2015, with 13 of those instances classified as "simulated contacts" that would have resulted in a crash without human intervention, suggesting contributing factors that persisted in the system throughout the reporting period [39570]. (b) temporary: Article 40023 reports 341 disengagements during testing in California over the same period; 272 were attributed to failures in the autonomous technology and led to human drivers taking control, while the remaining 69 were initiated by the drivers themselves in response to perceived hazards or concerns about the car's behaviour, suggesting failures tied to specific circumstances [40023]. |
Behaviour | crash | (a) crash: The articles mention instances where Google's self-driving cars would have crashed if human drivers had not intervened. For example, between September 2014 and November 2015, Google's autonomous vehicles experienced 272 failures, and at least 13 of those instances would have led to crashes without human intervention [39434, 39570, 40023]. (b) omission: The articles do not specifically mention instances of software failures due to omission where the system omitted to perform its intended functions at an instance(s). (c) timing: The articles do not specifically mention instances of software failures due to timing, where the system performed its intended functions correctly, but too late or too early. (d) value: The articles do not specifically mention instances of software failures due to the system performing its intended functions incorrectly. (e) byzantine: The articles do not specifically mention instances of software failures due to the system behaving erroneously with inconsistent responses and interactions. (f) other: The articles do not describe any other specific behavior of software failure incidents beyond the options provided. |
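The Intent and Behaviour rows above refer to Google replaying a logged disengagement in a simulator to judge whether, without the human takeover, the car would have contacted another object (a "simulated contact"). The sketch below illustrates that counterfactual check in a deliberately toy form; the one-dimensional trajectories, the 0.5 m margin, and the function name `would_have_contacted` are assumptions made for illustration, not Google's simulator.

```python
# Toy counterfactual replay: re-run the logged scenario *without* the human
# takeover and check whether the planned trajectory would have come within
# a safety margin of any logged obstacle position.

SAFETY_MARGIN_M = 0.5

def would_have_contacted(planned_positions, obstacle_positions):
    """Return True if, at any timestep, the uninterrupted planned trajectory
    comes within SAFETY_MARGIN_M of the obstacle (positions in metres)."""
    for car, obstacle in zip(planned_positions, obstacle_positions):
        if abs(car - obstacle) < SAFETY_MARGIN_M:
            return True
    return False

if __name__ == "__main__":
    planned  = [0.0, 2.0, 4.0, 6.0, 8.0]   # what the AV planned to do, per timestep
    obstacle = [10.0, 9.5, 8.5, 7.0, 8.2]  # a braking lead vehicle, per timestep
    print("simulated contact:", would_have_contacted(planned, obstacle))
```

In the sample, the planned trajectory closes to within the margin of the braking lead vehicle, so this replay would classify the disengagement as a simulated contact, i.e. a case where the car would not have taken the necessary action to avoid a crash.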
Layer | Option | Rationale |
---|---|---|
Perception | sensor, embedded_software | (a) sensor: The articles mention instances where the failure was related to the perception layer of the cyber physical system that failed due to sensor errors. For example, in Article 39570, it is stated that in 272 disengagements, the car detected technology failures such as strange sensor readings [39570]. Additionally, Article 40023 mentions that the failures mostly consisted of software or perception errors, where the car detected a technology failure such as a strange sensor reading [40023]. (b) actuator: There is no specific mention of failures related to the actuator layer in the provided articles. (c) processing_unit: The articles do not provide information on failures specifically related to the processing unit of the cyber physical system. (d) network_communication: The articles do not discuss failures related to network communication errors in the cyber physical system. (e) embedded_software: The articles do mention failures related to the embedded software layer of the cyber physical system. For instance, in Article 40023, it is mentioned that the failures mostly consisted of software errors [40023]. |
Communication | link_level, connectivity_level | (a) link_level: Article 39434 reports that of the 341 instances where Google engineers took control of the autonomous vehicle, 272 stemmed from the "overall stability of the autonomous driving system", which included communication and system failures, indicating contributing factors introduced at the physical layer of the communication system [39434]. (b) connectivity_level: Article 39570 states that in 272 of the disengagements the car detected a technology failure such as a communications breakdown, indicating contributing factors introduced at the network or transport layer [39570]. (A sketch of this detect-and-warn flow appears after this table.) |
Application | FALSE | [39434, 39570, 40023] The software failures reported in the articles related to Google's self-driving cars were not primarily related to the application layer of the cyber physical system. Instead, the failures were mainly attributed to technology failures, communication breakdowns, perception errors, and safety-critical system problems within the autonomous driving system. The incidents where human drivers had to take control were due to issues such as strange sensor readings, problems in steering or braking systems, and software errors, rather than bugs, operating system errors, unhandled exceptions, or incorrect usage typically associated with application layer failures. |
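The Perception and Communication rows above describe the "immediate manual control" pattern: the car detects a technology failure, such as a silent communication link or an implausible sensor reading, and warns the test driver to take over. The watchdog sketch below mirrors that detect-and-warn flow under stated assumptions; the timeout, range bounds, and function names are illustrative and do not correspond to any real autonomous-vehicle stack.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.2      # max silence tolerated on a critical link (assumed)
LIDAR_RANGE_M = (0.0, 120.0)   # plausible bounds for a range reading (assumed)

def link_ok(last_heartbeat_s, now_s):
    """Liveness check: has the critical link been heard from recently?"""
    return (now_s - last_heartbeat_s) <= HEARTBEAT_TIMEOUT_S

def sensor_ok(reading_m):
    """Plausibility check: does the range reading fall within sane bounds?"""
    lo, hi = LIDAR_RANGE_M
    return lo <= reading_m <= hi

def monitor(last_heartbeat_s, reading_m, now_s=None):
    """Return 'ok' or a takeover warning, mirroring the detect-and-warn flow."""
    now_s = time.monotonic() if now_s is None else now_s
    if not link_ok(last_heartbeat_s, now_s):
        return "WARNING: communications breakdown - take manual control"
    if not sensor_ok(reading_m):
        return "WARNING: implausible sensor reading - take manual control"
    return "ok"

if __name__ == "__main__":
    print(monitor(last_heartbeat_s=0.0, reading_m=35.0, now_s=0.5))    # stale link
    print(monitor(last_heartbeat_s=0.45, reading_m=300.0, now_s=0.5))  # bad reading
    print(monitor(last_heartbeat_s=0.45, reading_m=35.0, now_s=0.5))   # nominal
```

The point of this shape is that the warning comes from simple liveness and plausibility checks that sit apart from the driving logic, so a fault anywhere in the pipeline still produces a prompt handover to the human driver.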
Category | Option | Rationale |
---|---|---|
Consequence | theoretical_consequence | (a) death: People lost their lives due to the software failure - There is no mention of any deaths caused by the software failure incidents reported in the articles [39434, 39570, 40023, 36077]. (b) harm: People were physically harmed due to the software failure - There is no mention of people being physically harmed due to the software failure incidents reported in the articles [39434, 39570, 40023, 36077]. (c) basic: People's access to food or shelter was impacted because of the software failure - There is no mention of people's access to food or shelter being impacted due to the software failure incidents reported in the articles [39434, 39570, 40023, 36077]. (d) property: People's material goods, money, or data was impacted due to the software failure - The software failure incidents mentioned in the articles did not result in any direct impact on people's material goods, money, or data [39434, 39570, 40023, 36077]. (e) delay: People had to postpone an activity due to the software failure - The software failure incidents did not lead to any activities being postponed by people [39434, 39570, 40023, 36077]. (f) non-human: Non-human entities were impacted due to the software failure - The software failure incidents primarily involved self-driving cars and the technology itself, with no mention of non-human entities being directly impacted [39434, 39570, 40023, 36077]. (g) no_consequence: There were no real observed consequences of the software failure - The software failure incidents did have consequences, such as instances where the autonomous vehicles might have made bad decisions leading to potential crashes if human intervention had not occurred [39434, 39570, 40023, 36077]. (h) theoretical_consequence: There were potential consequences discussed of the software failure that did not occur - The articles discuss potential consequences of crashes that could have occurred if human drivers had not intervened in certain situations, but these did not actually result in accidents [39434, 39570, 40023, 36077]. (i) other: Was there consequence(s) of the software failure not described in the (a to h) options? What is the other consequence(s)? - The articles do not mention any other specific consequences of the software failure incidents beyond those related to potential crashes and human interventions [39434, 39570, 40023, 36077]. |
Domain | information, transportation, government | (a) The failed system was intended to support the information industry as it involved the production and distribution of information. The software failure incident was related to Google's self-driving cars, which are part of Google's autonomous vehicle project aimed at revolutionizing transportation with self-driving technology [39434, 39570, 40023]. (b) The transportation industry was impacted by the software failure incident as it involved Google's self-driving cars, which are designed to move people and things autonomously on public roads [39434, 39570, 40023]. (c) The failed system was not directly related to the natural resources industry as it did not involve extracting materials from the Earth. (d) The sales industry was not directly impacted by the software failure incident involving Google's self-driving cars. (e) The construction industry was not directly involved in the software failure incident related to Google's self-driving cars. (f) The manufacturing industry was not directly affected by the software failure incident involving Google's self-driving cars. (g) The utilities industry, which includes power, gas, steam, water, and sewage services, was not directly linked to the software failure incident related to Google's self-driving cars. (h) The finance industry, which involves manipulating and moving money for profit, was not directly impacted by the software failure incident involving Google's self-driving cars. (i) The failed system was not directly related to the knowledge industry, which encompasses education, research, and space exploration. (j) The health industry, which includes healthcare, health insurance, and food industries, was not directly involved in the software failure incident related to Google's self-driving cars. (k) The entertainment industry, which covers arts, sports, hospitality, and tourism, was not directly impacted by the software failure incident involving Google's self-driving cars. (l) The government industry, which includes politics, defense, justice, taxes, and public services, was indirectly involved as the software failure incident with Google's self-driving cars required reporting to the California Department of Motor Vehicles and compliance with regulations [39434, 39570, 40023]. (m) The failed system was not directly related to any other industry not covered in the options provided. |
Article ID: 39570
Article ID: 39434
Article ID: 40023
Article ID: 36077