Incident: Tesla's Full Self-Driving Software Fails to Detect Child Pedestrians

Published Date: 2022-08-09

Postmortem Analysis
Timeline
1. The incident in which Tesla's Full Self-Driving system failed to detect a child-sized mannequin occurred in August 2022 [131115].
2. Further testing conducted by The Dawn Project in October 2022 showed that the Full Self-Driving system did not register or stop for small mannequins, suggesting a danger to child pedestrians [135028].
System
1. Tesla's Full Self-Driving (FSD) Beta software [131115, 135028]
2. Tesla's Autopilot active driver-assistance system [135028]
Responsible Organization
1. The Dawn Project [131115, 135028]
2. Tesla [131115, 135028]
Impacted Organization
1. Child pedestrians were the group placed at risk by the software failure incident reported in the articles [131115, 135028].
Software Causes
1. Tesla's Full Self-Driving (FSD) Beta software failed to detect a stationary, child-sized mannequin at an average speed of 25 mph, raising concerns about its ability to detect and respond to child pedestrians [131115].
2. Testing conducted by The Dawn Project, a software company, found that the Full Self-Driving system did not register or stop for small mannequins crossing the road, indicating a potential flaw in the software's object-detection and response mechanisms [135028].
Non-software Causes
1. Lack of proper obstacle detection by Tesla's Full Self-Driving system, leading to a failure to stop for child-sized mannequins [131115, 135028]
2. Alleged reckless deployment of unsafe self-driving vehicles by Tesla [131115]
3. Concerns about the safety of Tesla's Autopilot technology and associated systems, which may exacerbate human-factors and behavioral safety risks [131115]
4. Removal of the forward-looking radar sensor on some newer Teslas, which is being investigated as a cause of "phantom braking" incidents [131115]
Impacts
1. The FSD Beta software's failure to detect a stationary, child-sized mannequin at an average speed of 25 mph significantly affected public perception and raised safety concerns [131115].
2. The incident led to a full-page advertisement in The New York Times warning of the life-threatening danger the software posed to child pedestrians, bringing further scrutiny and negative publicity for Tesla [135028].
3. The failure raised questions about the safety of Tesla's self-driving technology and prompted calls to ban the technology until its safety can be guaranteed [131115].
4. The Dawn Project, a safe-technology campaign group, conducted tests and shared its findings publicly, further fueling the controversy surrounding Tesla's self-driving technology [131115, 135028].
5. Tesla sent a cease-and-desist order to The Dawn Project over the public sharing of the test results, creating legal disputes arising from the incident [135028].
6. Regulatory bodies such as the National Highway Traffic Safety Administration (NHTSA) expanded investigations into Tesla's Autopilot technology and associated systems, increasing regulatory scrutiny of Tesla [131115].
Preventions
1. Thorough testing and validation before deploying the Full Self-Driving (FSD) Beta software on public roads could have prevented the incident [131115, 135028]. This includes comprehensive test scenarios covering real-world situations such as detecting stationary objects like child-sized mannequins (an illustrative test-harness sketch follows the References list below).
2. Stricter regulatory oversight and standards for autonomous driving technologies, including more stringent requirements for safety testing, validation, and certification of self-driving systems before they are allowed on public roads, could also have helped prevent the incident [131115, 135028].
3. Continuous monitoring and feedback mechanisms could have caught and corrected the deficiency: actively collecting data on system performance, analyzing incidents and near-misses, and promptly addressing identified shortcomings through software updates or modifications [131115, 135028].
Fixes
1. Conduct thorough testing and validation of the Full Self-Driving (FSD) Beta software to ensure it accurately detects and responds to child pedestrians on the road [131115, 135028].
2. Address identified deficiencies in the software's ability to detect stationary objects, especially child-sized mannequins, at various speeds [131115, 135028].
3. Implement improvements to the FSD software to prevent the vehicle from failing to stop for small mannequins or children in its path [131115, 135028].
4. Enhance the safety features and algorithms within the FSD system to prioritize pedestrian detection and avoidance, particularly in scenarios involving child pedestrians [131115, 135028].
5. Collaborate with regulatory authorities such as the National Highway Traffic Safety Administration (NHTSA) to address concerns about the safety and reliability of the FSD technology [131115, 135028].
References
1. The Dawn Project [131115, 135028]
2. Dan O'Dowd [131115, 135028]
3. Tesla [131115, 135028]
4. National Highway Traffic Safety Administration (NHTSA) [131115, 135028]
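To make the validation-focused prevention and fix items above concrete, the sketch below illustrates, in Python, one form a scenario-based regression test for stationary-pedestrian detection could take. It is not Tesla's code or test infrastructure; every name in it (run_scenario, ScenarioResult, MIN_STOP_MARGIN_M) and the toy detection and braking model are hypothetical stand-ins for a real simulator or closed-course harness.

# Illustrative sketch only -- not Tesla's code or test infrastructure.
# It shows the kind of scenario-based regression test described under
# Preventions: verify that the driving stack detects a stationary,
# child-sized obstacle and stops with a safety margin across a range of
# approach speeds. All names and the toy physics are hypothetical.

from dataclasses import dataclass

MIN_STOP_MARGIN_M = 2.0          # required clearance left in front of the obstacle
TEST_SPEEDS_MPH = [15, 25, 40]   # includes the roughly 25 mph case cited in [131115]

@dataclass
class ScenarioResult:
    detected: bool          # was the mannequin ever classified as a pedestrian?
    stop_distance_m: float  # distance remaining to the obstacle once stopped

def run_scenario(speed_mph: float, obstacle_height_m: float,
                 detection_range_m: float = 40.0) -> ScenarioResult:
    """Stand-in for a simulation or closed-course run of the real stack.

    Toy placeholder model: the obstacle is 'detected' at detection_range_m,
    then the car brakes at 6 m/s^2 after a 0.5 s reaction time.
    """
    speed_ms = speed_mph * 0.44704
    stopping = speed_ms * 0.5 + speed_ms ** 2 / (2 * 6.0)
    detected = obstacle_height_m >= 0.8   # toy detector threshold, illustrative only
    return ScenarioResult(detected, detection_range_m - stopping)

def test_stationary_child_sized_obstacle() -> None:
    failures = []
    for speed in TEST_SPEEDS_MPH:
        result = run_scenario(speed_mph=speed, obstacle_height_m=1.1)
        if not result.detected or result.stop_distance_m < MIN_STOP_MARGIN_M:
            failures.append((speed, result))
    # Release gate: any undetected obstacle or insufficient margin blocks deployment.
    assert not failures, f"pedestrian-detection scenarios failed: {failures}"

if __name__ == "__main__":
    test_stationary_child_sized_obstacle()
    print("all pedestrian-detection scenarios passed")

In practice such a gate would replay the scenarios in a high-fidelity simulator or against instrumented test-track data rather than a toy model, and any failing scenario would block deployment of that software build.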

Software Taxonomy of Faults

Category Option Rationale
Recurring one_organization, multiple_organization
(a) The software failure has recurred within the same organization: The Dawn Project's tests showed that Tesla's Full Self-Driving (FSD) software failed to detect stationary, child-sized mannequins at certain speeds, raising concerns about the technology's safety implications for child pedestrians [131115, 135028].
(b) Software failures in the context of self-driving and driver-assistance technology have also occurred beyond this incident: the National Highway Traffic Safety Administration (NHTSA) has been investigating Tesla's Autopilot technology and associated systems in multiple crashes, including cases where Teslas hit emergency vehicles, and is examining whether the removal of forward-looking radar sensors on newer Teslas is causing "phantom braking" [131115].
Phase (Design/Operation) design, operation
(a) Design: the failure is attributed to contributing factors introduced during system development and updates. The Dawn Project's tests of Tesla's Full Self-Driving (FSD) Beta software found that it failed to detect stationary, child-sized mannequins at various speeds, raising concerns about the software's ability to ensure pedestrian safety [131115, 135028].
(b) Operation: the failure is also linked to contributing factors introduced by operation of the system. In The Dawn Project's test scenarios, the Tesla vehicles did not register or stop for small mannequins crossing the road, indicating an operational failure of the Full Self-Driving system to recognize and respond to obstacles [135028].
Boundary (Internal/External) within_system, outside_system
(a) Within the system: The Dawn Project's testing revealed that the Full Self-Driving system failed to detect stationary, child-sized mannequins at various speeds and responded poorly to obstacles across repeated runs, indicating an issue originating within the software itself [131115, 135028].
(b) Outside the system: external factors such as public scrutiny, regulatory investigations, and safety advocacy also shaped the incident. The National Highway Traffic Safety Administration (NHTSA) expanded investigations into Tesla's Autopilot technology and associated systems to examine how they interact with human factors and behavioral safety risks [131115], and The Dawn Project's public campaigns, including advertisements in The New York Times, amplified external attention on the failure [135028].
Nature (Human/Non-human) non-human_actions, human_actions
(a) Non-human actions: The Dawn Project's tests found that Tesla's Full Self-Driving (FSD) Beta software failed to detect a stationary, child-sized mannequin at certain speeds, indicating a potential safety threat to child pedestrians [131115]; in other scenarios the vehicle did not register or stop for small mannequins crossing the road, suggesting a failure in the software's obstacle detection [135028].
(b) Human actions: The Dawn Project's founder, Dan O'Dowd, criticized Tesla's deployment of unsafe self-driving vehicles and called for a ban on the technology, attributing the failure to the company's decisions and actions [131115]. The group's advertisement in The New York Times likewise framed the failure to detect and stop for obstacles such as child-sized mannequins as a consequence of human decisions within Tesla [135028].
Dimension (Hardware/Software) software
(a) The articles do not describe contributing factors originating in hardware.
(b) The reported failure originates in software: Tesla's Full Self-Driving (FSD) Beta software failed to detect stationary, child-sized mannequins on the road, potentially posing a lethal threat to child pedestrians [131115, 135028]. The Dawn Project's testing showed that the system did not register children and stop for them, raising concerns about the safety of the self-driving feature.
Objective (Malicious/Non-malicious) non-malicious
(a) The incident is non-malicious: Tesla's Full Self-Driving (FSD) Beta software failed to detect stationary, child-sized mannequins in tests conducted by The Dawn Project [131115, 135028]. The failure was presented as a potentially lethal threat to child pedestrians, with the software neither registering nor stopping for small mannequins crossing the road, and was attributed to the software's inability to accurately detect and respond to obstacles.
(b) The articles do not describe a malicious software failure incident.
Intent (Poor/Accidental Decisions) poor_decisions
(a) Poor decisions: The Dawn Project's tests showed that Tesla's Full Self-Driving (FSD) Beta software failed to detect a stationary, child-sized mannequin at an average speed of 25 mph [131115]. The group's founder, Dan O'Dowd, criticized Tesla's deployment of unsafe self-driving vehicles, described the results as "deeply disturbing," and called for self-driving cars to be prohibited until Tesla proves the vehicles will not endanger children in crosswalks [131115].
(b) Accidental decisions: the articles do not explicitly attribute the failure to accidental decisions. Their focus is on The Dawn Project's tests, which indicated that the software failed to detect child-sized mannequins, and on the subsequent advertisement in The New York Times emphasizing the life-threatening danger the Full Self-Driving system could pose to child pedestrians [131115, 135028].
Capability (Incompetence/Accidental) development_incompetence
(a) Development incompetence: The Dawn Project, led by Dan O'Dowd, conducted tests showing that the FSD software failed to detect stationary, child-sized mannequins at various speeds [131115, 135028]. These results suggest the software may not have been adequately designed or tested to detect and respond to such critical scenarios.
(b) Accidental: the tests also showed that the Full Self-Driving system did not register or stop for small mannequins crossing the road [135028], indicating the software did not perform as intended, possibly due to unforeseen circumstances or limitations in its design or implementation.
Duration permanent, temporary
(a) Permanent: the failure to detect stationary, child-sized mannequins was not a one-off occurrence; it was reproduced consistently across multiple tests conducted by The Dawn Project at various speeds [131115, 135028].
(b) Temporary: the failure was observed under the controlled conditions of The Dawn Project's tests, meaning it was demonstrated to be reproducible under specific circumstances rather than a random occurrence [131115, 135028].
Behaviour crash, omission, value
(a) Crash: the articles mention a fiery crash in Texas in 2021 involving a Tesla in which the Autopilot feature was not switched on at the moment of collision [131115]. In February 2022, Tesla recalled nearly 54,000 cars and SUVs because their Full Self-Driving software let them roll through stop signs without coming to a complete halt, a behavior that could lead to a crash [135028].
(b) Omission: The Dawn Project's tests showed that Tesla's Full Self-Driving system failed to detect a stationary, child-sized mannequin at certain speeds [131115] and did not register or stop for small mannequins crossing the road [135028], omissions in recognizing potential hazards.
(c) Timing: the articles do not describe a timing-related failure.
(d) Value: the Full Self-Driving Beta software performed its object-detection function incorrectly, failing to identify the stationary, child-sized mannequins as obstacles to stop for [131115, 135028].
(e) Byzantine: the articles do not describe a failure with inconsistent responses and interactions.
(f) Other: the articles do not describe any other behavior beyond those above.

IoT System Layer

Layer Option Rationale
Perception sensor, embedded_software
(a) sensor: the failure at the perception layer was partly attributed to sensor error. The Dawn Project's tests showed Tesla's Full Self-Driving system failing to detect stationary, child-sized mannequins at different speeds, consistent with a sensing-related failure [131115, 135028].
(e) embedded_software: the failure was also attributed to embedded software error, as the articles highlight the software's inability to register or stop for small mannequins, suggesting a flaw in the embedded software of Tesla's Full Self-Driving system [135028]. (A minimal, illustrative sensor-fusion sketch follows this table.)
Communication unknown The articles do not provide specific information relating the failure to the communication layer of the cyber-physical system, so it is unknown whether the failure was at the link_level or connectivity_level.
Application TRUE The failure relates to the application layer: The Dawn Project's testing of Tesla's Full Self-Driving (FSD) Beta software showed that it failed to detect stationary, child-sized mannequins at various speeds, a failure attributable to contributing factors such as bugs or incorrect usage [131115, 135028]. The failure to detect and respond appropriately to child-sized obstacles on the road raises concerns about the safety and effectiveness of Tesla's self-driving technology at the application layer.
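As an aside on the perception-layer rationale above, the following minimal Python sketch is hypothetical and not Tesla's perception code (all names and thresholds are invented). It shows how a perception layer that fuses camera and radar detections can suppress a weak, camera-only detection of a small stationary object once a corroborating sensor such as a forward-looking radar is no longer available:

# Illustrative sketch only -- not Tesla's perception code. All names and
# thresholds are hypothetical. It shows, in the abstract, how a reporting
# threshold tuned for fused detections can drop a weak single-sensor detection.

from dataclasses import dataclass
from typing import Optional

REPORT_THRESHOLD = 0.6  # fused confidence required before the planner is asked to brake

@dataclass
class Detection:
    label: str
    confidence: float

def fuse(camera: Optional[Detection], radar: Optional[Detection]) -> Optional[Detection]:
    """Combine per-sensor detections into one reported track; None means 'nothing reported'."""
    if camera and radar and camera.label == radar.label:
        # Agreement between independent sensors raises confidence.
        combined = 1.0 - (1.0 - camera.confidence) * (1.0 - radar.confidence)
        return Detection(camera.label, combined)
    best = max(filter(None, [camera, radar]), key=lambda d: d.confidence, default=None)
    if best and best.confidence >= REPORT_THRESHOLD:
        return best
    return None  # a single weak detection is suppressed -- nothing reaches the planner

# A small, stationary pedestrian seen weakly by the camera alone is dropped:
print(fuse(Detection("pedestrian", 0.45), None))                            # -> None
# The same camera detection corroborated by radar is reported:
print(fuse(Detection("pedestrian", 0.45), Detection("pedestrian", 0.50)))   # -> reported

The sketch's only point is that a reporting threshold tuned for corroborated, multi-sensor detections can silently drop marginal single-sensor detections; it is one hedged illustration of how small, child-sized obstacles might go unreported, not a claim about Tesla's actual design.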

Other Details

Category Option Rationale
Consequence death, harm, property
(a) death: the articles discuss fatal crashes involving Tesla vehicles equipped with automated driving systems; NHTSA is investigating 30 crashes, 19 of them fatal, and is looking into 11 crashes in which Teslas hit emergency vehicles [131115].
(b) harm: people were physically harmed in crashes involving Tesla vehicles equipped with automated driving systems [131115].
(d) property: Tesla vehicles equipped with automated driving systems were involved in crashes that damaged property and material goods [131115].
Domain transportation, government
(a) The failed system was intended to support the transportation industry, specifically self-driving technology for vehicles such as Tesla cars. The Dawn Project tested Tesla's Full Self-Driving system and raised concerns about its failure to detect child pedestrians, highlighting transportation safety issues [131115, 135028].
(b) The transportation industry is directly impacted, as the system is meant to move people and goods on public roads using Tesla's self-driving technology [131115, 135028].
(l) The incident is indirectly related to the government domain, given the public-safety concerns, regulations, and investigations by agencies such as the National Highway Traffic Safety Administration (NHTSA) [131115, 135028].
(c)-(k), (m) The failed system is not directly related to the other listed industries: natural-resource extraction, sales, construction, manufacturing, utilities, finance, knowledge, health, entertainment, or any other industry in the options provided.

Sources
