Recurring |
one_organization, multiple_organization |
(a) The software failure incident related to Tesla's Full Self-Driving (FSD) technology has happened again within the same organization. The Dawn Project conducted tests showing that Tesla's FSD software failed to detect stationary, child-sized mannequins at certain speeds, raising concerns about the technology's safety implications for child pedestrians [131115, 135028].
(b) Software failure incidents involving self-driving technology have also occurred at other organizations or with their products and services. More broadly, the National Highway Traffic Safety Administration (NHTSA) has been investigating Tesla's Autopilot technology and associated systems in multiple crashes, including cases in which Teslas hit emergency vehicles. The NHTSA is also examining whether the removal of forward-looking radar sensors from newer Teslas is causing issues such as "phantom braking" [131115]. |
Phase (Design/Operation) |
design, operation |
(a) The articles highlight a software failure incident related to the design phase. The failure is attributed to contributing factors introduced by system development and updates. The Dawn Project conducted tests on Tesla's Full Self-Driving (FSD) Beta software and found that it failed to detect stationary, child-sized mannequins at various speeds, raising concerns about the software's ability to ensure pedestrian safety [131115, 135028].
(b) The articles also describe a software failure incident related to the operation phase, linked to contributing factors introduced by the operation or misuse of the system. The Dawn Project's testing involved scenarios in which Tesla vehicles, operating with Full Self-Driving engaged, did not register or stop for small mannequins crossing the road, indicating an operational failure of the system to recognize and respond to obstacles [135028]. |
Boundary (Internal/External) |
within_system, outside_system |
(a) The software failure incident related to the Tesla Full Self-Driving system can be attributed to factors within the system. The Dawn Project's testing revealed that the system failed to detect stationary, child-sized mannequins at various speeds, and the failures were consistent across repeated tests, indicating an issue originating from within the software itself and raising concerns about its ability to identify and respond to potential hazards [131115, 135028].
(b) External factors also shaped the incident. Public scrutiny, regulatory investigations, and safety campaigns by advocacy groups such as The Dawn Project brought the failure to light: the National Highway Traffic Safety Administration (NHTSA) expanded its investigations into Tesla's Autopilot technology and associated systems to examine how they interact with human factors and behavioral safety risks [131115], while The Dawn Project's public campaigns and advertisements in major publications such as The New York Times further amplified attention on the failure, underscoring the influence of factors outside the system [135028]. |
Nature (Human/Non-human) |
non-human_actions, human_actions |
(a) The software failure incident occurring due to non-human actions:
- The Dawn Project conducted tests on Tesla's Full Self-Driving (FSD) Beta software and found that it failed to detect a stationary, child-sized mannequin at certain speeds, indicating a potential safety threat to child pedestrians [131115].
- The testing involved scenarios where the Tesla vehicle did not register or stop for small mannequins crossing the road, suggesting a failure in the software's ability to detect obstacles [135028].
(b) The software failure incident occurring due to human actions:
- The founder of The Dawn Project, Dan O'Dowd, criticized Tesla's deployment of unsafe self-driving vehicles and called for a ban on Tesla's auto-driving technology, attributing the failure to the company's decisions and actions [131115].
- The Dawn Project's advertisement in The New York Times highlighted safety testing conducted by the firm, suggesting that human decisions and actions within Tesla led to the software's failure to detect and stop for obstacles like child-sized mannequins [135028]. |
Dimension (Hardware/Software) |
software |
(a) The articles do not provide information about a software failure incident occurring due to contributing factors originating in hardware.
(b) The software failure incident reported in the articles concerns Tesla's Full Self-Driving (FSD) Beta software failing to detect stationary, child-sized mannequins on the road, potentially posing a lethal threat to child pedestrians [131115, 135028]. The failure is attributed to issues within the software itself, as highlighted by The Dawn Project's testing of the Tesla Full Self-Driving system, which showed that the technology did not register the child-sized mannequins and stop for them, raising concerns about the safety of the self-driving feature. |
Objective (Malicious/Non-malicious) |
non-malicious |
(a) The articles report on a non-malicious software failure incident: Tesla's Full Self-Driving (FSD) Beta software failed to detect stationary, child-sized mannequins during tests conducted by The Dawn Project [131115, 135028]. The group characterized the failure as a potentially lethal threat to child pedestrians, claiming that the software did not register or stop for small mannequins crossing the road. The failure was attributed to the software's inability to accurately detect and respond to obstacles, raising concerns about the safety of Tesla's self-driving technology.
(b) The articles do not provide information about a malicious software failure incident. |
Intent (Poor/Accidental Decisions) |
poor_decisions |
(a) The intent of the software failure incident related to poor_decisions:
- The poor-decision aspect is evident in The Dawn Project's criticism of Tesla's decision to deploy its Full Self-Driving (FSD) Beta software on public roads. The group's tests showed that the software failed to detect a stationary, child-sized mannequin at an average speed of 25mph [131115].
- The founder of The Dawn Project, Dan O'Dowd, criticized Tesla's deployment of unsafe self-driving vehicles and described the test results as "deeply disturbing," highlighting the potential lethal threat posed by Tesla's software to child pedestrians [131115].
- O'Dowd called for the prohibition of self-driving cars until Tesla proves that the vehicles will not pose a danger to children in crosswalks, indicating concerns about the safety implications of Tesla's software [131115].
(b) The intent of the software failure incident related to accidental_decisions:
- The articles do not explicitly attribute the failure to accidental decisions or unintended mistakes; their focus is on the claims and tests conducted by The Dawn Project regarding Tesla's Full Self-Driving (FSD) Beta software and the safety concerns raised by the test results [131115, 135028].
- Those tests indicated that Tesla's software failed to detect child-sized mannequins, raising concerns about the potential risks posed by the technology [131115, 135028].
- The safety campaign group's findings and the subsequent advertisement in The New York Times emphasized the life-threatening danger that Tesla's Full Self-Driving system could pose to child pedestrians, indicating a critical evaluation of the software's performance [135028]. |
Capability (Incompetence/Accidental) |
development_incompetence |
(a) The articles highlight concerns and allegations regarding the safety and competence of Tesla's Full Self-Driving (FSD) software. The Dawn Project, led by Dan O'Dowd, has conducted tests showing that the FSD software failed to detect stationary, child-sized mannequins at various speeds, raising concerns about the software's ability to ensure pedestrian safety [131115, 135028]. These tests suggest a potential failure due to development incompetence, as the software may not have been adequately designed or tested to detect and respond to such critical scenarios.
(b) The articles also note that, during The Dawn Project's testing, Tesla's Full Self-Driving system did not register or stop for small mannequins crossing the road [135028]. This points to a potential accidental failure in which the software did not perform as expected, possibly because of unforeseen circumstances or limitations in the system's design or implementation. |
Duration |
permanent, temporary |
(a) The articles describe a software failure incident in which Tesla's Full Self-Driving (FSD) Beta software failed to detect stationary, child-sized mannequins at various speeds during tests conducted by The Dawn Project [131115, 135028]. The failure appears permanent in that it was not a one-off occurrence but a consistent problem observed across the group's repeated tests.
(b) The failure can also be considered temporary in the sense that it manifested only under particular circumstances: it was observed during The Dawn Project's controlled tests, at specific speeds and with child-sized mannequins crossing the road, and was reproducible under those conditions rather than occurring randomly [131115, 135028]. |
Behaviour |
crash, omission, value |
(a) crash: Failure due to system losing state and not performing any of its intended functions
- The articles mention a fiery 2021 crash in Texas involving a Tesla; investigators found that the Autopilot feature was not switched on at the moment of collision [131115].
- In February 2022, Tesla recalled nearly 54,000 cars and SUVs because their Full Self-Driving software let them roll through stop signs without coming to a complete halt, behavior that could lead to a crash [135028].
(b) omission: Failure due to system omitting to perform its intended functions at an instance(s)
- The Dawn Project conducted tests showing that Tesla's Full Self-Driving system failed to detect a stationary, child-sized mannequin at certain speeds, indicating an omission of detecting obstacles [131115].
- The testing by The Dawn Project in October showed that Tesla's Full Self-Driving system did not register or stop for small mannequins crossing the road, suggesting an omission in recognizing potential hazards [135028].
(c) timing: Failure due to system performing its intended functions correctly, but too late or too early
- There is no specific mention of a timing-related failure in the provided articles.
(d) value: Failure due to system performing its intended functions incorrectly
- The articles highlight that Tesla's Full Self-Driving Beta software failed to detect a stationary, child-sized mannequin at certain speeds, indicating a failure to perform its intended function of object detection [131115].
- The testing conducted by The Dawn Project showed that Tesla's Full Self-Driving system did not register or stop for small mannequins crossing the road, suggesting a failure in correctly identifying potential obstacles [135028].
(e) byzantine: Failure due to system behaving erroneously with inconsistent responses and interactions
- There is no specific mention of a byzantine-related failure in the provided articles.
(f) other: Failure due to system behaving in a way not described in the (a to e) options
- The articles do not provide information on any other specific behavior of the software failure incident. |