| Recurring |
unknown |
The article does not indicate whether this software failure incident has happened again, either at the same organization or at other organizations. |
| Phase (Design/Operation) |
design, operation |
(a) The software failure incident described in the article is related to the design phase of the autonomous vehicles' systems. Researchers from the University of Washington conducted experiments in which they placed stickers or posters over road signs to trick smart cars into misreading them. Such alterations fool the AI's learning algorithms and can cause driverless cars to misbehave in unexpected and potentially dangerous ways. The researchers also warned that if hackers could access the algorithm, they could create custom versions of road signs capable of confusing the car's camera [62229].
(b) The software failure incident is also related to the operation phase of the system. The article notes that some driverless cars, such as Tesla's Model S electric cars, are already equipped with sign recognition software, although the vehicles are not yet programmed to react to the signs. The simple hacks demonstrated by the researchers could cause driverless cars to run through stop junctions or come to a sudden halt in the middle of the street, highlighting how misread road signs could lead to failures during the operation of autonomous vehicles [62229]. |
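To make the mechanism concrete, below is a minimal sketch (in Python/PyTorch) of the kind of test the incident implies: paste a sticker-sized patch onto a sign image and check whether a classifier's prediction flips. The `SignClassifier` network, the random image tensors, and the class count are hypothetical placeholders, not the University of Washington researchers' code.

```python
# Minimal sketch, not the researchers' code: overlay a sticker-sized patch
# on a (placeholder) stop-sign image and see whether a hypothetical sign
# classifier's prediction changes.
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Hypothetical stand-in for a car's sign-recognition network."""
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(16 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def apply_sticker(image, sticker, top, left):
    """Overlay a small patch on the image, mimicking a physical sticker."""
    patched = image.clone()
    h, w = sticker.shape[-2:]
    patched[..., top:top + h, left:left + w] = sticker
    return patched

model = SignClassifier().eval()
stop_sign = torch.rand(1, 3, 64, 64)   # placeholder for a photographed stop sign
sticker = torch.rand(1, 3, 16, 16)     # placeholder "graffiti" patch

with torch.no_grad():
    before = model(stop_sign).argmax(dim=1).item()
    after = model(apply_sticker(stop_sign, sticker, 24, 24)).argmax(dim=1).item()
print("prediction flipped by the sticker:", before != after)
```

In a real evaluation the placeholders would be replaced with the deployed recognition model and photographs of physically altered signs; the sketch only shows where the sticker enters the pipeline.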
| Boundary (Internal/External) |
within_system, outside_system |
(a) The software failure incident discussed in the article is primarily within_system. The incident involves tricking autonomous vehicles into misreading road signs by placing stickers or posters over them. Although the alterations are made to physical signs, the failure itself arises inside the system: the vehicle's own learning algorithms misclassify the altered signs, causing the vehicle to misbehave in unexpected and potentially dangerous ways [62229].
(b) The article also mentions the potential for hackers to access the algorithm of autonomous vehicles and create customised versions of road signs to confuse the car's camera. This external threat of hacking and cyber attacks on the software system of driverless cars is an outside_system contributing factor to the software failure incident [62229]. |
| Nature (Human/Non-human) |
non-human_actions, human_actions |
(a) The software failure incident occurring due to non-human actions:
The article discusses how autonomous vehicles can be confused into misreading road signs that would appear normal to human drivers. Researchers found that stickers or posters placed over road signs could cause smart cars to ignore stop signs or brake suddenly in the middle of the road. The non-human contributing factor is the behaviour of the AI's learning algorithms themselves: once a sign carries subtle changes or graffiti-like stickers, the perception software misinterprets it without any human involvement at the moment of failure, which can cause the vehicle to misbehave in unexpected and potentially dangerous ways [62229].
(b) The software failure incident occurring due to human actions:
The article also mentions the potential for hackers to access the algorithm of autonomous vehicles and create custom versions of road signs capable of confusing the car's camera. By using images of road signs, hackers could manipulate the AI's recognition systems and cause the vehicles to misinterpret the signs, leading to dangerous situations. The research highlights how human actions, such as hacking and manipulating the visual input of autonomous vehicles, can introduce contributing factors that result in software failures [62229]. |
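To illustrate why access to the algorithm matters to an attacker, here is a hedged sketch of how sticker pixels could be optimised toward an attacker-chosen label when the model's gradients are available (an adversarial-patch-style attack). The `optimise_sticker` function, the model, the image, and the target class are hypothetical placeholders for illustration, not the method reported in the article.

```python
# Hedged sketch: optimise a sticker so a (hypothetical) sign classifier
# reads the patched sign as an attacker-chosen class. Illustrative only.
import torch
import torch.nn.functional as F

def optimise_sticker(model, image, top, left, size, target_class,
                     steps=200, lr=0.05):
    """image: (1, 3, H, W) tensor in [0, 1]; returns a (1, 3, size, size) sticker."""
    _, _, height, width = image.shape
    sticker = torch.rand(1, 3, size, size, requires_grad=True)
    optimiser = torch.optim.Adam([sticker], lr=lr)
    # Padding places the sticker at (top, left) inside a full-size canvas.
    pad = (left, width - left - size, top, height - top - size)
    mask = F.pad(torch.ones(1, 3, size, size), pad)  # 1 where the sticker sits
    for _ in range(steps):
        canvas = F.pad(sticker.clamp(0, 1), pad)
        patched = image * (1 - mask) + canvas * mask
        # Push the classifier's output towards the attacker's target class.
        loss = F.cross_entropy(model(patched), torch.tensor([target_class]))
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return sticker.detach().clamp(0, 1)
```

The returned tensor is simply the sticker image such an attacker would print and apply to a physical sign; everything here is a simplified stand-in for the real recognition pipeline.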
| Dimension (Hardware/Software) |
hardware, software |
(a) The software failure incident occurring due to hardware:
- The hardware contribution comes through the vehicle's camera: the attack works by changing what the camera physically sees, so the sensor feeds the sign-recognition software images of altered signs, which can lead to dangerous situations [62229].
(b) The software failure incident occurring due to software:
- The article highlights how changes that trick an AI's learning algorithms can cause autonomous vehicles to misbehave in unexpected and potentially dangerous ways, emphasizing the role of software in the failure incident [62229]. |
| Objective (Malicious/Non-malicious) |
malicious |
(a) The software failure incident described in the article is malicious in nature, since it depends on deliberate tampering with the vehicles' environment. Researchers demonstrated how autonomous vehicles can be tricked into misreading road signs by placing stickers or posters over them, causing the driverless cars to misbehave in unexpected and potentially dangerous ways, and they warned that if hackers were able to access the algorithm, they could create customised versions of road signs capable of confusing the car's camera, leading to potential accidents [62229]. |
| Intent (Poor/Accidental Decisions) |
accidental_decisions |
The software failure incident described in the article is related to accidental_decisions. The incident involved researchers conducting experiments to show how autonomous vehicles can be easily confused by placing stickers or posters on road signs, leading to misreading of the signs by the AI algorithms. The intent was not to intentionally cause harm or exploit vulnerabilities but to demonstrate the potential risks and vulnerabilities in the system [62229]. |
| Capability (Incompetence/Accidental) |
development_incompetence |
(a) The software failure incident reported in the article is more related to development incompetence. Researchers from the University of Washington showed through experiments that simply placing stickers or posters on road signs was enough to make the AI algorithms misread them [62229]. That such small, physically plausible alterations can make the learning algorithms misbehave in unexpected and potentially dangerous ways suggests the sign-recognition systems were not developed to be robust against foreseeable tampering with signs.
(b) The incident does not appear to be accidental: the researchers deliberately conducted experiments to demonstrate how simple sticker graffiti on road signs can trick driverless cars into misreading the signs, a deliberate attempt to expose vulnerabilities in the software system [62229]. |
| Duration |
temporary |
The software failure incident described in the article is more aligned with a temporary failure rather than a permanent one. The incident involved researchers demonstrating how autonomous vehicles could be tricked into misreading road signs by placing stickers or posters over them, causing the vehicles to misinterpret the signs and potentially behave in unexpected and dangerous ways [62229]. This type of failure was temporary in nature as it was caused by specific circumstances (i.e., the manipulation of road signs with stickers) rather than being a permanent issue inherent to the software itself. |
| Behaviour |
value, other |
(a) crash: The software failure incident described in the article is not a crash where the system loses state and does not perform any of its intended functions. Instead, the incident involves the system misreading road signs due to external manipulations like stickers and graffiti [62229].
(b) omission: The software failure incident is not an omission where the system omits to perform its intended functions at an instance(s). The incident is more about the system misinterpreting road signs due to external manipulations rather than omitting any functions [62229].
(c) timing: The software failure incident is not a timing issue where the system performs its intended functions correctly but too late or too early. The incident is related to the system misreading road signs due to external manipulations like stickers and graffiti, leading to incorrect interpretations [62229].
(d) value: The software failure incident is a value issue where the system performs its intended functions incorrectly. In this case, the system misreads road signs due to external manipulations like stickers and graffiti, leading to potentially dangerous outcomes such as ignoring stop signs or misinterpreting speed limits [62229].
(e) byzantine: The software failure incident is not a byzantine failure where the system behaves erroneously with inconsistent responses and interactions. The incident is more about the system misinterpreting road signs due to external manipulations rather than exhibiting inconsistent behavior [62229].
(f) other: The behavior of the software failure incident can be categorized as a manipulation-induced misinterpretation. The incident involves the system being tricked into misreading road signs through the placement of stickers and graffiti, causing it to make incorrect decisions such as ignoring stop signs or misinterpreting speed limits [62229]. |