Incident: Driverless Cars Vulnerable to Sticker Graffiti Attack

Published Date: 2017-08-07

Postmortem Analysis
Timeline 1. The incident of driverless cars being tricked by stickers on road signs was reported in an article published on 2017-08-07 [Article 62229]. The article does not give a specific date for the underlying experiments, so the exact timing of the incident is unknown.
System 1. Autonomous vehicle learning algorithms 2. Driverless car visual software 3. Sign recognition software in Tesla's Model S electric cars
Responsible Organization 1. Researchers from the University of Washington conducted the experiments behind the software failure incident, demonstrating that driverless cars could be tricked into misreading road signs using stickers [62229].
Impacted Organization 1. Driverless cars were impacted by the software failure incident [62229].
Software Causes 1. The failure was caused by the vulnerability of autonomous vehicles' AI learning algorithms to small, targeted visual changes (adversarial perturbations) that can trick them into misreading road signs, leading to potentially dangerous behavior [62229]. A minimal sketch of this class of attack appears after this list.
Non-software Causes 1. The failure incident was caused by placing stickers or posters over road signs to trick autonomous vehicles into misreading them [62229].
Impacts 1. The incident showed that sticker graffiti on road signs could cause driverless cars to misread or ignore stop signs, brake suddenly in the middle of the road, and potentially cause accidents [62229].
Preventions 1. Implementing robust cybersecurity measures to protect the algorithm of autonomous vehicles from unauthorized access and manipulation [62229]. 2. Conducting thorough testing and validation of the software to detect vulnerabilities to misreading road signs altered by stickers or graffiti [62229]. 3. Building stronger defense systems into autonomous vehicles to detect manipulated signs before the vehicle misbehaves in unexpected and potentially dangerous ways [62229]. A sketch of one such hardening technique (adversarial training) follows the attack sketch below.
Fixes 1. Implementing better defense systems into autonomous vehicles to prevent misreading road signs due to sticker graffiti [62229]
References 1. Researchers from the University of Washington [62229]
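The vulnerability described under Software Causes is an instance of what the machine-learning literature calls an adversarial perturbation. The sketch below is a minimal digital analogue of the sticker attack using the fast gradient sign method (FGSM); it is illustrative only, and the model, input image, and class index are hypothetical stand-ins, not the vehicles' actual recognition software.

```python
# Minimal digital analogue of the sticker attack: an FGSM adversarial
# perturbation against a generic image classifier. The model, input,
# and label below are hypothetical stand-ins.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)   # stand-in for a sign-recognition network
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # dummy sign frame
label = torch.tensor([0])        # hypothetical "stop sign" class index

# Gradient of the classification loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step every pixel in the direction that increases the loss. Physically,
# the researchers achieved a comparable effect with printed stickers.
epsilon = 0.03                   # perturbation budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

Note that the researchers' physical attack had to survive changes in distance, angle, and lighting, which plain FGSM does not address; the sketch only illustrates the underlying sensitivity of learned classifiers to small, targeted input changes.

On the prevention side, one commonly proposed hardening technique is adversarial training: augmenting each training batch with perturbed copies so the classifier learns to resist them. A minimal sketch, assuming `model`, `optimizer`, and the data batches already exist:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on clean plus FGSM-perturbed images (sketch)."""
    # Craft on-the-fly adversarial copies of the batch.
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adversarial = (images + epsilon * grad.sign()).clamp(0, 1).detach()

    # Train on the combined clean + adversarial batch.
    optimizer.zero_grad()
    batch = torch.cat([images.detach(), adversarial])
    targets = torch.cat([labels, labels])
    total_loss = F.cross_entropy(model(batch), targets)
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

Adversarial training raises the cost of an attack but does not eliminate it; it is one layer of the defense-in-depth the Preventions above call for.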

Software Taxonomy of Faults

Category Option Rationale
Recurring unknown The articles do not provide information about the software failure incident happening again at either the same organization or multiple organizations.
Phase (Design/Operation) design, operation (a) The software failure incident described in the article is related to the design phase of the autonomous vehicles' systems. The incident involved researchers from the University of Washington conducting experiments where they placed stickers or posters over road signs to trick smart cars into misreading them. These changes in the road signs tricked the AI's learning algorithms, causing the driverless cars to misbehave in unexpected and potentially dangerous ways. The researchers highlighted that if hackers could access the algorithm, they could create custom versions of road signs capable of confusing the car's camera [62229]. (b) The software failure incident is also related to the operation phase of the system. The article mentions that some driverless cars, like Tesla's Model S electric cars, are already equipped with sign recognition software, although the vehicles are not yet programmed to react to the signs. The simple hacks demonstrated by the researchers could cause driverless cars to run through stop junctions or come to a sudden halt in the middle of the street, highlighting potential failures in the operation of the autonomous vehicles due to misreading road signs [62229].
Boundary (Internal/External) within_system, outside_system (a) The software failure incident discussed in the article is primarily within_system. The incident involves tricking autonomous vehicles into misreading road signs by placing stickers or posters over them, which can confuse the AI's learning algorithms and cause the vehicles to misbehave in unexpected and potentially dangerous ways. This manipulation of the road signs is a manipulation of the system itself, rather than an external factor directly causing the failure [62229]. (b) The article also mentions the potential for hackers to access the algorithm of autonomous vehicles and create customised versions of road signs to confuse the car's camera. This external threat of hacking and cyber attacks on the software system of driverless cars is an outside_system contributing factor to the software failure incident [62229].
Nature (Human/Non-human) non-human_actions, human_actions (a) The software failure incident occurring due to non-human actions: The article discusses how autonomous vehicles can be easily confused into misreading road signs that would appear normal to human drivers. Researchers found that placing stickers or posters over road signs could trick smart cars into ignoring stop signs or suddenly braking in the middle of the road. These changes can trick the AI's learning algorithms and cause the vehicles to misbehave in unexpected and potentially dangerous ways. The incidents of misreading signs due to subtle changes or graffiti stickers demonstrate how non-human actions, such as altering the physical appearance of road signs, can lead to software failures in autonomous vehicles [62229]. (b) The software failure incident occurring due to human actions: The article also mentions the potential for hackers to access the algorithm of autonomous vehicles and create custom versions of road signs capable of confusing the car's camera. By using images of road signs, hackers could manipulate the AI's recognition systems and cause the vehicles to misinterpret the signs, leading to dangerous situations. The research highlights how human actions, such as hacking and manipulating the visual input of autonomous vehicles, can introduce contributing factors that result in software failures [62229].
Dimension (Hardware/Software) software (a) The article does not attribute the failure to hardware: no camera or sensor malfunction is described, and the attack works precisely because the camera faithfully captures the altered sign [62229]. (b) The failure was a software failure: changes that trick an AI's learning algorithms caused the sign-recognition software to misclassify the altered signs, making the vehicles misbehave in unexpected and potentially dangerous ways [62229].
Objective (Malicious/Non-malicious) malicious (a) The software failure incident described in the article is malicious in nature. The incident involved researchers demonstrating how autonomous vehicles can be tricked into misreading road signs by placing stickers or posters over them, causing the driverless cars to misbehave in unexpected and potentially dangerous ways. The researchers highlighted that if hackers were able to access the algorithm of the autonomous vehicles, they could create customised versions of road signs capable of confusing the car's camera, leading to potential accidents [62229].
Intent (Poor/Accidental Decisions) accidental_decisions [62229] The software failure incident described in the article is related to accidental_decisions. The incident involved researchers conducting experiments to show how autonomous vehicles can be easily confused by placing stickers or posters on road signs, leading the AI algorithms to misread the signs. The intent was not to cause harm or exploit vulnerabilities maliciously but to demonstrate the potential risks and vulnerabilities in the system.
Capability (Incompetence/Accidental) development_incompetence (a) The software failure incident reported in the article is more related to development incompetence. Researchers from the University of Washington showed that simple stickers or posters on road signs were enough to make the AI's learning algorithms misread them [62229]; that such low-cost, visually innocuous changes can cause the algorithms to misbehave in unexpected and potentially dangerous ways indicates the recognition software was not developed to be robust against physically realizable perturbations. (b) The incident was not accidental: the researchers deliberately conducted experiments to demonstrate how simple sticker graffiti on road signs can trick driverless cars into misreading them, a deliberate attempt to expose vulnerabilities in the software system [62229].
Duration temporary The software failure incident described in the article is more aligned with a temporary failure rather than a permanent one. The incident involved researchers demonstrating how autonomous vehicles could be tricked into misreading road signs by placing stickers or posters over them, causing the vehicles to misinterpret the signs and potentially behave in unexpected and dangerous ways [62229]. This type of failure was temporary in nature as it was caused by specific circumstances (i.e., the manipulation of road signs with stickers) rather than being a permanent issue inherent to the software itself.
Behaviour value, other (a) crash: The software failure incident described in the article is not a crash where the system loses state and does not perform any of its intended functions. Instead, the incident involves the system misreading road signs due to external manipulations like stickers and graffiti [Article 62229]. (b) omission: The software failure incident is not an omission where the system omits to perform its intended functions at an instance(s). The incident is more about the system misinterpreting road signs due to external manipulations rather than omitting any functions [Article 62229]. (c) timing: The software failure incident is not a timing issue where the system performs its intended functions correctly but too late or too early. The incident is related to the system misreading road signs due to external manipulations like stickers and graffiti, leading to incorrect interpretations [Article 62229]. (d) value: The software failure incident is a value issue where the system performs its intended functions incorrectly. In this case, the system misreads road signs due to external manipulations like stickers and graffiti, leading to potentially dangerous outcomes such as ignoring stop signs or misinterpreting speed limits [Article 62229]. (e) byzantine: The software failure incident is not a byzantine failure where the system behaves erroneously with inconsistent responses and interactions. The incident is more about the system misinterpreting road signs due to external manipulations rather than exhibiting inconsistent behavior [Article 62229]. (f) other: The behavior of the software failure incident can be categorized as a manipulation-induced misinterpretation. The incident involves the system being tricked into misreading road signs through the placement of stickers and graffiti, causing it to make incorrect decisions such as ignoring stop signs or misinterpreting speed limits [Article 62229].

IoT System Layer

Layer Option Rationale
Perception sensor (a) sensor: The failure occurred at the perception layer of the cyber-physical system, entering through the camera input. Placing stickers or posters over road signs fed misleading visual input to the vehicles' cameras, causing them to ignore stop signs or misinterpret speed limit signs; hackers with access to the algorithm could craft customised versions of road signs capable of confusing the car's camera, ultimately leading to dangerous behaviors [Article 62229]. A sketch of one possible perception-layer cross-check appears after this table.
Communication link_level The software failure incident described in the article [62229] did not involve the network or transport layers of the communication stack. To the extent a communication-layer classification applies, the compromise occurred at the physical/link level of the sensing channel: stickers and posters placed over road signs corrupted the visual input passing from the sign to the car's camera, so the recognition systems received misleading data before any networked communication took place, causing the cars to ignore stop signs or brake unexpectedly.
Application TRUE The software failure incident described in the article [62229] is related to the application layer of the cyber physical system. The incident involved tricking autonomous vehicles into misreading road signs through the placement of stickers or posters, which caused the vehicles to misbehave in unexpected and potentially dangerous ways. This failure was attributed to changes that tricked the AI's learning algorithms, leading to the misinterpretation of road signs by the driverless cars [62229].
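Because the compromise enters through the camera at the perception layer, one mitigation pattern worth illustrating is cross-checking camera readings against an independent prior such as map data. The sketch below is hypothetical: the data structure, map lookup, and confidence threshold are illustrative assumptions, not any vendor's actual safeguard.

```python
# Hypothetical cross-check: accept a camera-derived speed limit only if
# it agrees with an independent map prior; otherwise reject it so the
# vehicle can fall back to safe behavior. Thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class SignReading:
    label: str          # e.g. "speed_limit"
    value_kmh: int      # camera-derived value
    confidence: float   # classifier confidence in [0, 1]

def validate_speed_limit(reading: SignReading, map_limit_kmh: int,
                         min_confidence: float = 0.9) -> bool:
    """Return True only if the camera reading is plausible."""
    if reading.label != "speed_limit":
        return False
    if reading.confidence < min_confidence:
        return False
    # A sticker-altered sign that disagrees with the map prior is rejected.
    return reading.value_kmh == map_limit_kmh

# Example: a tampered reading of 45 against a mapped limit of 30.
print(validate_speed_limit(SignReading("speed_limit", 45, 0.97), 30))  # False
```

The design choice is redundancy: a sticker can change what the camera sees, but it cannot change the mapped speed limit, so disagreement between the two sources becomes a detectable signal rather than a silent misread.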

Other Details

Category Option Rationale
Consequence harm, property, non-human, theoretical_consequence (a) death: There is no mention of people losing their lives due to the software failure incident in the provided article [62229]. (b) harm: The article discusses the potential harm that could result from tricking autonomous vehicles into misreading road signs, leading to accidents or sudden halts in the middle of the street [62229]. (c) basic: There is no mention of people's access to food or shelter being impacted due to the software failure incident in the provided article [62229]. (d) property: The article mentions the potential impact on property in terms of accidents caused by tricking autonomous vehicles through road sign manipulation [62229]. (e) delay: The article does not mention any delays caused by the software failure incident [62229]. (f) non-human: The software failure incident primarily impacts autonomous vehicles and their ability to correctly interpret road signs [62229]. (g) no_consequence: The article discusses real consequences of the software failure incident, particularly related to potential accidents caused by misleading autonomous vehicles [62229]. (h) theoretical_consequence: The article discusses potential consequences of the software failure incident, such as hackers accessing algorithms to create custom signs that confuse autonomous vehicles [62229]. (i) other: The article does not mention any other specific consequences of the software failure incident beyond those related to potential harm, property damage, and theoretical implications [62229].
Domain transportation, finance, government (a) The failed system was related to the transportation industry as it involved autonomous vehicles being tricked by stickers on road signs, potentially leading to accidents [62229]. (h) The incident also highlighted the importance of cyber protection in smart vehicles to prevent hacking and potential accidents, indicating a connection to the finance industry as it involves protecting vehicles from cyber attacks that could lead to financial losses or liabilities [62229]. (l) Additionally, the UK government issued guidelines to better protect internet-connected vehicles from cyber attacks, emphasizing the role of government regulations in ensuring the cybersecurity of smart vehicles [62229].
