Incident: Vulnerability in Self-Driving Cars' Remote Sensing Technology Allows Hacking

Published Date: 2015-09-07

Postmortem Analysis
Timeline 1. The proof-of-concept hack of self-driving cars' sensors using a laser pointer was reported in September 2015; security researcher Jonathan Petit was set to present his findings at the Black Hat conference in Amsterdam that November [51605].
System 1. Lidar sensors on self-driving cars 2. Radar sensors on self-driving cars 3. Cameras and other sensors on self-driving cars 4. The Lidar's unencrypted pulse unit on self-driving cars 5. Systems of the Jeep Cherokee that were remotely accessed and controlled by hackers [51605]
Responsible Organization 1. The software failure incident was demonstrated by security researcher Jonathan Petit, who developed the proof-of-concept hack to trick the sensors on self-driving cars [51605].
Impacted Organization 1. Manufacturers and operators of self-driving cars, whose remote sensing technology was shown to be vulnerable in the incident [51605].
Software Causes 1. The Lidar's pulse unit was not encrypted, so the software accepted spoofed laser pulses as genuine echoes, enabling remote attacks with a simple laser pointer and a basic computer board [51605].
Non-software Causes 1. The physical exposure of the cars' remote sensing hardware, which can be illuminated from outside the vehicle using off-the-shelf equipment such as a laser pointer and a basic computer board [51605].
Impacts 1. The software failure incident allowed a security researcher to fool the remote sensing technology on self-driving cars with a simple laser pointer, potentially causing the cars to swerve, stop randomly, or be immobilized completely [51605]. 2. The incident highlighted the vulnerability of self-driving cars to cyberattacks: blinding the cameras with high-brightness infrared LEDs or lasers could cause the sensors to take evasive action against non-existent obstacles [51605]. 3. The demonstrated hack showed that self-driving cars using Lidar sensors could be tricked into seeing 'ghost' cars and obstacles from up to 330ft (100 metres) away, undermining the cars' ability to accurately determine the distance and shape of objects [51605]. 4. The incident raised concerns about the lack of encryption on the Lidar sensors in current test vehicles, which leaves them susceptible to such attacks, although encryption is expected to be added before the cars go on general sale [51605]. 5. The incident is part of a series of hacks exposing how easily vehicles, including self-driving cars, can be attacked remotely, following an earlier incident in which hackers remotely took control of a car and crashed it into a ditch [51605].
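The 'ghost' obstacles described above follow directly from how a Lidar ranges: it times the echo of its own pulse and converts the delay to distance, d = c * dt / 2, so an attacker who replays a recorded pulse after a chosen delay makes the sensor compute an arbitrary range. The following toy calculation (a sketch with illustrative values, not taken from the article) makes the effect concrete:

    # Toy time-of-flight calculation: a lidar infers distance from the
    # round-trip delay of its own pulse, d = c * dt / 2.
    C = 299_792_458.0  # speed of light in m/s

    def perceived_distance(echo_delay_s: float) -> float:
        """Distance the lidar infers from an echo arriving echo_delay_s after emission."""
        return C * echo_delay_s / 2

    # Genuine echo from a car 50 m away:
    print(perceived_distance(2 * 50 / C))  # 50.0 m

    # Spoofed echo: replaying a recorded pulse ~67 ns after emission makes
    # the sensor report a 'ghost' obstacle at roughly 10 m.
    print(perceived_distance(67e-9))       # ~10.0 m

Because an unauthenticated sensor cannot distinguish a replayed pulse from a real reflection, any delay the attacker chooses becomes a believable obstacle.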
Preventions 1. Encryption of the Lidar's pulse unit could have prevented the software failure incident described in the article [51605]. 2. Security measures preventing unauthorized access to and manipulation of the self-driving car's systems could also have helped prevent the hack [51605].
Fixes 1. Implement encryption on the Lidar's pulse unit to prevent unauthorized access to and manipulation of sensor data [51605]. 2. Strengthen defences against remote attacks on self-driving cars, for example with robust authentication protocols and intrusion detection systems [51605].
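To make the first fix concrete, here is a minimal sketch of what authenticating the pulse unit's emissions could look like: the unit tags each outgoing pulse and discards echoes it cannot match to a pulse it actually emitted. The scheme, names, and tag size are assumptions for illustration, not details from the article:

    # Hypothetical pulse-authentication sketch: echoes must carry a tag the
    # unit itself generated, so replayed or forged pulses are rejected.
    import hashlib
    import hmac
    import os
    import secrets

    KEY = os.urandom(32)  # per-unit secret key (assumed keying scheme)

    def emit_pulse(pulse_id: int) -> bytes:
        """Return the 4-byte tag modulated onto the outgoing pulse."""
        return hmac.new(KEY, pulse_id.to_bytes(8, "big"), hashlib.sha256).digest()[:4]

    def accept_echo(pulse_id: int, tag: bytes) -> bool:
        """Accept an echo only if its tag matches a pulse this unit emitted."""
        return hmac.compare_digest(tag, emit_pulse(pulse_id))

    pid = secrets.randbits(32)
    tag = emit_pulse(pid)
    assert accept_echo(pid, tag)              # genuine echo passes
    assert not accept_echo(pid, b"\x00" * 4)  # forged or replayed echo is rejected

A real pulse unit would have to encode such a tag optically and tolerate channel noise; the sketch only illustrates why an unauthenticated echo gives this attack its power.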
References 1. Security researcher Jonathan Petit [51605] 2. University of Cork [51605] 3. Black Hat conference in Amsterdam [51605] 4. IEEE Spectrum [51605]

Software Taxonomy of Faults

Category Option Rationale
Recurring one_organization, multiple_organization (a) Vehicle-hacking incidents of this kind have recurred with the same class of products: security researcher Jonathan Petit demonstrated how easily self-driving cars can be hacked by tricking their sensors with a laser pointer [51605]. (b) Similar incidents have also occurred at other organizations and with other products: the article mentions a previous incident in which hackers remotely broke into a Jeep Cherokee's systems from a distance and crashed the car [51605], indicating that comparable vulnerabilities exist across different vehicle systems.
Phase (Design/Operation) design, operation (a) The design-phase contribution is the vulnerability built into the cars' remote sensing technology: security researcher Jonathan Petit demonstrated how easily the sensors can be tricked using basic, off-the-shelf equipment such as a laser pointer and a computer board [51605]. (b) The operation-phase contribution is the sensors' exposure during driving: the cameras and sensors on self-driving cars can be blinded by high-brightness infrared LEDs or lasers, leading the cars to take evasive action against non-existent obstacles or to stop if they cannot manoeuvre around them [51605].
Boundary (Internal/External) within_system, outside_system (a) within_system: The core weakness lay within the system: the Lidar's pulse unit lacked encryption, so spoofed images of 'ghost' cars and obstacles beamed at the sensors were accepted as genuine [51605]. (b) outside_system: Contributing factors also originated outside the system: the attack was mounted externally with a laser pointer and a basic computer board, and the sensors can likewise be blinded by high-brightness infrared LEDs or lasers directed at the car from outside [51605].
Nature (Human/Non-human) non-human_actions (a) The software failure incident occurring due to non-human actions: the failure manifested as the sensors accepting spurious laser pulses as genuine echoes, with no participation by any human operating the vehicle [51605]. (b) The article does not attribute the failure to actions of humans operating the system, although the spoofed inputs were deliberately generated by a security researcher as a demonstration [51605].
Dimension (Hardware/Software) hardware, software (a) The hardware dimension: the attack was mounted with hardware devices, using a simple laser pointer and a basic computer board to beam images of 'ghost' cars and obstacles to the sensors from up to 330ft away [51605]. The Lidar sensors themselves are hardware components that build 3D maps by bouncing a laser beam off obstacles [51605]. (b) The software dimension: the failure lay in the software's handling of sensor input; the Lidar's pulse unit is unencrypted, so spoofed pulses are processed as genuine sensor data [51605]. The proof-of-concept showed how exploiting this could make the cars swerve, stop randomly, or become immobilized [51605].
Objective (Malicious/Non-malicious) malicious (a) The failure scenario is malicious in nature: the attack was deliberately crafted to subvert the cars' sensing and could make the cars swerve, stop randomly, or be immobilized completely, posing a significant threat to the safety and functionality of the vehicles [51605]. It was, however, carried out by a security researcher as a proof of concept rather than by a hostile actor.
Intent (Poor/Accidental Decisions) unknown (a) The incident does not map cleanly onto poor or accidental decisions: it stemmed from deliberate security research. Jonathan Petit intentionally developed a proof-of-concept attack, using a laser pointer and a basic computer board, to show how easily the remote sensing technology on self-driving cars can be fooled into making the cars swerve, stop randomly, or become immobilized [51605]. The incident was a deliberate attempt to highlight security vulnerabilities in autonomous vehicles rather than the result of poor decisions.
Capability (Incompetence/Accidental) development_incompetence, accidental (a) development_incompetence: the test vehicles were fielded with an unencrypted Lidar pulse unit, leaving them open to being fooled with basic, off-the-shelf equipment such as a laser pointer and a computer board, as Jonathan Petit demonstrated [51605]. (b) accidental: the vulnerability itself appears to have been introduced inadvertently; the developers did not anticipate spoofed laser pulses, and the weakness surfaced only through Petit's research at the University of Cork, whose proof-of-concept showed the sensors being tricked into seeing 'ghost' cars and obstacles from a distance, potentially causing the cars to swerve or stop randomly [51605].
Duration temporary The software failure incident can be categorized as a temporary failure. The attack only succeeds while spoofed pulses are being fired at the sensor, as in Jonathan Petit's laser-pointer and computer-board demonstration [51605], and the underlying weakness is itself expected to be transient: many current test vehicles have yet to add encryption to the Lidar sensor, a gap that can be closed before the cars go on general sale [51605].
Behaviour other (a) crash: The incident does not involve the system losing state and ceasing to perform its intended functions; rather, a security vulnerability lets attackers manipulate the sensors and potentially cause the cars to swerve or stop randomly [51605]. (b) omission: The system does not omit its intended functions at any instance; instead, the sensors are manipulated into deceiving the system into taking incorrect actions [51605]. (c) timing: The failure is not about functions performed too late or too early, but about sensors being misled into triggering inappropriate actions [51605]. (d) value: The incident is not about the system computing incorrect outputs on genuine inputs; it concerns a security vulnerability that could allow unauthorized control of the cars [51605]. (e) byzantine: The system does not behave erratically with inconsistent responses and interactions; the focus is on the security implications of sensor-manipulation attacks [51605]. (f) other: The behaviour is best described as a security-vulnerability exploit enabling unauthorized control of self-driving cars through sensor manipulation, a cybersecurity threat rather than a traditional functional failure [51605].

IoT System Layer

Layer Option Rationale
Perception sensor (a) The failure was at the perception layer of the cyber-physical system, introduced through sensor error: security researcher Jonathan Petit demonstrated that the sensors on self-driving cars can be tricked with a simple laser pointer into perceiving 'ghost' cars and obstacles. The Lidar sensors build a 3D map by bouncing a laser beam off obstacles to determine the distance and shape of objects, and Petit's attack exploited their vulnerability to external manipulation, causing the cars' systems to misperceive their surroundings [51605].
Communication link_level The incident also implicates the communication layer at the link level, since the attack injects false information into the sensing channel itself. Jonathan Petit demonstrated that a simple laser pointer and a basic computer board can trick the sensors into seeing 'ghost' cars and obstacles from up to 330ft (100 metres) away [51605]. The Lidar builds a 3D map by bouncing a laser beam off obstacles so the car can accurately determine the distance and shape of objects; by feeding the sensors forged pulses, the attack could make the cars swerve, stop randomly, or be immobilized completely [51605].
Application TRUE The application layer is also implicated: the software that interprets sensor data accepted the spoofed returns as genuine and acted on them, so the manipulation demonstrated by Jonathan Petit with a laser pointer and a basic computer board could make the cars swerve, stop randomly, or be immobilized completely [51605]. One possible defensive check at this layer is sketched after this table.
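One plausible application-layer defence, offered as an illustration rather than anything described in the article, is to refuse to act on an obstacle reported by the Lidar alone and require corroboration from an independent sensor such as the radar or stereo cameras the cars already carry. The data structures and the 5 m tolerance below are assumptions:

    # Hypothetical corroboration check: a lidar-only obstacle (possibly a
    # spoofed 'ghost') does not trigger braking unless another sensor agrees.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        sensor: str     # "lidar", "radar", or "camera"
        range_m: float  # distance to the reported obstacle

    def corroborated(lidar_hit: Detection, frame: list[Detection],
                     tolerance_m: float = 5.0) -> bool:
        """True if any non-lidar sensor reports an obstacle at a similar range."""
        return any(d.sensor != "lidar"
                   and abs(d.range_m - lidar_hit.range_m) <= tolerance_m
                   for d in frame)

    ghost = Detection("lidar", 10.0)   # spoofed 'ghost' car at 10 m
    frame = [Detection("radar", 48.0), Detection("camera", 51.0)]
    print(corroborated(ghost, frame))  # False: do not brake on lidar alone

Such a check would not stop an attacker able to spoof several sensor types at once, but it raises the cost of the single-sensor attack demonstrated in this incident.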

Other Details

Category Option Rationale
Consequence non-human, theoretical_consequence, other (a) death: The article does not report any loss of life due to the incident [51605]. (b) harm: The article does not report anyone being physically harmed [51605]. (c) basic: The article does not report any impact on people's access to food or shelter [51605]. (d) property: The article does not report any impact on people's material goods, money, or data [51605]. (e) delay: The article does not report anyone having to postpone an activity [51605]. (f) non-human: The incident impacted non-human entities: self-driving cars were shown to be vulnerable to hacking and manipulation [51605]. (g) no_consequence: Not applicable; consequences were observed, chiefly the demonstrated vulnerability of self-driving cars to hacking attacks [51605]. (h) theoretical_consequence: The article discusses potential consequences, such as remote attacks manipulating the cars' sensors and thereby causing accidents or disruptions of autonomous driving [51605]. (i) other: If exploited by malicious actors, the vulnerability could create serious safety risks on the roads and undermine trust in and adoption of self-driving technology [51605].
Domain transportation (a) The failed system belongs to the transportation industry: self-driving cars. The incident shows how easily the cars' sensors can be hacked, potentially causing them to swerve, stop randomly, or be immobilized completely [51605]. Self-driving cars being tested by companies such as Google navigate and detect obstacles using Lidar sensors, radar remote sensing technology, GPS, and stereo cameras [51605]. (b) The transportation industry is directly impacted, since the incident concerns security vulnerabilities in self-driving cars, a major innovation in the sector [51605]. (c)-(l) The incident does not directly relate to the other listed industries (extraction of materials, sales, construction, manufacturing, utilities, finance, knowledge, health, entertainment, or government). (m) The failed system relates to the development and testing of autonomous vehicles, which falls under the broader automotive industry.

Sources
