Mitigating the Cybersecurity Risks of Autonomous Vehicles

A new report by the European Union Agency for Cybersecurity (ENISA) and the Joint Research Centre (JRC) looks at cybersecurity risks connected to Artificial Intelligence (AI) in autonomous vehicles and provides recommendations for mitigating them.

By removing the most common cause of traffic accidents – the human driver – autonomous vehicles are expected to reduce traffic accidents and fatalities. However, they may pose a completely different type of risk to drivers, passengers and pedestrians.

Autonomous vehicles use artificial intelligence systems, which employ machine-learning techniques to collect, analyze and transfer data, to make decisions that, in conventional cars, are made by humans. These systems, like all IT systems, are vulnerable to attacks that could compromise the proper functioning of the vehicle.

The AI systems of an autonomous vehicle work continuously to recognize traffic signs and road markings, detect other vehicles, estimate their speed, and plan the path ahead. Apart from unintentional threats, such as sudden malfunctions, these systems are vulnerable to intentional attacks that aim specifically to interfere with the AI system and disrupt safety-critical functions.
Painting markings on the road to mislead navigation, or placing stickers on a stop sign to prevent its recognition, are examples of such attacks. These alterations can lead the AI system to misclassify objects, and subsequently cause the autonomous vehicle to behave in a way that could be dangerous.
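The mechanism behind such attacks can be illustrated with a toy adversarial-perturbation sketch in the spirit of the fast gradient sign method (FGSM). Everything below is hypothetical: the linear "stop sign" detector, its weights, and the feature values stand in for a real perception model, which the report does not specify. The point is only that many tiny, coordinated changes to input features can flip a classifier's decision.

```python
# Toy adversarial perturbation against a hypothetical linear "stop sign"
# detector (FGSM-style). Not code from the ENISA/JRC report.
import numpy as np

n = 256                      # pretend these are pixel features of a sign image
w = np.ones(n)               # hypothetical trained weights
b = -2.0                     # hypothetical bias

def is_stop_sign(x):
    """True if the linear detector labels feature vector x as a stop sign."""
    return float(x @ w + b) > 0.0

x = np.full(n, 0.02)         # a clean image: score = 256*0.02 - 2.0 = 3.12 > 0
print(is_stop_sign(x))       # True

# FGSM step: nudge each feature slightly against the score gradient (= w here).
epsilon = 0.02               # a tiny per-feature change, like a faint sticker
x_adv = x - epsilon * np.sign(w)

print(is_stop_sign(x_adv))   # False: 256 tiny changes add up to a large drop
```

Each individual change is small, but because the detector sums evidence across all features, the perturbations accumulate and flip the label, which is why physically small alterations like stickers can defeat sign recognition.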

To improve AI security in autonomous vehicles, the report makes several recommendations, one of which is that security assessments of AI components be performed regularly throughout their lifecycle. This systematic validation of AI models and data is essential to ensure that the vehicle always behaves correctly when faced with unexpected situations or malicious attacks.

Another recommendation is that continuous risk assessment processes, supported by threat intelligence, could enable the identification of potential AI risks and emerging threats related to the uptake of AI in autonomous driving. The report's authors say that proper AI security policies and an AI security culture should govern the entire automotive supply chain.

They add that the automotive industry should embrace a security-by-design approach for the development and deployment of AI functionalities, where cybersecurity becomes a central element of digital design from the beginning. Finally, the report recommends that the automotive sector increase its level of preparedness and strengthen its incident response capabilities to handle emerging cybersecurity issues connected to AI.

Read the full report at ENISA


The Government Technology & Services Coalition's Homeland Security Today (HSToday) is the premier news and information resource for the homeland security community, dedicated to elevating the discussions and insights that can support a safe and secure nation. A non-profit magazine and media platform, HSToday provides readers with the whole story, placing facts and comments in context to inform debate and drive realistic solutions to some of the nation’s most vexing security challenges.
