A new report released by the European Union Agency for Cybersecurity (ENISA) states that autonomous vehicles – which rely on artificial intelligence to guide the car without a driver – are “highly vulnerable to a wide range of attacks” that could endanger passengers, pedestrians and people in other vehicles. The report focuses on cyberattacks that are undetectable to humans, including attacks on sensors using beams of light, overwhelming of object-detection systems and malicious back-end activity.
The scenarios presented in the report include possible attacks on decision-making algorithms and spoofing attacks, which can trick the autonomous vehicle into “recognizing” cars, people or obstacles that do not exist. “The attack could be used to make the AI ‘blind’ to pedestrians. This could lead to chaos on the streets, as autonomous cars may hit pedestrians on the road or on pedestrian crossings,” the report says.
The AI systems and sensors needed to make the vehicle run enlarge the attack surface available to hackers. The report's authors argue that, to address these vulnerabilities, manufacturers and their third-party suppliers must develop a security culture across the entire production chain. In addition, cars will need continuous review of their systems to ensure they have not been tampered with.
Studies on possible attacks against autonomous cars are not new. In 2015, researchers from the Universities of Washington and Michigan carried out a proof-of-concept attack to take control of a smart vehicle and drive it off the road. Something similar happened in 2019, when Tencent's cybersecurity team used stickers to make Tesla's Autopilot swerve into the wrong lane. Last year, researchers tricked an autonomous vehicle system into accelerating from 56 km/h to 136 km/h in a matter of seconds, simply by strategically placing a few pieces of tape on a speed limit sign.