CYBERSECURITY
Why Do Self-Driving Cars Crash?
As they traverse the air, land, or sea, encountering one another or other obstacles, these autonomous vehicles will need to talk to each other. Experts say we need to inject cybersecurity at every level of the autonomous vehicle networks of the future.
Whether they are built by billionaires plagued by social media addictions or by long-standing corporations of the traditional automotive industry, self-driving vehicles are the future of moving people and goods.
As they traverse the air, land, or sea, encountering one another or other obstacles, these autonomous vehicles will need to talk to each other. And even cars with actual drivers could benefit from the ability to work together to avoid calamity and ensure efficiency. Temporary, ad hoc networks will form among sets of vehicles, whose onboard sensors and communications technology will let them interact, "see" one another, and react accordingly.
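At a conceptual level, that kind of cooperation amounts to each vehicle broadcasting its own state and fusing the reports it hears from others. The Python sketch below is purely illustrative and is not drawn from the project; the StateBeacon message fields and the constant-velocity gap check are assumptions chosen to make the idea concrete.

```python
from dataclasses import dataclass, field

@dataclass
class StateBeacon:
    """Hypothetical V2V broadcast: who I am, where I am, how I am moving."""
    vehicle_id: str
    x: float          # position east (m)
    y: float          # position north (m)
    vx: float         # velocity east (m/s)
    vy: float         # velocity north (m/s)
    timestamp: float  # seconds

@dataclass
class Vehicle:
    own_state: StateBeacon
    neighbors: dict = field(default_factory=dict)  # vehicle_id -> latest beacon

    def receive(self, beacon: StateBeacon) -> None:
        """Keep only the freshest report heard from each neighbor."""
        latest = self.neighbors.get(beacon.vehicle_id)
        if latest is None or beacon.timestamp > latest.timestamp:
            self.neighbors[beacon.vehicle_id] = beacon

    def predicted_gap(self, other_id: str, horizon: float) -> float:
        """Distance to a neighbor `horizon` seconds from now, assuming both
        hold constant velocity; the kind of check a cooperative
        collision-avoidance layer could run on every incoming beacon."""
        b, me = self.neighbors[other_id], self.own_state
        dx = (b.x + b.vx * horizon) - (me.x + me.vx * horizon)
        dy = (b.y + b.vy * horizon) - (me.y + me.vy * horizon)
        return (dx ** 2 + dy ** 2) ** 0.5

# Toy usage: two cars exchange beacons, then check their projected separation.
a = Vehicle(StateBeacon("car_a", 0.0, 0.0, 15.0, 0.0, timestamp=0.0))
a.receive(StateBeacon("car_b", 60.0, 0.0, -10.0, 0.0, timestamp=0.0))
print(round(a.predicted_gap("car_b", horizon=2.0), 1))  # closing to 10.0 m
```

The catch, as the researchers point out below, is that every one of those beacons is also an opportunity for an attacker to inject false information.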
“The current trend toward incorporating powerful sensors and communications to form vehicle networks has tremendous potential,” says Rick Blum, the Robert W. Wieseman Professor of Electrical and Computer Engineering at Lehigh University. “But it also has the potential to create a cybersecurity nightmare, or worse. Developing theory around the impact and mitigation of cyberattacks on networks of autonomous and human-driven vehicles is critical and urgent, and further study is greatly needed.”
Blum intends to drive that study forward through a new three-year, $500,000 grant from the Office of Naval Research entitled “Cybersecurity in Dynamic Multiple Agent Vehicle Networks.” The project has an official start date of August 1, 2022.
“This project hopes to show that incorporating powerful sensors and communications to form vehicle networks can actually provide greatly enhanced cybersecurity if these resources are used properly,” says Blum. “While autonomous vehicles are typically tested for deployment just by driving them, this testing alone will not provide suitable information on cyberattack vulnerability.”
In this effort, Blum and his team will develop theory and algorithms for “near optimum low complexity cyberattack mitigation” on sensor-equipped networks of autonomous and human-driven vehicles.
“These algorithms will employ engineering models coupled with unsupervised and supervised machine learning and incorporate all relevant information,” he continues. “The incorporation of engineering models will allow the overall process to be interpretable, which is important for trust in such dangerous cyber-physical systems.”
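To make the flavor of that approach concrete, the toy sketch below pairs a simple engineering model (constant-velocity kinematics) with an unsupervised thresholding step calibrated on attack-free data, and flags position reports that stray too far from the physics prediction. This is an illustrative assumption of ours, not the project's actual algorithm; all function names and parameters are hypothetical.

```python
import numpy as np

def predict_position(prev_pos, velocity, dt):
    """Engineering model: constant-velocity kinematics over one time step."""
    return prev_pos + velocity * dt

def calibrate_threshold(residuals, k=3.0):
    """Unsupervised step: learn a residual threshold from attack-free data
    (mean plus k standard deviations)."""
    return residuals.mean() + k * residuals.std()

def flag_reports(positions, velocities, dt, threshold):
    """Flag any report whose deviation from the physics prediction exceeds
    the learned threshold. The residual itself explains why a report was
    flagged, which is what makes the check interpretable."""
    flags = [False]
    for t in range(1, len(positions)):
        expected = predict_position(positions[t - 1], velocities[t - 1], dt)
        residual = np.linalg.norm(positions[t] - expected)
        flags.append(residual > threshold)
    return flags

# Toy usage: a vehicle moving east at 10 m/s, with one spoofed position report.
rng = np.random.default_rng(0)
dt = 0.1
true_pos = np.array([[10.0 * dt * t, 0.0] for t in range(50)])
vel = np.tile([10.0, 0.0], (50, 1))
clean = true_pos + rng.normal(scale=0.05, size=true_pos.shape)

# Calibrate the detector on attack-free residuals.
resid = np.array([np.linalg.norm(clean[t] - predict_position(clean[t - 1], vel[t - 1], dt))
                  for t in range(1, 50)])
thr = calibrate_threshold(resid)

# Inject a spoofed report and detect it (the step after it also looks wrong).
attacked = clean.copy()
attacked[30] += np.array([5.0, 5.0])   # attacker shifts one report by ~7 m
print([t for t, bad in enumerate(flag_reports(attacked, vel, dt, thr)) if bad])  # -> [30, 31]
```

In this toy version, the physics model supplies the explanation (the size of the deviation from predicted motion) while the data supply the decision boundary, which is one simple way an engineering model and machine learning can be combined without sacrificing interpretability.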