Who's Responsible When Your Car Gets Hacked?

Anderson’s new study looks at how courts might assign blame if a hacker taps into an autonomous vehicle and causes trouble. That could be the nightmare scenario: a hacker cutting the brakes, commandeering the wheel, and steering the car into a collision. But it could also be a hacker swiping personal information from a car’s driver logs, or threatening to disable a car’s electronics unless its owner pays a ransom.

If the hacker gets away, who else might expect a lawsuit?

Legal Precedents
Fewer than half the states have any laws on the books governing autonomous vehicles. And the technology is too new for courts to have much experience with it. But that doesn’t mean these questions haven’t come up before.

In 1947, for example, an interior decorator stepped out to buy some wallpaper and forgot to lock the door of a house he was working on. A thief snuck in and stole a diamond bracelet. The court found the decorator liable for the loss, in a case that is still taught in law schools today. In modern terms, he had created a vulnerability that the thief was able to exploit.

It’s not a perfect precedent, of course. For one thing, locking down the millions of lines of code in an autonomous vehicle will not be as simple as turning a deadbolt. But it gives some idea of the legal thinking a court might apply in a claim arising from a hacked vehicle. Could someone have foreseen the problem and taken reasonable steps to fix it? The tougher question might be: Who’s the someone?

If it’s your car, that someone might be you. In a future of autonomous vehicles, software updates might be as routine a part of car care as oil changes. Miss one, and you might have just left the door unlocked.

“Think about the car of the future as, essentially, a laptop with an engine, wheels, and windshield wipers,” said Nahom Beyene, an engineer at RAND and coauthor of the study. “It’s going to be continually redesigned, revised, and updated. It opens up a whole new dimension of vulnerability when the final product is almost to-be-determined.”

Local governments could also face claims. Most visions of autonomous vehicles imagine them communicating in real time with their surroundings: the streets and traffic signals. If a hacker can exploit that connected infrastructure, government officials might have to explain to a court how it happened.

Any lawsuit involving a car will almost certainly name the car maker and the software provider as well. For them, one challenge will be staying on top of any potential vulnerabilities as they arise, possibly even years after the car comes off the assembly line. Courts have come down hard in negligence and product-liability cases when a manufacturer knew—or should have known—of a potentially dangerous defect.

Several years ago, for example, the Supreme Court of Alabama ordered General Motors to pay $15 million in punitive damages to the family of a young boy killed in a crash. The boy had been riding in a new pickup truck that stalled just as it drove into an intersection. A logging truck coming from the side couldn’t stop in time to avoid it. The court found that a defective computer chip had killed the engine.

Existing laws and legal precedents like that should be enough to address most claims arising from hacked vehicles, the researchers concluded. “The legal system has been coping with new technologies for many, many, many years,” Anderson said. “Everything doesn’t just come crashing to a halt any time there’s a new technology.”

But there is one scenario that policymakers might want to consider. Call it the Rhode Island exception.

Tesla founder Elon Musk once mused that—in principle—hackers could someday tunnel into an entire fleet of connected cars and route every one of them to Rhode Island. The damage caused by such a fleet-wide hack might be so large that no single insurance policy or class-action lawsuit could cover it. In a case like that, the researchers wrote, policymakers might want to have a legal backstop to cover the flood of claims, much like the one established after the 9/11 attacks.

“We have no way of knowing the probability of hackers exploiting autonomous vehicles,” Anderson said. “I’ll make the claim that it’s not zero. That’s about as strong a claim as I’m willing to make. Hopefully this will help advance the conversation about these issues, to bring that risk closer to zero.”

If all else fails, the owners of hacked vehicles might have one other line of recourse. Every state has a law requiring manufacturers to replace any car shown to have such a serious defect that it can’t be fixed. For all their high technology, autonomous vehicles will still be subject to those lemon laws.

Doug Irving is a communication analyst at RAND. This article is published courtesy of RAND.