Can a Cyber Shuffle Stop Hackers from Taking Over a Military Aircraft?

Moving Target Defense Must Keep Cyberattackers Guessing
Like a game of three-card monte, in which a con artist uses sleight of hand to shuffle cards from side to side, a moving target defense requires randomness. Without it, the defense unravels. Researchers wanted to know whether a moving target defense that constantly changes network addresses, the unique numbers assigned to each device on a network, would work. They weren’t sure it would because, compared with other types of networks, MIL-STD-1553’s address space is small and therefore difficult to randomize.

For example, the strategy has proven useful with internet protocols, which have millions or billions of network addresses at their disposal, but 1553 has only 31. In other words, Sandia had to come up with a way to surreptitiously shuffle 31 numbers in a way that couldn’t easily be decoded.
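The article doesn’t describe Sandia’s actual routine, but one general way to make a 31-value shuffle hard to decode is to drive it with a keyed pseudorandom function rather than a bare random-number generator, so that an observer without the key can’t reproduce the sequence. The sketch below is purely illustrative and assumes a shared secret key and an epoch counter:

```python
import hmac, hashlib

# Hypothetical sketch only: derive a fresh permutation of MIL-STD-1553's
# 31 addresses (0-30) from a shared secret key and a per-epoch counter,
# so both ends of the bus can compute the same shuffle independently.
def address_permutation(key: bytes, epoch: int) -> list[int]:
    addresses = list(range(31))
    # Keyed pseudorandom bytes for this epoch (HMAC-SHA256 used as a PRF).
    seed = hmac.new(key, epoch.to_bytes(8, "big"), hashlib.sha256).digest()
    # Fisher-Yates shuffle driven by the keyed bytes (illustrative;
    # a production design would also guard against modulo bias).
    stream = int.from_bytes(seed, "big")
    for i in range(30, 0, -1):
        stream, j = divmod(stream, i + 1)
        addresses[i], addresses[j] = addresses[j], addresses[i]
    return addresses

new_map = address_permutation(b"shared-secret", epoch=7)
print(sorted(new_map) == list(range(31)))  # still a permutation of 0-30
```

Because the shuffle depends on a key, guessing the next arrangement requires breaking the keyed function rather than just spotting a pattern in 31 numbers.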

“Someone looked me in the face and said it’s not possible because it was just 31 addresses,” Jenkins said. “And because the number is so small compared to millions or billions or trillions, people just felt like it wasn’t enough randomness.”

The challenge with randomizing a small set of numbers is that “nothing in computer software is truly random. It’s always pseudorandom,” said Sandia computer scientist Indu Manickam. Everything must be programmed, she said, so there’s always a hidden pattern that can be discovered.

With enough time and data, she said, “A human with an Excel sheet should be able to get it.”
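To see why a naive routine leaks its pattern, consider this toy example (not any routine from the study): an address hopper built on a simple linear step. A few logged observations are enough to recover the hidden rule by brute force, exactly the kind of attack a patient human could run in a spreadsheet:

```python
# Illustrative only: a naive address hopper and how few observations
# it takes to expose its hidden rule.
def naive_next_address(addr: int) -> int:
    return (17 * addr + 5) % 31  # the hidden "random" rule

# An observer logs three consecutive addresses...
a0 = 9
a1 = naive_next_address(a0)
a2 = naive_next_address(a1)

# ...and solves for the multiplier and increment mod 31 by brute force
# (31 * 31 = 961 candidates, trivial to check by hand or in a sheet).
recovered = [(m, c) for m in range(31) for c in range(31)
             if (m * a0 + c) % 31 == a1 and (m * a1 + c) % 31 == a2]
print(recovered)  # [(17, 5)] -- the hidden rule, fully recovered
```

With the rule recovered, the observer can predict every future address, which is why the randomization routine itself has to resist this kind of inference.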

Manickam is an expert in machine learning, or computer algorithms that identify and predict patterns. These algorithms, though beneficial to cybersecurity and many other fields of research and engineering, pose a threat to moving target defenses because they can potentially spot the pattern to a randomization routine much faster than a human.

“We’re using machine-learning techniques to better defend our systems,” Vugrin said. “We also know the bad guys are using machine learning to attack the systems. And so, one of the things that Chris identified early on was that we do not want to set up a moving target defense where somebody might use a machine-learning attack to break it and render the defense worthless.”

Sophisticated algorithms don’t necessarily spell the end for this type of cyberdefense. Cybersecurity designers can simply write a program that changes the randomization pattern before a machine can catch on.

But the Sandia team needed to know how fast machine learning could break their defense. So, they partnered with Bharat Bhargava, a professor of computer science at Purdue University, to test it. Bhargava and his team had previously researched aspects of moving target defenses.

For the last seven years, Bhargava said, the research fields of cybersecurity and machine learning have been colliding. And that’s been reshaping concepts in cybersecurity.

“What we want to do is learn how to defend against an attacker who is also learning,” Bhargava said.

Test Results Inform Future Improvements to Cybersecurity
Jenkins and the Sandia team set up two devices to communicate back and forth on a 1553 network. Occasionally, one device would slip in a coded message that would change both devices’ network addresses. Jenkins sent Bhargava’s research team logs of these communications using different randomization routines. Using this data, the Purdue team trained a type of machine-learning algorithm called long short-term memory to predict the next set of addresses.
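The article doesn’t detail the Purdue team’s exact setup, but a standard way to frame hop logs for a next-address predictor such as an LSTM is sliding-window sequence prediction: each training example pairs a short window of past addresses with the address that came next. A minimal sketch of that framing, with made-up log values:

```python
# Hypothetical sketch of framing address-hop logs for sequence
# prediction (window values and log contents are invented here).
def make_sequences(address_log, window=5):
    examples = []
    for i in range(len(address_log) - window):
        history = address_log[i : i + window]  # model input: recent hops
        target = address_log[i + window]       # label: the next address
        examples.append((history, target))
    return examples

log = [3, 18, 7, 22, 1, 30, 14, 9]
pairs = make_sequences(log, window=5)
print(pairs[0])  # ([3, 18, 7, 22, 1], 30)
```

A model trained on enough of these pairs scores well exactly when the randomization routine has a learnable pattern, which is what the test was designed to measure.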

The first randomization routine was not very effective.

“We were not only able to just detect the next set of addresses that is going to appear, but the next three addresses,” said Ganapathy Mani, a former member of the Purdue team who contributed to the research.

The algorithm had scored 0.9 out of a perfect 1.0 on what’s called a Matthews correlation coefficient, which rates how well a machine-learning algorithm performs.
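The Matthews correlation coefficient condenses a model’s confusion matrix (true/false positives and negatives) into a single number: +1.0 is perfect prediction, 0.0 is no better than chance, and −1.0 is total disagreement. A minimal sketch of the standard formula, with invented counts rather than the study’s data:

```python
import math

# Matthews correlation coefficient from binary confusion-matrix counts.
# Ranges from -1.0 (total disagreement) through 0.0 (chance) to +1.0.
def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative counts only, not the actual test results.
print(round(mcc(tp=45, tn=45, fp=5, fn=5), 2))  # 0.8
```

On this scale, the first routine’s 0.9 means the algorithm was predicting hops almost perfectly.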

But the second set of logs, which used a more dynamic routine, resulted in a radically different story. The algorithm only scored 0.2.

“0.2 is pretty close to random, so it didn’t really learn anything,” Manickam said.

The test showed that moving target defense can fundamentally work, but, more importantly, it gave both teams insights into how cybersecurity engineers should design these defenses to withstand a machine-learning-based assault, a concept the researchers call threat-informed codesign.

Defenders, for example, could “add fake data into it so that the attackers cannot learn from it,” Mani said.
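One simple version of that idea (sketched here as an assumption, not the researchers’ method) is to interleave decoy hop records into whatever log an attacker might capture, so the sequences they train on no longer reflect the real routine:

```python
import random

# Hypothetical sketch: mix decoy address hops into a captured log so
# an attacker's training data is polluted. Values are illustrative.
def poison_log(real_hops, decoy_rate=0.5, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    noisy = []
    for hop in real_hops:
        noisy.append(hop)
        if rng.random() < decoy_rate:
            noisy.append(rng.randrange(31))  # fake 1553 address
    return noisy

noisy = poison_log([3, 18, 7, 22, 1])
print(len(noisy) >= 5)  # real hops preserved, decoys mixed in
```

The legitimate devices, knowing which records are decoys, are unaffected, while a model trained on the polluted log learns a pattern that doesn’t exist.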

The findings could help improve the security of other small, cyber-physical networks beyond MIL-STD-1553, such as those used in critical infrastructure.

Jenkins said, “Being able to do this work for me, personally, was somewhat satisfying because it showed that given the right type of technology and innovation, you can take a constrained problem and still apply moving target defense to it.”