CYBERSECURITY

Robustly Detecting Sneaky Cyberattacks That Might Throw AI Spacecraft Off-Course

By Shamim Quadir

Published 20 September 2025

Cyberattacks on future, AI-guided spacecraft could be thwarted by unpicking what the AI has been “thinking.”

A recent study suggests that cyberattacks on future, AI-guided spacecraft could be thwarted by unpicking what the AI has been “thinking.”

Such “adversarial AI” attacks could lead spacecraft off course, which, if not corrected, could spell disaster, including craft crashing into one another and, in the worst case, the tragic loss of human life.

The study from the European Space Agency and City St George’s, University of London was led by Professor Nabil Aouf, Director of the University’s Autonomy of Systems Research Center. It investigated how a type of AI guidance system, which works from the images captured by a craft’s on-board photographic cameras, might be protected from such attacks. The paper is published in Advances in Space Research.

AI guidance systems hold many potential benefits over less autonomous, more human-guided systems, including the ability to more accurately predict, on the fly, where two craft are relative to one another in space.

They are also less vulnerable than people are to some visual distractions, such as poor lighting conditions, an awkward camera viewpoint, or a cluttered image, but they remain vulnerable to subtler distortions that do not affect humans at all.

Modeling the Attack

In the study, the researchers explored an attack that takes the form of very subtle, visually imperceptible changes made to a true camera image before it is processed by an AI guidance system; in this case, one the researchers designed using a type of deep learning AI called a Convolutional Neural Network (CNN).

A neural network is an artificial, mathematical model which mimics the nerve cells (neurons) in the brain that send signals to one another to perform computations.

In the virtual lab, such sneaky changes have been known to cause CNN systems to make significant errors in computing the positions and rotations of two moving objects relative to one another.

To achieve their goal, the researchers first used 3D modeling software to virtually train and test their CNN system. They used it to simulate the view from a camera mounted on a chaser spacecraft taking 13 possible trajectories to dock with NASA’s Jason-1 satellite from a distance of 60 meters.

Using a method called the Fast Gradient Sign Method (FGSM), they applied these subtle adversarial visual changes to some of the virtual images of Jason-1 that the chaser craft would see in the 3D-generated system.
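For readers curious about the mechanics, here is a minimal sketch of how an FGSM perturbation is typically computed in PyTorch. The function name, the epsilon value, and the loss function are illustrative placeholders, not the study’s actual configuration.

```python
import torch

def fgsm_perturb(model, image, target, loss_fn, epsilon=0.01):
    """Nudge each pixel by +/- epsilon in the direction that increases
    the model's loss, producing a perturbation that is typically
    invisible to the human eye."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), target)        # e.g. pose-regression error
    loss.backward()                             # gradient of the loss w.r.t. the pixels
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()   # keep pixels in a valid range
```

Because epsilon is small, the perturbed frame looks identical to the original, yet the gradient-aligned noise can push a CNN’s pose estimate well off target.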

Unpacking What the AI Is “Thinking”

The researchers passed a total of 32,500 virtually generated images through their CNN, and then used a mathematical modeling technique called SHAP (SHapley Additive exPlanations) on a stage of computation that comes just before the AI delivers its prediction of the position and rotation in space of the chaser craft relative to Jason-1.

SHAP models different ways of decomposing what the AI is likely to have “thought” about each image it has examined, and quantifies how important each of those components was to its decision-making.
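As a rough illustration of that step, the sketch below assumes a PyTorch model and the open-source shap library; the tiny stand-in network, tensor shapes, and variable names are hypothetical and are not the architecture or data used in the study.

```python
import shap
import torch
import torch.nn as nn

# Hypothetical stand-in for a pose-estimation CNN: maps a small camera
# frame to a 7-value pose (3D position plus a quaternion rotation).
pose_cnn = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=4), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 15 * 15, 7),
)

background = torch.randn(20, 3, 64, 64)   # reference frames for the explainer
frames = torch.randn(4, 3, 64, 64)        # frames to be explained

explainer = shap.DeepExplainer(pose_cnn, background)
shap_values = explainer.shap_values(frames)
# shap_values attributes each pose output back to the input pixels,
# showing which parts of the frame drove the prediction.
```

The idea, as described above, is that these attributions become the features a downstream detector inspects, with attacked images expected to produce attribution patterns that differ from clean ones.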

By feeding the SHAP components for each image into a type of neural network that can remember and compare successive images over time, known as a long short-term memory (LSTM) network, the researchers were able to spot the contaminated images from attacks with an average accuracy of 99.2%.
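A minimal sketch of such a detector, again in PyTorch, might look like the class below; the class name, the feature and hidden dimensions, and the two-way clean-versus-attacked output are assumptions for illustration rather than the study’s design.

```python
import torch
import torch.nn as nn

class ShapSequenceDetector(nn.Module):
    """Classify a sequence of per-image SHAP feature vectors as clean or attacked."""

    def __init__(self, feature_dim=128, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)    # clean vs. adversarial logits

    def forward(self, shap_sequence):
        # shap_sequence: (batch, time_steps, feature_dim)
        _, (hidden, _) = self.lstm(shap_sequence)
        return self.head(hidden[-1])            # decision from the final time step
```

Feeding the detector a running window of SHAP vectors lets it compare successive frames over time, which is the role the article attributes to the LSTM.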

Refining the Model with Real-World Data

But they didn’t stop there. The researchers further tested their system in a second version of the experiment that used a real-life, scaled model they created of the Jason-1 satellite. This provided higher-resolution, more realistic images of what a camera on board a spacecraft would actually see.

While their system detected attacks with a reduced accuracy of 96.3% using this approach, the study authors still find the result extremely promising for practical application.

Reflecting on the study, lead author Dr. Ziwei Wang, Research Fellow at City St George’s, University of London, said, “Identifying adversarial attacks is beneficial; however, the primary objective is to alleviate their impact on the Guidance, Navigation and Control (GNC) systems of autonomous spacecraft.

The team at City St George’s is enhancing our efforts to strengthen the deep learning models we have created by employing robust optimization theory to mitigate the effects of these harmful sensory adversarial attacks, thereby ensuring safe operational capabilities for spacecraft.”

Shamim Quadir is the Senior Communications Officer for the School of Science & Technology and The City Law School, City St George’s, University of London. The article was originally posted to the website of City St George’s, University of London.