Aiding Evaluation of Adversarial AI Defenses

Published 4 January 2022

An evaluation testbed, datasets, and tools developed under the GARD program have been released to jump-start the community and encourage the creation of more robust defenses against attacks on ML models.

There are many inherent weaknesses that underlie existing machine learning (ML) models, opening the technology up to spoofing, corruption, and other forms of deception. Attacks on AI algorithms could result in a range of negative effects – from altering a content recommendation engine to disrupting the operation of a self-driving vehicle. As ML models become increasingly integrated into critical infrastructure and systems, these vulnerabilities become ever more worrisome. DARPA’s Guaranteeing AI Robustness against Deception (GARD) program is focused on getting ahead of this safety challenge by developing a new generation of defenses against adversarial attacks on ML models.
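To make the threat concrete, the sketch below crafts an adversarial input with the fast gradient sign method (FGSM), a classic attack from the adversarial ML literature. The model, inputs, and perturbation budget here are illustrative placeholders, not anything specific to GARD; this is a minimal sketch of the general technique, assuming an image classifier with inputs scaled to [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an adversarial example with the fast gradient sign method:
    take one small step on the input in the direction that most
    increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The perturbation is bounded by eps per pixel and is often
    # imperceptible to a human observer.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```

Even with eps set to only a few intensity levels, perturbations of this kind routinely flip the predictions of undefended image classifiers, which is what makes the vulnerabilities described above so concerning in deployed systems.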

GARD’s response to adversarial AI focuses on a few core objectives, one of which is the development of a testbed for characterizing ML defenses and assessing the scope of their applicability. Since the field of adversarial AI is relatively nascent, methods for testing and evaluating potential defenses are few, and those that do exist lack rigor and sophistication. Ensuring that emerging defenses are keeping pace with – or surpassing – the capabilities of known attacks is critical to establishing trust in the technology and ensuring its eventual adoption. To support this objective, GARD researchers developed a number of resources and virtual tools to help bolster the community’s efforts to evaluate and verify the effectiveness of existing and emerging ML models and defenses against adversarial attacks.
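The article does not detail the testbed’s internals, but the core measurement such an evaluation supports can be sketched simply: run a defended model over both clean and attacked inputs and compare the two accuracies. In the hedged sketch below, `evaluate_defense` and `attack_fn` are hypothetical names, not the GARD testbed’s API; any attack, such as the FGSM sketch above, can be plugged in.

```python
import torch

def evaluate_defense(model, loader, attack_fn):
    """Report clean accuracy and accuracy under attack ("robust accuracy").

    attack_fn(model, x, y) must return perturbed versions of the inputs x.
    """
    model.eval()
    clean_hits = robust_hits = total = 0
    for x, y in loader:
        # Attacks typically need gradients with respect to the input, so
        # adversarial examples are generated outside torch.no_grad().
        x_adv = attack_fn(model, x, y)
        with torch.no_grad():
            clean_hits += (model(x).argmax(dim=1) == y).sum().item()
            robust_hits += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_hits / total, robust_hits / total

# Example usage with the FGSM sketch above:
# clean_acc, robust_acc = evaluate_defense(
#     model, test_loader, lambda m, x, y: fgsm_example(m, x, y, eps=0.03))
```

A large gap between clean and robust accuracy is the signal such evaluations look for; a rigorous testbed additionally varies the attack, the perturbation budget, and the threat model so that a defense cannot simply be tuned to one known attack.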

“Other technical communities – like cryptography – have embraced transparency and found that if you are open to letting people take a run at things, the technology will improve,” said Bruce Draper, the program manager leading GARD. “With GARD, we are taking a page from cryptography and are striving to create a community to facilitate the open exchange of ideas, tools, and technologies that can help researchers test and evaluate their ML defenses. Our goal is to raise the bar on existing evaluation efforts, bringing more sophistication and maturation to the field.”