Four Fallacies of AI Cybersecurity
As with many emerging technologies, the cybersecurity of AI systems has largely been treated as an afterthought. The lack of attention to this topic, coupled with the growing realization of both the potential and the perils of AI, has opened the door to various AI cybersecurity models, many of which have emerged from outside the cybersecurity community. Absent active engagement, the AI community now stands to relearn many of the lessons that software and security engineering have accumulated over decades.
To date, the majority of AI cybersecurity efforts do not reflect the accumulated knowledge and modern approaches within cybersecurity; instead, they tend toward concepts that have been demonstrated time and again not to support desired cybersecurity outcomes. I’ll use the term “fallacies” to describe four such categories of thought:
Cybersecurity is linear. The history of cybersecurity is littered with attempts to define standards of action. From the Orange Book to the Common Criteria, pre-2010s security literature was dominated by attempts to define cybersecurity as an ever-increasing set of steps intended to counter an ever-increasing cyber threat. It never really worked. Setting compliance as the goal breeds complacency and undermines responsibility.
Starting in the 2010s with the NIST Risk Management Framework (RMF), the cybersecurity community came to realize that linear levels of increasing security were damaging to the goals of cybersecurity. Accepting that cybersecurity is not absolute and must be placed in context shifted the dialogue away from level-based accreditation and toward threat-based reasoning, in the same way that organizations handle many other types of risk.
In addition to bypassing some of the stickier issues that come with level-based evaluations (What if I do everything for a given level except one element? What if an element of this level doesn’t exist in my organization? What if I do them all, but poorly?), the risk-based worldview recognizes that the presence of a thinking adversary and an ever-evolving technological landscape produce a shifting environment that defies labels and levels. As new tactics and technologies emerge, so too do new dynamics, as both defense and offense optimize around these lines in the sand.