De-Risking Authoritarian AI

The report identifies three categories of risk from Chinese AI-enabled technology:

·  products and services (often physical infrastructure), where PRC ownership exposes democracies to risks of espionage (notably surveillance and data theft) and sabotage (especially disruption and denial of products and services)

·  technology that facilitates foreign interference (malign covert influence on behalf of a foreign power), the most pervasive example being TikTok

·  ‘large language model’ AI and other emerging generative AI systems, a future threat that we need to start thinking about now.

The report focuses on the first category and looks at TikTok through the prism of the espionage and sabotage risks posed by such apps.

The underlying dynamic with Chinese AI-enabled products and services is the same as that which prompted concern over Chinese 5G vendors: the PRC government has the capability to compel its companies to follow its directions, it has the opportunity afforded by the presence of Chinese AI-enabled products and services in our digital ecosystems, and it has demonstrated malign intent towards the democracies.

But this is a more subtle and complex problem than deciding whether to ban Chinese companies from participating in 5G networks. Telecommunications networks are the nervous systems that run down the spine of our digital ecosystems; they’re strategic points of vulnerability for all digital technologies. Protecting them from foreign intelligence agencies is a no-brainer and worth the economic and political costs. And those costs are bounded because 5G is a small group of easily identifiable technologies.

In contrast, AI is a constellation of technologies and techniques embedded in thousands of applications, products and services. So the task is to identify where on the spectrum between national-security threat and moral panic each of these products sits, and then pick the fights that really matter.

A prohibition on all Chinese AI-enabled technology would be extremely costly and disruptive. Many businesses and researchers in democracies want to continue collaborating on Chinese AI-enabled products because it helps them to innovate, build better products, offer cheaper services and publish scientific breakthroughs. The policy goal is to take prudent steps to protect our digital ecosystems, not to economically decouple from China.

What’s needed is a three-step framework to identify, triage and manage the riskiest products and services. The intent is similar to that proposed in the recently introduced draft US RESTRICT Act, which seeks to identify and mitigate foreign threats to information and communications technology products and services—although the focus here is on teasing out the most serious threats.

Step 1: Audit. Identify the AI systems whose purpose and functionality concern us most. What’s the potential scale of our exposure to this product or service? How critical is this system to essential services, public health and safety, democratic processes, open markets, freedom of speech and the rule of law? How dependent are we on it, and what redundancy do we have, should it be compromised or unavailable?
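For concreteness, here is a minimal sketch of how the Step 1 questions might be turned into a simple triage rubric. The criteria, weights and red-team threshold below are illustrative assumptions, not drawn from the report; any real audit would set them through expert judgement.

```python
from dataclasses import dataclass

# Illustrative audit criteria and weights. These are assumptions for
# demonstration only; they are not taken from the report.
CRITERIA = {
    "exposure_scale": 3,   # how widely the product or service is deployed
    "criticality": 3,      # essential services, safety, democratic processes
    "dependency": 2,       # how hard it is to operate without the system
    "redundancy_gap": 2,   # lack of substitutes if it is compromised
}

@dataclass
class AuditedSystem:
    name: str
    scores: dict[str, int]  # criterion -> 0 (negligible) .. 5 (severe)

    def risk_score(self) -> int:
        # Weighted sum of the criterion scores.
        return sum(CRITERIA[c] * self.scores.get(c, 0) for c in CRITERIA)

def triage(systems: list[AuditedSystem], red_team_threshold: int = 30):
    """Rank audited systems; flag those warranting Step 2 red-teaming."""
    ranked = sorted(systems, key=lambda s: s.risk_score(), reverse=True)
    return [(s.name, s.risk_score(), s.risk_score() >= red_team_threshold)
            for s in ranked]

if __name__ == "__main__":
    # Hypothetical entries, loosely echoing the article's examples.
    systems = [
        AuditedSystem("port-crane control software",
                      {"exposure_scale": 4, "criticality": 5,
                       "dependency": 4, "redundancy_gap": 4}),
        AuditedSystem("consumer video app",
                      {"exposure_scale": 5, "criticality": 2,
                       "dependency": 1, "redundancy_gap": 1}),
    ]
    for name, score, flag in triage(systems):
        print(f"{name}: score={score}, red-team={'yes' if flag else 'no'}")
```

The point of such a rubric is not precision but consistency: it forces the same questions to be asked of every candidate system, and it makes the threshold for escalating to Step 2 an explicit, contestable choice rather than an ad hoc one.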

Step 2: Red team. Anyone can identify the risk of embedding many PRC-made technologies into sensitive locations, such as government infrastructure, but, in other cases, the level of risk will be unclear. For those instances, you need to set a thief to catch a thief. What could a team of specialists do if they had privileged access to a candidate system identified in Step 1—people with experience in intelligence operations, cybersecurity and perhaps military planning, combined with relevant technical subject-matter experts? This is the real-world test because all intelligence operations cost time and money, and some points of presence in a target ecosystem offer more scalable and effective opportunities than others. PRC-made cameras and drones in sensitive locations are a legitimate concern, but crippling supply chains through accessing ship-to-shore cranes would be devastating.

We know that TikTok data can be accessed by PRC agencies and reportedly can also reveal a user’s location, so it’s obvious that military and government officials shouldn’t use the app. Journalists should think carefully about it too. Beyond that, the merits of a general ban on technical security grounds are a bit murky. Can our red team use the app to jump onto connected mobiles and ICT systems to plant spying malware? What system mitigations could stop them getting access to data on connected systems? If the team revealed serious vulnerabilities that can’t be mitigated, a general ban might be appropriate.

Step 3: Regulate. Decide what to do about a system identified as ‘high risk’. Treatment measures might include prohibiting Chinese AI-enabled technology in some parts of the network, a ban on government procurement or use, or a general prohibition. Short of that, governments could insist on measures to mitigate the identified risk or dilute the risk through redundancy arrangements. And, in many cases, public education efforts along the lines of the new UK National Protective Security Authority may be an appropriate alternative to regulation.

Democracies need to think harder about Chinese AI-enabled technology in our digital ecosystems. But we shouldn’t overreact: our approach to regulation should be anxious but selective.

Simeon Gilding is a senior fellow at ASPI. This article is published courtesy of the Australian Strategic Policy Institute (ASPI).