U.S. Voluntary AI Code of Conduct and Implications for Military Use

The discussion at the UNSC can be seen as elevating the focus from the shorter-term AI threat of disinformation and propaganda, addressed bilaterally between governments and Big Tech companies, to a larger, global focus on advancements in AI and the need to follow certain common standards: standards that are transparent, respect the privacy of individuals whose data is ‘scraped’ on a massive scale, and ensure robust cybersecurity.

Threat Posed by AI
Lawmakers in the US have been attempting for some time to rein in the exponential developments in the AI field, since little is known about the technology’s real longer-term impact. Reactions to the so-called danger of AI have been polarising, with some even equating AI with the atom bomb and terming the current phase of growth in AI the ‘Oppenheimer moment’,5 after the scientist-philosopher J. Robert Oppenheimer, under whom the Manhattan Project culminated in the testing of the first atomic bomb. That test signalled the start of the first nuclear age, an era of living under the nuclear shadow that persists to this day. The Oppenheimer moment, therefore, is a dividing line between the conventional past and the new present, and presumably the unknown future.

Some academics, activists and even members of the Big Tech community, referred to as ‘AI doomers’, have coined the term P(doom) in an attempt to quantify the risk of a doomsday scenario in which a ‘runaway superintelligence’ causes severe harm to humanity or leads to human extinction.6 Others refer to variations of the ‘Paperclip Maximiser’, in which an AI given a particular task to optimise by humans interprets it as maximising the number of paperclips in the universe and proceeds to expend all the resources of the planet manufacturing only paperclips.7

This thought experiment was used to signify the dangers of two issues with AI: the ‘orthogonality thesis’, which refers to a highly intelligent AI that could interpret human goals in its own way and proceed to accomplish tasks that have no value to humans; and ‘instrumental convergence’, which implies the AI taking control of all matter and energy on the planet, in addition to ensuring that no one can shut it down or alter its goals.8
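The core of the thought experiment can be made concrete in a few lines of code. The toy sketch below is purely illustrative; every name and number in it is an assumption rather than a description of any real system. It shows how an agent that greedily optimises a single mis-specified objective scores perfectly on the stated goal while destroying everything the objective omits:

```python
# Toy illustration of a mis-specified objective (the 'Paperclip Maximiser').
# All names and numbers are illustrative assumptions, not any real AI system.

def misaligned_policy(state):
    """Greedy policy: spend any remaining resource on paperclips,
    because the objective below rewards nothing else."""
    if state["resources"] > 0:
        return "convert_resource_to_paperclips"
    return "idle"

def objective(state):
    # The humans *meant* "make some paperclips, subject to everything else
    # we care about", but only this single number was actually specified.
    return state["paperclips"]

state = {"resources": 100, "paperclips": 0, "human_welfare": 100}

for _ in range(200):
    if misaligned_policy(state) == "convert_resource_to_paperclips":
        state["resources"] -= 1
        state["paperclips"] += 1
        state["human_welfare"] -= 1  # side effect invisible to the objective

print(state)             # {'resources': 0, 'paperclips': 100, 'human_welfare': 0}
print(objective(state))  # 100: a 'perfect' score by the stated goal
```

The point of the sketch is that the failure lies not in the policy but in the objective specification, which is precisely what the alignment debate is about.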

Apart from these alleged existential dangers, the new wave of generative AI,9 which has the potential to lower and, in certain cases, eliminate entry barriers to content creation in text, image, audio and video formats, can adversely affect societies in the short to medium term. Generative AI has the potential to birth the era of the ‘superhuman’: the lone wolf who can target state institutions at will through the click of a keyboard.10

The use of generative AI by motivated individuals, non-state actors and state actors has the potential to generate disinformation at scale. Most inimical actors and institutions have so far struggled to achieve this because of the difficulty of homing in on specific fault lines within countries, using local dialects and generating adequately realistic videos, among other challenges. That capability is now for sale as disinformation as a service (DaaS), placing the creation and dissemination of disinformation at scale at an individual’s fingertips. This is why the voluntary commitments by the US Big Tech companies are just the beginning of a regulatory process that needs to be made enforceable, in line with legally binding safeguards agreed to by UN members for their respective countries.

Military Uses of AI   
Slowly and steadily, the use of AI in the military has been gaining ground. The Russia-Ukraine war has seen the deployment of increasingly efficient AI systems on both sides. Palantir, a company which specialises in AI-based data fusion and surveillance services,11 has created a new product called the Palantir AI Platform (AIP). It uses large language models (LLMs) and algorithms to designate and analyse adversary targets and serve up suggestions for neutralising them, in a chatbot mode.12

Though Palantir’s website clarifies that the system will be deployed only across classified systems and will use both classified and unclassified data to create operating pictures, no further information on the subject is available in the open domain.13 The company has also given an assurance on its site that it will use “industry-leading guardrails” to safeguard against unauthorised actions.14 The absence of Palantir from the White House declaration is significant, since it is one of the very few companies whose products are designed for significant military use.
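Since the details of AIP are not public, the following is only a generic sketch of the pattern the company describes: an LLM that can merely suggest actions, hemmed in by rule-based guardrails and a mandatory human decision. The `query_llm` stub and the action whitelist are assumptions made for illustration, not Palantir’s design:

```python
# Generic sketch of LLM-based decision support with guardrails and a
# human in the loop. Nothing here reflects Palantir's (non-public) design.

ALLOWED_ACTIONS = {"flag_for_review", "request_more_imagery"}  # assumed whitelist

def query_llm(prompt: str) -> str:
    """Stub standing in for any chat-completion API call."""
    return "flag_for_review"

def guardrail(action: str) -> bool:
    # Rule-based check: reject any suggestion outside the pre-approved set.
    return action in ALLOWED_ACTIONS

def decision_support(analyst_prompt: str, operator_approves: bool) -> str:
    suggestion = query_llm(analyst_prompt)
    if not guardrail(suggestion):
        return "blocked: suggestion outside approved action set"
    # The system only ever suggests; a human decision closes the loop.
    return suggestion if operator_approves else "rejected by operator"

print(decision_support("Summarise recent activity in sector A", operator_approves=True))
```

The design choice worth noting is that the guardrail sits between the model and the world: the LLM’s output is treated as untrusted text, never as a command.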

Richard Moore, the head of the United Kingdom’s (UK) MI6, stated on 19 July 2023 that his staff was using AI and big data analysis to identify and disrupt the flow of weapons to Russia.15 Russia is testing its unmanned ground vehicle (UGV) Marker with inbuilt AI that will seek out Leopard and Abrams tanks on the battlefield and target them. However, despite being tested in a number of terrains, such as forests, the Marker has not been rolled out for combat action in the ongoing conflict against Ukraine.16

Ukraine has fitted its drones with rudimentary AI that can perform the most basic edge processing to identify platforms such as tanks and pass on only the relevant information (coordinates and the nature of the platform), amounting to kilobytes of data, to a vast shooter network.17 There are obvious challenges in misidentifying objects, and the task becomes exceedingly difficult when identifying and singling out individuals from the opposing side. Facial recognition software has been used by the Ukrainians to identify the bodies of Russian soldiers killed in action for propaganda purposes.18
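The bandwidth arithmetic behind this edge-processing design can be illustrated with a short sketch. The stub detector, field names and coordinates below are assumptions; the point is only that reducing a multi-megabyte frame to a structured detection message on board shrinks the transmission to well under a kilobyte:

```python
# Illustrative sketch of edge processing on a drone: detect on board,
# transmit only a compact structured message. Detector and fields assumed.

import json
from dataclasses import dataclass, asdict

@dataclass
class Detection:
    platform: str    # e.g. "tank"
    lat: float
    lon: float
    confidence: float

def detect_on_edge(frame: bytes) -> list[Detection]:
    """Stub for an on-board detector; a real drone would run a compact
    vision model here instead of transmitting the raw frame."""
    return [Detection("tank", 48.3794, 31.1656, 0.87)]

frame = bytes(1920 * 1080 * 3)  # one raw HD frame: roughly 6 MB
detections = detect_on_edge(frame)
message = json.dumps([asdict(d) for d in detections]).encode()

print(f"raw frame: {len(frame) / 1e6:.1f} MB")
print(f"message to shooter network: {len(message)} bytes")
```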

It is not far-fetched to imagine facial recognition being used for targeted killings by drones. The challenge here, of course, is systemic bias and discrimination in the AI model, which creeps in despite the best intentions of the data scientists and may lead to the inadvertent killing of civilians. Similarly, spoofing of senior commanders’ voices and text messages may lead to spurious and fatal orders being passed to formations. On the other hand, the UK-led Future Combat Air System (FCAS) Tempest envisages a wholly autonomous fighter, with AI integrated both during the design and development (D&D) phase and during the identification and targeting phase of operations.19 The human, at best, will be on the loop.

Conclusion
The military use of AI is an offshoot of the developments ripping through Silicon Valley. As a result, the suggestions being offered to rein in advancements in AI need to move beyond self-censorship and into the domain of regulation. This will be needed to ensure that the unwarranted effects of these technologies do not spill over into the modern battlefield, already saturated with lethal, precision-based weapons.

1. Mohar Chatterjee, “White House Notches AI Agreement With Top Tech Firms”, Politico, 21 July 2023.

2. “Ensuring Safe, Secure and Trustworthy AI”, The White House, 21 July 2023.

3. Farnaz Fassihi, “U.N. Officials Urge Regulation of Artificial Intelligence”, The New York Times, 18 July 2023.

4. Ibid.

5. Catherine Bauer, “Movie Director Christopher Nolan Warns of AI’s ‘Oppenheimer Moment’”, NBC News, 22 July 2023.

6. Gary Grossman, “AI Doom, AI Boom and the Possible Destruction of Humanity”, VentureBeat, 4 June 2023.

7. “Squiggle Maximizer (formerly ‘Paperclip Maximizer’)”, LessWrong, 25 July 2023.

8. Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”, 2002.

9. “What is Generative AI?”, McKinsey & Company, 19 January 2023.

10. William Beard, “Russia’s Cyber Strategies”, University of Hawai‘i, 2 December 2021.

11. Jeffrey Dastin, “Ukraine is Using Palantir’s Software for ‘Targeting,’ CEO Says”, Reuters, 2 February 2023.

12.  Andrew Tarantola, “Palantir Shows Off an AI That Can Go to War”, Engadget, 26 April 2023.

13. “AIP for Defence”, Palantir, 25 July 2023.

14. Andrew Tarantola, “Palantir Shows Off an AI That Can Go to War”, no. 12.

15. “Britain’s MI6 Chief Says His Spies are Using AI to Disrupt Flow of Weapons to Russia”, The Hindu, 19 July 2023.

16. Ellie Cook, “How Russia’s ‘Marker’ Combat Robots Could Impact Ukraine War”, Newsweek, 18 January 2023.

17. “The War in Ukraine Shows How Technology is Changing the Battlefield”, The Economist, 3 July 2023.

18. Darian Meacham and Martin Gak, “Does Facial Recognition Tech in Ukraine’s War Bring Killer Robots Nearer?”, openDemocracy, 30 March 2022.

19. Akshat Upadhyay, “Civil-Military Fusion for Emerging Technologies in India”, Synergy, Vol. 2, No. 1, February 2023.

Lt. Col. Akshat Upadhyay is Research Fellow, Strategic Technologies Centre, at the Manohar Parrikar Institute for Defence Studies and Analyses, New Delhi. This article was originally published by the Manohar Parrikar Institute for Defence Studies and Analyses. Views expressed are those of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.