U.S. Voluntary AI Code of Conduct and Implications for Military Use
Seven technology companies with major artificial intelligence (AI) products, including Microsoft, OpenAI, Anthropic and Meta, made voluntary commitments regarding the regulation of AI at an event held at the White House on 21 July 2023.1 These eight commitments rest on three guiding principles: safety, security and trust. The code of conduct covers the areas and domains most likely to be affected by AI. While the commitments are non-binding, unenforceable and voluntary, they may form the basis for a future Executive Order on AI, which will become critical given the increasing military use of AI.
The voluntary AI commitments are the following:
1. Red-teaming (internal and external) of products before their public release. Top priorities include biological, chemical and radiological risks, and the ways in which AI could lower barriers to entry for weapons design and development. The effects of systems that interact with, and can control, physical systems need to be evaluated, apart from societal risks such as bias and discrimination;
2. Information sharing amongst companies and governments. This is likely to be challenging, since the industry's business model is built on secrecy and competition;
3. Invest in cybersecurity and safeguards to protect unreleased and proprietary model weights;
4. Incentivize third party discovery and reporting of issues and vulnerabilities;
5. Watermarking AI-generated content;
6. Publicly report model or system capabilities, including discussions of societal risks;
7. Accord priority to research on societal risks posed by AI systems; and
8. Develop and deploy frontier AI systems to help address society’s greatest challenges.2
The eight commitments by the US’s Big Tech companies came a few days after the United Nations Security Council (UNSC) convened, for the first time, a session on the threat posed by AI to global peace and security.3 The UN Secretary-General (UNSG) proposed setting up a global AI watchdog comprising experts in the field who would share their expertise with governments and administrative agencies. The UNSG also added that the UN must conclude a legally binding agreement by 2026 banning the use of AI in automated weapons of war.4