WORLD ROUNDUP
Runaway AI Is an Extinction Risk: Experts | Can China Escape the Malacca Dilemma? | South American Currency, and more

Published 1 June 2023

·  China Is Flirting with AI Catastrophe
Why accidents pose the biggest risk

·  Medvedev Says UK Officials Are Legitimate Military Targets for Russia
Former president describes Britain as Moscow’s eternal enemy

·  Corruption, Pollution and Exploitation: The Fallout from China’s Push into Africa
In Sierra Leone, the Chinese have stepped into the void left by the British, plundering natural resources and threatening livelihoods

·  Runaway AI Is an Extinction Risk, Experts Warn
A new statement from industry leaders cautions that artificial intelligence poses a threat to humanity on par with nuclear war or a pandemic

·  Can China Escape the Malacca Dilemma?
Beijing has openly discussed its vulnerabilities in the Strait of Malacca

·  Is This Latin American Conservatives’ Last Chance?
Latin America’s conservatives focused too much on policy and not enough on popular messaging

·  Prigozhin Erupts: Has a Russian Succession Struggle Begun?
The Wagner chief’s furious attack on elites and the war could portend turmoil with potential to extend well beyond Moscow

·  Brazil’s Lula Proposes South American Currency
Lula is trying to revive the Unasur bloc, which had become largely defunct after being shunned by right-wing leaders in recent years

China Is Flirting with AI Catastrophe  (Bill Drexel and Hannah Kelley, Foreign Affairs)
Few early observers of the Cold War could have imagined that the worst nuclear catastrophe of the era would occur at an obscure power facility in Ukraine. The 1986 Chernobyl disaster was the result of a flawed nuclear reactor design and a series of mistakes made by the plant operators. The fact that the world’s superpowers were spiraling into an arms race of potentially world-ending magnitude tended to eclipse the less obvious dangers of what was, at the time, an experimental new technology. And yet despite hair-raising episodes such as the Cuban missile crisis of 1962, it was a failure of simple safety measures, exacerbated by authoritarian crisis bungling, that resulted in the uncontrolled release of 400 times the radiation emitted by the U.S. nuclear bomb dropped on Hiroshima in 1945. Estimates of the devastation from Chernobyl range from hundreds to tens of thousands of premature deaths from radiation—not to mention an “exclusion zone” that is twice the size of London and remains largely abandoned to this day.
As the world settles into a new era of rivalry—this time between China and the United States—competition over another revolutionary technology, artificial intelligence, has sparked a flurry of military and ethical concerns parallel to those initiated by the nuclear race. Those concerns are well worth the attention they are receiving, and more: a world of autonomous weapons and machine-speed war could have devastating consequences for humanity. Beijing’s use of AI tools to help fuel its crimes against humanity against the Uyghur people in Xinjiang already amounts to a catastrophe.
But of equal concern should be the likelihood of AI engineers’ inadvertently causing accidents with tragic consequences. Although AI systems do not explode like nuclear reactors, their far-reaching potential for destruction includes everything from the development of deadly new pathogens to the hacking of critical systems such as electrical grids and oil pipelines. Due to Beijing’s lax approach toward technological hazards and its chronic mismanagement of crises, the danger of AI accidents is most severe in China. A clear-eyed assessment of these risks—and the potential for spillover well beyond China’s borders—should reshape how the AI sector considers the hazards of its work.