California’s Fire-Insurance Crisis Just Got Real | Should Killer Robots Be Allowed to Disobey Orders?, and more
A major smartphone manufacturer, customers of a fintech company, and a multibillion-dollar cybersecurity company are counted among the thousands of organizations that inadvertently exposed secrets. As part of his efforts to stem the tide, Demirkapi hacked together a way to get the exposed credentials automatically revoked, making them useless to any attackers.
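The excerpt doesn't detail how Demirkapi's revocation pipeline actually works, but as a rough sketch of its first step (recognizing exposed credentials by their documented formats), a minimal Python scanner might look like the following. The pattern list and the sample string are illustrative assumptions, not his method, and the revocation step itself, which varies by provider, is only noted in a comment.

```python
import re

# Documented prefixes for a few well-known credential formats.
# This list is illustrative, not exhaustive.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "slack_bot_token": re.compile(r"\bxoxb-[0-9A-Za-z-]{20,}\b"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (kind, candidate) pairs for every pattern hit in `text`."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((kind, match))
    return hits

if __name__ == "__main__":
    # Fabricated sample input; this key is not a real credential.
    sample = "config = {'aws_key': 'AKIAABCDEFGHIJKLMNOP'}"
    for kind, value in find_secrets(sample):
        # A real pipeline would report each hit through the issuing
        # provider's revocation channel; that step is provider-specific
        # and omitted here.
        print(f"possible {kind}: {value}")
```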
California’s Fire-Insurance Crisis Just Got Real (Caroline Mimbs Nyce, The Atlantic)
This is the reality of California’s new age of fire. Wildfires have gotten more ferocious in recent years, thanks in part to warming temperatures: the Park Fire is the fourth-largest in the state’s recorded history. As homes in high-risk areas become harder to insure, premiums are rising, and some insurers are leaving the state altogether. The safety net that people once depended on has developed holes, and now people are falling through.
California’s insurance crisis began around 2017. In that year and the ones that followed, a series of costly fires erased decades of profits and forced insurance companies to reconsider their rates and their presence in the state. Premiums began rising, and in the past two years, major national companies including State Farm, Farmers, and Allstate, as well as smaller firms, have pulled back, declining to renew tens of thousands of policies. Coming on top of rising inflation and building costs, wildfires have made the cost of doing business just too high, insurers argue. For those living in areas where no private company will take on the risk, California offers a last-resort option called the FAIR Plan. From 2019 to 2024, as insurance companies retreated, the number of FAIR Plan policies more than doubled. But FAIR Plan coverage is also getting more expensive. Many Californians are underinsured, and some are opting to go without insurance at all.
What China’s Dominance in Electronics Manufacturing Means for U.S. National Security (Brian J. Cavanaugh, National Interest)
Washington needs a multifaceted economic security strategy that strengthens our relationships with allies while building domestic capability, ensures the integrity of our supply chains, and maintains our technological edge.
What Do Americans Really Think About the Bombing of Hiroshima and Nagasaki? (Scott D. Sagan and Gina Sinclair, Bulletin of the Atomic Scientists)
In mid-August 1945, within weeks of the end of World War II, Americans were polled on whether they approved of the atomic bomb attacks on Hiroshima and Nagasaki. An overwhelmingly high percentage of Americans—85 percent—answered “yes.” That level of approval has gone down over the years, with (depending on the precise wording of the question) only a slim majority (57 percent in 2005) or a large minority (46 percent in 2015) voicing approval in more recent polls.
This reduction in atomic bombing approval over time has been cited as evidence of a gradual normative change in public ethical consciousness, the acceptance of a “nuclear taboo” or what Brown University scholar Nina Tannenwald has called “the general delegitimation of nuclear weapons.” This common interpretation of US public opinion, however, is too simplistic. Disapproval has indeed grown over time, but most Americans remain supportive of the 1945 attacks, albeit wishing that alternative strategies had been explored. These conclusions can be clearly seen in the results of a new, more complex public opinion survey, conducted for this article, that asked a representative sample of Americans about their views on the bombing of Hiroshima and Nagasaki, examined alternative strategies for ending the war, and provided follow-on questions to determine how the public weighs the costs and benefits of different strategies. Scratch beneath the surface, and the American public today, as in 1945, does not display an ethically based taboo against using nuclear weapons or killing enemy civilians, but rather has a preference for doing whatever was necessary to win the war and save American lives.
‘I’m Afraid I Can’t Do That’: Should Killer Robots Be Allowed to Disobey Orders? (Arthur Holland Michel, Bulletin of the Atomic Scientists)
It is often said that autonomous weapons could help minimize the needless horrors of war. Their vision algorithms could be better than humans at distinguishing a schoolhouse from a weapons depot. They won’t be swayed by the furies that lead mortal souls to commit atrocities; they won’t massacre or pillage. Some ethicists have long argued that robots could even be hardwired to follow the laws of war with mathematical consistency.
And yet for machines to translate these virtues into the effective protection of civilians in war zones, they must also possess a key ability: They need to be able to say no.
Consider this scenario. An autonomous drone dispatched to destroy an enemy vehicle detects women and children nearby. Deep behind enemy lines, out of contact with its operator, the machine has to decide on its own. To prevent a tragedy, it must call off the mission. In other words, it must refuse the order.
“Robot refusal” sounds reasonable in theory. One of Amnesty International’s objections to autonomous weapons is that they “cannot … refuse an illegal order,” which implies that they should be able to refuse orders. In practice, though, it poses a tricky catch-22. Human control sits at the heart of governments’ pitch for responsible military AI. Giving machines the power to refuse orders would cut against that principle. Meanwhile, the same shortcomings that hinder autonomous systems’ capacity to faithfully execute a human’s orders could also cause them to err when refusing one.
Militaries will therefore need either to demonstrate that it’s possible to build ethical, responsible autonomous weapons that don’t say no, or to show that they can engineer a safe, reliable right to refuse that is compatible with the principle of always keeping a human “in the loop.”