The Southern Border & Terrorism Fears | The Mystery of AI Gunshot-Detection Accuracy | New Weapons Will Eclipse Atomic Bombs, and more
In the months following the Swift deepfakes, lawmakers introduced myriad proposals, but the window for Congress to pass meaningful legislation on inauthentic content is rapidly closing. The House and Senate have fewer than 40 days in session before the election.
One of the most effective regulatory options for combating the spread of false information remains on the table: directing social media platforms to use tools already at their disposal.
Platforms have the most reliable and effective tools for reducing the creation and spread of inauthentic content, but use of those tools is currently at each platform's discretion, and most platforms opt not to deploy them. Congress could correct that missed opportunity by mandating that platforms use proven tools to reduce the odds of users creating inauthentic content. Such a legislative approach would improve on current efforts, which focus on increasing enforcement of outdated and ineffective laws or on passing new laws unlikely to address the issue systematically.
The proliferation of inauthentic content, such as AI-generated images, poses significant challenges for lawmakers and platforms alike. Despite recent legislative efforts at the state level to penalize creators of such content, these laws face practical limitations in enforcement and jurisdiction, rendering them insufficient for a comprehensive regulatory response. Furthermore, traditional laws and reactive measures like watermarking AI-generated content may not effectively curb the spread of inauthentic content due to technical and behavioral loopholes.
A more promising approach lies in platform-level interventions. Compared to lawmakers, platforms can more easily implement and regularly fine-tune effective measures, such as generating warning screens, developing normative prompts, and using AI to detect potentially violative posts. These interventions, endorsed by the Prosocial Design Network, offer a proactive means to foster healthier online behavior. Platforms have both the responsibility and the capability to implement these strategies, creating friction that discourages the spread of harmful content. If platforms continue to fail to meet that responsibility, Congress should direct them to implement evidence-based interventions that alter user behavior and contribute to a more authentic digital environment.
The Mystery of AI Gunshot-Detection Accuracy Is Finally Unraveling (Todd Feathers, Wired)
Liz González’s neighborhood in East San Jose can be loud. Some of her neighbors apparently want the whole block to hear their cars, others like to light fireworks for every occasion, and occasionally there are gunshots.
In February 2023, San Jose began piloting AI-powered gunshot detection technology from the company Flock Safety in several sections of the city, including González's neighborhood. During the first four months of the pilot, Flock's gunshot detection system alerted police to 123 shooting incidents. But new data released by San Jose's Digital Privacy Office shows that only 50 percent of those alerts were actually confirmed to be gunfire, while 34 percent of them were confirmed false positives, meaning the Flock Safety system incorrectly identified other sounds—such as fireworks, construction, or cars backfiring—as shooting incidents. After Flock recalibrated its sensors in July 2023, 81 percent of alerts were confirmed gunshots, 7 percent were false alarms, and 12 percent could not be determined one way or the other.
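As a rough illustration of how those figures relate, here is a minimal sketch in Python that derives confirmation, false-positive, and undetermined rates from raw alert counts. The counts used for the first pilot period are approximate back-calculations from the reported percentages (roughly 62 confirmed, 42 false positives, and 19 undetermined out of 123 alerts) and are illustrative only, not figures from the city's report.

```python
def alert_rates(confirmed: int, false_positive: int, undetermined: int) -> dict:
    """Compute confirmation, false-positive, and undetermined rates for a batch of alerts."""
    total = confirmed + false_positive + undetermined
    return {
        "total_alerts": total,
        "confirmed_rate": confirmed / total,
        "false_positive_rate": false_positive / total,
        "undetermined_rate": undetermined / total,
    }

# Approximate, illustrative counts for the first four months of San Jose's pilot
# (123 alerts; ~50% confirmed gunfire, ~34% confirmed false positives).
pilot_phase = alert_rates(confirmed=62, false_positive=42, undetermined=19)
print(pilot_phase)  # confirmed_rate ~0.50, false_positive_rate ~0.34, undetermined_rate ~0.15
```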
For two decades, cities around the country have used automated gunshot detection systems to quickly dispatch police to the scenes of suspected shootings. But reliable data about the accuracy of the systems and how frequently they raise false alarms has been difficult, if not impossible, for the public to find. San Jose, which has taken a leading role in defining responsible government use of AI systems, appears to be the only city that requires its police department to disclose accuracy data for its gunshot detection system. The report it released on May 31 marks the first time it has published that information.
Three Ideas to Beat the Heat, and the People Who Made Them Happen (Somini Sengupta, New York Times)
An app that helps people find relief from the heat.
A tiny insurance policy that pays working women when temperatures soar.
Local laws that help outdoor workers get water and shade on sweltering days.
As dangerous heat becomes impossible to ignore, an array of practical innovations is emerging around the world to protect the people most vulnerable to its hazards. What's notable is that these efforts don't require untested technologies. Instead, they're based on ideas already known to work.
They offer a window into the need to adapt to the dangers of extreme heat, which have played out vividly in recent weeks, killing still-untold numbers of religious pilgrims, tourists and election workers around the world and driving up emergency room visits for heat-related ailments in the United States.
The World Meteorological Organization has said that heat now kills more people than any other extreme-weather hazard and has called for many more “tailored climate products and services” to protect people’s health, including easy-to-use tools to find help.
What Happened to Stanford Spells Trouble for the Election (Renée DiResta, New York Times)
The 2024 rerun is already being viciously fought. Since 2020, the technological landscape has shifted. There are new social media platforms in the mix, such as Bluesky, Threads and Truth Social. Election integrity policies and enforcement priorities are in flux at some of the biggest platforms. What used to be Twitter is under new ownership, and most of the team that focused on trust and safety was let go.
Fake audio generated by artificial intelligence has already been deployed in a European election, and A.I.-powered chatbots are posting on social media platforms. Overseas players continue to run influence operations to interfere in American politics; in recent weeks OpenAI has confirmed that Russia, China and others have begun to use generative text tools to improve the quality and quantity of their efforts.
Offline, trust in institutions, government, media and fellow citizens is at or near record lows, and polarization continues to increase. Election officials are concerned about the safety of poll workers and election administrators — perhaps the most terrible illustration of the cost of lies on our politics.
As we enter the final stretch of the 2024 campaign, it will not be other countries that are likely to have the greatest impact. Rather, it will once again be the domestic rumor mill. The networks spreading misleading notions remain stronger than ever, and the networks of researchers and observers who worked to counter them are being dismantled.
Universities and institutions have struggled to understand and adapt to lies about their work, often remaining silent and allowing false claims to ossify. Lies about academic projects are now matters of established fact within bespoke partisan realities.
Costs, both financial and psychological, have mounted. Stanford is refocusing the work of its Internet Observatory and has ended the Election Integrity Partnership's rapid-response election observation work. Employees, including me, did not have their contracts renewed.
The work of studying election delegitimization and supporting election officials is more important than ever. It is crucial that we not only stand resolute but speak out forcefully against intimidation tactics intended to silence us and discredit academic research. We cannot allow fear to undermine our commitment to safeguarding the democratic process.
New Weapons Will Eclipse Atomic Bombs. Their Builders Ask Themselves This Question. (Alexander C. Karp and Nicholas W. Zamiska, Washington Post)
The atomic age could soon be coming to a close. This is the software century; wars of the future will be driven by artificial intelligence, whose development is proceeding far faster than that of conventional weapons. The F-35 fighter jet was conceived of in the mid-1990s, and the airplane — the flagship attack aircraft of American and allied forces — is scheduled to be in service for 64 more years. The U.S. government expects to spend more than $2 trillion on the program. But as retired Gen. Mark A. Milley, former chairman of the Joint Chiefs of Staff, recently asked, “Do we really think a manned aircraft is going to be winning the skies in 2088?”
In the 20th century, software was built to meet the needs of hardware, from flight controls to missile avionics. But with the rise of artificial intelligence and the use of large language models to make targeting recommendations on the battlefield, the relationship is shifting. Now software is at the helm, with hardware — the drones in Ukraine and elsewhere — increasingly serving as the means by which the recommendations of AI are carried out.
The trouble is that the young Americans who are most capable of building AI systems are often also most ambivalent about working for the military. In Silicon Valley, engineers have turned their backs, unwilling to engage with the mess and moral complexity of geopolitics. While pockets of support for defense work have emerged, most funding and talent continue to stream toward the consumer.
The engineering elite of our country rush to raise capital for video-sharing apps and social media platforms, advertising algorithms and shopping websites. They don’t hesitate to track and monetize people’s every movement online, burrowing their way into our lives. But many balk when it comes to working with the military. The rush is simply to build. Too few ask what ought to be built and why.
We do not advocate a thin and shallow patriotism — a substitute for thought and genuine reflection about the merits of our nation as well as its flaws. We only want America’s technology industry to keep in mind an important question — which is not whether a new generation of autonomous weapons incorporating AI will be built. It is who will build them and for what purpose.