F.B.I. Releases Redacted Report on Havana Syndrome | Chatbots Are Primed to Warp Reality | Design Principles for Responsible AI in Homeland Security, and more

According to a report called “Missing the Mark,” published last year by the Columbia Climate School and Headwaters Economics, an independent, nonprofit research group based in Montana, the most effective strategies for reducing communities’ wildfire risk aren’t limited to those that focus on forests; they also include those that change how we construct and adapt our homes and neighborhoods. Yet the analysis found that strategies to manage the built environment receive less funding and policy support in the U.S. than traditional approaches that focus on what’s happening in the forest.
Why doesn’t American society focus on wildfire risks at home as much as it does in the forest? And why are state and municipal building codes common in flood- and earthquake-prone areas, but not in wildfire-prone ones?

Chatbots Are Primed to Warp Reality  (Matteo Wong, The Atlantic)
More and more people are learning about the world through chatbots and the software’s kin, whether they mean to or not. Google has rolled out generative AI to users of its search engine on at least four continents, placing AI-written responses above the usual list of links; as many as 1 billion people may encounter this feature by the end of the year. Meta’s AI assistant has been integrated into Facebook, Messenger, WhatsApp, and Instagram, and is sometimes the default option when a user taps the search bar. And Apple is expected to integrate generative AI into Siri, Mail, Notes, and other apps this fall. Less than two years after ChatGPT’s launch, bots are quickly becoming the default filters for the web.
Yet AI chatbots and assistants, no matter how wonderfully they appear to answer even complex queries, are prone to confidently spouting falsehoods—and the problem is likely more pernicious than many people realize. A sizable body of research, alongside conversations I’ve recently had with several experts, suggests that the solicitous, authoritative tone that AI models take—combined with their being legitimately helpful and correct in many cases—could lead people to place too much trust in the technology. That credulity, in turn, could make chatbots a particularly effective tool for anyone seeking to manipulate the public through the subtle spread of misleading or slanted information. No one person, or even government, can tamper with every link displayed by Google or Bing. Engineering a chatbot to present a tweaked version of reality is a different story.
Of course, misinformation of all kinds is already on the internet. But although reasonable people know not to naively trust anything that bubbles up in their social-media feeds, chatbots offer the allure of omniscience. People are using them for sensitive queries: In a recent poll by KFF, a health-policy nonprofit, one in six U.S. adults reported using an AI chatbot to obtain health information and advice at least once a month.

The Plot to Attack Taylor Swift’s Vienna Shows Was Intended to Kill Thousands, a CIA Official Says  (Stefanie Dazio, AP)
The suspects in the foiled plot to attack Taylor Swift concerts in Vienna earlier this month sought to kill “tens of thousands” of fans before the CIA discovered intelligence that disrupted the planning and led to arrests, the agency’s deputy director said.
The CIA notified Austrian authorities of the scheme, which allegedly included links to the Islamic State group. The intelligence and subsequent arrests ultimately led to the cancellation of three sold-out Eras Tour shows, devastating fans who had traveled across the globe to see Swift in concert.
CIA Deputy Director David Cohen addressed the failed plot during the annual Intelligence and National Security Summit, held this week in Maryland.
“They were plotting to kill a huge number — tens of thousands of people at this concert, including I am sure many Americans — and were quite advanced in this,” Cohen said Wednesday. “The Austrians were able to make those arrests because the agency and our partners in the intelligence community provided them information about what this ISIS-connected group was planning to do.”

Applying Design Principles for Responsible AI in Homeland Security  (Ana Maria Dimand, Kayla Schwoerer, Andrea S. Patrucco, and Ilia Murtazashvili, HSToday)
Homeland Security Today has partnered with the IBM Center for the Business of Government to share insights from their “Future Shocks” initiative and subsequent book, Transforming the Business of Government: Insights on Resiliency, Innovation, and Performance. The means and methods traditionally employed by government face a significant challenge posed by the advent of disruptive technologies like artificial intelligence, the changing nature of physical and cyber threats, and the impact of social media and miscommunication on society. This partnership will share insights on how our homeland community can build resilience in thinking and action, innovate while running, and stay ahead of the enemy. Through an ongoing column and paired webinars, we will explore how best practices, questions about the unknown, and insights from several IBM Center initiatives can be applied to YOUR leadership thinking.
Government is being asked to handle “everything everywhere all at once.” Homeland Security Today seeks to elevate our understanding of, and planning for, how numerous disparate factors interact and translate these insights into actionable goals for the homeland community.