How AI Is Changing Our Approach to Disasters
During a disaster response, AI can provide a better picture of a crisis than traditional methods. Computer vision models using drone or satellite imagery can assess damage and help locate survivors. After Hurricanes Helene and Milton struck North Carolina and Florida in 2024, the nonprofit GiveDirectly used a Google-developed AI tool to identify areas with high concentrations of storm damage and poverty and send $1,000 in cash relief to affected households. The idea was that targeted direct payments would be faster and more efficient than traditional aid programs.
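To make the targeting logic concrete, the sketch below shows one way a damage-and-poverty index could rank areas for cash payments. It is a hypothetical illustration, not GiveDirectly’s or Google’s actual method; the tract names, scores, and multiplicative scoring rule are all invented.

```python
# Hypothetical sketch of damage-and-poverty targeting for cash relief.
# The names, scores, and scoring rule are invented for illustration;
# this is not the actual GiveDirectly/Google methodology.

from dataclasses import dataclass

@dataclass
class Area:
    name: str
    damage_score: float  # 0-1: share of structures flagged as damaged in imagery
    poverty_rate: float  # 0-1: share of households below the poverty line

def priority(area: Area) -> float:
    # Multiplicative index: high only when damage AND poverty are both high.
    return area.damage_score * area.poverty_rate

areas = [
    Area("Tract A", damage_score=0.9, poverty_rate=0.1),
    Area("Tract B", damage_score=0.7, poverty_rate=0.6),
    Area("Tract C", damage_score=0.4, poverty_rate=0.8),
]

# Rank areas and target the top of the list for $1,000 payments.
for area in sorted(areas, key=priority, reverse=True):
    print(f"{area.name}: priority = {priority(area):.2f}")
# Tract B (0.42) outranks Tract A (0.09): heavy damage alone does not
# qualify an area without high need.
```

A multiplicative rule ranks an area highly only when damage and need are both present, which matches the program’s stated aim of reaching damaged, low-income households first.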
Robots still in pilot testing have carried out simulated missions to rescue survivors. Drones can measure radiation after a disaster in zones too hazardous for humans. And emergency management agencies are already using natural language processing to translate warnings and alerts into different languages. After a disaster, AI systems can help track fraud and abuse to ensure that aid reaches the people who need it. Health care systems already use AI to track injuries and coordinate long-term follow-up care, and the same could be done after disasters.
There are many definitions of AI, but one way to think about the technology is in terms of specific tools. Table 1 shows AI tools, roughly organized by their use before, during, and after disasters. The table lists general-purpose commercial systems and examples of current or potential uses in emergency management.
__________________________________
Table 1: AI Tools and Example Uses
Tools: Predictive Analytics
Description: Finds patterns in data and forecasts future outcomes.
Examples of Commercial Systems: Salesforce
Uses in Emergency & Disaster Management: Risk modeling; disease outbreak spread prediction; flood/wildfire spread prediction; dashboards and situational awareness

Tools: Generative AI and Natural Language Processing
Description: Understands and translates human language and creates new text, images, or video.
Examples of Commercial Systems: ChatGPT, Claude, DALL·E
Uses in Emergency & Disaster Management: Drafting emergency communication templates; creating scenarios for training; multilingual crisis communication; rumor detection

Tools: Robotics & Automation
Description: Performs physical tasks with or without human control, including operating vehicles.
Examples of Commercial Systems: iRobot Roomba, Da Vinci Surgical System, Boston Dynamics robots, Waymo
Uses in Emergency & Disaster Management: Search and rescue in dangerous areas; supply delivery; debris clearing

Tools: Computer Vision
Description: Identifies and interprets objects, people, and activities in images and video.
Examples of Commercial Systems: Google Photos, Clearview AI, Tesla Autopilot
Uses in Emergency & Disaster Management: Damage assessment via drones/satellites; search and rescue; wildfire smoke mapping

Tools: Speech Recognition & Generation
Description: Converts speech to text and produces human-like speech from text.
Examples of Commercial Systems: Siri, Alexa
Uses in Emergency & Disaster Management: Voice-to-text for field reporting; hands-free operations

Tools: Recommendation Systems
Description: Suggests products, content, or actions based on user behavior.
Examples of Commercial Systems: Netflix, Spotify, Amazon
Uses in Emergency & Disaster Management: Resource allocation; shelter options; individual risk alerts

Tools: Fraud Detection & Security
Description: Identifies anomalies to call attention to risks.
Examples of Commercial Systems: Mastercard AI Security, Darktrace, PayPal
Uses in Emergency & Disaster Management: Detecting fraud in payments; cybersecurity
________________________________________
The use of AI to manage disasters is in its early days, but the table shows its potential across a wide range of tasks.
How to Implement AI
After a wave of enthusiasm about AI’s potential to transform work and economies, some news reports now urge caution about how transformative AI will really be. As with other technologies, AI’s effects will come down to how it is integrated into organizational routines. If it is difficult to use, costly, produces incorrect output, is subject to bias, or lacks traceability (the ability to understand why it made certain decisions), then users will lose confidence. AI systems also reflect the data they are trained on. To take just one example, prioritizing aid based on property damage will favor wealthier areas, because dollar-denominated damage estimates scale with property values. AI systems alone cannot solve such ethical and policy challenges.
We reviewed uses of AI in wildfire management and in emergency management more broadly. We found that organizations that adopted and deployed AI and other emerging technologies took some of the following approaches to mitigate potential negative effects:
· pilot testing, red teaming, or stress testing AI systems to identify points of failure
· regularly monitoring AI performance, especially relative to the technology or process being replaced (a minimal monitoring sketch follows this list)
· giving the AI specific guidance for specific problems so that it executes narrow tasks well, and iterating to improve performance
· adopting ethical guidelines so that certain decisions are off the table for AI
· comparing AI performance with human performance on specific tasks, and weighing the advantages and disadvantages of each to decide where to use AI and where to use humans
· using AI for planning and implementation, or where risk to humans is high (as in the most dangerous parts of wildland firefighting)
· identifying appropriate trade-offs between efficiency and oversight, since AIs can operate quickly and at large scale.
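As one illustration of the monitoring item above, an agency might routinely compare the AI’s output with the process it replaced on a common ground-truth sample. The sketch below is hypothetical; the inspection data, the accuracy metric, and the five-point margin are invented for illustration.

```python
# Hypothetical monitoring sketch: compare a new AI damage classifier against
# the process it replaced (e.g., manual windshield surveys) on the same
# ground-truth sample. All data and the five-point margin are invented.

def accuracy(predictions: list[bool], truth: list[bool]) -> float:
    return sum(p == t for p, t in zip(predictions, truth)) / len(truth)

# Ground truth from on-site inspections of 10 structures (True = major damage).
truth    = [True, True, False, True, False, False, True, False, True, False]
ai_model = [True, True, False, True, True,  False, True, False, False, False]
baseline = [True, False, False, True, False, False, False, True, True, False]

ai_acc, base_acc = accuracy(ai_model, truth), accuracy(baseline, truth)
print(f"AI: {ai_acc:.0%}, baseline: {base_acc:.0%}")  # AI: 80%, baseline: 70%

# One possible deployment rule: keep the existing process unless the AI
# clearly outperforms it, and flag any regression for human review.
if ai_acc < base_acc + 0.05:
    print("Alert: AI is not clearly outperforming the process it replaced.")
```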
Opportunities and Challenges for AI-Enhanced Disaster Management
AI technologies promise to help identify disasters before they begin and to guide planners in reducing risk. They can also help locate and save people and property during a disaster, and they can make sense of large, unstructured data to guide recovery and planning for the next event.
In the short term, using AI well requires overcoming implementation hurdles. In the longer term, using AI well comes back to classic governance questions of deciding who has legitimate authority and how to make collective decisions. If we can make AI do what we want technically, can we agree on what we want? Technical experts call this the problem of alignment, referring to aligning AI models with human values, goals, and intentions. For example, after a hurricane, aid could plausibly first go to the areas with the highest storm surge, the areas with the greatest property damage—which may also be the wealthiest—or the poorest areas with less capital to rebuild. Humans will need to make the value judgments that underlie AI systems to prioritize and deliver aid.
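A short sketch makes the stakes of that value judgment concrete: the same scoring system produces different priorities depending on the weights humans choose. The area names, scores, and weights below are invented for illustration.

```python
# Hypothetical sketch: the ranking an AI produces depends on human value
# judgments encoded as weights. Names, scores, and weights are invented.

areas = {
    # name: (storm_surge, property_damage, poverty), each scaled 0-1
    "Coastal Heights": (0.9, 0.9, 0.2),  # wealthy, hard-hit waterfront
    "Riverside":       (0.5, 0.4, 0.9),  # poor, moderately damaged
}

def rank(weights):
    score = lambda s: sum(w * x for w, x in zip(weights, s))
    return sorted(areas, key=lambda name: score(areas[name]), reverse=True)

# Weighting physical impact most heavily favors the wealthy area.
print(rank((0.4, 0.5, 0.1)))  # ['Coastal Heights', 'Riverside']
# Weighting capacity to rebuild most heavily reverses the order.
print(rank((0.2, 0.2, 0.6)))  # ['Riverside', 'Coastal Heights']
```

Nothing in the code decides which weighting is right; that choice is the human value judgment the paragraph above describes.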
Deciding on the highest-order values and the appropriate training data now could influence the AIs of the future. Since AI is not a single technology but a group of capabilities embedded in many different tools, capable of making decisions independently, efforts to ensure AI does what humans want will need to focus on networks and systems, not just a single tool. For example, it is hard to locate responsibility for an AI-based disaster response decision because AI systems are made up of many different tools, or “agents,” working together. A system of agents might survey an area damaged by a hurricane, assess harm based on indicators such as roof damage, evaluate transportation routes, and recommend where to prioritize sending resources. Separate AI agents would carry out each of these steps, as in the sketch below.
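A minimal sketch of such a pipeline, with invented agent names and toy logic, shows why responsibility ends up distributed across the system:

```python
# Hypothetical sketch of the agent pipeline described above: each stage is a
# separate component, so no single agent "owns" the final recommendation.
# Agent names, data, and logic are invented for illustration.

def damage_agent(imagery: dict) -> dict:
    # Scores harm from indicators such as roof damage seen in imagery.
    return {area: feats["roof_damage"] for area, feats in imagery.items()}

def routing_agent(damage: dict, roads: dict) -> dict:
    # Drops areas that supply trucks cannot currently reach.
    return {area: score for area, score in damage.items() if roads[area] == "open"}

def prioritization_agent(reachable: dict) -> list:
    # Recommends where to send resources first.
    return sorted(reachable, key=reachable.get, reverse=True)

imagery = {"North": {"roof_damage": 0.8}, "South": {"roof_damage": 0.6}}
roads = {"North": "blocked", "South": "open"}

plan = prioritization_agent(routing_agent(damage_agent(imagery), roads))
print(plan)  # ['South'] -- North is hardest hit but unreachable.
# If this recommendation turns out to be wrong, which agent is responsible?
# The answer is spread across the pipeline: that is the accountability problem.
```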
Like any tool or outsourced activity, using AI well will require setting expectations, establishing legal and technical guardrails, and working with stakeholders to make sure the AI does what we want. The private sector is making big investments in the technology, but potential users also need to invest in understanding and planning for how best to use it. Otherwise, we risk repeating an old story with new tools: trusting the map more than the territory, the model more than the messy, human reality it was meant to serve.
Patrick S. Roberts is a senior political scientist at RAND and a professor of policy analysis at the RAND School of Public Policy. This article is published courtesy of RAND.