Critics get tough on emergency preparedness drills

Published 28 November 2006

High cost is a major factor, especially when real-life events consistently befuddle even the most exercised of agencies; consultants often sell cookie-cutter drills of little value; after-action analysis is often more important than the exercise itself

An exercise in futility? Emergency preparedness officials these days drill more often than Marine recruits — hurricanes and terrorism being first and foremost on their agendas — but a growing chorus of critics is casting doubt on the utility of such drills. Preparedness exercises are supposed to test everyone from first responders to senior-level decisionmakers, yet long after the tests are complete, critics say, real-life events prove the lessons were not properly learned. “Exercises are not all created equal,” said Michael Wermuth, director of homeland security programs at Santa Monica, California-based RAND. “There are a lot of different kinds of exercises, a lot of different methodologies used to conduct exercises. There are exercises that sometimes seem to be destined to ensure success or at least a successful outcome in the exercise.” Exercises, the critics say, must be rigorous enough to expose vulnerabilities — an outcome many bureaucrats do not want to face.

Ever since the July 2002 National Strategy for Homeland Security instructed the Homeland Security Department to create the National Exercise Program, preparedness programs have proliferated up and down the government ladder. They run the gamut from full-scale live exercises to tabletop drills that ask decisionmakers to respond to an escalating series of crises. In one messy affair, first responders in Washington, D.C. simulated a rescue of fake blood-bedaubed passengers from a subway train stopped in a tunnel under the Potomac River. In another, volunteers in Mankato, Minnesota used the Halloween favorite Smarties candies to see how quickly drugs could be distributed in a crisis. With enough money and imagination, the possibilities are endless. That, however, does not mean they should all be pursued.

Cost remains a major issue. The first TOPOFF drill in 2000 — which tested responses to WMD attacks — cost the federal government $3 million. TOPOFF 3, however, cost $21 million after the number of federal agencies involved soared from eighteen to twenty-seven and came to include dozens of state and local agencies and 156 private organizations. As for TOPOFF 4, scheduled for next year, the expense is expected to be even higher: a precursor event alone, June’s TOPOFF 4 Command Post Exercise, included more than 4,000 people and cost $3.5 million. Not surprisingly, some blame private industry for driving up costs without adding value. “If you’re a consultant, it’s pretty easy to go anywhere with these templates on [Microsoft] Word and scratch out ‘Boise, Idaho,’ and put in ‘Orlando, Fla.,’ ” says Eric Noji, retired associate director of the Bioterrorism Preparedness and Response program at the Centers for Disease Control and Prevention and now director of the Pandemic Avian Influenza Preparedness program at the Uniformed Services University.

Problems abound. Participants in tabletop drills may be less than honest, or may attempt to brush away difficult situations. During a recent FEMA drill on hurricane preparedness, the agency seemed unabashed about its own shortcomings. “It happened a lot — the conversation would stop over something like generators or ice, and a FEMA guy would say, ‘Look, don’t worry about that, we’ve got contracts in place, you’ll get your million gallons of water a day or whatever,’ ” recounted one participant. To overcome this problem, independent evaluation is critical to ensure that exercises accurately reflect capabilities and deficiencies.

“All these exercises don’t mean anything unless there is some type of after-action report, [but] some people in some agencies see the exercise as the end in itself rather than a means to an end,” said professor Carl Osaki of the University of Washington, who has designed several simulations. “A lot of times the findings of the after-action reports require additional training or policy changes. Sometimes [producing the reports is] time-consuming, or they’re costly. So once they hit some of those barriers, the after-action report is sometimes seen as an academic exercise.”

- read more in Zack Phillips’ GovExec report