
Preparedness Insanity: Why We Need to Think Differently About How to Measure Preparedness

Since 9/11, billions of federal grant dollars have been awarded to state and local governments, accompanied by an ongoing quest to measure preparedness. Congress, the Department of Homeland Security (DHS), the Federal Emergency Management Agency (FEMA), the media and the American public have all expressed their desire to understand precisely how prepared we are and how prepared we need to be. Yet attempts to measure preparedness continue to fail, and homeland security officials still struggle to explain and assess preparedness.

It has been said that the definition of insanity is doing the same thing over and over and expecting a different result. This is exactly what has occurred with many of our national preparedness measurement efforts to date, as we have continued to seek a single system that computes preparedness from a series of data inputs. This flawed notion underpinned the ill-fated Cost-to-Capability program of 2009, which aimed to measure the impact of grant dollars, and it extends to the current approach used for the Threat and Hazard Identification and Risk Assessment (THIRA)/State Preparedness Report (SPR) requirements.

Like Cost-to-Capability, the THIRA/SPR relies on a series of data and information inputs to articulate preparedness levels. Both initiatives require a tremendous amount of time and effort for very little return on investment (ROI), yet we continue to charge down the path of a single solution, with each approach seeking one system that measures everything. It is ironic that the very quantitative measures pursued in the name of analyzing ROI should themselves deliver such a limited return.

Measuring preparedness against the broad array of threats and hazards we face does not lend itself to the kind of simple, objective statistics that work for measuring specific reductions in vehicle accidents, residential fire deaths or workplace injuries. We need to think differently about how to measure preparedness, and that thinking must begin to embrace the notion of subjectivity.

When it comes to any sort of evaluation or assessment, subjectivity is generally viewed as a negative concept; evaluations, the thinking goes, should be objective and based on a standard set of criteria. That is true when the topic of the evaluation is well defined or understood and there are solid metrics to consider. However, preparedness lacks comprehensive standards and solid metrics, and “being prepared” often means different things to different people.

There are other fields in which complex concepts and risks are assessed using subjective but rigorous processes. For example, in the realm of cyber security (a notoriously difficult area in which to find objective measures that can be compared across firms, sectors and jurisdictions), several pioneering information security experts have created a well-respected, but nonetheless subjective, measurement tool.

Dan Geer (a computer security analyst and risk management specialist recognized for raising awareness of critical computer and network security issues before the risks were widely understood) and Mukul Pareek (a risk professional who has worked extensively in audit, advisory and risk management) have created the Index of Cyber Security, a survey-based tool in which experts in the field subjectively assess the changing levels of cyber risk. They argue, quite rightly, that “Subjectivity in determining an index does not erode credibility so long as transparency and consistency are maintained.”
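To make that idea concrete, here is a minimal sketch, in Python, of how a survey-based index of this kind could be aggregated. The diffusion-style calculation, the “rising/flat/falling” response categories and the sensitivity parameter are assumptions chosen purely for illustration; they are not the actual methodology behind Geer and Pareek's index.

    # Illustrative only: a simple diffusion-style aggregation of expert survey
    # responses into an index value. Assumed construction, not the real index.
    from collections import Counter

    def monthly_sentiment(responses):
        # Net share of respondents who see risk rising versus falling (-1.0 to +1.0).
        counts = Counter(responses)
        return (counts["rising"] - counts["falling"]) / sum(counts.values())

    def update_index(previous_value, responses, sensitivity=0.05):
        # Move the index up or down in proportion to the month's net sentiment.
        return previous_value * (1 + sensitivity * monthly_sentiment(responses))

    # Example: an index based at 1000, updated with one month of survey responses.
    index = 1000.0
    index = update_index(index, ["rising", "rising", "flat", "falling", "rising"])
    print(round(index, 1))  # 1020.0: net sentiment of +0.4 nudges the index up 2 percent

The point is not the arithmetic but the discipline: so long as the survey questions, the respondent pool and the aggregation rule are published and held constant, a subjective measure can be tracked meaningfully over time.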

Objective measures are useful when they can be well defined and there is consensus on their validity and utility; however, in the vast world of preparedness, there are fairly few, and fairly narrow, areas where that kind of convergence on trusted metrics has occurred. Too often, narrow “objective” measures have been extrapolated into much broader capability assessments, an analytic mistake that produces misleading statements about an entire capability based on small components of it.

For example, the THIRA/SPR assesses unique capability targets (often a narrow subset of each capability) instead of looking at the entire capability in a standardized way, which creates challenges for data aggregation and yields misleading summary information. In this case, the pursuit of specific targets or objectives actually produces less valuable and less reliable information.

We certainly can and should continue to develop “objective” preparedness standards and metrics, but we must also think more broadly and begin to develop and utilize preparedness analysts, much as we have developed intelligence analysts. Like an intelligence analyst, a preparedness analyst must be able to look across several data sets and multiple information sources to “connect the dots” and offer insights.

While “preparedness analysts” do exist at the federal level, they tend in many cases to focus on strategic-level assessments rather than on truly understanding the operational abilities (and limitations) of state and local agencies. Keeping the federal government at the strategic level makes sense here, so it must fall to the states and localities to develop a new kind of preparedness analysis and a new kind of preparedness analyst.

So what would preparedness analysts use as the raw material for their assessments? Intelligence analysts have feeds of human intelligence, signals intelligence and various other ingredients from which to create finished intelligence. Preparedness analysts, too, have numerous streams of information on which to draw. These include:

  • Capability and gap assessments;
  • Surveys and interviews of subject matter experts (SMEs) – responders, emergency managers, etc.;
  • Training data;
  • Grant spending trends;
  • Plan and policy reviews;
  • Exercise assessments and after action reports (AARs); and
  • Assessments and AARs following real-world events.

When this type of information is examined in totality, a true picture of preparedness emerges. That picture is indeed complex, and far harder to communicate than a simple “objective” measure presented as a pie chart. To use an example from the health field, many variables contribute to understanding heart disease, such as family history, diet, exercise and tobacco use. Statistics about rates of heart disease-related deaths alone would not contribute to better health without ongoing, complex analysis of the underlying causes and the steps needed to mitigate them.

Like intelligence analysts, who routinely communicate complex intelligence pictures using statements of confidence in judgments, information gaps and levels of uncertainty, and like public health analysts, preparedness analysts could create methods of succinctly communicating preparedness insights without misleading decision makers by representing broad capabilities with narrow, non-representative quantitative measures.
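As an illustration of how such an assessment might be recorded so that the underlying evidence streams, the analyst's stated confidence and the known information gaps travel together, here is a hypothetical sketch in Python. The field names, confidence levels and example values are invented for this sketch; they are not drawn from the THIRA/SPR or from any FEMA or state system.

    # Hypothetical sketch of a structured analyst assessment record.
    # Field names and example values are illustrative, not from any real system.
    from dataclasses import dataclass, field
    from enum import Enum

    class Confidence(Enum):
        LOW = "low"
        MODERATE = "moderate"
        HIGH = "high"

    @dataclass
    class CapabilityAssessment:
        capability: str                              # e.g., "Mass Care Services"
        gap_assessments: list = field(default_factory=list)
        sme_interviews: list = field(default_factory=list)
        training_data: list = field(default_factory=list)
        grant_spending_notes: list = field(default_factory=list)
        plan_reviews: list = field(default_factory=list)
        exercise_aars: list = field(default_factory=list)
        real_world_aars: list = field(default_factory=list)
        analyst_judgment: str = ""                   # narrative finding, not a score
        confidence: Confidence = Confidence.LOW
        information_gaps: list = field(default_factory=list)

    assessment = CapabilityAssessment(
        capability="Mass Care Services",
        exercise_aars=["Statewide shelter exercise AAR: staffing shortfalls noted"],
        analyst_judgment=("Sheltering capacity is likely adequate for a single-county "
                          "event; a regional event would strain staffing."),
        confidence=Confidence.MODERATE,
        information_gaps=["No recent real-world sheltering data for coastal counties"],
    )
    print(assessment.confidence.value)  # "moderate"

The value of such a structure is not the software; it is that the judgment, the evidence behind it, the confidence attached to it and the gaps in the record remain visible together, much as they do in a finished intelligence product.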

Every day, we trust intelligence analysts to analyze information and offer their expert insights on potential threats against this country. While there has been a push to bring more analytic methodology into intelligence analysis, making it less intuitive and more scientific, there remains a keen understanding in that field that there is more to analysis than mere analytics.

Quantitative and qualitative analysis both tell important stories, and the job of the analyst is to use judgment to weave them into a “decision advantage” for their audience. We must begin to apply this same approach and thinking to preparedness, an equally important and sometimes esoteric concept. If we do not begin to think and act differently, we are destined for continued failure on this important topic.

In addition to the insights of preparedness analysts, the direct feedback and opinions of first responders and other homeland security professionals should also be solicited. A great deal of insight can be gained from asking the right people the right questions, particularly when the dialogue is honest and not tainted by the fear that “wrong” answers will mean diminished grant funding.

In short, assessing preparedness levels is a complicated endeavor that is still, even more than 13 years after 9/11, more of an art than a science. Current efforts to measure preparedness have generally centered on the search for a single system or tool that would “crack the code” of preparedness levels nationwide. The results of that search have been underwhelming to date. Such undertakings might be worthwhile if allowed to develop slowly over time, with transparent stakeholder consultation, and supported by a large number of preparedness analysts.

But short turnaround times, attempts at cost minimization and a lack of consultation with those who have to do the work of assessment have doomed such efforts to irrelevance. As any engineer can tell you, a product can be good, fast or cheap: pick two.

Instead, homeland security officials should consider three major factors in their search for a better system to assess preparedness levels: leveraging multiple systems and data sources to understand preparedness, developing and using analysts much as the intelligence field does, and embracing some degree of subjectivity in assessing preparedness.

This approach may not be as cheap or as fast, but it will produce preparedness assessments that are good, which will make us all better off. It is a different approach, and it will require some changes in focus; but by adopting it, we can shift the dialogue and begin to end the “preparedness insanity.”

Terry Hastings is a senior policy advisor with the New York State Division of Homeland Security and Emergency Services and an adjunct instructor at the Rockefeller College of Public Affairs and Policy at the State University of New York’s University at Albany.


Brian Nussbaum is an Assistant Professor of Public Administration and Policy at the Rockefeller College of Public Affairs and Policy at the State University of New York’s University at Albany. He formerly served as a senior intelligence analyst with the New York State Office of Counter Terrorism.
