The Catastrophe Method: Using Intolerable Consequences To Detect Concealed Threats

By Gary Oleson, TASC Senior Engineer

(I chose this paper because the writer raises elements worth considering when thinking about how teams respond.)

The budget reductions being imposed on both the military and intelligence communities will result in greater risk from emerging or concealed threats as information about such threats declines along with Washington’s abilities to respond to them. The decline in intelligence expenditures, in particular, increases the likelihood that serious threats will evade detection and identification. A potential adversary could suddenly present the United States with a long-duration threat to its electrical grid or a crippling blow to its overseas force projection. Failure to look for such concealed threats is a formula for unacceptable strategic surprise. Since the natural tendency of intelligence operations is to focus available resources on known threats, looking for concealed threats is as much a challenge to resource allocation as search methodology. Both challenges can arguably be addressed by focusing on the most intolerable of potential consequences: national catastrophes.

GROWING RISK

In the current budget environment, it is increasingly difficult for Defense Department (DoD) and intelligence organizations to allocate analytical and collection resources to speculative threats. Resources are never sufficient to take every desirable action against known threats. How, then, can spending intelligence resources on threats for which there is no distinctive evidence be justified?

The temptation is to focus the Intelligence Community’s (IC) analysis and collection only on threats it already knows about. The broad sweep of U.S. intelligence collection and analysis is trusted to be good enough to detect any major emerging threat before it becomes too immediate for an effective response. America’s record of intelligence success would seem to justify this confidence. Threats have been underestimated, but rarely has the IC failed to detect indications of them. The country’s historical failures have come most often because its leaders did not pay attention to the available indications or failed to understand their significance until it was too late.

Budget contraction necessarily affects the Intelligence budget, and reduced resources increase risk. The IC and defense sectors are likely to focus even more on known threats while the intelligence collection and analysis that is expected to detect concealed threats is itself reduced. Reduced investment in strategic warning by the Intelligence Community would be an indicator of an increasing risk.

As the IC focuses more narrowly on known threats, emerging or concealed threats will likely go undetected. If any of these concealed threats becomes a serious challenge to U.S. national interests, the nation could be subject to severe strategic surprise, becoming even more vulnerable to its “unknown unknowns.”1

RESOURCE ALLOCATION

In order to justify spending any resources on as yet undetected threats, a means of limiting the search is necessary. The number of things to be looked for needs to be reduced even before the IC knows exactly what it is looking for. The focus must be on threat scenarios that most deserve attention in an era of shrinking resources.

Not all potential threats are of equal concern. Many known threats could do little more than harass, causing little damage but much annoyance. Others could produce significant damage, but not damage that could conceivably rise to the level of an existential or systemic threat to U.S. national security.

An existential threat is one that threatens the survival of the nation as a functioning political or economic entity. A systemic threat is one that threatens significant damage to a major component of the national interest that cannot be easily or quickly repaired. An example of a systemic threat would be bringing down a regional power grid for a prolonged period. Another example would be serious degradation of the country’s ability to conduct global military operations, such as an enemy defeating U.S. carrier battle groups or disabling the Defense Department’s ability to communicate with its overseas forces.

While the country’s leadership may reasonably decline to pursue speculative threats that, even if present, could do little more than minor injury to America’s national interest, failure to detect an existential or systemic threat cannot be tolerated. Spending modest resources on speculative threats that could do serious harm to the national interest makes sense. The challenge is how to spend those resources to maximum effect.

SEARCH METHOD

The scope of a search for concealed threats can be effectively reduced by limiting it to those threats that have a potential to cause intolerable consequences, that is, existential or systemic threats with potentially catastrophic consequences. Since the IC can’t know what threats it hasn’t yet detected, the process cannot start with the threats. By the process of elimination, a search for concealed existential threats is best accomplished by starting with the potential catastrophes.

By starting with the end result (a hypothetical catastrophe) and working back to the possible causes (concealed threats), analysts can use an established intelligence analysis technique called “What If? Analysis.” What If? Analysis begins by assuming that a particular end result has already happened, then works to develop scenarios for how it could have happened.2 The method can be used for any case where the importance of the end result is clear, but the potential chain of events leading to that result is unclear or has not yet been defined. It is especially useful when an end result is achievable but regarded by current conventional wisdom as too unlikely to consider.

DEFINING CATASTROPHE

The first step is to create a well-defined end result that will support the purpose of the analysis. After that, scenarios are created to establish, as much as possible, what range of events could possibly lead to that result. The most plausible scenarios are then examined to determine what indicators might show that each scenario is becoming more likely. Finally, the scenarios are ranked in order of the attention they merit and the associated indicators are monitored periodically.
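To make this workflow concrete, the sketch below shows one hypothetical way the artifacts of a What If? Analysis might be recorded: a defined catastrophe, candidate scenarios, and the indicators attached to each, with a simple ranking by analyst-assigned plausibility. The class names, fields, scoring rule, and example scenarios are illustrative assumptions, not part of the article or of any established tool.

```python
# Illustrative sketch only: one possible way to organize the artifacts of a
# What If? Analysis. All names, fields, and scoring rules are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    description: str          # observable sign that a scenario is developing
    observed: bool = False    # set True when monitoring detects it

@dataclass
class Scenario:
    name: str                       # a hypothesized path to the catastrophe
    plausibility: float             # analyst judgment, 0.0 to 1.0
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Catastrophe:
    end_state: str                  # the well-defined, intolerable end result
    scenarios: List[Scenario] = field(default_factory=list)

    def ranked_scenarios(self) -> List[Scenario]:
        """Rank scenarios by the attention they merit (here, plausibility)."""
        return sorted(self.scenarios, key=lambda s: s.plausibility, reverse=True)

# Hypothetical example: a prolonged regional blackout.
blackout = Catastrophe(
    end_state="Regional power grid kept down for several weeks",
    scenarios=[
        Scenario("Cyber attack on grid control systems", 0.6,
                 [Indicator("Reconnaissance of utility networks")]),
        Scenario("Coordinated physical attack on transformers", 0.3,
                 [Indicator("Procurement of specialized weapons")]),
    ],
)
for s in blackout.ranked_scenarios():
    print(s.name, s.plausibility)
```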

Defining a catastrophe to drive each analysis is the most important and, in subtle ways, perhaps the most difficult step. The greater the level of damage required to qualify as a catastrophe, the fewer means will be capable of causing it, and thus the fewer threats the analysis will identify to look for. Conversely, if the bar is lowered too far, the analysis risks ending up back at the beginning: too many speculative threats to look for, and therefore looking for none.

The source of the initial idea for a catastrophe doesn’t really matter, but two sources naturally present themselves. One source is to start with a plausible threat that has catastrophic potential and use it to create the initial idea for a catastrophe, for example, a cyber threat to some part of the national infrastructure. It doesn’t matter if the original threat later turns out to be impractical as long as it helps define a useful hypothesis.

The other source is to identify an area of vulnerability that would demand the attention of any adversary. For example, the United States’ dependence on space capabilities must provide tempting targets for near-peer adversaries, if they can only find ways to engage them. If such ways exist and are concealable, the need to be watching for them becomes clear.

After the initial idea is created, the next step is to define a hypothetical catastrophe serious enough to demand IC attention. For example, much has been written about cyber threats to the U.S. national electrical grid. The question arises as to when a blackout would become a catastrophe. The two key variables are geographical range and duration, as shown in Figure 1. Duration may be the more critical variable. The Northeast Blackout of 1965 covered a large area with a large population, but lasted no more than twelve hours in any single location. At that duration, a blackout amounts mostly to an unscheduled holiday, especially since many essential services have generator backup.

How long, then, would a regional or national blackout have to last for it to have a catastrophic impact on the national economy? Some analysis is necessary to provide a basis for judgment. Taking down a regional grid for one day would not be enough. Would it take a week or longer? Or several weeks? The longer a blackout must last to become a catastrophe, the smaller the number of threats capable of producing one. Merely taking the grid down would not be enough. An adversary would have to keep it down against the nation’s best efforts to restore it, a far more difficult challenge.

Figure 1. Defining catastrophe: Electrical blackout example.
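As a rough illustration of the threshold judgment Figure 1 implies, the sketch below treats a blackout as catastrophic only when both its reach and its duration cross analyst-chosen cutoffs. The specific cutoff values (ten million people, two weeks) are assumptions made purely for illustration; the article leaves the actual thresholds as a question for analysis.

```python
# Toy sketch of the two-variable judgment implied by Figure 1: a blackout is
# treated as catastrophic only when both geographic reach and duration cross
# analyst-chosen cutoffs. The cutoff values are illustrative assumptions.
def is_catastrophic_blackout(population_affected_millions: float,
                             duration_days: float,
                             min_population_millions: float = 10.0,
                             min_duration_days: float = 14.0) -> bool:
    """Return True when both range and duration exceed the chosen cutoffs."""
    return (population_affected_millions >= min_population_millions
            and duration_days >= min_duration_days)

# The 1965 Northeast Blackout: very large area, roughly half a day.
print(is_catastrophic_blackout(30.0, 0.5))   # False: far too short
# A hypothetical multi-week regional outage.
print(is_catastrophic_blackout(30.0, 21.0))  # True
```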

Another example of defining a catastrophe derives from American dependence on space. Space capabilities are a foundation of America’s ability to project force around the globe. The U.S. uses them to communicate, gather intelligence, monitor the weather, perform precision navigation, and conduct precision strikes. Without their space capabilities, U.S. forces deployed around the globe would be unable to see, hear, speak, move, or hit back nearly as well as they can now. They might even have to cease operations until replacements or substitutes for the missing capabilities are activated.

How effective would an attack on U.S. space assets have to be to create a catastrophic effect on American operations in an adversary’s region? Disabling U.S. overhead communications might be the first priority, as information superiority is central to American military effectiveness. Disabling the space-based portion of U.S. communications would require disabling a large number of spacecraft in geosynchronous Earth orbit (GEO), the highest orbit that the country commonly uses.

Degrading U.S. precision navigation might be the second priority. That would require disabling a significant portion of the Global Positioning System (GPS), which would also degrade the country’s ability to conduct precision strikes and the time checks needed by its communications systems. At a little more than half GEO altitude, GPS occupies the second highest commonly used orbit.

Analyses might show that disabling all U.S. spacecraft in those high orbits (orbital altitudes shown to scale in Figure 2) would not be necessary to create a catastrophic effect, but the greater part of their function would have to be degraded for as long as the adversary deemed necessary. Disabling some U.S. satellites in low Earth orbit might also be necessary.

Figure 2. Orbital altitudes for key U.S. assets.

CREATING SCENARIOS

After defining the nature of a hypothetical catastrophe, the next step is to create scenarios for a range of means that could produce that event while remaining concealed from the expected intelligence collection. Brainstorming is likely to be a critical tool for identifying potential scenarios. Determining exactly how a scenario might work is not necessary. What’s needed is to define boundaries within which it must work. For example, any scenario for an enemy’s disabling large numbers of U.S. spacecraft in high orbits will have to overcome a series of challenges, especially since the preparations must be concealed. The goal at this stage is to be as exhaustive as possible in identifying possible approaches, while avoiding the temptation to converge on a single attractive scenario.

This step can also usefully rule out some approaches. For example, even if a ground-based laser could theoretically disable spacecraft in GEO, analysis is likely to show that the effort would be so expensive, involve so many people, and produce such distinctive activities as to guarantee early detection. In all such cases, if the IC and DoD haven’t seen it, a potential enemy isn’t doing it.

IDENTIFYING INDICATORS

For the scenarios that are harder to detect, the next step is to identify indicators that the means to execute a particular scenario are being developed. Looking for negative indicators, that is, indicators that the means for a scenario are not being developed, is also a good idea. The list of indicators associated with each scenario can then be used to guide analyses of existing information sources. They can define filters for existing data sets and provide questions for open source analysis. Identifying any indicators that can be observed by filtering easily accessible data sets is especially important, since these can be monitored less expensively.
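As a minimal sketch of what "defining filters for existing data sets" could look like in practice, the fragment below matches indicator-derived keywords against a small set of hypothetical open source records. The record format, the keyword lists, and the simple substring matching are all assumptions for illustration; real indicator monitoring would be far more sophisticated.

```python
# Minimal sketch: applying indicator-derived filters to an existing data set.
# The record format, keywords, and matching rule are illustrative assumptions.
from typing import Dict, Iterable, List

def matching_records(records: Iterable[Dict[str, str]],
                     keywords: List[str]) -> List[Dict[str, str]]:
    """Return records whose text mentions any indicator keyword."""
    hits = []
    for record in records:
        text = record.get("text", "").lower()
        if any(kw.lower() in text for kw in keywords):
            hits.append(record)
    return hits

# Positive indicators suggest a scenario is developing; negative indicators
# (e.g., evidence a required capability is being abandoned) suggest it is not.
positive_keywords = ["grid control intrusion", "transformer procurement"]
negative_keywords = ["program cancelled", "facility decommissioned"]

open_source_reports = [
    {"id": "r1", "text": "Report of a grid control intrusion attempt."},
    {"id": "r2", "text": "Unrelated trade statistics."},
]

print(len(matching_records(open_source_reports, positive_keywords)))  # 1
print(len(matching_records(open_source_reports, negative_keywords)))  # 0
```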

If these analyses reveal indications that potentially catastrophic threats are being developed, the issue moves into the more usual domains of intelligence operations. If not, testing the hypothesis that other powers are not developing the means to produce the selected catastrophe may then be possible, as opposed to the usual hypothesis that they are.

PERIODIC REVIEWS

If the possibility of concealed catastrophic threats cannot be ruled out using existing data sources, a set of indicators should be established and monitored to show whether any of the more plausible scenarios are becoming more or less likely. Also useful is ranking the threat scenarios in order of the danger they appear to present and focusing attention on their associated indicators accordingly. These indicators should then be reviewed periodically. Depending on the results of these periodic reviews, additional analysis or even new intelligence collection may eventually be advisable. In this way, the expense of these exercises can be kept low, while providing some confidence over time that a concealed catastrophic threat is not being created.
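One way to picture such a periodic review is the sketch below: each review tallies how many of a scenario's indicators have been observed since the last check and flags scenarios that cross an analyst-set attention threshold. The counting rule and threshold value are assumptions made for illustration only.

```python
# Illustrative sketch of a periodic review: count observed indicators per
# scenario and flag those crossing a review threshold for closer attention.
# The threshold value and the simple counting rule are assumptions.
from typing import Dict, List, Tuple

def review(scenario_indicators: Dict[str, List[bool]],
           threshold: int = 2) -> List[Tuple[str, int, bool]]:
    """Return (scenario, observed_count, needs_attention) sorted by count."""
    results = []
    for scenario, observations in scenario_indicators.items():
        observed = sum(observations)
        results.append((scenario, observed, observed >= threshold))
    return sorted(results, key=lambda r: r[1], reverse=True)

# True means the indicator was observed during this review period.
this_quarter = {
    "Cyber attack on grid control systems": [True, True, False],
    "Coordinated physical attack on transformers": [False, False, False],
}
for scenario, count, flag in review(this_quarter):
    print(f"{scenario}: {count} indicators observed, attention={flag}")
```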

As a final note, special attention should be paid to threat domains, such as cyber and space, where little historical experience of major conflict exists. Without real-world experience, all analytical ideas about these domains should be regarded as resting on theories that could prove entirely false in practice.

IMPROVING DETECTION

If failure to detect concealed threats having the potential to cause a national catastrophe is unacceptable, then a method to search for such threats that is affordable even in this era of reduced resources becomes necessary. What If? Analysis provides such a method. By focusing first on the potential catastrophes, rather than the unknown threats, it becomes possible to work back and define a limited range of potential causes. Further analysis can narrow the range to those causes that could be concealed from the expected intelligence collection and analysis. Indicators common to all the potential threats within the focused range of potential causes can then be sought. Monitoring these indicators would provide an improved chance to detect any concealed threats, even while the threats themselves remain unknown.

REFERENCES

1 For discussions on analysis and surprise, see Richards J. Heuer, Jr. and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis (Washington, DC: CQ Press, 2011). See also Jack Davis, Strategic Warning: If Surprise is Inevitable, What Role for Analysis?, Occasional Papers, Vol. 2, No. 1, January 2003 (Washington, DC: Sherman Kent Center for Intelligence Analysis).

2 What If? Analysis is distinct from High Impact/Low Probability Analysis in starting from the consequence and working back to the causes. High Impact/Low Probability Analysis starts with a low probability development that seems to be increasingly likely and works forward to explore the consequences. What If? Analysis often addresses events with unknown probabilities.
