# Explore or Ignore?

January 16, 2012

The objective of this post is to set out a detailed explanation of the fact that **radius of stability** models of robustness are, by definition, models of **local** robustness and are therefore unsuitable for the management of a severe uncertainty that is characterized by a vast uncertainty space.

Some readers may wonder whether going to such great lengths to elaborate a fact that seems so patently obvious is really necessary. After all, the situation is crystal clear. The fact that radius of stability models are models of **local** robustness means that they are **not designed to seek global** robustness. From this it follows that they are unsuitable for taking on a severe uncertainty that is manifested in a vast uncertainty space, because such an uncertainty must be handled by means of a **global** robustness analysis.

This observation is of course perfectly legitimate. Still, I submit that a detailed explanation of this issue is very much required, because it is important to make it clear to those who are not at home in this area of expertise that a number of (experienced) risk analysts/scholars do indeed propose to use radius of stability models (read: info-gap robustness models) to tackle problems that are subject to a severe uncertainty of this type. And what is more, it is important to make it clear that such a proposition is based on a misguided attribution of properties and capabilities of models of **global** robustness to models of local robustness. It is important to call attention to this error and to elucidate it because it is generally buried (in info-gap publications) under a pile of misleading rhetoric.

The discussion that follows makes this contention abundantly clear.

Consider then the following three clearly distinct (but related) sets that are associated with a parameter, call it u, whose true value is unknown, indeed is subject to severe uncertainty:

**The uncertainty space.** Let U denote the set of all the possible/plausible values of u. The basic assumption is that the true value of u is an element of this set. Hence, the idea is to determine the robustness of the system considered against variations in the value of u over U.

**The active uncertainty set.** Let A denote the subset of U that effectively takes part in the robustness analysis. In other words, A is that subset of U all of whose elements are reckoned with in the analysis, so that they affect, or can affect, the results of the robustness analysis. We shall refer to the elements of A as the **active** values of u.

**The inactive uncertainty set.** Let U \ A denote the **complement** of A relative to U. This is that subset of the uncertainty space whose elements are **not** reckoned with in the analysis, so that they **have no** impact, indeed **can have no** impact, on the results generated by this analysis. We shall refer to the elements of U \ A as the **inactive** values of u.
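To fix ideas, the three sets can be sketched in code. The following minimal Python fragment is my own illustration (the toy discrete uncertainty space, the estimate, and the radius are all arbitrary choices, not taken from the post): a hypothetical local analysis reckons only with values within distance 5 of an estimate.

```python
# The uncertainty space U: all possible/plausible values of the parameter u
# (a toy discrete space for illustration only).
U = set(range(-100, 101))

# The active uncertainty set A: the values actually reckoned with in the
# analysis. Here, hypothetically, a local analysis of radius 5 around the
# estimate u_tilde = 0.
u_tilde, radius = 0, 5
A = {u for u in U if abs(u - u_tilde) <= radius}

# The inactive uncertainty set: the complement of A relative to U.
inactive = U - A

# The two sets partition U: together they cover it, and they do not overlap.
assert A | inactive == U and A & inactive == set()
print(len(U), len(A), len(inactive))  # 201 11 190
```

The point of the sketch is only that, however large U is, the results of the analysis depend on A alone.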

#### Example 1: global approaches

There are approaches to dealing with severe uncertainty that take the active uncertainty set A to be equal to the uncertainty space U. Obviously, such approaches hold that every possible/plausible value of u counts and must therefore be considered in the analysis.

The picture is this:

Figure 1

This picture speaks for itself so that no further comment on it is required.

#### Example 2: scenario generation

It is common practice in many applications to base the uncertainty/robustness analysis on a relatively small number of possible realizations (scenarios) of the parameter of interest, rather than on the entire uncertainty space U. In fact, in some applications the uncertainty set U may consist of infinitely many elements, but the analysis would take into account no more than three values of u: an “optimistic” value, a “pessimistic” value, and an “average” value. In such cases, the active uncertainty set A consists of no more than three elements.
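A minimal sketch of this three-point practice (my own illustration; the interval and the recipes for the three values are arbitrary, hypothetical choices):

```python
# The uncertainty space U is an interval, hence has infinitely many elements.
U_lo, U_hi = 0.0, 100.0                # hypothetical: U = [0, 100]

# Yet the active uncertainty set A holds just three scenario values of u.
optimistic = U_hi                      # hypothetical "optimistic" value
pessimistic = U_lo                     # hypothetical "pessimistic" value
average = (U_lo + U_hi) / 2.0          # hypothetical "average" value

A = {optimistic, pessimistic, average}
print(sorted(A))  # [0.0, 50.0, 100.0]
```

Everything in U outside these three values is, in the terminology above, inactive.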

The generic situation is depicted in the figure below:

Figure 2

This picture speaks for itself so that no further comment on it is required.

In cases where the objective is to test the system’s behavior over the uncertainty space U, it may prove necessary to generate an extremely large number of scenarios so as to ensure that one adequately represents the variation of the value of u over U. See for example the report Enhancing strategic planning with massive scenario generation: theory and experiments (2007), notably the following extract from the *Preface*:

As indicated by the title, this report describes experiments with new methods for strategic planning based on generating a very wide range of futures and then drawing insights from the results. The emphasis is not so much on “massive scenario generation” per se as on thinking broadly and open-mindedly about what may lie ahead. The report is intended primarily for a technical audience, but the summary should be of interest to anyone curious about modern methods for improving strategic planning under uncertainty.

That said, consider now the following extract from Wikipedia (January 10, 2012, emphasis added):

Scenario planning starts by dividing our knowledge into two broad domains: (1) things we believe we know something about and (2) elements we consider uncertain or unknowable. The first component — trends — casts the past forward, recognizing that our world possesses considerable momentum and continuity. For example, we can safely make assumptions about demographic shifts and, perhaps, substitution effects for certain new technologies. The second component — true uncertainties — involves indeterminables such as future interest rates, outcomes of political elections, rates of innovation, fads and fashions in markets, and so on.

The art of scenario planning lies in blending the known and the unknown into a limited number of internally consistent views of the future that span a very wide range of possibilities.

It is important to take note then that scenario planning, as suggested here by the term “art”, is generally an extremely difficult task. This is due to the difficulties that one would face in quantifying and implementing the many **qualitative** guidelines offered by the vast literature on scenario generation.

#### Example 3: local analysis

There are many applications where the object of interest is the behavior of a system in a small neighborhood of the uncertainty space U rather than in the entire set U. In such cases, the active set A can be a neighborhood, call it U(α, ũ), of radius α around some element ũ of U, where ũ is given a priori and α is determined according to some given recipe.

The generic situation is depicted in the figure below:

Figure 3

This picture speaks for itself so that no further comment on it is required.

Obviously, in cases such as these, one had better be able to provide a cogent argument explaining why the analysis is conducted **not** on the uncertainty space U itself, but rather on a relatively small neighborhood U(α, ũ) thereof, where ũ denotes the center point of the neighborhood and α denotes its size (radius). The point is that experience has shown that there can be “good” and “bad” reasons for prescribing a local analysis. It is important therefore that the proposition to employ a local analysis be fully verifiable, to allow prospective users to ascertain (in each case) whether such a proposition is indeed sound for the case considered.

It goes without saying that such a justification would be imperative in cases where the uncertainty analysis is claimed to deal with the full spectrum of the variability of u, including values representing “rare events”, “surprises”, “catastrophes”, and so on. The onus would then be on anyone making such claims (e.g. info-gap scholars) to explain, indeed justify, how a local analysis in a small neighborhood of the large space U can possibly do the job it is claimed to do.

### Discussion

Having clarified the distinctive characteristics of the sets that might figure in an uncertainty analysis, my next task is to call attention to the fact that, surprising though it may sound, some scholars/analysts seem to confuse the following two distinctly different concepts/objects:

- The set of possible/plausible values of a parameter, denoted above by U.
- The set of values of the parameter that actively participate in an analysis, denoted above by A.

It is important to be aware of this fact as it is a prerequisite for a correct assessment of the results reported on in certain publications dealing with the management of severe uncertainty. For, as might be expected, those scholars/analysts who confuse the two concepts/objects effectively misconstrue the results yielded by the analysis that they perform. The following example illustrates this point.

#### Example: info-gap decision theory

The robustness analysis that is prescribed by info-gap decision theory is manifestly a local robustness analysis, namely the robustness of a decision is determined/defined by a model of local robustness. The implication therefore is that unless proven otherwise, the decision in question is/can be only locally robust/fragile. But the **rhetoric** in this literature depicts the results yielded by the robustness analysis as though they were yielded by a **global** analysis. So much so that some info-gap scholars contend that info-gap’s robustness analysis yields a decision that generates satisfactory outcomes under the **widest set** of possible values of the parameter of interest.

So, the question that those asserting this claim must answer is this:

How can a model of **local** robustness possibly yield a decision that generates satisfactory outcomes under the **widest set** of possible values of the parameter of interest?

If you take the trouble to look into the issues raised by this question, you immediately see that info-gap’s robustness analysis does not seek such a decision because:

- The robustness analysis prescribed by info-gap decision theory **does not seek** a decision that generates satisfactory outcomes under the **widest set** of possible values of the parameter of interest. **All** that this robustness analysis prescribes doing is to **seek a decision** that generates satisfactory outcomes over the **largest neighborhood** of a given nominal value of the parameter.

And to see that this is so, keep in mind the above distinction between the uncertainty space U and the active uncertainty set A. Take note then that in the case of info-gap decision theory, insofar as a decision q is concerned, the active uncertainty set is the following:

A(q) = U(α, ũ)

where α can be any real number that is larger than ρ(q), recalling that ρ(q) denotes the info-gap robustness of decision q.
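To make this concrete, here is a minimal sketch of my own for a one-dimensional parameter (the performance function, the requirement level r_c, and all numbers are hypothetical, not taken from the info-gap literature). It computes the info-gap robustness ρ(q) of a decision q as the largest radius α such that the performance requirement holds everywhere on the neighborhood U(α, ũ) = [ũ − α, ũ + α], found by bisection:

```python
# Toy sketch: rho(q) = largest alpha such that f(q, u) >= r_c for every u in
# the neighborhood U(alpha, u_tilde) = [u_tilde - alpha, u_tilde + alpha].
# All names here are hypothetical illustrations, not an official API.

def info_gap_robustness(f, q, u_tilde, r_c, alpha_max=1e6, tol=1e-9):
    """Bisection on the radius alpha; assumes f(q, .) is continuous and that
    the worst case over the interval occurs at an endpoint (true for the
    performance function used below, which decreases in |u|)."""
    def ok(alpha):
        return f(q, u_tilde - alpha) >= r_c and f(q, u_tilde + alpha) >= r_c

    if not ok(0.0):       # requirement violated even at the estimate itself
        return 0.0
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if ok(mid) else (lo, mid)
    return lo

# Hypothetical performance function: payoff of decision q under parameter u.
f = lambda q, u: q - abs(u)   # requirement: q - |u| >= r_c

rho = info_gap_robustness(f, q=10.0, u_tilde=0.0, r_c=4.0)
print(round(rho, 6))  # 6.0
```

Note that the computation never evaluates f outside the interval [ũ − α, ũ + α]: values of u beyond it play no role whatsoever in determining ρ(q), however vast U may be.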

This is shown schematically in the figure above, where the uncertainty space U is represented by the large rectangle and the active set A(q) is represented by the small yellow circle.

Of course, the more basic issue that this figure brings into sharp focus is info-gap decision theory’s “unique” approach to severe uncertainty. And to explain this point let us examine the following question:

What measures does info-gap decision theory take, more precisely, what measures does info-gap’s robustness model put in place, in order:

- to deal adequately with the uncertainty being **severe**, namely
- to ensure that the active uncertainty set A properly represents the **variability** of u over U?

And the answer to this question is this:

The measure that info-gap decision theory takes to deal with the difficulties arising from the uncertainty being severe is to … **ignore** the severity of the uncertainty.

This fact is brought out forcefully by the above figure, which illustrates the profound (one might say comical) incongruity between the huge challenge posed by the severity of the uncertainty that the theory claims to address, and the localized robustness analysis that it prescribes to meet this challenge.

More specifically, while the theory claims to take on a severe uncertainty that is manifested in

- a vast (e.g. unbounded) uncertainty space U and a poor estimate ũ that can turn out to be substantially wrong,

the weapon that it proposes to deal with this uncertainty is

- a robustness analysis that makes do with establishing the size of the **smallest** perturbation in the estimate ũ that can cause a violation of the performance requirement.

And if this were not enough, this profound incongruity is further exacerbated by declarations in the info-gap literature that this type of analysis puts at the analyst’s disposal a reliable methodology for dealing with “rare events”, “surprises”, “catastrophes”, etc.

I refer to this incongruity as the **explore but ignore** effect, to wit:

**Explore:** By expressly positing a vast (e.g. unbounded) uncertainty space, info-gap decision theory presumably gives notice that it proposes to explore in depth the possible/plausible variations in the value of u over its entire uncertainty space U.

However,

**Ignore:** By employing a radius of stability robustness analysis, the theory **effectively** confines the search to decisions that are robust against small perturbations in the value of the estimate ũ. It therefore ignores the performance of decisions over the bulk of the uncertainty space (No Man’s Land).

And what is so remarkable in all this is that info-gap scholars seem to have no clue of this incongruity. For how else can one explain the big fuss that is made in the info-gap literature about info-gap’s robustness analysis’s supposed capability to explore the entire uncertainty space U?

The (misguided) argument that info-gap scholars put forward to substantiate this claim runs as follows:

- The nested neighborhoods U(α, ũ), α ≥ 0, in info-gap’s uncertainty model expand as α increases.
- Furthermore, these neighborhoods are constructed/defined so that as α → ∞ the neighborhood U(α, ũ) approaches U.
- So, for a sufficiently large α, the neighborhood U(α, ũ) is sufficiently similar to U.

In short, the family of nested neighborhoods {U(α, ũ): α ≥ 0} spans the uncertainty space U.

This is illustrated in the picture below:

Figure 4

The neighborhoods are represented by the gray circles centered at the estimate ũ, and the uncertainty space U is represented by the largest (blue) circle.

However!

While it is no doubt true that these neighborhoods do indeed span the uncertainty space, the important point to note here is that the key factor that drives info-gap’s robustness analysis, and directly determines the results yielded by it, is the **performance level** that the decisions are **required** to meet. In other words, this valid argument does not address at all the extent to which, if any, info-gap’s robustness analysis takes into consideration the performance levels in areas of the uncertainty space that are distant from the estimate ũ.

Differently put, the info-gap robustness of decision q is determined in total disregard of the performance of q in relation to values of u that are outside the neighborhood U(α, ũ), where α is any real number greater than ρ(q).

For this reason I refer to the set U \ U(α, ũ) as the No Man’s Land of decision q at ũ.

What this metaphor brings out is that it is immaterial whether your algorithm for computing the value of ρ(q) is capable of exploring the entire uncertainty space U. The key element here is that this value is not affected by the performance of q over the **No Man’s Land** U \ U(α, ũ). The inference therefore is that if, to determine the value of ρ(q), it is unnecessary to explore the **No Man’s Land**, then exploring the entire uncertainty space U is in fact wasteful.
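This invariance can be demonstrated with a self-contained toy computation of my own (all names and numbers are hypothetical): two performance functions that agree on a neighborhood of the estimate ũ but differ catastrophically over the No Man’s Land receive exactly the same robustness value.

```python
# Toy sketch: rho is unaffected by performance over the No Man's Land.

def robustness(f, u_tilde, r_c, grid):
    """Largest radius in `grid` (ascending) whose interval endpoints still meet
    the requirement f(u) >= r_c. Checking only the endpoints is valid here
    because both test functions below worsen as |u| grows (up to the cliff)."""
    best = 0.0
    for alpha in grid:
        if all(f(u) >= r_c for u in (u_tilde - alpha, u_tilde + alpha)):
            best = alpha
        else:
            break
    return best

grid = [i * 0.5 for i in range(1, 201)]   # candidate radii 0.5, 1.0, ..., 100.0

f1 = lambda u: 10.0 - abs(u)                           # hypothetical performance
f2 = lambda u: 10.0 - abs(u) if abs(u) <= 7 else -1e9  # catastrophic beyond |u| = 7

r1 = robustness(f1, u_tilde=0.0, r_c=4.0, grid=grid)
r2 = robustness(f2, u_tilde=0.0, r_c=4.0, grid=grid)
print(r1, r2)  # 6.0 6.0
```

Both functions satisfy the requirement precisely on [-6, 6], so both get ρ = 6; the disaster that f2 produces just beyond |u| = 7 is invisible to the analysis.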

It goes without saying that some algorithms for determining the radius of stability of systems exploit this fact.

The following sequence of pictures is designed to make vivid the errors in the argument that info-gap decision theory seeks decisions that are robust against severe uncertainty.

The first picture calls attention to the fact that an info-gap analysis presupposes a distinction between “acceptable” and “unacceptable” values of u. The shaded area represents the set of “acceptable” values of u.

Figure 5

Next, only neighborhoods that are contained in the region of acceptable values of u are admissible in an info-gap robustness analysis. This is illustrated in the picture below.

Figure 6

Hence, the info-gap robustness of the decision depicted in this picture is equal to the radius of the largest circle (neighborhood) contained in the shaded area. The larger circles (neighborhoods) are not admissible.

So the result of info-gap’s robustness analysis can be summarized by the following picture:

Figure 7

The info-gap robustness of decision q, denoted ρ(q), is equal to the radius of the largest (green) circle centered at ũ that is contained in the shaded area. To be precise, the radius of this circle, namely the info-gap robustness of the decision under consideration, takes no account whatsoever of the decision’s performance (shape of the shaded area) outside a circle that is slightly larger than the green circle shown, namely U(α, ũ), where α is slightly larger than ρ(q).
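The construction in this picture can be mimicked numerically. The following sketch is entirely my own (the “acceptable” region is an arbitrary intersection of half-planes, and the ray-marching scheme is a crude illustration, not a production algorithm): it approximates the robustness as the radius of the largest circle centered at the estimate that fits inside the acceptable region.

```python
import math

# Hypothetical "acceptable" region (the shaded area): u = (x, y) such that
# x + y <= 5 and x >= -4, i.e. an intersection of two half-planes.
acceptable = lambda x, y: x + y <= 5 and x >= -4

def radius_of_stability(center, step=1e-3, r_max=20.0, n_dirs=360):
    """Radius of the largest circle around `center` that stays acceptable,
    found by marching outward along n_dirs rays and taking the nearest
    boundary crossing (a sketch: accuracy is limited by step and n_dirs)."""
    cx, cy = center
    best = r_max
    for k in range(n_dirs):
        t = 2 * math.pi * k / n_dirs
        r = 0.0
        while r < best and acceptable(cx + r * math.cos(t), cy + r * math.sin(t)):
            r += step
        best = min(best, r)
    return best

rho = radius_of_stability((0.0, 0.0))
print(round(rho, 2))  # 3.54, i.e. 5/sqrt(2): the distance to the nearer boundary
```

The computed radius depends only on the boundary nearest the estimate; how the shaded area behaves far from (0, 0) never enters the calculation, which is precisely the point of the picture.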

This is illustrated in the following picture that depicts the **No Man’s Land** of info-gap’s robustness analysis.

Figure 8

Now.

To the best of my knowledge, most countries do not ban analysts from conducting their robustness analysis in the **No Man’s Land**.

Still.

The whole point of the **No Man’s Land** metaphor is to illustrate a situation where the performance of a decision over its **No Man’s Land** is not taken into account in the robustness analysis. The message of this illustration is of course that in such a situation (as in the case of info-gap’s robustness analysis) the robustness of such a decision cannot be claimed to represent the decision’s performance over the **No Man’s Land**. So, if the **No Man’s Land** is vast and/or the performance of the decision over this region of the uncertainty space is a determining factor in the analysis, then the robustness analysis cannot be claimed to reflect the robustness of the decision over the uncertainty space.

### Size of the **No Man’s Land**

I have been accused, on a number of occasions, of deliberately misrepresenting the implications of info-gap’s robustness analysis by drawing the region covered by this analysis as being far smaller than the **No Man’s Land**, let alone the uncertainty space.

Of course, given my discussion so far, I could have dismissed this claim outright as lacking in any merit. However, because a reply to these claims brings out the full dimensions of the fundamental flaw in info-gap’s robustness analysis, I think it important to take it up.

My reply to these accusations is that, far from misrepresenting the facts about info-gap’s robustness analysis, my depiction of the **No Man’s Land** effect in the context of info-gap’s robustness analysis is, in fact, extremely charitable. Indeed, my depiction of info-gap’s **No Man’s Land** is greatly in info-gap’s “favor” because its size in this picture is immeasurably smaller than what it ought to be.

This is so because, according to the Father of info-gap decision theory (Ben-Haim 2001, p. 208; 2006, p. 210; emphasis added):

**Most** of the commonly encountered info-gap models are **unbounded**.

This means that in the case of the “most commonly encountered” models, the **No Man’s Land** would typically be **unbounded**. And the implication is that my depiction of the region covered by info-gap’s robustness analysis vastly exaggerates its size: the region covered by info-gap’s robustness is infinitesimally small compared to the **No Man’s Land**.

### The Explore and/or Ignore quandary

My experience of the past eight years has shown that those who have fallen under the spell of info-gap decision theory have no clue of the basic contradiction that lies at its core. This is a contradiction between what this theory claims to do — seek decisions that are robust against severe uncertainty — and what it actually does — seeks decisions that are robust against small perturbations in a given value of the parameter of interest.

Based on numerous discussions that I have had over the past eight years with info-gap scholars, I can confidently state that their failure to detect this contradiction is due to a more basic incomprehension of the difference between **local** and **global** robustness. Info-gap users therefore have no qualms about asserting that info-gap’s robustness analysis **explores** the uncertainty space U in depth, not realizing that because this analysis is confined to the neighborhood of a point estimate ũ (which is assumed to be poor and can be substantially wrong), it effectively **ignores** the bulk of the uncertainty space U, hence the severity of the uncertainty under consideration.

Hence, the quandary that info-gap scholars find themselves in is this. If they advocate info-gap decision theory as a tool for dealing with a severe uncertainty of the type it claims to address, they must explain how this theory, which in fact ignores the severity of the uncertainty, can properly be advocated for this task. If on the other hand, they recognize that as a model of **local** robustness info-gap’s robustness model is not designed to explore in depth the uncertainty space, then they must explain how this fact squares with the rhetoric in the info-gap literature which hails info-gap decision theory as particularly suitable for the management of severe uncertainty.

That said, I should point out that these matters are not discussed in the info-gap literature. As a result, the already entrenched misconceptions about info-gap’s robustness model’s purported capabilities to explore the uncertainty space become further exacerbated, as illustrated for instance by a recent peer-reviewed article that proposes info-gap decision theory as a suitable method for handling not only Black Swans but also unknown unknowns (see Review 17).

Viva la Voodoo!

Moshe