A wild-goose chase after a wild guess

In this post we examine the role of a “wild guess” in decision-making, with a view to demonstrating that methodologies which make a “wild guess” a key ingredient of their analysis can easily turn out to be voodoo methodologies, namely exponents of a voodoo decision theory. Our main objective is to bring to light the pitfalls lurking in this proposition, because they may not be easily discernible to all those who come across a methodology proposing a “wild guess” as a key ingredient of its analysis. For, as experience has shown, the rhetoric accompanying such methodologies can easily conceal the plain facts about what these methodologies actually do, so that even experts can be fooled by them.

It ought to be pointed out, though, that the voodoo methodologies that are of concern to us in this discussion can be identified for what they are by universally accepted “tests”. It is therefore most unfortunate that experts who fail in their duty to apply these simple, well-known tests are instrumental in legitimizing the use and promotion of voodoo decision-making.

To illustrate this point we examine info-gap decision theory’s alleged prowess in obtaining reliable results, under conditions of severe uncertainty, on the basis of a local analysis in the neighborhood of a poor point estimate that can be substantially wrong. For the benefit of those who are not familiar with this theory, it should be pointed out that info-gap decision theory is hailed (in the info-gap literature) as a theory offering a reliable methodology that seeks robust decisions for decision problems where a key parameter is subject to severe uncertainty. The severity of the uncertainty is characterized by:

  • a vast (e.g. unbounded) uncertainty space,
  • a poor estimate of the parameter of interest that can be substantially wrong, and
  • a likelihood-free uncertainty model.

To simplify matters, assume that the parameter of interest is a real number u and that the uncertainty space is \mathbf{U}=(-\infty,\infty). This means that the true value of u can be any real number and, what is more, that we are in no position to specify any likelihood structure on \mathbf{U} relating to the true value of u.

No doubt, the uncertainty in the true value of u, as stipulated by info-gap decision theory, is indeed severe.

So, how does info-gap decision theory propose to deal with such a situation?

The first question that immediately comes to mind is this: what would happen in situations where we are unable to put forward an estimate of the true value of u? After all, there are situations where, for various reasons, this piece of information is not available. In such cases all we would be able to say is that the true value of u is an element of some set \mathbf{U}, call it the uncertainty space, and that this set consists of all the possible/plausible true values of u under consideration. In other words, the uncertainty under consideration would be characterized by these two properties:

  • The uncertainty space \mathbf{U} is vast (e.g. unbounded).
  • The uncertainty is likelihood-free.

It is important to note that the second property entails that there are no grounds to assume that the true value of the parameter of interest is more/less likely to be in the neighborhood of a given value of u, say u', than in the neighborhood of any other value of u, say u''.

The question is then: what would be the implications of this state of affairs, where no estimate of the true value of u is available, for info-gap decision theory?

It is important to appreciate that this question goes to the heart of info-gap decision theory. This is so because its application hangs on the availability of a point estimate of the true value of u. To wit, the point estimate is a key ingredient of info-gap’s uncertainty model, so that by necessity it is the fulcrum of its robustness model and its decision model. In other words, you cannot even contemplate using the info-gap methodology unless you put forward a point estimate of the true value of the parameter of interest.
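
For readers who are unfamiliar with the formalism, here is the generic structure of these three models, in slightly simplified notation, roughly as it appears in the info-gap literature. The symbols R, r_{c} and Q, which do not appear elsewhere in this post, denote the performance (reward) function, the critical performance level, and the decision space, respectively:

  • Uncertainty model: a family of nested sets \mathcal{U}(\alpha,\tilde{u}) \subseteq \mathbf{U},\ \alpha \ge 0, all centered at the point estimate \tilde{u}, with \mathcal{U}(0,\tilde{u})=\{\tilde{u}\}.
  • Robustness model: \hat{\alpha}(q):= \max\ \{\alpha \ge 0: r_{c} \le R(q,u),\ \forall u \in \mathcal{U}(\alpha,\tilde{u})\}.
  • Decision model: \max_{q \in Q}\ \hat{\alpha}(q).

Observe that every region of uncertainty \mathcal{U}(\alpha,\tilde{u}) is centered at the estimate \tilde{u}, which is precisely why the estimate is the fulcrum of the entire analysis.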

It goes without saying that the results generated by info-gap’s robustness analysis are affected by the value of the point estimate. But what is more, the “quality” of the results generated by the analysis is contingent on the “quality” of the estimate used in the analysis.

Considering then the pivotal role of the estimate in info-gap decision theory’s methodology, the question arising is this: what should/can be done to enable implementing it in “pathological” cases where no estimate is available?

As this question opens a “Pandora’s box” of complicated issues, we shall not go into it here. Instead, let us examine two possible practical solutions:

  • We nominate a completely arbitrary element of \mathbf{U} to serve as the point estimate. Namely, we let the estimate be a wild guess!
  • We decline to use the theory.

Still, neither option implies smooth sailing. So let us examine what each option entails.

Chasing a wild (goose) guess

In many applications, nominating a “wild (goose) guess” as an estimate of the (unknown) true value of the parameter of interest is done as a matter of course. So, in this sense, info-gap decision theory is not unique. What makes it unique, though, is its treatment of this “wild guess”, namely the role and place that it assigns to this “estimate” in the analysis that it prescribes for defining and identifying robust decisions. In fact, this is what makes the theory a voodoo decision theory par excellence.

And to see why this is so, take note that:

  • All that the theory prescribes doing to this end is to conduct a local robustness analysis in the neighborhood of the (poor) estimate. More bluntly, it makes do with such a local analysis and nothing more.
  • It prescribes no sensitivity analysis whatsoever for this point estimate.

Consider then the consequences of this methodology.

Local vs Global robustness

Keeping in mind the above characterization of the likelihood-free property, it is clear that the immediate implication of info-gap’s uncertainty model being likelihood-free is that there are no grounds whatsoever to assume that the true value of the parameter is more/less likely to be in the neighborhood of the estimate than in the neighborhood of any other value of the parameter. This implies that there are no grounds to focus the robustness analysis on the neighborhood of the estimate. For, not only methodologically but practically as well, there are no grounds to assume that the local robustness of a decision in the neighborhood of the estimate is a good, or for that matter a bad, indication of its robustness to severe uncertainty.

Indeed, to determine the robustness of a decision against the severe uncertainty in the parameter’s true value, it is imperative to examine the decision’s performance over the entire uncertainty space. This means that the robustness analysis must be global in nature. Namely, it must be based on a suitable definition of global robustness that seeks to take adequate account of the entire uncertainty space.
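
To see what this difference amounts to in practice, here is a minimal numerical sketch in Python. It uses a deliberately simple, hypothetical performance function and a large finite grid as a stand-in for the unbounded uncertainty space; it is an illustration of the local/global distinction, not of any particular info-gap application.

    import numpy as np

    # Hypothetical performance (reward) function of a decision q under parameter value u.
    def reward(q, u):
        return 10.0 - q * u ** 2

    R_CRIT = 1.0  # critical performance level: we require reward(q, u) >= R_CRIT

    def local_robustness(q, u_est, rho_max=100.0, steps=2001):
        """Radius-of-stability style measure: (approximately) the largest rho such that
        the performance requirement holds for every u in [u_est - rho, u_est + rho]."""
        for rho in np.linspace(0.0, rho_max, steps):
            ball = np.linspace(u_est - rho, u_est + rho, 201)
            if np.any(reward(q, ball) < R_CRIT):
                return rho  # first radius at which the requirement is violated
        return rho_max

    def global_worst_case(q, u_grid):
        """Worst performance of decision q over the whole (surrogate) uncertainty space."""
        return reward(q, u_grid).min()

    # Large finite grid standing in for the unbounded uncertainty space U = (-inf, inf).
    U = np.linspace(-1000.0, 1000.0, 200001)

    for q in (0.1, 1.0):
        print(f"q = {q}: local radius ~ {local_robustness(q, u_est=0.0):.2f}, "
              f"global worst case = {global_worst_case(q, U):.1f}")

On this toy example the local measure declares one decision roughly three times as robust as the other, yet over the full uncertainty space both decisions violate the performance requirement almost everywhere. The local verdict simply says nothing about robustness against the severe uncertainty.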

All this goes to show that info-gap decision theory’s prescription for robustness to uncertainty is reminiscent of the Lamppost Trick. Clearly, in the framework of info-gap decision theory’s methodology, the wild guess is assigned the role of the lamp in the Lamppost Trick.

And what is so remarkable in all this is that the estimate, which is admitted to be no more than a “wild guess”, is not even subjected to a sensitivity analysis.

Sensitivity analysis

For many years now, sensitivity analysis has been a vital component of the analysis of quantitative models, so much so that the proposition that a model’s key parameters be submitted to a sensitivity analysis is taken for granted. Thus, consider the following statements:

1. Introduction. A parameter sensitivity analysis (SA) is considered to be so important in any modeling activity that it has become a routine exercise that is expected of any modeling project.

Hearne, J. (2010, p. 107)
An automated method for extending sensitivity analysis to model functions
Natural Resource Modeling, 23(2), 107-120, 2010

6. Conclusion. Although parameter SA is expected of all models, an analysis of the functions used in a model is performed less frequently.

Hearne, J. (2010, p. 119)

But, in info-gap decision theory, which claims to offer a reliable methodology for decision-making under “truly” severe uncertainty, the estimate \tilde{u} of its robustness model’s key parameter is not put to the test of a sensitivity analysis (stress test). And this in spite of the fact that the point estimate of this key parameter is assumed to be a poor indication of the parameter’s true value, one that can turn out to be substantially wrong.

It is important to realize that the local robustness analysis that info-gap decision theory conducts with respect to the parameter u is no substitute for a sensitivity analysis with respect to the point estimate \tilde{u}. The point to note here is that the info-gap robustness of a decision, which is computed on the basis of \tilde{u}, can (and usually does) vary with the value of the estimate \tilde{u}.

To illustrate this point, consider the following figure. It displays the info-gap robustness analysis of a decision for two values of the estimate, namely \tilde{u}' and \tilde{u}''. The rectangle represents the uncertainty space \mathbf{U}.

The radii of the two blue circles represent the info-gap robustness of the decision under consideration relative to the two point estimates. Clearly, the decision under consideration is far more info-gap robust at \tilde{u}' than it is at \tilde{u}''.

However, in view of the fact that the uncertainty space of info-gap’s model is likelihood-free, the question arising is which of the two analyses, if indeed any of them, properly reflects the decision’s robustness against the severe uncertainty in the true value of the parameter. Which means of course that the estimate yielding such results ought to be put to the test. And yet, info-gap decision theory is utterly oblivious to this basic issue.
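
The same point can be made numerically. The following sketch, again in Python and again with a simple hypothetical performance function (this time for a single fixed decision), computes an info-gap style robustness radius at two different point estimates of the same parameter:

    import numpy as np

    # Hypothetical performance function of one fixed decision, as a function of u.
    def reward(u):
        return 10.0 - (u - 2.0) ** 2

    R_CRIT = 1.0  # critical performance level: we require reward(u) >= R_CRIT

    def robustness_at(u_est, rho_max=50.0, steps=5001):
        """(Approximately) the largest rho such that the performance requirement
        holds for every u in the interval [u_est - rho, u_est + rho]."""
        for rho in np.linspace(0.0, rho_max, steps):
            ball = np.linspace(u_est - rho, u_est + rho, 201)
            if np.any(reward(ball) < R_CRIT):
                return rho
        return rho_max

    # Same decision, two different (equally arbitrary) point estimates of u.
    for u_est in (2.0, 10.0):
        print(f"estimate {u_est}: robustness radius ~ {robustness_at(u_est):.2f}")

The very same decision comes out looking quite robust when the analysis is anchored at one estimate, and not robust at all when it is anchored at the other. This is precisely why the choice of the estimate cries out for a sensitivity analysis.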

In sum, the theory seems utterly unconcerned about the fact that the results yielded by the analysis that it prescribes are based on a “wild guess” and that they may therefore be “highly suspect”. To the contrary, these results are given the theory’s official sanction as identifying decisions that are robust to severe uncertainty.

Universally accepted tests

Of course, the promulgation of unsubstantiated theories/methods is not a new phenomenon. So, over the years, common-sense tests have been put forward to enable the diagnosis of such theories. These tests seek to pinpoint the flaws in methods/theories that render them suspect. We illustrate how two such tests would be used to identify the flaws that undermine info-gap decision theory.

The “no free lunch” test

The no free lunch metaphor seeks to highlight the point made by the so-called No Free Lunch theorems in a range of areas of expertise. Broadly speaking, the idea here is that problems have certain inherent difficulties that must be dealt with directly, that is, in a manner that takes on the specific issues that these difficulties give rise to, because otherwise these problems cannot be considered “solved”. This means that if these difficulties are not properly reckoned with in the analysis, they are certain to resurface at a later stage, when one discovers that the results yielded by the analysis in fact … fail to address the issues that one had set out to resolve in the first place.

The no free lunch effect is illustrated perfectly in the case of info-gap decision theory.

To explain: because the info-gap methodology (as we saw above) effectively ignores the severity of the uncertainty that info-gap decision theory claims to address, users of this theory are bound to discover that … having completed the analysis prescribed by this theory, they need to go back to … the difficulties presented by the severity of the uncertainty and deal with them themselves. This fact is stated eloquently in this observation:

Analysts who were attracted to IGT because they are very uncertain, and hence reluctant to specify a probability distribution for a model’s parameters, may be disappointed to find that they need to specify the plausibility of possible parameter values in order to identify a robust management strategy.

Hayes, K. (2011, p. 88, emphasis added)
Uncertainty and Uncertainty Analysis Methods
Final report for the Australian Centre of Excellence for Risk Analysis (ACERA)
CSIRO Division of Mathematics, Informatics and Statistics, Hobart, Australia
130 pp.

The point made by this observation is that if one sets out to tackle a problem that is subject to severe uncertainty, then sooner or later one will have to deal with the specific issues arising from the … uncertainty being severe.

So, in the case of info-gap decision theory, because its local, likelihood-free robustness model cannot possibly cope with the issues arising from the severity of the uncertainty that the theory itself postulates, users of this model sooner or later discover that they must come up with their own measures to deal with this uncertainty.

The inference therefore is that one would be well-advised to examine carefully any decision theory which nominates a poor point estimate as the basis of the analysis, to make sure that it does not offer a “free lunch”. Namely, one had better make sure that the theory faces up to the estimate’s poor quality and that it takes the appropriate measures to deal with this fact. Otherwise, as indicated by Hayes (2011), one might be disappointed to learn that it is impossible to justify, indeed verify, the validity of the results generated by the theory without dealing properly with the … poor quality of the estimate.

Another means that should immediately reveal whether a methodology centered on a poor estimate is offering a “free lunch” is an appeal to the following maxim.

GIGO

Clearly, the well-known and frequently appealed to Garbage In — Garbage Out (GIGO) maxim requires no commentary. Still, it is worth noting that keeping this maxim in mind can save one from the embarrassment of falling into the trap of a “free lunch methodology”. For, keeping its instruction in mind should remind one that the default assumption about the quality of the results generated by a model or an analysis fed with poor quality (garbage) input is that … the output can be expected to be on a par: garbage. Schematically,

\textrm{{\bf Garbage}\ In} \rightarrow \fbox{\raisebox{0.85cm}{\ }\ Model/Analysis\ \raisebox{-0.5cm}{\ }} \rightarrow \textrm{{\bf Garbage}\ Out}

Corollary:
The results of an analysis are only as good as the estimates on which they are based.

Hence,

\textrm{{\bf Poor}\ Estimate} \rightarrow \fbox{\raisebox{0.85cm}{\ }\ Model/Analysis\ \raisebox{-0.5cm}{\ }} \rightarrow \textrm{{\bf Poor}\ Results}

It is important to realize that the real trouble with voodoo decision-making methodologies is not that their results are obtained from analyses based on poor estimates, but that their interpretation of the results contravenes the GIGO maxim and its many corollaries. The picture is then this:

\textrm{{\bf Garbage}\ In} \rightarrow \fbox{\raisebox{0.85cm}{\ }\ Voodoo\  Analysis\ \raisebox{-0.5cm}{\ }} \rightarrow \textrm{{\bf Gold}\ Out}

Thus, given the severe uncertainty predicated by info-gap decision theory, hence the poor quality of the estimate on which its analysis is based, the GIGO maxim implies the following obvious conclusions:

  • The inherently local orientation of info-gap’s robustness model, and the assumed poor quality of the estimate, mean that the results would be commensurate: of poor quality.
  • To meet this problem, it is imperative to adopt a global approach to robustness.

Methodologies that are based on decision theories that do not address these issues should be suspected of being voodoo methodologies.

Declining to use a theory

Just as it is vital to establish whether a theory has the capabilities to deliver on what it claims to deliver, it is important to be able to determine when a theory should be rejected on the grounds that it is unsuitable for application in the case under consideration. Thus, when it comes to info-gap decision theory, the situation is straightforward. It is eminently clear that this theory is utterly unsuitable for the treatment of severe uncertainty of the type that it stipulates, because it clearly lacks the capabilities required for this task.

That this is so is evidenced by the fact that the robustness model that it puts forward for the pursuit of robustness to uncertainty is a model of local robustness, to be precise a radius of stability model. As such, this model is designed to seek decisions that are robust against small perturbations in the nominal value of a parameter, meaning that, by definition, this is the only task it ought to be used for. However, as the pursuit of robustness to severe uncertainty (of the type postulated by the theory) requires the employment of a model that is designed to seek global robustness, it is clear that info-gap decision theory must be rejected on the grounds that its robustness model is unsuitable for this task.
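
For reference, a generic radius of stability model has, roughly, the following form, where B(\rho,\tilde{u}) denotes a ball of radius \rho centered at the nominal point \tilde{u}, and R and r_{c} are as above:

\hat{\rho}(q,\tilde{u}) := \max\ \{\rho \ge 0 : r_{c} \le R(q,u),\ \forall u \in B(\rho,\tilde{u})\}

By construction, this is a measure of a decision’s insensitivity to small deviations from the nominal point \tilde{u}; nothing in it looks beyond the largest such ball over which the performance requirement holds.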

That said, it is important to point out that guidance on how to properly approach the problem of robust decision-making under severe uncertainty can be found in the vast literature on robust optimization. It is also important to point out to the followers of info-gap decision theory that info-gap’s robustness model is in fact an extremely simple robust optimization model.
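
To see why, note (as a rough sketch) that for a fixed decision q, info-gap’s robustness \hat{\alpha}(q) is, by definition, the optimal value of the problem

\max\ \{\alpha \ge 0 : r_{c} \le R(q,u),\ \forall u \in \mathcal{U}(\alpha,\tilde{u})\}

namely a single-variable instance of the generic robust optimization format: maximize an objective subject to constraints that must hold for every value of the uncertain parameter in a given uncertainty set, here the (\alpha-dependent) set \mathcal{U}(\alpha,\tilde{u}).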

The bottom line

A decision theory claiming to deal with severe uncertainty must demonstrate that it properly addresses the … severity of the uncertainty under consideration. This requirement is particularly binding on any decision theory nominating a poor point estimate of the true value of the parameter of interest as the key ingredient of its robustness analysis. Such a theory must show that it properly addresses the … poor quality of the point estimate under consideration.

Universally accepted maxims such as there is no free lunch and garbage in — garbage out can easily identify/detect voodoo decision methodologies that are based on poor estimates but … ignore the poor quality of the estimate.

One can also watch out for the “too good to be true” signals.

Viva la Voodoo!

Moshe
