Saturday, March 7, 2009

Computer models and cognitive failure

[Note: as useful new links turn up, I will update this post.]

Even before computers, there were problems with models. A physicist's testimony to the US House of Representatives on climate change (pdf) provides a lovely historical example:
Modelers have been wrong before. One of the most famous modeling disputes involved the physicist William Thompson, later Lord Kelvin, and the naturalist Charles Darwin. Lord Kelvin was a great believer in models and differential equations. Charles Darwin was not particularly facile with mathematics, but he took observations very seriously. For evolution to produce the variety of living and fossil species that Darwin had observed, the earth needed to have spent hundreds of millions of years with conditions not very different from now. With his mathematical models, Kelvin rather pompously demonstrated that the earth must have been a hellish ball of molten rock only a few tens of millions of years ago, and that the sun could not have been shining for more than about 30 million years. Kelvin was actually modeling what he thought was global and solar cooling. I am sorry to say that a majority of his fellow physicists supported Kelvin. Poor Darwin removed any reference to the age of the earth in later editions of the “Origin of the Species.” But Darwin was right the first time, and Kelvin was wrong. Kelvin thought he knew everything but he did not know about the atomic nucleus, radioactivity and nuclear reactions, all of which invalidated his elegant modeling calculations.
One of the more mordantly amusing aspects of the current credit crisis is the massive failure of relying on computer models for assessing risk. A failure that was quite comprehensive:
In fact, most Wall Street computer models radically underestimated the risk of the complex mortgage securities … The people who ran the financial firms chose to program their risk-management systems with overly optimistic assumptions and to feed them oversimplified data. This kept them from sounding the alarm early enough.
Paul Volcker has been publicly scathing about such financial engineering: a make-believe universe in which spreadsheets were oracles, and bigger and better computer models bigger and better oracles.

Even worse, under Basel II computer models became part of the regulatory framework:
Instead of applying a uniform standard (such as a specific debt to equity ratio) to all financial institutions, Basel II contemplated that each regulated financial institution would develop its own individualized computer model that would generate risk estimates for the specific assets held by that institution and that these estimates would determine the level of capital necessary to protect that institution from insolvency. But in generating this model and crunching historical data to evaluate how risky its portfolio assets were, each investment bank gave itself a discretionary opportunity to justify higher leverage. Because each model was ad hoc, specifically fitted to a unique financial institution, no team of three SEC staffers was in a position to contest these individualized models or the historical data used by them. Thus, the real impact of the Basel II methodology was to shift the balance of power in favor of the management of the investment bank and to diminish the negotiating position of the SEC's staff. Basel II may offer a sophisticated tool, but it was one beyond the capacity of the SEC's largely legal staff to administer effectively.
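To make that concrete, here is a minimal sketch of a toy parametric value-at-risk calculation; it is purely illustrative, not any institution's actual Basel II model, and every number in it is an assumption of mine. The point it makes is just this: the modeller's own inputs directly set the capital figure the regulator sees.

```python
# Toy parametric value-at-risk (VaR) calculation -- an illustration only,
# not the internal model of any actual institution.
from math import sqrt
from statistics import NormalDist

def capital_requirement(portfolio_value, daily_vol, horizon_days=10, confidence=0.99):
    """Capital set equal to VaR under an assumed normal return distribution.

    Every input is a modelling assumption: the volatility estimate, the
    normality of returns, the horizon, the confidence level.
    """
    z = NormalDist().inv_cdf(confidence)          # ~2.33 at 99% confidence
    horizon_vol = daily_vol * sqrt(horizon_days)  # scale volatility to the horizon
    return portfolio_value * horizon_vol * z

portfolio = 10_000_000_000  # a hypothetical $10bn of assets

# "Optimistic" calibration: volatility estimated over a calm historical window.
print(f"calm-window vol (0.5%/day): capital = ${capital_requirement(portfolio, 0.005):,.0f}")
# Same portfolio, same model, a stressed volatility assumption.
print(f"stressed vol    (2.0%/day): capital = ${capital_requirement(portfolio, 0.02):,.0f}")
```

Nothing in that calculation is a measurement: change the assumed volatility and the “required” capital changes in proportion, which is exactly the discretionary opportunity the quote describes.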
Financial institutions used highly sophisticated computer models, put together by highly-paid people using masses of data, based on what was taken to be the most up-to-date understanding of how things work. All of which gave the output of the models huge credibility.

The problem was precisely that they had such credibility. In particular, their output was treated as empirical evidence: as telling people about the state of their risk exposure.

They did nothing of the kind. All they did—all computer models can ever do—is tell you the consequences of your premises, both empirical and analytical/causal. They do not tell you about how the world is. They tell you about how you think the world is. One can then test your thinking about the world by comparing what your model(s) churn out to how the world turns out to be.

But this is a distinction that is very, very easy to lose sight of. Particularly given the effort and analytical power put into creating the things and their “black box”—facts in one end, results out the other end—nature.
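A minimal sketch of the distinction, with a deliberately made-up “model” and made-up “observations” (none of the numbers below come from any real system):

```python
# A model only turns premises into consequences; only comparison with
# observations tells you anything about the world. Illustrative numbers only.

def model(premises, inputs):
    """The consequences of the premises -- nothing more."""
    return [premises["baseline"] + premises["slope"] * x for x in inputs]

premises = {"baseline": 2.0, "slope": 0.5}   # how we *think* the world works
inputs = [0, 1, 2, 3, 4]

predicted = model(premises, inputs)          # says nothing about the world yet
observed = [2.1, 2.4, 3.4, 3.3, 4.6]         # how the world turned out (invented here)

errors = [round(p - o, 2) for p, o in zip(predicted, observed)]
print("predicted:", predicted)
print("observed: ", observed)
print("errors:   ", errors)
# Large, systematic errors are evidence against the premises; small errors are
# (weak) evidence that the premises capture something real. The model run on
# its own, without the comparison, tells you nothing about the world.
```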
A piece on why economists failed to predict the crash makes some very pertinent points about the problems in the use of computer models. First was the allure of new technology, with the demands of the flashy new toys overwhelming empirical difficulties:
As computers have grown more powerful, academics have come to rely on mathematical models to figure how various economic forces will interact. But many of those models simply dispense with certain variables that stand in the way of clear conclusions, says Wharton management professor Sidney G. Winter. Commonly missing are hard-to-measure factors like human psychology and people's expectations about the future, he notes.
Which led to major problems, as a recent report has noted:
"The economics profession appears to have been unaware of the long build-up to the current worldwide financial crisis and to have significantly underestimated its dimensions once it started to unfold," they write. "In our view, this lack of understanding is due to a misallocation of research efforts in economics. We trace the deeper roots of this failure to the profession's insistence on constructing models that, by design, disregard the key elements driving outcomes in real world markets."
The paper, generally referred to as the Dahlem report, condemns a growing reliance over the past three decades on mathematical models that improperly assume markets and economies are inherently stable, and which disregard influences like differences in the way various economic players make decisions, revise their forecasting methods and are influenced by social factors.
The prevalence of the use of the flashy new tools led to their own problems:
When certain price and risk models came into widespread use, they led many players to place the same kinds of bets, the authors continue. The market thus lost the benefit of having many participants, since there was no longer a variety of views offsetting one another. The same effect, the authors say, occurs if one player becomes dominant in one aspect of the market. The problem is exacerbated by the "control illusion," an unjustified confidence based on the model's apparent mathematical precision, the authors say. This problem is especially acute among people who use models they have not developed themselves, as they may be unaware of the models' flaws, like reliance on uncertain assumptions.
But a model is based on judgments about what matters and what does not, which is why empirical testing is essential:
Much of the financial crisis can be blamed on an overreliance on ratings agencies, which gave complex securities a seal of approval, says Wharton finance professor Marshall E. Blume. "The ratings agencies, of course, use models" which "grossly underestimated" risks.
"Any model is an abstraction of the world," Blume adds. "The value of a model is to provide the essence of what is happening with a limited number of variables. If you think a variable is important, you include it, but you can't have every variable in the world.... The models may not have had the right variables."
The false security created by asset-pricing models led banks and hedge funds to use excessive leverage, borrowing money so they could make bigger bets, and laying the groundwork for bigger losses when bets went bad, according to the Dahlem report authors.
A piece on failures of bank stress testing (pdf) puts the problem of such computer models very clearly:
Of course, all models are wrong. The only model that is not wrong is reality and reality is not, by definition, a model. But risk management models have during this crisis proved themselves wrong in a more fundamental sense. They failed Keynes’ test – that it is better to be roughly right than precisely wrong. With hindsight, these models were both very precise and very wrong.
To repeat: computer models are a way of testing your assumptions (as one statistician said, no model is true but some are useful) by checking the results of such simulations against the evidence; they are not evidence of how the world is. At best, in engineering for example, one can say: this is a system we know very well, and a model we have tested a lot, so, based on its very well-established record of extreme accuracy, we have run this proposed new structure through the model and ... In such cases we have a very useful model, because it has been much tested against a vast array of cases and knowledge about cases. But our much-hyped climate models are nowhere near that stage. Nor, clearly, are financial or economic models.

There are good reasons to be sceptical of fiscal stimulus as an anti-recession tool. But the support the Obama Administration has offered for its amazing fiscal proposals does provide a wonderful example of the (mis)use of computer models--the computer model as modern policy oracle:
The Administration's estimates (pdf) for the effect of a stimulus plan cite no new evidence and no theory at all for their large multipliers. The multipliers come ".. from a leading private forecasting firm and the Federal Reserve’s FRB/US model." (Appendix 1) Multipliers are hard-wired in these models by assumption, rather than summarizing any evidence on the effectiveness of fiscal policy, and the models reflect the three theoretical fallacies above. The multipliers in this report are not conditioned on "slack output" or something else -- they state that every dollar of government spending generates 1.57 dollars of output always! If you've got magic, why not 2 trillion dollars? Why not 10 trillion dollars? Why not 100 trillion, and we can all have private jets? If you don't believe that, why do you think it works for a trillion dollars? Their estimates of industry effects come from a blog post (p. 8)! Ok, they did their best in the day and a half or so they had in the rush to put the report together. But really, before spending a trillion dollars of our money, wouldn't it make sense to spend, say one tenth of one percent on figuring out if it will work at all? (That would be 100 million dollars, more than has ever been spent on economic research in the entire history of the world.)
Which really is the modern equivalent of consulting the sheep’s entrails. (But it must be science, they used a computer! And mathematics! Together!)
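To see what “hard-wired by assumption” means, here is a deliberately crude sketch. The 1.57 figure is taken from the quoted passage; everything else is illustrative.

```python
# A "hard-wired" multiplier: the answer is baked into the premises, so the
# model endorses any spending figure you feed it.
MULTIPLIER = 1.57   # assumed, not estimated from evidence about this policy

def output_effect(spending_billions):
    return MULTIPLIER * spending_billions

for spending in (1_000, 2_000, 10_000, 100_000):   # $bn: the quote's escalation
    print(f"spend ${spending:>7,}bn -> model says output rises ${output_effect(spending):>10,.0f}bn")
# Nothing in the calculation can ever say "no": the model restates the
# assumption, it does not test it.
```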

Many of the financial models have, as a reader pointed out in a private email, an even worse problem:
they made up their own [models], in secret and separate from each other (they contain confidential data that could provide competitive leverage to their opposites)
so they lacked outside checkability. Such lack of outside checking makes things worse, but allowing it would not solve the inherent problem that computer models are still only models.

That computer models were apparently allowed to set regulatory standards is simply horrifying, and shows the extent of the cognitive failure: the failure in basic understanding of what computer models are and what they can do. Even the above quote on the use of computer models in regulation under Basel II misunderstands the problem, treating it as the SEC being "outclassed" (no doubt true) rather than as a completely inappropriate use of computer models, based on a profound misunderstanding of what they can and cannot do, and in what circumstances.

Moving on from proprietary financial models, consider this rather good Economist article on general economic models. Which is, and has been, more intensively studied: how economies work or how the world's climate works? (Think Central Banks, Treasuries, Banks, Investment Houses, Faculties, Consultancies ...) Which has more financial and other returns people with money and power care about riding on getting things right? OK, so which models do you think generally perform better: general economic models or climate models? (If you answered “[whichever,] but it makes no difference”, go to the top of the class.) More to the point, have all these resources and cognitive effort poured into economic and financial models provided any guarantee of success? Even given there are reasons to think it is inherently harder to model human systems than physical ones and macroeconomics does not even have a common analytical language? In the words of recent testimony before the US Congress:
Using models within economics or within any other social science, is especially treacherous. That’s because social science involves a higher degree of complexity than the natural sciences. The reason why social science is so complex is that the basic unit in social science, which economists call agents, are strategic, whereas the basic unit of the natural sciences are not. Economics can be thought of as the physics with strategic atoms, who keep trying to foil any efforts to understand them and bring them under control. Strategic agents complicate modeling enormously; they make it impossible to have a perfect model since they increase the number of calculations one would have to make in order to solve the model beyond the calculations the fastest computer one can hypothesize could process in a finite amount of time.
Put simply, the formal study of complex systems is really, really, hard. Inevitably, complex systems exhibit path dependence, nested systems, multiple speed variables, sensitive dependence on initial conditions, and other non-linear dynamical properties. This means that at any moment in time, right when you thought you had a result, all hell can break loose.
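A standard toy example of the “sensitive dependence on initial conditions” mentioned in that testimony is the logistic map. The sketch below is purely illustrative (it is not an economic or climate model): two runs of a perfectly known one-line equation, started a whisker apart, soon bear no relation to each other.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): a one-line "model" that still
# shows sensitive dependence on initial conditions in its chaotic regime.
def trajectory(x0, r=3.9, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000)
b = trajectory(0.400001)   # initial condition differs by one part in 400,000

for n in (0, 10, 20, 30, 40, 50):
    print(f"step {n:2d}: {a[n]:.6f} vs {b[n]:.6f}  (gap {abs(a[n] - b[n]):.6f})")
# After a few dozen steps the two trajectories are effectively unrelated, even
# though the governing equation is known exactly. Real complex systems, whose
# equations are not known exactly, are far less forgiving.
```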
Any analytical discipline that comes to rely on computer models is in deep, deep trouble: particularly if the models are treated as providing empirical evidence. And the more of the things that matter for understanding a system are poorly understood, the deeper the trouble.

In the case of the financial institutions, clearly the various financial instruments being modelled were not understood anywhere near as well as people thought. The modellers covered them; they just did not cover them accurately. So all the models ended up showing, after the fact, was what their modellers did not understand: their “unknown unknowns” (i.e. they apparently did not know that they did not know). And a basic principle of modelling is that if you do not understand something, you cannot model it.

The case is hardly better if one has known unknowns: things that you know you do not know. One then has to put “patches” into one’s computer models to “guess” at their operation: guesses that are unlikely to be accurate, but whose status as guesses the existence of the computer model disguises. Indeed, it is expected that we will have the computing power to fully model a cup of coffee in about 10 years. Compare that level of difficulty in fluid dynamics to an entire climate.

Such as, for example:
... the oceans and their circulations are the thermal and inertial flywheels of the climate system; as the ocean circulation changes, the atmosphere and its climate respond. Our knowledge of subsurface ocean circulations and their variability is limited. Without this vital input, projections of future climate are tenuous at best.
Climate models vary quite a bit in their assumptions about climate sensitivity but come up with much the same results:
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
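One way that can happen is through compensating assumptions. The sketch below is a deliberately oversimplified toy (a crude linear response, not any actual climate model): pair a higher assumed sensitivity with a larger assumed offsetting (say, aerosol) forcing and the fit to the same record is just as good.

```python
# Toy illustration only (not a real climate model): warming treated as
# sensitivity times net forcing, with an assumed, tunable offsetting term.
def hindcast(sensitivity, forcings, offset_fraction):
    """Warming implied by the net forcing under a crude linear response."""
    return [sensitivity * f * (1.0 - offset_fraction) for f in forcings]

forcings = [0.2 * i for i in range(11)]   # invented rising forcing series

low  = hindcast(sensitivity=0.5, forcings=forcings, offset_fraction=0.0)
high = hindcast(sensitivity=1.0, forcings=forcings, offset_fraction=0.5)

print("low-sensitivity hindcast: ", [round(t, 2) for t in low])
print("high-sensitivity hindcast:", [round(t, 2) for t in high])
# The same hindcast from sensitivities differing by a factor of two: the
# compensating assumption does the work, so fitting the record cannot by
# itself tell you which sensitivity is right.
```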
Then there is "adjustment” of sea level data:
So it was not a measured thing, but a figure introduced from outside. I accused them of this at the Academy of Sciences in Moscow—I said you have introduced factors from outside; it's not a measurement. It looks like it is measured from the satellite, but you don't say what really happened. And they answered, that we had to do it, because otherwise we would not have gotten any trend!
That is terrible! As a matter of fact, it is a falsification of the data set. Why? Because they know the answer. And there you come to the point: They “know” the answer; the rest of us, we are searching for the answer. Because we are field geologists; they are computer scientists. So all this talk that sea level is rising, this stems from the computer modeling, not from observations. The observations don't find it!
The sort of thing that might make an IPCC expert reviewer a bit annoyed:
My main complaint with the IPCC is in the methods used to "evaluate" computer models. ... This has become so complex that many have failed to notice that it has no scientific basis, but is just an assembly of the "gut feelings" of self-styled "experts". It has been developed to a complex web of "likelihoods", all of which are assigned fake "probability" levels.
As Australian economist David Clark used to say of econometrics:
if the data is sufficiently tortured it will confess.
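As a toy illustration of what “torturing the data” means in practice (pure random numbers, nothing real): try enough candidate predictors against noise and one of them will duly confess to an impressive-looking correlation.

```python
# Specification search against pure noise: with enough tries, something "works".
import random
random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

target = [random.gauss(0, 1) for _ in range(30)]   # 30 noise "observations"
candidates = [[random.gauss(0, 1) for _ in range(30)] for _ in range(200)]

best = max(candidates, key=lambda c: abs(corr(c, target)))
print(f"best of 200 noise predictors: correlation = {corr(best, target):+.2f}")
# Typically somewhere around |0.5|: impressive-looking, and entirely spurious.
```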
This is not to knock climate science, still less empirical results. If one is going to be sceptical about economic models, fine; but don't then turn around and worship climate models. They still can't work out how to handle clouds, though they are getting better. That is without getting into some of the weird and wonderful economic forecasting assumptions built into prominent climate models. And it does not help that we still cannot model the Sun and its activity properly:
"It turns out that none of our models were totally correct," says Dean Pesnell of the Goddard Space Flight Center, NASA's lead representative on the panel. "The sun is behaving in an unexpected and very interesting way."
In the words of Freeman Dyson:
The climate-studies people who work with models always tend to overestimate their models. They come to believe models are real and forget they are only models.
Modelling is a somewhat dodgy business. It does not become less dodgy simply because a "hard" scientist is modelling a really complex system. It may well be better to lean more on statistical extrapolation of a relatively simple and transparent sort than on complex computer modelling. Particularly as the IPCC models do not do better than the "no-change" model:
The errors of the IPCC projection over the years 1992 to 2008 were little different from the errors from the no-change model, when compared to actual measured temperature changes. When the IPCC’s warming rate is applied to a historical period of exponential CO2 growth, from 1851 to 1975, the errors are more than seven times greater than errors from the no-change model.
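The comparison made there is easy to state as a procedure. The sketch below uses invented numbers, not the actual IPCC projections or the temperature record: compute the absolute errors of a fixed-trend projection over some period and compare them with the errors of the trivial “no-change” forecast.

```python
# Forecast-evaluation sketch: a fixed-trend projection versus the "no-change"
# (persistence) benchmark. All numbers are invented for illustration.
def mean_abs_error(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

actual_anomaly = [0.10, 0.14, 0.09, 0.18, 0.12, 0.20, 0.15, 0.17]   # made up

trend_per_step = 0.03                               # an assumed projected warming rate
start = actual_anomaly[0]
projection = [start + trend_per_step * t for t in range(len(actual_anomaly))]
no_change  = [start] * len(actual_anomaly)          # persistence benchmark

print(f"projection MAE: {mean_abs_error(projection, actual_anomaly):.3f}")
print(f"no-change  MAE: {mean_abs_error(no_change,  actual_anomaly):.3f}")
# The test being applied: if the elaborate projection cannot beat this trivial
# benchmark out of sample, its extra machinery has bought nothing.
```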
A former NASA scientist is sceptical about anthropogenic global warming, in large part because he does not think climate models are worth much:
My own belief concerning anthropogenic climate change is that the models do not realistically simulate the climate system because there are many very important sub-grid scale processes that the models either replicate poorly or completely omit ... Furthermore, some scientists have manipulated the observed data to justify their model results. In doing so, they neither explain what they have modified in the observations, nor explain how they did it. They have resisted making their work transparent so that it can be replicated independently by other scientists. This is clearly contrary to how science should be done. Thus there is no rational justification for using climate model forecasts to determine public policy.
Someone with a lot of background in forecasting is also unimpressed with climate models, while this annoyed scientist is not impressed by almost anything about AGW, including the modelling. Models are sensitive to which specifications are used for key processes or variables.

Even a supporter of the climate models can suggest that they have been somewhat oversold:
But perhaps the greatest danger is climate scientists blatantly overselling what we know. That could bring everything down and cost the world valuable time.
Overselling what we know could bring climate science down.
Or simply acknowledging how much they are still a work in progress:
"Model biases are also still a serious problem. We have a long way to go to get them right. They are hurting our forecasts," said Tim Stockdale of the European Centre for Medium-Range Weather Forecasts in Reading, UK.
There are plenty of voices concerned about the problems with computer models in their application to climate science. And it is so easy to have some fun with sub-prime metaphors. James Lovelock, he of the Gaia hypothesis, is also worried that computer modelling is undermining science:
Gradually the world of science has evolved to the dangerous point where model-building has precedence over observation and measurement, especially in Earth and life sciences. In certain ways, modelling by scientists has become a threat to the foundation on which science has stood: the acceptance that nature is always the final arbiter and that a hypothesis must always be tested by experiment and observation in the real world.
It is notoriously difficult to find geologists who support CAGW:
I do not know one geologist who believes that global warming is not taking place. I do not know a single geologist who believes that it is a man-made phenomenon.
Geologist Ian Plimer is certainly not one of them. He has some things to say about the use of computer models:
Much of what we have read about climate change, he argues, is rubbish, especially the computer modelling on which much current scientific opinion is based, which he describes as "primitive".
More broadly, environmental “science” based on computer models has a track record – a poor one. (I was a bit annoyed to find that claims I had seen around a lot had been generated by computer models and I was not aware of that.) There are profound and inherent difficulties with modelling complex systems:
[p]erhaps the single most important reason that quantitative predictive mathematical models of natural processes on earth don't work and can't work has to do with ordering complexity. Interactions among the numerous components of a complex system occur in unpredictable and unexpected sequences.
Yet the models become talismans, impressive in their "power" and convenience:
environmental science finds itself caught in the grip of "politically correct modeling" (the authors' emphasis) in which there is enormous pressure on scientists, many of whom discover "that modeling results are easier to live with if they follow preconceived or politically correct notions." The models take on a life of their own, and become obstacles to conducting serious field studies that might strengthen our empirical grasp of ecosystem dynamics. "Applied mathematical modeling has become a science that has advanced without the usual broad-based, vigorous debate, criticism, and constant attempts at falsification that characterize good science ..."
Our climate models are not scientific predictions, still less projections. They are guesses, in the sense of conjectures rather than estimations. Partially informed, fairly sophisticated guesses: but guesses. Guesses whose status as guesses is disguised by their being computer models built by clever people. Exactly how many billions of dollars are such guesses worth wagering? There may be a few folk on Wall St who can give you a heads up on that.

UPDATE See also this critique of models in general. Which is not the same as arguing that all models are known to be false.

12 comments:

  1. Bravo. People always want more details than "Garbage in, garbage out!" and "It's philosophy not science." From now on, I'll point them to this post.

  2. For the remaining hard questions, Science has to move beyond Reductionism, Models, and even Logic. This is the focus of my current work; I have posted video of a talk named "Science Beyond Reductionism" and another named "A New Direction In AI Research" that both promote my idea that Model Free Methods (AKA Holistic Methods) are an absolute requirement if we want to achieve AI; but they are also the key to other places where Reductionist Models cannot be made or are known to fail.

    See http://videos.syntience.com for videos and http://monicasmind.com for my blog. My AI work is described on http://artificial-intuition.com .

  3. Monica: thanks for the link. Sorry it took me so long to get around to reading it. You might be interested in the reasoning behind Jeffrey Friedman's analysis (pdf) taking Hayek's cognitive arguments into the realm of politics, it seemed relevant to your area of interest.

  4. This comment has been removed by a blog administrator.

  5. This comment has been removed by a blog administrator.

  6. The best debunking of climate models I have read was a comment from a person who uses real computer models to do very real things in the space industry …

    http://wattsupwiththat.com/2011/09/22/ipcc-models-hadcrut-and-cherrymandering/#comment-749815

  7. This comment has been removed by a blog administrator.

  8. A very erudite speech on cognitive bias, the inability of experts to predict anything and most important of all, the need for heresy in science.

    http://www.bishop-hill.net/blog/2011/11/1/scientific-heresy.html

  9. Yes, I have read it, it is excellent.

  10. " One can then test your thinking about the world by comparing what your model(s) churn out to how the world turns out to be."

    The observable data is outpacing the predictions. That's the irony: the problem with the models is on the flip side.
