This article is a guest contribution by John Hussman, Hussman Funds.
"Is this something that could happen once in a million times? Is it something that could happen once in a thousand times, or once every 5,000 times? What exactly are the risks involved?"
President Barack Obama 5/17/10, addressing the Gulf oil spill
As we observe the oil spill in the Gulf of Mexico, the recent banking crisis, and the ongoing concerns about sovereign debt in Europe, one of the things that strikes me is how few analysts are any good at assessing the probabilities of worst-case scenarios.
We typically refer to the probability of some event Y as P(Y), and write the probability of Y, given some information X, as P(Y|X). So for example, the probability of a vehicle being a school bus might be only 1%, but given some extra information, like "the vehicle is yellow and full of children," the estimated "conditional" probability would go up enormously.
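To make the notation concrete, here is a minimal sketch of Bayes' rule applied to the school-bus example. Only the 1% base rate comes from the paragraph above; the two likelihoods are hypothetical numbers chosen purely for illustration.

```python
# A minimal Bayes' rule sketch for the school-bus example.
# Only the 1% base rate comes from the text; the two likelihoods
# below are hypothetical numbers chosen purely for illustration.

p_bus = 0.01                  # P(school bus): 1% of vehicles
p_clue_given_bus = 0.90       # hypothetical: P(yellow & full of children | bus)
p_clue_given_other = 0.001    # hypothetical: P(yellow & full of children | not a bus)

# Total probability of observing the clue at all
p_clue = p_clue_given_bus * p_bus + p_clue_given_other * (1 - p_bus)

# Conditional ("posterior") probability, via Bayes' rule
p_bus_given_clue = p_clue_given_bus * p_bus / p_clue

print(f"P(bus)                            = {p_bus:.1%}")            # 1.0%
print(f"P(bus | yellow, full of children) = {p_bus_given_clue:.1%}") # roughly 90%
```

Even a modest clue moves a 1% base rate to roughly 90% once we condition on it.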
With regard to oil spills, however low one might have believed P( we'll have an oil spill ) to be before the recent accident, that "prior" estimate should change now that we've observed one of the worst oil spills in history. Even if the oil industry previously argued that the probability of an oil spill was one in a million, it's hard to hold onto that assessment after a spill occurs, unless your faith in the soundness of the technology is entirely unmoved in the face of new information.
See, if P( the technology is flawed | we had an oil spill ) is 80%, and P( we'll have another oil spill | the technology is flawed ) is 80%, then regardless of how extremely unlikely you thought oil spills were before we observed one, or how unlikely you thought it was that the technology was flawed, you would now estimate P( we'll have another oil spill | we had an oil spill ) at no less than 80% x 80% = 64%*.
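As a quick check of that lower bound, the same chain of conditional probabilities can be written out in a few lines. The 80% figures are the illustrative assumptions from the paragraph above; the point is that their product dominates whatever tiny prior you began with.

```python
# Lower bound on P(another spill | we had a spill), using the law of
# total probability and dropping the (presumably smaller) contribution
# from the "technology is sound" branch:
#   P(another | spill) >= P(another | flawed) * P(flawed | spill)

p_flawed_given_spill = 0.80     # illustrative figure from the text
p_another_given_flawed = 0.80   # illustrative figure from the text

lower_bound = p_another_given_flawed * p_flawed_given_spill
print(f"P(another spill | we had a spill) >= {lower_bound:.0%}")  # >= 64%
```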
While there are about 3,800 oil platforms in the Gulf of Mexico, only about 130 deep water projects have been completed, compared with just 17 a decade ago. So in 10 years of applying a new technology, we've had one major oil spill thus far. Unless there is some a priori reason to assume that the technology is pristine, despite the fact that it has failed spectacularly, the first back-of-the-envelope estimate a statistician would make would be to model deep water oil spills as a "Poisson process." Poisson processes are often used to model things that arrive randomly, like customers in a checkout line, or insurance claims across unrelated policyholders. Given one major oil spill in 10 years, you probably wouldn't be far off the mark using an average "arrival frequency" of 0.10 spills annually.
From that perspective, a simple Poisson estimate would suggest a 90.5% probability that we will see no additional major oil spills from deep water rigs over the coming year, dropping to a 36.8% chance of none over the coming decade. Moreover, you'd put a 36.8% chance on exactly one more major spill in the coming decade, an 18.4% chance on two, a 6.1% chance on three, and a 1.9% chance on four or more. This is quite a bit of inference from a small amount of data, but catastrophes contain a great deal of information when the "prior" is that catastrophes are simply not possible.
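These figures follow directly from the Poisson distribution, assuming the 0.10 annual arrival rate estimated above; a short sketch reproduces them:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k arrivals when the expected count is lam."""
    return lam ** k * exp(-lam) / factorial(k)

rate = 0.10            # one major deep water spill observed in ten years
lam_decade = rate * 10

print(f"P(no spill in the next year)   = {poisson_pmf(0, rate):.1%}")        # 90.5%
print(f"P(no spill in the next decade) = {poisson_pmf(0, lam_decade):.1%}")  # 36.8%
print(f"P(exactly 1 in a decade)       = {poisson_pmf(1, lam_decade):.1%}")  # 36.8%
print(f"P(exactly 2 in a decade)       = {poisson_pmf(2, lam_decade):.1%}")  # 18.4%
print(f"P(exactly 3 in a decade)       = {poisson_pmf(3, lam_decade):.1%}")  # 6.1%
p_four_plus = 1 - sum(poisson_pmf(k, lam_decade) for k in range(4))
print(f"P(4 or more in a decade)       = {p_four_plus:.1%}")                 # 1.9%
```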
Given that the worst offshore oil spill in Australia's history occurred as recently as November 2009 (and took months to shut down), this sort of estimate does not seem unreasonable. In any event, disasters contain information. Once we have observed a major spill, it is no longer reasonable to apply the risk estimates we held before.
Similarly, before the housing crisis, it might have been tempting to shrug off mortgage defaults as relatively isolated events, since the price of housing had generally experienced a long upward trend over time. Indeed, historically, sustained declines in home prices could be shown to be very low-probability events. But as the bubble continued, investors made little attempt to assess the probability of a debt crisis given that home prices had become detached from all reasonable metrics of income and ability to pay. Just as buy-and-hold investors assumed that the long-term return on stocks was constant at about 10%, despite the late-1990s valuation bubble, investors during the housing bubble kept looking at the "unconditional probability" P( credit crisis ) based on decades of normal housing valuations, when they should have recognized that the conditional probability P( credit crisis | extreme housing overvaluation and lax credit standards ) was probably much higher. This turned out to be a profound oversight.
But that was evidently not a sufficient lesson. As soon as the surface appearance of the problem was covered up by an expensive and opaque band-aid of government bailouts and suspension of accounting transparency by the FASB, investors went right back to using those unconditional probability estimates. Indeed, until the spike in credit spreads that began a few weeks ago, the amount of additional yield investors demanded for taking credit risk had fallen back to the lows of 2007. We've had a major credit crisis, we have failed to restructure the debt underlying that crisis, and yet investors are approaching the market as if the debt has simply been made whole and we can continue along the former path.
Knowing that the cash flows from mortgage payments cannot possibly be adequate to service the original debt, that delinquencies continue to hit new records, and that there is an enormous overhang of nonperforming debt and unforeclosed homes, it seems utterly naive to assume that the problems we saw over a year ago have been adequately addressed.
What holds for oil also holds for red ink. Disasters contain information. Once we have observed a major spill, it is no longer reasonable to apply the risk estimates we held before.
Credit Update
My impression is that the stock market's main struggle here is not about Europe, but about the likelihood that the U.S. will experience a second wave of credit strains. Clearly, the level of those strains will be a function of mortgage delinquencies and foreclosure losses, both realized and anticipated. I've noted that we entered the primary window for these strains to emerge only a few months ago. Alt-A and Option-ARM mortgage resets will hit their stride between now and November, with a second, even higher peak in 2011, before finally trailing off in early 2012. I am most concerned about the "recognition phase," in which investors revise their current, very low estimates of credit strains.