

Calculating calamity: Japan's nuclear accident and the antifragile alternative

Famed student of risk and probability and author of The Black Swan, Nassim Nicholas Taleb tells us that in 2003 Japan's nuclear safety agency set a goal that fatalities resulting from radiation exposure among civilians living near any nuclear installation in Japan should number no more than one every million years. Eight years after that goal was adopted, it looks as if it will be exceeded, perhaps by quite a bit, especially now that radiation is showing up in food and water near the stricken Fukushima Dai-ichi plant. (Keep in mind that "fatalities" refers not just to immediate deaths but also to excess cancer deaths due to radiation exposure, which can take years or even decades to show up.)

Taleb writes that it is irresponsible to ask people to rely on the calculation of small probabilities for man-made systems since these probabilities are almost impossible to calculate with any accuracy. (To read his reasoning, see entry 142 on the notebook section of his website entitled "Time to understand a few facts about small probabilities [criminal stupidity of statistical science].") Natural systems that have operated for eons may more easily lend themselves to the calculation of such probabilities. But man-made systems have a relatively short history to draw from, especially the nuclear infrastructure which is no more than 60 years old. Calculations for man-made systems that result in incidents occurring every million years should be dismissed on their face as useless.

Furthermore, he notes, models used to calculate such risk tend to underestimate small probabilities. What's worse, the consequences are almost always wildly underestimated as well. Beyond this, if people are told that a harmful event has a small chance of happening, say, 1 in 1,000, they tend to dismiss it, even if that event might have severe consequences. This is because they don't understand that risk is the product of probability and severity.

If the worst that walking across your room could do is cause a bruise from falling, you wouldn't think much about it. Even if the chance of getting a bruise were significant, you'd probably be careful and figure it's worth the risk. But if walking across your room subjected you to the possibility of losing your arm, you might contemplate your next move a bit more.
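The expected-loss arithmetic behind the bruise-versus-arm comparison can be sketched in a few lines. The probabilities and "harm units" below are illustrative assumptions, not figures from the article:

```python
def expected_loss(probability, severity):
    """Risk as the product of how likely an event is and how bad it is."""
    return probability * severity

# A likely but mild event: a bruise from tripping while crossing a room.
# Severity is measured in arbitrary, made-up "harm units".
bruise = expected_loss(probability=0.10, severity=1)

# An unlikely but catastrophic event: losing an arm.
lost_arm = expected_loss(probability=0.001, severity=1000)

# Despite a probability 100 times smaller, the expected loss of the
# catastrophic event is ten times larger (0.1 vs 1.0 harm units).
print(bruise, lost_arm)
```

Dismissing an event because its probability is small, as the article argues people do, amounts to looking only at the first factor and ignoring the second.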

But the point Taleb makes is that the people of Japan did not know they were subjecting themselves to so severe a risk. If they had, they might have prepared for it, or they might have rejected nuclear power altogether in favor of other energy sources. But both the probability and the severity of this event were outside the models the regulatory agencies used. This is one of the major reasons we so often underestimate both risk and severity. And even if such an event had been included, its consequences would most likely have been considerably underestimated.

It is the nature of complex societies to continually underestimate risks. What we tend to do is to assign a probability to a possible harmful event and think that by assigning that probability we have understood the event and its consequences. It is a kind of statistical incantation that is no more useful than shouting at the rain. But because it comes wrapped inside a pseudo-scientific package, we are induced to believe it. If important men and women with PhDs have calculated the numbers, they must be reliable, right?

When it comes to calculating the extremes of physical attributes such as the height or weight of human beings, we have a large number of cases and we have the limits of biology and physics to guide us. No human can be 100 feet tall or weigh 10,000 pounds. But when it comes to social phenomena, we are often lost. Human-built systems produce unpredictable outcomes precisely because humans are so unpredictable. They have behavior patterns, but those patterns can't be described in equations. In our world, millions and even billions of people are making decisions which affect markets, technology and society every day, and no one is capable of observing and calculating the effects of such decisions. This makes any resulting patterns difficult if not impossible to ascertain. And, when we try to gauge the effect of actual and possible natural phenomena on human-built systems and vice versa with the precision of several decimal places, we are only fooling ourselves.

So what should we do? Normally, we say we should try to make our systems more robust, that is, harder to destroy or cripple under extreme conditions. This seems altogether reasonable. But what if there is another choice? What if it is possible to build systems that thrive when subjected to large variations? Taleb points to such a possibility in an article entitled "Antifragility or The Property Of Disorder-Loving Systems." The text is difficult unless you've read his other work extensively. But look at the chart, and you will begin to get an idea of what he means by antifragility.

The relocalization movement should take note that as serious a thinker as Taleb has characterized a decentralized, artisan-based culture as one that is antifragile. It might be useful to figure out how to explain this advantage to interested audiences who are watching the complex systems of modern society crumble around them.

By Kurt Cobb

Source: Resource Insights

Leave a comment
  • Anonymous on March 23 2011 said:
    Kurt, you have taken something that is extremely simple and made it difficult. Somebody in Japan made a mistake. It's like the mistake I made when I went into the orderly room of my company and called the first sergeant out. I got the hell beat out of me, and I didn't make that kind of mistake any more. During a leisurely stroll in Vienna many years ago, a very smart Japanese gentleman explained the Japanese energy future to me, and it did NOT have anything to do with abandoning nuclear. It had to do with not constructing a lot of light water plants when the breeder was the way to go. I thought he was nuts when he said that, but now I get the message. I also hope that when those breeders go up, they do not go up in the wrong places.
  • Anonymous on March 23 2011 said:
    Glad you mentioned the subject of breeder reactors. A big problem with the liquid metal fast breeder reactor is that the liquid metal is usually sodium, and you know what would happen if you pumped water into a vessel containing sodium. With a helium gas cooled fast reactor, you are likewise going to have a problem, as you do with any nuclear reactor, of getting rid of afterheat if a Fukushima-type incident were to occur. The place to site a breeder reactor is where, if it's not guaranteed to be earthquake-free (even the midwest USA can't guarantee that!), at least you won't have tsunamis.
  • Anonymous on March 23 2011 said:
    Alex, breeders are certain. I only hope that the people designing and operating them are as smart as some of the engineers who constructed the present reactors in the U.S. and here in Sweden were, and might still be. Of course, you can never be certain that very smart people might for reasons of their own decide to try something stupid. That is what the academic world is unfortunately all about.


Oilprice - The No. 1 Source for Oil & Energy News