If we are really able to predict, how is it that we repeatedly fail?
Why is it that ‘experts’ continually miss signs of looming disaster? In his MarketWatch article (August 24, 2015), Brett Arends noted that Wall Street experts failed to predict the housing bust, that the majority of economists polled in early 2008 failed to predict the biggest recession in 70 years, that the experts at the International Monetary Fund failed to predict the financial crisis, and that since 2011 most Wall Street experts have missed the crashes in emerging-market stocks and commodities.
Chances are, you’re not a statistician and have very little interest in formulas. But understanding their conceptual meaning is critical to how you engage with predictions, forecasts, and projections.
In statistics, there is a concept known as the “margin of error.” It expresses the degree of uncertainty built into an estimate at a given confidence level, that is, how far the reported number could reasonably be from the truth.
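As a rough, hypothetical illustration (the poll numbers below are invented, not from the article), here is how a margin of error is commonly computed for a survey-style estimate at a 95% confidence level:

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Margin of error for a sample proportion at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: 54% of 400 respondents expect growth next quarter.
p_hat, n = 0.54, 400
moe = margin_of_error(p_hat, n)
print(f"Estimate: {p_hat:.0%} +/- {moe:.1%}")  # roughly 54% +/- 4.9%
```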
The reality is, every single forecast you see is built on a model that embeds mathematical assumptions, such as an estimated error rate. Those assumptions are generally set by human judgment. Some judgments turn out to be more accurate than others, but time has shown that even experts are pretty lousy at it.
The concept of the assumed margin of error and its implications have been discussed most eloquently by Nassim Nicholas Taleb in his book, The Black Swan: The Impact of the Highly Improbable. In the chapter “The Scandal of Prediction,” he argues that forecasting without incorporating an error rate exposes three fallacies, all arising from the same misconception about the nature of uncertainty.
Fallacy One: Variability matters. The first error lies in taking a projection too seriously without heeding its accuracy. Yet, for planning purposes, the accuracy of your forecast matters far more than the forecast itself. Therefore, the policies we base decisions on should depend far more on the range of possible outcomes than on the expected final number. He describes the dire consequences of financial and government institutions projecting cash flows without wrapping them in even the thinnest layer of uncertainty.
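A minimal sketch of that point, with made-up numbers: instead of reporting a single projected cash flow, simulate the assumed error around it and look at the spread of outcomes.

```python
import random

random.seed(42)

base_forecast = 100.0   # hypothetical expected cash flow, in $ millions
assumed_error = 0.25    # assume a 25% standard deviation around that number

# Simulate many plausible outcomes under that single error assumption.
outcomes = sorted(base_forecast * (1 + random.gauss(0, assumed_error))
                  for _ in range(10_000))

low, high = outcomes[500], outcomes[9500]  # roughly the 5th and 95th percentiles
print(f"Point forecast: {base_forecast:.0f}")
print(f"90% of simulated outcomes fall between {low:.0f} and {high:.0f}")
```

The point estimate alone hides that spread, and it is the spread that should drive the decision.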
Fallacy Two: Failing to account for forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between the near and far futures. Historically, forecasting errors have been enormous, and there is no reason to believe we are suddenly better positioned to see into the future than our predecessors were. From Facebook to Apple to Alibaba, very few in their early days would have predicted their dominance in their respective markets.
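One way to picture that degradation, under the simplifying (and entirely assumed) premise that forecast errors compound like a random walk, is that the uncertainty band widens with the square root of the horizon:

```python
import math

assumed_error_per_period = 0.05  # hypothetical 5% forecast error per quarter

for quarters in (1, 4, 8, 20):
    # Under a random-walk assumption, errors accumulate with sqrt(horizon).
    cumulative_error = assumed_error_per_period * math.sqrt(quarters)
    print(f"{quarters:>2} quarters out: roughly +/-{cumulative_error:.0%} uncertainty")
```

The exact growth rate is an assumption; the point is simply that a five-year projection carries far more uncertainty than a one-quarter one.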
Fallacy Three: Misunderstanding the random character of the variables being forecasted. Owing to the Black Swan, these variables can accommodate far more optimistic or far more pessimistic scenarios than currently expected. While there are numerous examples in the tech world, Fab stands out as the quintessential bad prediction: a company at one point valued at roughly $1 billion was acquired in a fire sale for about $20 million.
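To make the “random character” point concrete: if the underlying variable is fat-tailed rather than bell-shaped, extremes occur far more often and far more violently than a normal-distribution model expects. A toy comparison (purely illustrative; not Taleb's code or data):

```python
import random

random.seed(0)
N = 100_000

# Thin-tailed draws from a normal distribution vs fat-tailed draws from a Pareto.
normal_draws = sorted(abs(random.gauss(0, 1)) for _ in range(N))
pareto_draws = sorted(random.paretovariate(1.5) for _ in range(N))  # heavy tail

for name, draws in (("normal", normal_draws), ("fat-tailed", pareto_draws)):
    median, largest = draws[N // 2], draws[-1]
    print(f"{name:>10}: largest observation is about {largest / median:.0f}x the median")
```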
But what does all of that mean for you?
Whether you are a venture capitalist, a corporation, or a non-profit, the next time you're presented with forecasts, projections, and predictions, make sure you put serious thought into the underlying assumptions and the margin of error.