
    Clare Lindeque

    Head of Risk

    October 2016

    Is all forecasting folly?

    “Experts are no better at forecasting than a chimpanzee with a dart board.” This is the most often repeated summary of the conclusions of political scientist Philip Tetlock’s landmark 2005 forecasting study. As a description of Tetlock’s work, however, it lacks nuance and accuracy. In fact, that study concluded that it is possible to achieve reasonable forecast accuracy over short time horizons, but that as the horizon moves out to three to five years, even experts’ forecasts demonstrate a success rate no better than random chance. In Superforecasting, Tetlock and co-author Dan Gardner start by clarifying this result for readers. They then go further, maintaining not only that some types of forecast are possible, but that some forecasters consistently attain levels of accuracy that would make even a chimp sit up and take notice.

    These latest conclusions stem from the results of a new forecasting tournament, The Good Judgment Project, launched by Tetlock and his team and involving thousands of participants and hundreds of questions. Tetlock found that certain individuals, who seem to have fewer cognitive biases than the rest of us, are “superforecasters”: they are measurably better forecasters than average. His other main finding is that superforecasters’ skills are not innate, but can be learned. They are the product of habits of thought that anyone can cultivate.

    Forecasts are everywhere. Pundits and news anchors are fluent forecasters, whether they admit it or not. Arguably, anything we do based on an expectation of an uncertain future outcome – such as taking an umbrella with us in the morning – entails acting on a forecast. Many of these forecasts (particularly those in the media) are so poorly framed as to make it impossible, after the fact, to determine how accurate they were.

    The Good Judgment Project is concerned with geopolitical questions like “will the oil price be above $50 per barrel in three months’ time?” and “what is the probability that Turkey will experience a civil war in the next two years?” You will notice that the first question is well-formed: it specifies a precise timeframe, its language is unambiguous, and it calls for a yes-or-no (binary) answer. There will be little debate, at the end of the three months, over whether a forecast of this event was accurate.

    The second question calls for a single probabilistic forecast, which cannot be judged right or wrong on its own: if I say there is a 70% chance of a Turkish civil war, and a war does not occur, was my forecast correct or not? Implicit in my forecast was a 30% probability of there being no civil war. Probabilistic forecasts can be evaluated for accuracy, but only in aggregate, across many forecasts (which is why Tetlock’s forecasting tournaments were such a rich source of data). If I have assigned a 70% probability to 100 different events, and am an excellent forecaster (“well calibrated”), then approximately 70 of the events will occur, and 30 will not.
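
    As a rough illustration of this calibration check (a minimal Python sketch with made-up forecast data, not figures from Tetlock’s tournaments), one can group forecasts by their stated probability and compare each group with how often the events actually occurred:

        from collections import defaultdict

        def calibration_table(forecasts):
            """Group (stated probability, occurred?) pairs by stated
            probability and compare each with the observed frequency."""
            buckets = defaultdict(list)
            for prob, occurred in forecasts:
                buckets[prob].append(occurred)
            for prob in sorted(buckets):
                outcomes = buckets[prob]
                observed = sum(outcomes) / len(outcomes)
                print(f"stated {prob:.0%}: observed {observed:.0%} "
                      f"over {len(outcomes)} forecasts")

        # Hypothetical track record: 100 forecasts at 70%, 100 at 30%.
        forecasts = ([(0.7, True)] * 68 + [(0.7, False)] * 32
                     + [(0.3, True)] * 29 + [(0.3, False)] * 71)
        calibration_table(forecasts)
        # stated 30%: observed 29% over 100 forecasts
        # stated 70%: observed 68% over 100 forecasts

    A well-calibrated forecaster’s stated probabilities track the observed frequencies closely, as they do in this fabricated example.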

    At Prudential we do not attempt to peer into the future in the manner of a superforecaster. Using current and historical data and long-run valuation anchors, we identify opportunities based on assets’ deviations from fair value. By overweighting or underweighting an asset in our portfolios, we are usually expressing an expectation of its future behaviour: that the asset will out- or underperform the benchmark, provide diversification, or reduce risk, for example. Philip Tetlock would call this a forecast.
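
    A stylised sketch of this kind of valuation logic, with hypothetical numbers and thresholds (this is an illustration, not Prudential’s actual process), might compare an asset’s current earnings yield with a long-run fair-value anchor:

        def valuation_signal(current_yield, anchor_yield, band=0.10):
            """Suggest a portfolio tilt from the deviation of an asset's
            current earnings yield from a long-run fair-value anchor.
            The inputs and the 10% band are illustrative assumptions."""
            deviation = (current_yield - anchor_yield) / anchor_yield
            if deviation > band:    # yield well above anchor: asset looks cheap
                return "overweight", deviation
            if deviation < -band:   # yield well below anchor: asset looks expensive
                return "underweight", deviation
            return "neutral", deviation

        # Hypothetical example: equities yielding 8.5% against a 7% anchor.
        tilt, dev = valuation_signal(current_yield=0.085, anchor_yield=0.07)
        print(f"{tilt} ({dev:+.0%} deviation from fair value)")  # overweight (+21%)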

    There is a fundamental difference, however, between the forecasts in The Good Judgment Project and the type of “forecasting” in an investment process: the number of times we are correct is not the metric by which success is measured. We are measured on our portfolios’ returns. The catch is that the behaviour of a single asset within a portfolio can overwhelm all of the other positions, even if the forecasts behind them were correct. So for investment managers, it matters which of our forecasts are correct as much as, or perhaps even more than, how many. Risk management, through position sizing and diversification, for example, mitigates the portfolio impact of these negative outcomes.
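
    A toy calculation with made-up weights and returns makes the point: nine correct calls can be swamped by one large position that goes wrong.

        # Hypothetical portfolio: nine positions where the forecast was
        # right, each gaining modestly, and one large position where the
        # forecast was badly wrong.
        positions = ([{"weight": 0.05, "ret": 0.02}] * 9
                     + [{"weight": 0.55, "ret": -0.20}])

        portfolio_return = sum(p["weight"] * p["ret"] for p in positions)
        hit_rate = sum(p["ret"] > 0 for p in positions) / len(positions)
        print(f"Forecasts correct: {hit_rate:.0%}")           # 90%
        print(f"Portfolio return:  {portfolio_return:+.1%}")  # -10.1%

    Despite a 90% hit rate, this portfolio loses money, which is why position sizing and diversification matter as much as forecast accuracy.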

    Above all, this book is a lesson in good thinking. After reading it, you will be able to identify all the forecasts around you, and to distinguish the useful ones from the useless. Superforecasters are open-minded, careful, curious and self-critical. These are mental attributes that you can cultivate yourself, while enjoying the unintended consequence of being in demand at parties for your good listening skills and interesting, well-informed conversation!
