January’s most engrossing Small Data

The Global Risks Report

David Aldous
Towards Data Science

--

As a mathematician of the old school, I don’t seek to engage Big Data. Instead, this post is about Small Data on a Big Subject. Each January since 2006 the annual Global Risks Report (GRR) has been published as background material for the annual World Economic Forum (see Footnote 1 below). The reports are lengthy documents, freely available online, analyzing risks in the sense of events that would have a substantial effect on the world economy over the next few years (“medium term”). The reports provide a consensus view derived from a large panel of experts. For my purpose here, the centerpiece is a graphic showing the perceived likelihood and economic effect of each of 36 risks (see Footnote 2). Look at the graphic below, from January 2020.

Global Risks Report 2020

The graphic is in a standard format: the horizontal axis is relative likelihood and the vertical axis is relative economic impact (see Footnote 3). So the most substantial perceived risks (on the eve of COVID-19) were those at the top right, starting with extreme weather and climate action failure.

To me the most interesting underlying conceptual issue is: how predictable is the medium-term future? One often sees casual assertions of the form that “no one predicted” (or “I predicted”) the collapse of the Soviet Union, or a 9/11-like attack, or the financial crisis of 2007–2008, or the COVID-19 pandemic. Such assertions are nonsensical. These are intrinsically unpredictable events, so it makes sense to talk only about probabilities. As a specific example, on the topic of the Cold War in Europe, one 1985 probability assessment (see Footnote 4) for 1985–1995 was
65%: status quo
25%: internal revolts in Eastern Europe lead to decrease in Soviet control
5%: military attack by Soviet Union on West Germany
5%: Soviet Union falls apart for internal reasons

and their phrase “the empire crumbles” for the last option turned out rather accurate.

Can we judge the accuracy of past probability assessments?

Under the controlled conditions of a prediction tournament (see Footnote 5) one can indeed determine relative forecasting abilities via a scoring rule, but this requires that different forecasters assess the same collection of events. I’ve never seen such data for medium-term forecasts. And many events in the GRR are rather vaguely specified, so it’s not clear how one would score prediction accuracy.
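To make the idea of a scoring rule concrete, here is a minimal sketch of the Brier score, the rule most commonly used in prediction tournaments: each forecaster states a probability for each event, and once outcomes are known the score is the mean squared error between stated probabilities and what happened. The forecaster names and numbers below are hypothetical, purely for illustration.

```python
def brier_score(probs, outcomes):
    """Mean squared difference between stated probabilities (0..1)
    and realized outcomes (1 = occurred, 0 = did not). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Two hypothetical forecasters assessing the same four events,
# of which the first and fourth actually occurred.
outcomes  = [1, 0, 0, 1]
decisive  = [0.9, 0.1, 0.2, 0.8]   # confident and well calibrated
hedging   = [0.5, 0.5, 0.5, 0.5]   # always says 50-50

print(brier_score(decisive, outcomes))  # 0.025
print(brier_score(hedging, outcomes))   # 0.25
```

Note the key requirement visible in the code: both forecasters must assess the *same* list of events, which is exactly what we lack for the GRR.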

So one bottom line is that we cannot formally test how accurate the GRR has been in the past. Also note this is Small Data: 13 years times 36 events times 2 assessments equals roughly 1000 numbers.

But it’s interesting to look at past GRR analyses and discuss informally their accuracy.

So, how did they do at predicting COVID-19? In fact, pandemic risk has featured in the GRR every year, in the top left quadrant (low likelihood, large impact). As many people have noted in retrospect, experts had long predicted that such a pandemic would happen sometime, while acknowledging that the chance in any given year might be small.

What about the financial crisis of 2007–2008? The January 2007 report graphic (before the crisis was apparent) judged the most substantial risk to be asset price collapse. So that’s a success.

Let’s pick a random year. For 2013 the major risks were chronic fiscal imbalances, water supply crises, rising greenhouse gas emissions, and severe income disparity. What actually happened in subsequent years? The GRR stopped worrying so much about fiscal imbalances (at least before 2021). Water supply crises haven’t made headlines but remain quite high on the GRR’s list. Rising greenhouse gas emissions has been renamed climate action failure and will no doubt remain one of the top risks for the foreseeable future. Severe income disparity, renamed as social instability, has (somewhat surprisingly to me) subsequently dropped toward the middle. So no notable success in 2013, but conversely no major occurrence that was not contemplated.

Readers with a competitive nature might try composing their own lists of risks, for comparison with the forthcoming 2021 GRR. My own top guesses would include fiscal crises and social unrest due to COVID-19, and cyberattacks. In previous years the GRR has appeared in mid-January, but for 2021 the physical World Economic Forum meeting is currently postponed to May (and likely to be further postponed?), so I’m unsure when the 2021 GRR will appear.

So why care?

At the individual level, few of us live entirely day to day without ever thinking about our personal future. And anyone reading medium.com is surely paying attention to at least some aspects of what’s going on now in the wide world, with attendant hopes and fears about the future. However, each of us tends to focus on only a few ways in which the future may be different, implicitly assuming other aspects will remain the same. For instance, the original Star Trek envisaged a rather utopian, sophisticated technological future, but juxtaposed with a cringeworthy, patronizing 1960s-style portrayal of women. Reading the GRR list of risks may prompt you to consider how your existing hopes and fears would be affected if one or another of these risks comes to pass.

At an organizational level, any attempt at rational planning for an uncertain future requires some combination of scenario planning and probabilistic forecasting, and these are complementary rather than antagonistic: if you think some geopolitical event has 40% chance, can you devise 2 plausible scenarios under which it happens and 3 where it doesn’t?

Finally, the GRR exemplifies a very broad baseline principle: in any setting of uncertainty, if other people have thought about it and one can ascertain a consensus or “middle opinion”, then that is valuable as a baseline. When formulating your own view or studying someone else’s view, can you or they articulate why they differ from the baseline? If not, don’t pay much attention to their opinion.

Graphic of yearly change

___________________________________________________
Footnote 1: The World Economic Forum is best known (and widely critiqued) for its annual January meeting in Davos. But it also coordinates production of many analyses other than the GRR. Personally I guess that such reports will tend to be more accurate than those from academic or governmental institutions, because of a wider range of input and being less beholden to specific interests or ideologies.

Footnote 2: The risks studied have changed slowly over time, and originally fewer were considered.

Footnote 3: The 2007 graphic has numerical likelihoods and numerical dollar impacts. Over the years, the axis labels have been changed to be more qualitative: for the 2020 graphic, the experts were asked to assess likelihood and impact on a rather ill-defined scale of 1–7, and the averages are plotted. This further complicates attempts to assess accuracy.

Footnote 4: A Quick & Dirty Guide to War by James F. Dunnigan and Austin Bay.

Footnote 5: Prediction tournaments involve stating a probability that a given specific event will occur before a given deadline, typically 6–12 months ahead. See this high-school-level exposition or this more sophisticated account. But for medium-term events such tournaments are not so practical, because one needs to wait until the event outcomes are determined.


After a research career at U.C. Berkeley, now focussed on articulating critically what mathematical probability says about the real world.