The case against assigning probabilities to existential risk

5 minute read

Probability

I’m currently reading “The Precipice” by Toby Ord (review to follow), wherein he talks about the existential risks faced by humanity. By this he means risks that “threaten the destruction of humanity’s potential”. I don’t agree with all of the risks he considers, but I do agree with putting more thought into them and into ways to ensure they don’t come to pass. So far so good.

In doing this, Ord considers a number of possible scenarios, separated into natural risks, anthropogenic risks, and future risks. Here comes the first issue: they are all future risks. Granted, there have been pandemics, nuclear weapon scares and climate-based extinction events in the (sometimes distant) past. But here we are considering something which effectively shunts humanity into a trough from which it can never escape (thus destroying its potential). As this has never happened, it is all future risk. I understand the logic of separating out those events which threaten humanity now from those which will only be able to threaten humanity in the future, but either way, it’s all about prediction. And as Niels Bohr famously said, “prediction is very difficult, especially if it’s about the future”.

Such a scenario comes up often in my own work. One example is taking patients with a disease (in my current role, Alzheimer’s disease; AD) and people without the disease. We collect information on them (age, sex, education etc.) and biological samples (blood, neuroimaging, cerebrospinal fluid etc.), then we try to “predict” who will get AD. We do this by splitting the dataset, training a model on the larger portion and then predicting on the smaller portion, which is never incorporated into the model. AD is a complex disease involving genetic and non-genetic factors which intertwine in an as-yet unknown number of ways to bring about synapse loss and the symptomatic hallmarks of AD. As Bohr noted, this is already difficult, and we’re not even predicting the future! Here we have patients who already have AD, and we have tried to “predict” their status by blinding ourselves to the outcome. This is mainly because it’s easier to get large numbers in the disease group this way than by running a more expensive cohort study of individuals who are healthy at baseline and following them up for years to see who develops AD.
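For the curious, the workflow looks roughly like the sketch below. This is a minimal illustration, not my actual pipeline: the file name, the column names and the biomarkers are all placeholders, and a real analysis would involve far more careful feature selection and validation.

```python
# Minimal sketch of "predict by blinding ourselves to the outcome".
# The file name and column names are placeholders, not real data.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

cohort = pd.read_csv("cohort.csv")  # hypothetical table: one row per participant

features = ["age", "sex", "education_years", "apoe_e4_count", "csf_abeta42"]
X = cohort[features]
y = cohort["ad_diagnosis"]  # 1 = AD, 0 = control

# Train on the larger portion; hold the smaller portion out of the model entirely
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Predict" on participants whose outcomes the model has never seen
held_out_probs = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, held_out_probs):.2f}")
```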

For prediction of AD from genetic factors, there are many points in the genome which are not associated with AD in our sample. There are some which are seemingly not involved in AD, but might be associated with cognitive impairment or other neurodegenerative conditions. In genetics we’re lucky - there have been many large studies, and if a genetic factor isn’t associated with AD in my sample, I can check huge databases of other individuals and studies to see whether it has been found to be associated in those. I can even check that variant across all traits with a simple search. But - and it’s a big but - even if I find a weak association from a rare event in a database, I will be extremely cautious about taking it as true, or about taking the probability of AD given the variant as accurate. In my own analyses I might discard a reported association if the genetic factor is extremely rare.
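That “simple search” really is simple. Below is a hedged sketch assuming a locally downloaded associations table in the style of a GWAS Catalog export; the file name and column names are assumptions that may need adjusting, and rs429358 (one of the APOE variants) is just an example.

```python
# Look up a single variant across all reported traits in a downloaded
# associations table. File name and column names are assumptions based on a
# GWAS Catalog-style export; adjust them to match the actual file.
import pandas as pd

assoc = pd.read_csv("gwas_catalog_associations.tsv", sep="\t", low_memory=False)

variant = "rs429358"  # example: one of the APOE variants; swap in the variant of interest
hits = assoc[assoc["SNPS"] == variant]

# Report every trait the variant has been associated with, and how strongly,
# rather than taking any single weak or rare finding at face value
for _, row in hits.iterrows():
    print(row["DISEASE/TRAIT"], row["P-VALUE"])
```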

So how does this relate to existential risk? My problem isn’t that Ord considers future events. He should. My problem isn’t that these events have never happened before. If they risk destroying humanity’s potential, they are still worth considering. My issue is with assigning probabilities to those future events, for which there is effectively no precedent. It’s not just that there are no past events from which we can estimate the absolute risk. We also struggle to talk about risk factors, as there are no events against which we can check associations.

Perhaps if the numbers were followed up with an explanation then they would be more palatable. But they’re not. Discussing the risk of nuclear weapons being used during the Cuban Missile Crisis, Ord reports that Kennedy put the chance of nuclear war at “somewhere between one out of three, and even”. Ord then remarks that he puts the risk at between 10 and 50 percent. No justification follows those numbers. Where did they come from? Gut feeling? Such predictions are peppered throughout the book, every time unsubstantiated.

So what’s the solution? Never make predictions when there are no past events? Of course not. A potential solution is to use words and not numbers. Whether you intend to or not, by assigning a hard number, or even a range of hard numbers, you indicate a level of certainty you don’t have. When you say there’s a 10-50% chance of war, you’re also saying there’s not a 51% chance. Or a 9% chance. Those values aren’t even being considered. You haven’t even given the centre of the distribution: is your best estimate 30% (the midpoint), or is it uniformly distributed between 10 and 50, so that you think all values in that range are equally likely? Most frustratingly to me, the statement also implies that the means to make such a numerical estimate are available to the author, which they aren’t.
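To make the ambiguity concrete, here is a small sketch. Both distributions below are consistent with “between 10 and 50 percent”, yet they imply noticeably different beliefs; the choice of distributions is mine, not anything stated in the book.

```python
# Two readings of "the risk is between 10% and 50%" that imply different beliefs.
# Both distributions are illustrative assumptions, not anything stated by Ord.
import numpy as np

rng = np.random.default_rng(0)

# Reading 1: every value between 10% and 50% is equally likely
uniform_draws = rng.uniform(0.10, 0.50, size=100_000)

# Reading 2: belief concentrated around the 30% midpoint (a triangular distribution)
triangular_draws = rng.triangular(0.10, 0.30, 0.50, size=100_000)

for name, draws in [("uniform", uniform_draws), ("triangular", triangular_draws)]:
    print(f"{name:10s} mean = {draws.mean():.2f}, P(risk > 40%) = {np.mean(draws > 0.40):.2f}")
```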

Finally, to give some credit to Ord, he does try to address this. In chapter 1 he says that words are open to interpretation, such that a “grave risk” could mean anything from 1% to 99%. The language could easily be more specific, and besides, the interval for any estimate of a long-term existential risk with no precedent is probably around 1-99% anyway, so “grave risk” might be about as accurate as we can be without being misleading. I think the framing of this is a neat side-step performed with some especially vague language. Talking about how low the risk is, what you think modifies the risk and which factors contributed most to your estimate is the absolute bare minimum I would expect from any prediction. It’s easily done, and it just isn’t forthcoming here.