In a previous article (part I), I wrote about two top priorities for effective altruists concerned about far-future human and animal welfare: the reduction of existential risks (X-risks) and the reduction of wild animal suffering. Elsewhere, I gave a personal probability estimate that wild animal welfare gets priority. However, in that article I argued that the two cause areas are interconnected: reducing X-risks can help reduce wild animal suffering in the far future. Here I present a highly simplified model to prioritize between those two cause areas, assuming that reducing wild animal suffering is the prime objective for the far future (i.e. assuming that if humans survive into the far future, progress in welfare-improving technologies and institutions will have almost eliminated human suffering).

The model is based on an expected value estimation, assuming an asymmetric, suffering-focused population ethic (as explained here). In other words, the prime objective is to minimize suffering, and more precisely to minimize the total negative welfare (negative utility) of all wild animals who have net-negative lives (i.e. lives not worth living, with more negative than positive experiences). Hence, total suffering is defined as the total negative welfare of all animals with net-negative lives, and the expected (dis)value of suffering is the total suffering times the probability that this suffering occurs in the future.

Suppose we have the choice between two options. Option 1 means we start doing research on wild animal suffering right now, so that we can sooner intervene in nature with safe and effective technologies to eliminate wild animal suffering. Option 2 means we first do research on preventing existential risks, and wait t years (e.g. 100 years) before we switch to research on wild animal suffering (at least if we are still alive by then).

Assume that if humans go extinct and wild animals survive, those animals have to wait T years (e.g. 100 million years) before new intelligent life forms arise that are able and willing to eliminate wild animal suffering. Assume that the amount of wild animal suffering is linearly proportional to the number of years, i.e. X/x = T/t, with X and x being the amounts of wild animal suffering over T years and t years respectively.

Assume there is a critical period in the near future (e.g. 100 years), such that if humans survive that period, the probability that they go extinct afterwards is negligible. The end of that period could, for example, be an intelligence explosion: if the artificial superintelligence does not kill us, it is sufficiently value-aligned with us that it can effectively help us avoid all future existential risks. Let P (e.g. 90%) be the probability that humans do not go extinct and that we invent effective technologies to eliminate wild animal suffering. Let p be the probability that our choice of research is decisive: if we do not invest in X-risk reduction research (but invest in wild animal suffering reduction research instead), humans go extinct while animals do not, whereas if we do invest in that X-risk research, humans do not go extinct. This probability is the product of three factors: the probability that there will be a potential extinction event (e.g. 10%), the probability that, given such an event, the extra X-risk reduction research (with the resources that would otherwise have gone to wild animal suffering research) is both necessary and sufficient to avoid human extinction (e.g. 1%), and the probability that animals survive the extinction event even if humans do not (e.g. 1%). Hence, p is likely to be very low: with increasing human survival capacity, it becomes less likely that an extinction event would eliminate all humans but not also all wild animals. If something is powerful enough to kill all humans, it will likely also kill all animals. So the total probability p could be 0.001%.
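As a check on the arithmetic, the three example factors can be multiplied out explicitly. This is a minimal sketch; all numbers are the illustrative estimates from the text, not measured values, and the variable names are my own:

```python
# Probability p that the extra X-risk research is decisive while animals survive,
# computed as the product of the three example factors from the text.
p_extinction_event = 0.10    # a potential extinction event occurs
p_research_decisive = 0.01   # the extra research is necessary and sufficient to avert it
p_animals_survive = 0.01     # wild animals survive even though humans do not

p = p_extinction_event * p_research_decisive * p_animals_survive
print(f"p = {p:.5%}")  # prints: p = 0.00100%
```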

With these assumptions, we can conclude that option 2 (doing t years of research to prevent existential risks) is better in terms of expected value of wild animal suffering reduction if X/x > P/p. Both X and P are much larger than x and p respectively, but it is not obvious which ratio is larger. With the above estimates, for t equal to 100 years, X/x is 1 million and P/p is 90,000, so existential risk reduction research should get priority if elimination of wild animal suffering is primarily important. However, the period during which we should prioritize existential risk reduction (i.e. the number of years t) should not exceed roughly 1,000 years in this example (assuming the same probability p), because a longer delay would postpone the elimination of wild animal suffering by too many years.
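The decision rule can be sketched numerically with the example values from the text (a simplified illustration; variable names are my own, and the threshold follows from rearranging X/x = T/t > P/p to t < T·p/P):

```python
# Compare the two ratios in the decision rule X/x > P/p,
# using the example estimates from the text.
T = 100_000_000   # years wild animals wait if humans go extinct
t = 100           # years spent first on X-risk reduction research
P = 0.9           # probability humans survive and eliminate wild animal suffering
p = 0.00001       # probability the extra X-risk research is decisive (0.001%)

suffering_ratio = T / t      # X/x = T/t
probability_ratio = P / p    # P/p

prioritize_xrisk = suffering_ratio > probability_ratio
t_max = T * p / P            # longest delay t for which option 2 stays better
print(prioritize_xrisk, round(t_max))  # prints: True 1111
```

With these numbers the suffering ratio (1,000,000) exceeds the probability ratio (90,000), so option 2 wins, but only as long as t stays below roughly 1,100 years.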

Whether the expected value of X-risk reduction is bigger or smaller than that of wild animal suffering reduction depends on many crucial considerations. Hence, it is not clear whether wild animal suffering is a smaller or bigger problem than X-risks. It is also unclear to me whether wild animal suffering is more or less tractable (solvable) than X-risks. But what we do know is that wild animal suffering is much, much more neglected than X-risks. This latter aspect favors prioritizing wild animal suffering.
