Population ethics is probably the most important area in moral philosophy for effective altruists who want to do the most good, because it involves some very serious problems that relate to crucial considerations about doing the most good. In this article I present a new population ethical theory that solves many of those problems. Variable critical level utilitarianism says we should choose the situation that maximizes the sum of everyone’s relative utilities, where a relative utility measures a person’s preference for his or her actual situation relative to a critical level. People can always choose their own preferred critical level. This is a more flexible version of classical critical level utilitarianism, because different persons in different situations can have different critical levels. When everyone’s critical level is minimal (i.e. 0), we get total utilitarianism, and when everyone’s critical level is maximal, we get negative utilitarianism. I discuss game-theoretical considerations of this variable critical level theory and apply it to existential risks and the far future. The theory says we have to give a high but not an absolute priority to preventing existential risks.
Note: this is a draft article. The final sections and conclusions are very tentative.
Population ethics is one of the most difficult areas of moral philosophy: it studies which choices are best when populations are variable, i.e. when our choices determine the existence or non-existence of individuals. As our current choices influence the far future, and the far future can potentially contain a huge number of people, population ethics is probably the most important area of moral philosophy: it relates to crucial considerations and has an enormous influence on our cause prioritization.
In a consequentialist-axiological approach to population ethics, we are looking for a welfare function W such that the best choice is the one that maximizes this welfare function. When this welfare function is an aggregate of the utility functions U(i), summed over all individuals i, we get a utilitarian theory. The utility U(i) of an individual is a function of everything that the individual values or prefers, such as happiness or well-being.
Total and average utilitarianism
The problem in population ethics is that maximizing a welfare function almost always leads to counter-intuitive results. If we include all possible future generations, the two most important population ethical theories are total utilitarianism and average utilitarianism.
In total utilitarianism, the welfare function W = T is the sum of everyone’s utility. This has a drastic implication: the so-called sadistic repugnant conclusion. Suppose we can choose between two situations. In situation 1, everyone is maximally happy. In situation 2, this same group of people is maximally miserable: they have the worst lives possible. But there is a huge extra population of people with a very low but positive happiness, so their lives are barely worth living. If this added population is big enough, and the utility function is an increasing function of happiness, the second situation has a higher total utility, and hence total utilitarianism says that we should prefer the second situation. This seems very counter-intuitive.
To avoid this implication, we can take another welfare function W = T/N = A. Here, N is the number of people who exist or will exist, and A is the average utility. In the above example, the second situation has a much lower average utility than the first, because the population size in the second situation is much higher.
However, average utilitarianism is also counter-intuitive: it implies a sadistic conclusion. Suppose we can choose between two situations. In situation 1, everyone is maximally happy except one person, who is maximally miserable. In situation 2, everyone is maximally happy, including this one person, but there is a huge extra population of people with a very high but not maximal happiness. If this added population is big enough, the second situation has a lower average utility, and hence average utilitarianism says that we should prefer the first situation. This again seems very counter-intuitive: in the first situation one person is in extreme misery, whereas in the second situation everyone is at least very happy and the total happiness in the world is much higher.
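The two dilemmas can be made concrete with a small numerical sketch. All population sizes and utility values below are illustrative assumptions, not figures from the text:

```python
# Illustrative sketch of the two dilemmas; all numbers are hypothetical.
def total(utils):
    return sum(utils)

def average(utils):
    return sum(utils) / len(utils)

# Sadistic repugnant conclusion for total utilitarianism:
# situation 1: 1000 maximally happy people (utility 100)
s1 = [100] * 1000
# situation 2: the same 1000 people maximally miserable (utility -100),
# plus a huge extra population with lives barely worth living (utility 1)
s2 = [-100] * 1000 + [1] * 1_000_000
print(total(s2) > total(s1))      # True: total utilitarianism prefers situation 2

# Sadistic conclusion for average utilitarianism:
# situation 1: 999 maximally happy people and 1 maximally miserable person
a1 = [100] * 999 + [-100]
# situation 2: all 1000 maximally happy, plus a huge, very happy extra population
a2 = [100] * 1000 + [99] * 1_000_000
print(average(a1) > average(a2))  # True: average utilitarianism prefers situation 1
```

The huge barely-happy population drags the total up but the average down, which is exactly why the two theories give opposite counter-intuitive verdicts.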
Critical level utilitarianism
Philosophers have proposed a lot of other population ethical theories, but basically all of them face some counter-intuitive implication. That is because most of them have a welfare function that combines total and average utilities. An important example is the theory with the welfare function W = T·(1 − C/A), where C is a positive constant. If A is very big, this theory approaches total utilitarianism.
This theory is the so-called critical level utilitarianism, where the constant C represents a critical level of utility. Because T/A = N, we have W = T − N·C, so the welfare function is the sum over all individuals of the difference between their utility and this critical level. It is as if everyone’s effective utility equals their relative utility U(i) − C.
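A quick numerical check of this equivalence, with hypothetical utility values:

```python
# Numerical check that W = T*(1 - C/A) equals the sum of relative
# utilities U(i) - C: since A = T/N, we have T*C/A = N*C.
utils = [30, 50, 80, -10]   # hypothetical utility levels U(i)
C = 20                      # hypothetical critical level
T = sum(utils)              # total utility
N = len(utils)              # population size
A = T / N                   # average utility
W1 = T * (1 - C / A)
W2 = sum(u - C for u in utils)
print(W1, W2)  # both equal T - N*C = 150 - 80 = 70
```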
The critical level will always be non-negative: a negative critical level would be very counter-intuitive, because it would mean that it is good to add extra people with lives not worth living, decreasing both total and average utility. The lowest possible critical level, C = 0, corresponds to total utilitarianism, where we have to maximize the sum of utilities. At the other extreme, if we take C to be the highest possible value (e.g. the value of the most preferred life), we get a kind of negative utilitarianism, where we have to minimize the gap between our utilities and the critical level.
Setting C high enough avoids the sadistic repugnant conclusion of total utilitarianism. But too high is not good either, because then we risk the counter-intuitive implication of average utilitarianism. Furthermore, a very high critical level faces the problem of negative utilitarianism: it gives a preference to non-existence, because it increases the probability that newborn people will have utility levels below this critical level. Even if these people would have very happy lives, adding them would be bad according to negative utilitarianism, because their relative utilities U(i) − C become negative, decreasing the welfare function.
Variable critical level utilitarianism
There could be an interesting solution to the above problems faced by critical level utilitarianism. In that theory, the critical level C was constant and the same for everyone. But what if different individuals in different situations have different critical levels? What if the critical level becomes variable? Instead of being a constant, the critical level then depends on the individual i and the situation S: in a given situation S, an individual i has a critical level C(i,S).
And we can be altruistic here and respect everyone’s autonomy: each individual can determine his or her own critical level. They can even choose negative critical levels. The only condition is that the critical level of a non-existing person is 0. When everyone can choose their own critical level, variable critical level utilitarianism is identical to a maximum self-determined relative preferences principle. If a group of people faces the abovementioned dilemma that led to the sadistic repugnant conclusion, and if everyone chooses the value 0 as their critical level, i.e. if everyone is a total utilitarian, then everyone would prefer situation 2, containing nothing but miserable people and people with lives barely worth living. We may disagree with their choice, we may think that preferring that situation is counter-intuitive, but we must accept or tolerate their preference in order to respect their autonomy. Who are we to say that their critical level is wrong and should be higher?
There are other possible values of the critical level besides 0. For example, if everyone chooses as their critical level the average utility of all other individuals, we arrive at average utilitarianism. Another possibility is that everyone chooses as their critical level their utility for their most preferred situation. So in situation S an individual has a utility U(i,S), which measures his or her preference for that situation. But in that situation S, the individual can also have a stronger preference for his or her most preferred situation M, which is different from S. That maximum preference can be set equal to the critical level C(i,S). The relative utility U(i,S) − C(i,S) now measures a complaint: in situation S the individual can complain that his or her most preferred situation M should have been chosen. This special version of variable critical level utilitarianism equals a minimum complaint theory, which is a more flexible kind of negative utilitarianism.
It is possible that if people choose a very high critical level, the variable critical level theory (which corresponds to a maximum relative preferences theory or a minimum complaint theory) selects a situation that those people do not prefer. For example, if future people have a high critical level, they have strong complaints, and it would then be better if those people did not exist. But if they have a positive preference for the situation in which they do exist, it is better for them not to complain so hard, as complaining would select their non-existence. In general, possible future people might prefer a critical level that is as high as possible without resulting in the selection of a situation that they prefer less: they prefer the highest possible safe critical level.
Similarly, there is a lowest possible safe critical level, namely 0. If people chose a negative critical level, they could end up in a situation where their utilities are higher than this negative critical level (which means they positively contribute to the total of relative preferences), but still negative. Therefore, no-one would choose a negative critical level.
In summary: if everyone chooses the lowest possible safe critical level, we end up with total utilitarianism. If everyone chooses the highest possible safe critical level, we end up with a variation of negative utilitarianism (which is in some ways close but not equal to person-affecting utilitarianism and antifrustrationism). If everyone chooses the same positive, fixed critical level, we arrive at critical level utilitarianism. And if everyone chooses the average utility of all other individuals as their own critical level, we arrive at average utilitarianism.
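The special cases with fixed critical levels can be sketched in a few lines. The welfare function, the utility values and the maximum preference levels below are all illustrative assumptions:

```python
# A minimal sketch of variable critical level utilitarianism;
# all numbers are hypothetical.
def welfare(utilities, critical_levels):
    """Sum of relative utilities U(i) - C(i) over all existing people.
    Non-existing people contribute nothing (their critical level is 0)."""
    return sum(u - c for u, c in zip(utilities, critical_levels))

utils = [60, 40, 10]          # hypothetical utilities in some situation S

# Everyone chooses critical level 0 -> total utilitarianism:
print(welfare(utils, [0, 0, 0]))        # 110, the total utility

# Everyone chooses their maximum preference as critical level
# -> a minimum complaint (negative utilitarian) theory:
max_prefs = [100, 100, 100]             # hypothetical maximum preferences
print(welfare(utils, max_prefs))        # -190, minus the total complaint

# Mixed case: each person picks his or her own critical level:
print(welfare(utils, [0, 50, 100]))     # (60-0) + (40-50) + (10-100) = -40
```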
Just like critical level utilitarianism, our variable critical level utilitarianism lies between the two extremes of total utilitarianism and negative utilitarianism, but it avoids the problems faced by total, average, critical level and negative utilitarianism.
Game-theoretical considerations and negative feedback mechanisms
The choice of critical level can be very flexible and can depend on what other individuals choose. This flexibility has some interesting game-theoretical consequences, where people can make strategic choices for their own critical levels dependent on the choices of other people.
We can split people into two groups: the grateful people or positivists, and the complainers or negativists. Positivists have a positive relative utility, i.e. their utility U(i,S) is higher than their critical level C(i,S). Total utilitarians with positive utilities and a critical level of 0 are an example of grateful people, because their positive relative utility acts as gratitude.
Negativists, for example negative utilitarians, have a negative relative utility. These negativists are complainers, because their negative relative utilities act as complaints. The problem faced by negativists is that they risk non-existence even if their lives would have been worth living (i.e. their utilities would have been positive). Suppose we can choose between a situation where no-one exists and a situation where a complainer with a positive utility exists. The first situation would be better, because it minimizes complaints (or maximizes relative utilities). To guarantee existence with a life worth living, the complainer could decide to lower his or her critical level such that his or her relative utility becomes positive and the complainer becomes a grateful person.
However, this risk of non-existence can also be avoided when the existence of negativists is coupled to the existence of positivists. Suppose that in the second situation there are a lot of positivists alongside the one negativist complainer. Their total gratitude can be bigger than the complaint of this one complainer, resulting in a positive value of the welfare function, i.e. higher than the welfare function of the first situation where no-one exists, which is zero. So the complainer can exist, and may even be able to increase his or her critical level as long as the welfare function remains positive.
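This coupling can be illustrated numerically; the gratitude and complaint values below are hypothetical:

```python
# Sketch of the coupling between positivists and complainers;
# all numbers are hypothetical.
def welfare(relative_utils):
    """Sum of relative utilities; an empty situation has welfare 0."""
    return sum(relative_utils)

empty = []                       # situation 1: no-one exists
# Situation 2: many positivists (gratitude +5 each) and one complainer
# whose utility lies 20 below his or her critical level:
positivists = [5] * 100
complainer = [-20]
print(welfare(empty))                     # 0
print(welfare(positivists + complainer))  # 480 > 0: situation 2 is selected
# The complainer can raise his or her critical level until the total
# welfare drops to 0; beyond that point, non-existence is selected.
```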
The more positivists (e.g. total utilitarians) there are, the more opportunities there are for complainers to raise their critical levels, and hence the more other people might increase their critical levels and become complainers. And vice versa: the more people have high critical levels, the more other people would become positivists by decreasing their critical levels. If potential future generations raised their critical levels too high, this would lead to their non-existence, because their complaints would become too big: other situations, in which they do not exist and have no complaints, would become preferable. Increasing critical levels thus increases the risk of non-existence, even for people who could have very happy lives worth living. Hence, increasing the critical level dampens further population growth. Conversely, if a lot of future people lowered their critical levels, this could excite further population growth, but it would also mean that other future people could raise their own critical levels without risking non-existence.
Hence there is a kind of negative feedback mechanism. The average critical level can act like a thermostat. If the temperature becomes too low, the heating turns on. If the temperature becomes too high, the heating switches off to cool down the situation.
Existential risk and the far future
We can apply these game-theoretical considerations to the abovementioned dilemma that resulted in the sadistic repugnant conclusion for total utilitarians. The problem of existential risks and the far future gives us a more concrete example. Suppose we can choose between two options. In the first option (situation 1) we are very happy, but we create an existential risk (e.g. a rogue artificial superintelligence that blows up the planet), which means we are the last generation. (Suppose our deaths will be quick, so we will not suffer.) There will be no future people; the future is empty of consciousness. In the second option (situation 2), we avoid this existential risk and there will be many people in the near and far future who have lives worth living. However, avoiding this existential risk comes at a cost we have to bear: our lives (those of the current generation) can become very miserable in our attempts to avoid the existential risk (e.g. to stop the rogue artificial superintelligence). If our lives become too miserable, we would prefer situation 1. We (the current generation) can justify this by raising our critical levels, so we are no longer total utilitarians. Raising our critical levels is a strategic counteracting measure.
However, this justification might not be sufficient when we consider the far future, i.e. when the total number of future people becomes very big if we choose the second option. If all those future generations were total utilitarians, the collection of their preferences for a critical level of 0 would trump our preference for a higher critical level: their gratitudes (positive relative utilities) would trump our complaints (negative relative utilities). But in reality, we do not know what critical levels future people would prefer. It is unlikely that they will all be total utilitarians, given that a lot of people in the current, existing generation want to avoid the sadistic repugnant conclusion.
The critical level preferences of the current generation can serve as a best estimate of the critical level preferences of future generations. As most people of the current generation find the sadistic repugnant conclusion of total utilitarianism highly counter-intuitive, we can expect that a lot of possible future people share this moral intuition and prefer the situation where a smaller population of maximally happy people exists, even if this means that they themselves do not exist. Imagine that our ancestors once faced an existential threat. Luckily for us, this threat was avoided and we have lives worth living. However, in avoiding this existential risk, our ancestors had to suffer a lot: their lives became maximally miserable. If they had not prevented extinction, their lives would have been very positive, but we would not exist. Even if my life is worth living (which means my utility U is positive), I would prefer the situation where we do not exist but our ancestors had very happy lives (which means my critical level C is higher than my utility U). So we can expect that also in the future there will be people who prefer to avoid the sadistic repugnant conclusion at the cost of non-existence. This is a kind of altruistic or cooperative choice, because one chooses what one thinks is best for the whole (past, present and future) population and not merely for oneself. If everyone cooperates and focuses on what is best for the whole population, the choice that is best for the whole population will be selected.
If the above applies, i.e. if future generations set their critical levels altruistically according to what they believe is best for the whole population, and if those critical levels are the same as what we would altruistically choose, then we do not have to make extreme efforts, making our lives very miserable, to avoid existential risks. However, doing nothing against existential risks is not good either. Suppose preventing an existential risk would come at almost no cost to us: our happiness decreases only a tiny bit when we make efforts to prevent extinction. If we go for our own maximum happiness, resulting in global extinction, we prevent the existence of future grateful people (e.g. total utilitarians) who would contribute a lot to the welfare function. Similarly, if our ancestors could have prevented extinction at almost no cost, decreasing their happiness only a tiny bit (from maximum happiness to almost maximum happiness), I would no longer prefer the situation where we do not exist and our ancestors had maximum happiness. That means my critical level drops below my utility for the non-extinction situation in which I exist: I become a grateful person, and my positive relative utility contributes to the welfare function.
The above indicates that variable critical level utilitarianism does not give absolute priority to preventing existential risks. It can give them a high priority, though, because the future might easily contain a lot of grateful people.
Conclusion and final remarks
Suppose we (i.e. currently existing people) face a population ethical dilemma where we can choose between different options that affect future populations. We have intuitions about which option is the best. This preference for the best option corresponds with an optimal distribution of critical levels that everyone should adopt, i.e.: if everyone (both present and future people) adopts those critical levels in our most preferred option, this most preferred option becomes the best option according to the variable critical level utilitarian theory. For example, if, when faced with the population ethical dilemma, 40% of the current generation adopt total utilitarianism with the minimum critical level 0, and 60% adopt negative utilitarianism with the maximum critical level (corresponding to the highest preference that a person can have), then these percentages give us the optimal distribution of critical levels in our dilemma.
Suppose we always set our own critical levels altruistically or cooperatively, i.e. according to what we think is best for the whole population when everyone cooperates. If future generations have preferences similar enough to ours, and if they also set their critical levels altruistically, they will set them such that the optimal choice according to variable critical level utilitarianism is the one we think is best. These are reasonable assumptions. If some people set their own critical levels egoistically, i.e. according to what they think is best for themselves and not for the whole population, then everyone else is allowed to set their critical levels egoistically too, and we may all end up in a suboptimal state. In this way, we arrive at a population ethical theory that selects the options that best fit our preferences (the options we believe are the best).