Variable critical level utilitarianism as the solution to population ethics


Population ethics is probably the most important area in moral philosophy for effective altruists who want to do the most good. There are some very serious problems in population ethics that relate to crucial considerations about doing the most good. In this article I present a new population ethical theory that solves many of those problems: variable critical level utilitarianism says we should choose the situation that maximizes the sum of everyone’s relative utilities, where a relative utility measures a person’s preference for his or her actual situation relative to a critical level. People can always choose their own preferred critical level. This is a more flexible version of classical critical level utilitarianism, because different persons in different situations can have different critical levels. When everyone’s critical level is minimal (i.e. 0), we get total utilitarianism, and when everyone’s critical level is maximal, we get negative utilitarianism. I discuss game-theoretical considerations of this variable critical level theory and apply it to existential risks and the far future. The theory says we have to give a high but not an absolute priority to preventing existential risks.

Note: this is a draft article. The final sections and conclusions are very tentative.

For an introductory version, see here.

For a more recent (October 2019), detailed and technical paper, see here.


Population ethics is the area of moral philosophy that studies which choices are best when populations are variable, i.e. when our choices determine the existence or non-existence of individuals. It is also one of the most difficult areas. As our current choices influence the far future, and the far future can potentially contain a huge number of people, population ethics is probably the most important area of moral philosophy: it relates to crucial considerations and has an enormous influence on our cause prioritization.

In a consequentialist-axiological approach to population ethics, we are looking for a welfare function W such that the best choice is the one that maximizes this welfare function. When this welfare function is an aggregate of the utility functions U(i), summed over all individuals i, we get a utilitarian theory. The utility U(i) of an individual is a function of everything that the individual values or prefers, such as happiness or well-being.


Total and average utilitarianism

The problem in population ethics is that maximizing a welfare function almost always leads to counter-intuitive results. If we include all possible future generations, the two most important population ethical theories are total utilitarianism and average utilitarianism.

In total utilitarianism, the welfare function W = T = the sum of everyone’s utility. This has a drastic implication: a so-called sadistic repugnant conclusion. Suppose we can choose between two situations. In situation 1, everyone is maximally happy. In situation 2, this group of people is maximally miserable, they have the worst lives possible. But there is a huge extra population of people with a very low but positive happiness, so their lives are barely worth living. If this added population is big enough, and the utility function is an increasing function of happiness, the second situation has a higher total utility and hence total utilitarianism says that we should prefer the second situation. This seems very counter-intuitive.
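
The arithmetic behind this conclusion can be sketched with a few lines of Python. The population sizes and utility values below are hypothetical, chosen only to illustrate how a large enough barely-happy population outweighs the original group's misery:

```python
def total_utility(utilities):
    """Total utilitarian welfare: W = T = the sum of everyone's utilities."""
    return sum(utilities)

# Situation 1: 1,000 maximally happy people (utility 100 on a hypothetical scale).
situation_1 = [100] * 1_000

# Situation 2: the same 1,000 people maximally miserable (utility -100),
# plus a huge extra population whose lives are barely worth living (utility 1).
situation_2 = [-100] * 1_000 + [1] * 300_000

print(total_utility(situation_1))  # 100000
print(total_utility(situation_2))  # 200000 -> total utilitarianism prefers situation 2
```

However small the positive utility of the extra lives, a large enough extra population always tips the comparison.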

To avoid this implication, we can take another welfare function W = T/N = A. Here, N is the number of people who exist or will exist, and A is the average utility. In the above example, the second situation has a much lower average utility than the first, because the population size in the second situation is much higher.

However, average utilitarianism is also counter-intuitive, facing a sadistic conclusion. Suppose we can choose between two situations. In situation 1, everyone is maximally happy, except one person who is maximally miserable. In the second situation, everyone is maximally happy, including this one person, but there is a huge extra population of people with a very high but not maximal happiness. If this added population is big enough, the second situation has a lower average utility and hence average utilitarianism says that we should prefer the first situation. This again seems very counter-intuitive: in the first situation there is one person in extreme misery, whereas in the second situation everyone is at least very happy and the total happiness in the world is much higher.
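
The same kind of sketch, again with hypothetical numbers, shows the sadistic conclusion for average utilitarianism:

```python
def average_utility(utilities):
    """Average utilitarian welfare: W = A = T / N."""
    return sum(utilities) / len(utilities)

# Situation 1: 999 maximally happy people and one maximally miserable person.
situation_1 = [100] * 999 + [-100]

# Situation 2: all 1,000 maximally happy, plus 100,000 very (but not maximally)
# happy extra people with utility 90.
situation_2 = [100] * 1_000 + [90] * 100_000

print(average_utility(situation_1))  # 99.8
# The huge merely-very-happy population drags the average below 99.8,
# so average utilitarianism prefers situation 1, which contains extreme misery.
```
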


Critical level utilitarianism

Philosophers have proposed many other population ethical theories, but basically all of them face some counter-intuitive implication. That is because most of them have as welfare function a combination of total and average utilities. An important example is the theory with the welfare function W = T·(1 − C/A), where C is a positive constant. If A is very big, this theory approaches total utilitarianism.

This theory is the so-called critical level utilitarianism, where the constant C represents a critical level of utility. Since A = T/N, the welfare function can be rewritten as W = T − N·C, i.e. the sum of the differences between the utilities and this critical level. It is as if everyone’s effective utility equals their relative utility U(i) − C.
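
The algebraic equivalence between the two forms of the welfare function can be checked numerically. The utility values below are hypothetical:

```python
def critical_level_welfare(utilities, C):
    """Critical level welfare: W = sum of relative utilities U(i) - C."""
    return sum(u - C for u in utilities)

utilities = [30, 50, 70, 90]  # hypothetical individual utilities
C = 40                        # hypothetical critical level
T = sum(utilities)            # total utility
A = T / len(utilities)        # average utility

# W = T * (1 - C/A) equals the sum of relative utilities U(i) - C:
assert abs(T * (1 - C / A) - critical_level_welfare(utilities, C)) < 1e-9
print(critical_level_welfare(utilities, C))  # 240 - 4*40 = 80
```
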

The critical level will always be non-negative. A negative critical level would be very counter-intuitive because it means that it is good to add extra people with lives not worth living, decreasing both total and average utility. The lowest possible critical level C = 0 corresponds with total utilitarianism where we have to maximize the sum of utilities. Considering the other extreme: if we take C to be the highest possible value (e.g. the value of the most preferred life), we get a kind of negative utilitarianism where we have to minimize the gap between our utilities and the critical level.

Setting C high enough avoids the sadistic conclusion of total utilitarianism. But too high is not good, because we risk the counter-intuitive implication of average utilitarianism. Furthermore, a very high critical level faces the problem of negative utilitarianism: it gives a preference to non-existence. It increases the probability that newborn people will have levels of utility below this critical level. Even if these people would have very happy lives, adding those people would be bad according to negative utilitarianism, because their relative utilities U(i)-C become negative, decreasing the welfare function.


Variable critical level utilitarianism

There could be an interesting solution to the above problems faced by critical level utilitarianism. In that theory, the level C was constant and the same for everyone. But what if different individuals in different situations have different critical levels? What if the critical level became variable? So instead of being a constant, the critical level becomes dependent on the individual i and the situation S. In a given situation S, an individual i has a critical level C(i,S).

We can be altruistic here and respect everyone’s autonomy: each individual can determine what his or her critical level is, and they can even choose negative critical levels. The only condition is that the critical level of a non-existing person is 0. When everyone can choose their own critical level, variable critical level utilitarianism becomes identical to a maximum self-determined relative preferences principle. If a group of people is faced with the abovementioned dilemma that led to the sadistic repugnant conclusion, and if everyone chooses the value 0 as their critical level, i.e. if everyone is a total utilitarian, then everyone would prefer situation 2, containing nothing but miserable people and people with lives barely worth living. We may disagree with their choice, we may think that preferring that choice is counter-intuitive, but we must accept or tolerate their preference in order to respect their autonomy. Who are we to say that their critical level is wrong and should be higher?

There are other possible values of the critical level besides 0. For example, if everyone chooses as their critical level the average utility of all individuals (times a population factor (N-1)/N, which is close to 1 for large N), we arrive at average utilitarianism. Another possibility is that everyone can choose as their critical level their utility for their most preferred situation. So in situation S an individual has a utility U(i,S), which measures the preference for that situation S. But in that situation S, the individual can also have a stronger preference for his or her most preferred situation M which is different from S. That maximum preference can be set equal to the critical level C(i,S). The relative utility U(i,S)-C(i,S) now measures a complaint: in situation S the individual can complain that his or her most preferred situation M should have been chosen. This special version of variable critical level utilitarianism equals the minimum complaint theory which is a more flexible kind of negative utilitarianism.
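
These special cases can be checked numerically. The sketch below (utility values are hypothetical) computes the variable critical level welfare, the sum of relative utilities U(i,S) − C(i,S), and verifies that a critical level of 0 reproduces total utilitarianism while the critical level A·(N−1)/N reproduces average utilitarianism:

```python
def vcl_welfare(utilities, critical_levels):
    """Variable critical level welfare: sum of relative utilities U(i,S) - C(i,S)."""
    return sum(u - c for u, c in zip(utilities, critical_levels))

utilities = [10, 20, 30]   # hypothetical utilities in some situation S
N = len(utilities)
A = sum(utilities) / N     # average utility

# Critical level 0 for everyone: the welfare equals the total utility T.
assert vcl_welfare(utilities, [0] * N) == sum(utilities)

# Critical level A*(N-1)/N for everyone: the welfare reduces to the average A.
avg_levels = [A * (N - 1) / N] * N
assert abs(vcl_welfare(utilities, avg_levels) - A) < 1e-9
```
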

It is possible that if people choose a critical level that is very high, the variable critical level theory (which corresponds with a maximum relative preferences theory or the minimum complaint theory) selects a situation that is not preferred by those people. For example, if future people have a high critical level, they have strong complaints and it would be better if those people do not exist. But if they have a positive preference for the situation in which they do exist, it is better for them not to complain so hard, as complaining would select their non-existence. In general, future possible people might prefer a critical level that is as high as possible but does not result in the selection of a situation that they prefer less. So they prefer the highest possible safe critical level.

Similarly, there is a lowest possible safe critical level, namely 0. If people chose a negative critical level, they could end up in a situation where their utilities are higher than this negative critical level (so they contribute positively to the sum of relative utilities) while their utilities are still negative. Therefore, no-one would choose a negative critical level.

Another choice of critical levels can bring us to person-affecting utilitarianism. People can have beliefs about whether or not they exist in other possible situations. If someone in situation S1 believes that s/he is the same person as someone in situation S2, and vice versa, they can choose the same critical level in both situations. Hence, everyone who exists (or rather believes to exist) in all possible situations (i.e. all actually existing people) chooses the same critical level in all those situations. On the other hand, there may be someone who exists in situation S1 but is not able to identify him/herself with anyone existing in situation S2. The existence of such a person depends on the choice of situation: s/he only exists in situation S1. People who believe they do not exist in all possible situations can choose their actual utility level in a situation as their critical level in that situation (i.e. C(i,S1) = U(i,S1)). With this choice of critical levels, we arrive at a person-affecting utilitarianism, where only the utilities of actually (or necessarily) existing people count.

In summary: if everyone chooses the lowest possible safe critical level, i.e. zero, we end up with total utilitarianism. If everyone chooses the average utility of all people (times a factor (N−1)/N, with N the total number of people) as their own critical level, this becomes average utilitarianism. If everyone chooses the same positive, fixed critical level, we arrive at critical level utilitarianism. If everyone chooses the highest possible utility (above the highest safe critical level) as their critical level, we arrive at negative utilitarianism. If everyone who cannot identify themselves with an individual in every other possible situation chooses their own actual utility level as their critical level in each situation, we end up with person-affecting utilitarianism. And if everyone chooses the highest possible safe critical level, we end up with a variation of a population ethical theory known in the literature as antifrustrationism, which resembles some aspects of person-affecting utilitarianism and negative utilitarianism.

Just like critical level utilitarianism, our variable critical level utilitarianism lies between the two extremes of total utilitarianism and negative utilitarianism, but it avoids the problems faced by total, average, critical level and negative utilitarianism.

Choice dependency

The utility U(i,S) depends on the situation S, but not on the choice set: the set of all possible or eligible situations that one can choose from. On the other hand, the critical level C(i,S) can depend on the choice set.

For example: suppose in situation S1 a newborn person i will exist and will be very happy, with a positive utility U(i,S1)>0. However, suppose another situation S2 is available for us (i.e. we can choose situation S2), in which that person i will not exist, but everyone else is maximally happy, with maximum positive utilities. Although person i in situation S1 will have a positive utility, that person can still prefer the situation where he or she does not exist and everyone else is maximally happy. It is as if that person is a bit altruistic and prefers his or her non-existence in order to improve the well-being of others. That means his or her critical level C(i,S1) can be higher than the utility U(i,S1), such that his or her relative utility becomes negative in situation S1. In that case, it is better to choose situation S2 and not let the extra person be born.

If instead of situation S2, another situation S2’ becomes available, where the extra person does not exist and everyone else has the same utility levels as in situation S1, then the extra person in situation S1 could prefer situation S1 above S2’, which means that his or her new critical level C(i,S1)’ remains lower than the utility U(i,S1).

In other words: the choice of the critical level can depend on the possible situations that are eligible or available to the people who must make the choice about who will exist. If situations S1 and S2 are available, the chosen critical level will be C(i,S1), but if situations S1 and S2’ are available, the critical level can change into another value C(i,S1)’. Each person is free to decide whether or not his or her own critical level depends on the choice set.

Game-theoretical considerations and negative feedback mechanisms

The choice of critical level can be very flexible and can depend on what other individuals choose. This flexibility has some interesting game-theoretical consequences, where people can make strategic choices for their own critical levels dependent on the choices of other people.

We can split people into two groups: the grateful people or positivists, and the complainers or negativists. Positivists have a positive relative utility, i.e. their utility level U(i,S) is higher than their critical level C(i,S). Total utilitarians with positive utilities and a critical level of 0 are an example of grateful people, because their positive relative utility acts as gratitude.

Negativists, for example negative utilitarians, have a negative relative utility. These negativists are complainers because their negative relative utilities act as complaints. The problem faced by negativists is that they risk non-existence even if their lives would have been worth living (i.e. their utilities would have been positive). Suppose we can choose between a situation where no-one exists and a situation where a complainer with a positive utility exists. The first situation would be better because it minimizes complaints (or maximizes relative utilities). To guarantee existence with a life worth living, the complainer could decide to lower his or her critical level such that his or her relative utility becomes positive and the complainer becomes a grateful person.

However, this risk of non-existence can also be avoided when the existence of negativists is coupled to the existence of positivists. Suppose in the second situation there are a lot of positivists next to the one negativist complainer. Their total gratitude can be bigger than the complaint of this one complainer, resulting in a positive value of the welfare function, i.e. higher than the welfare function of the first situation where no-one exists, which is zero. So the complainer can exist and may even be able to increase his or her critical level as long as the welfare function remains positive.
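
The coupling of positivists and negativists can be illustrated with hypothetical relative utilities. The welfare of an empty world is 0, so a complainer can exist whenever the total gratitude outweighs the total complaint:

```python
def welfare(relative_utilities):
    """Sum of relative utilities; an empty world has welfare 0."""
    return sum(relative_utilities)

# Hypothetical relative utilities: 100 positivists with gratitude +5 each,
# and one negativist with complaint -50.
populated_world = [5] * 100 + [-50]

# The populated world beats the empty world, so the complainer can exist.
assert welfare(populated_world) > welfare([])
print(welfare(populated_world))  # 450
```

The complainer could even raise his or her complaint up to (just under) 500 before non-existence becomes preferable.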

The more positivists (e.g. total utilitarians) there are, the more opportunities there will be for complainers to raise their critical levels, and the more other people might increase their critical levels and become complainers. And vice versa: the more people have high values for their critical levels, the more other people would become positivists by decreasing their critical levels. If potential future generations would raise their critical levels too high, it would lead to their non-existence because their complaints become too big. Other situations where they do not exist and have no complaints become preferential. Increasing critical levels means increasing the risk of non-existence, even if one could have a very happy life worth living. Hence, increasing the critical level dampens further population growth. Conversely, if a lot of future people would lower their critical levels, it can stimulate further population growth. But this also means that other future people could raise their own critical levels without risking non-existence.

Hence there is a kind of negative feedback mechanism. The average critical level can act like a thermostat. If the temperature becomes too low, the heating turns on. If the temperature becomes too high, the heating switches off to cool down the situation.


Existential risk and the far future

We can apply game theoretical considerations to the abovementioned dilemma that resulted in a sadistic repugnant conclusion for total utilitarians. The problem of existential risks and the far future gives us a more concrete example. Suppose we can choose between two options. In the first option (situation 1) we are very happy, but we create an existential risk (e.g. a rogue artificial superintelligence that blows up the planet) which means we are the last generation. (Suppose our deaths will be quick so we will not suffer.) There will be no future people, the future is empty of consciousness. In the second option, we avoid this existential risk and there will be many people in the near and far future who have lives worth living. However, choosing this second option, avoiding this existential risk, comes at a cost we have to bear. Our lives (of the current generation) can become very miserable in our attempts to avoid the existential risk (e.g. to stop the rogue artificial superintelligence). If our lives become too miserable, we would prefer situation 1. We (the current generation) can justify this by raising our critical levels so we are no longer total utilitarians. Raising our critical levels is a strategic counteracting measure.

However, this justification might not be sufficient when we consider the far future, i.e. when the total number of people in the future becomes very big when we choose the second option. If all those future generations would be total utilitarians, the collection of their preferences for a critical level of 0 trumps our preferences for a higher critical level. Their gratitudes (positive relative utilities) trump our complaints (negative relative utilities). But in reality, we don’t know what future people would prefer as their critical level. It is unlikely that they will all be total utilitarians when a lot of people in the current, existing generation want to avoid the sadistic repugnant conclusion.

The preferences of critical levels of the current generation can serve as a best estimate for the preferences of critical levels of the future generations. As most people of the current generation find the sadistic repugnant conclusion of total utilitarianism highly counter-intuitive, we can expect that also a lot of possible future people share this moral intuition and have a preference for situation 1 where a smaller population of maximally happy people exist, even if this means they themselves do not exist. Imagine that our ancestors once faced an existential threat. Luckily for us, this threat was avoided and we have lives worth living. However, in avoiding this existential risk, our ancestors had to suffer a lot. Their lives became maximally miserable. If they would not have prevented extinction, their lives would have been very positive, but we would not exist. Even if my life is worth living (which means my utility U is positive), I would prefer the situation where we do not exist but our ancestors had very happy lives (which means my critical level C is higher than my utility U). So we can expect that also in the future there will be people who prefer to avoid the sadistic repugnant conclusion at the cost of non-existence. This is a kind of altruistic or cooperative choice, because one chooses what one thinks is best for the whole (past, present and future) population and not merely for oneself. If everyone cooperates and focuses on what is best for the whole population, the choice that is best for the whole population will be selected.

If the above applies, i.e. if future generations set their critical levels altruistically according to what they believe is best for the whole population and if those critical levels are the same as what we would altruistically choose, it means we do not have to make extreme efforts to avoid existential risks, making our lives very miserable. However, doing nothing against existential risks is not good either. Suppose preventing an existential risk would come at almost no cost for us. Our happiness decreases only a tiny bit when we make efforts to prevent extinction. If we go for our own maximum happiness, resulting in global extinction, we prevent the existence of future grateful people (e.g. total utilitarians) who would contribute a lot to the welfare function. Similarly, if our ancestors could prevent extinction at almost no cost, and if they decreased their happiness only a tiny bit (from a maximum happiness to an almost maximum happiness), I would no longer prefer the situation where we do not exist and our ancestors had maximum happiness. That means my critical level drops below my utility for the non-extinction situation in which I exist, I become a grateful person and my positive relative utility contributes to the welfare function.

The above indicates that variable critical level utilitarianism does not give absolute priority to preventing existential risks. It can give them a high priority, though, because the future might easily contain a lot of grateful people.


Conclusion and final remarks

Suppose we (i.e. currently existing people) face a population ethical dilemma where we can choose between different options that affect future populations. We have intuitions about what option is the best. This preference for the best option corresponds with an optimal distribution of critical levels that everyone should adopt, i.e.: if everyone (both present and future people) adopts those critical levels in our most preferred option, this most preferred option becomes the best option according to the variable critical level utilitarian theory. For example, if, when faced with the population ethical dilemma, 40% of the current generation adopt total utilitarianism with minimum critical level 0 and 60% adopt negative utilitarianism with maximum critical level (corresponding to the highest preference that a person can have), then these percentages give us the optimal distribution of critical levels in our dilemma.
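
As a toy illustration of such a mixed distribution (all numbers hypothetical), the welfare under a 40/60 split of total and negative utilitarians can be computed directly:

```python
def vcl_welfare(utilities, critical_levels):
    """Variable critical level welfare: sum of relative utilities U(i,S) - C(i,S)."""
    return sum(u - c for u, c in zip(utilities, critical_levels))

# 10 people, all with hypothetical utility 80 on a scale with maximum 100:
utilities = [80] * 10

# 40% adopt the minimum critical level 0 (total utilitarians),
# 60% adopt the maximum critical level 100 (negative utilitarians).
critical_levels = [0] * 4 + [100] * 6

print(vcl_welfare(utilities, critical_levels))  # 4*80 + 6*(80 - 100) = 200
```
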

Suppose we set our own critical levels always altruistically or cooperatively, i.e. according to what we think is best for the whole population when everyone cooperates. If future generations have preferences similar enough to ours, and if they also set their critical levels altruistically, they will set them accordingly such that the optimal choice according to variable critical level utilitarianism is the one we think is best. These are reasonable assumptions. If some people set their own critical levels egoistically, i.e. according to what they think is best for them and not for the whole population, then everyone else is allowed to set their critical levels egoistically and then we may all end up in a suboptimal state. In this way, we arrive at a population ethical theory that selects the best options that best fit our preferences (the options we believe are the best).


12 responses to Variable critical level utilitarianism as the solution to population ethics

  1. Zeke Sherman says:

    Utilitarianism with any nonzero critical level is counterintuitive because it entails that people are better off being killed even when they would desire to keep on living.

    Variable critical level utilitarianism is not really a moral theory at all – it is relativism, permitting a variety of views based on what people believe. You may as well ask “who are we to say that their views are wrong” for any other view of population ethics or any other moral issue. If there is no mind-independent fact of the matter as to how good a life should be before it has moral value, then why would there be any mind-independent facts of the moral value of anything else?

    Moreover, you haven’t avoided counterintuitive conclusions, since for instance a society of negative utilitarians would basically vote on implementing negative utilitarianism with a negative critical level, and this is still counterintuitive, and so on for any other counterintuitive implications of population ethics views. In a way, rather than avoid the counterintuitive implications of population ethics views, you’ve opened yourself up to all of them.

    • stijnbruers says:

      Thanks for the remarks.
      Critical level utilitarianism does not imply killing people with utilities below the critical level. Suppose my critical level is 100 utility points, my utility for the situation in which I live a long life is 90, and my utility for the situation in which I am killed now is 50. Killing me means I contribute 50-100=-50 utility points to the welfare function, whereas not killing me only contributes -10 utility points, which is less bad than killing me.

      Variable critical level utilitarianism is indeed a kind of relativism, in the sense that no-one can claim to possess the absolute standard for which population ethical theory or which critical level is the best. It respects autonomy and the preferences of individuals.
      About the question “who are we to say that their views are wrong” for any other moral issue: there is one absolute standard: we have to avoid unwanted arbitrariness (or in other words: for every choice you make, you have to be able to give a justifying rule of which you can consistently want that everyone follows that rule in all possible situations). This principle to avoid unwanted arbitrariness also means radical relativism in ethics is avoided. See
      So, if you may set your critical level at 100, everyone may do so. Can you consistently want that? If not, then you are not allowed to pick the level 100.

      About avoiding counterintuitive conclusions: you may think the choice of those negative utilitarians is counterintuitive, but to them it is not (or if it is, they are willing to bite the bullet and accept the counterintuitive conclusion). Who are you to judge that society of negative utilitarians? Who are you to say that your moral intuitions are more valid than theirs?

