Why I became a utilitarian

Abstract

In this article I explain how a specific utilitarian theory (variance normalized variable critical level rule preference utilitarianism, which says that we have to choose the situation that maximizes the sum of everyone’s variance normalized, self-determined relative utilities) avoids or solves many problems in moral philosophy (e.g. about personal identity and population ethics) and incorporates many moral values and theories (e.g. prioritarianism, justice, equality and deontological principles).

For an easier introduction, see ‘On the interpersonal comparison of well-being’.

Introduction

In the past, I developed a pluralistic ethical system combining several principles from a utilitarian-consequentialist ethic (dealing with the value of well-being), a deontological ethic (dealing with basic rights and the value of bodily autonomy) and an environmental ethic (dealing with the value of biodiversity). However, in recent years I shifted towards a utilitarian ethic, because of new insights I developed about utility functions. These utility functions turned out to be more important and useful than I expected.

John von Neumann and Oskar Morgenstern proved that, under certain assumptions about rationality, every individual has a utility function that measures the preferences of that individual. This utility function assigns a real number to every option (every possible situation or choice that we can make); the higher this number, the stronger the preference for the corresponding option. An individual i in situation S has a preference or utility for situation T given by the utility function Ui(S,T). The utility for the actual situation S is Ui(S,S): it measures how strongly an individual in situation S prefers situation S. The total preference or utility Utotal(S) of the whole population for situation S is the sum over all individuals of their utilities Ui(S,S). A utilitarian ethic says that we should choose the situation that maximizes the total utility of the population (including all present and future sentient beings).
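To make this notation concrete, here is a minimal sketch in Python with invented individuals, situations and utility values; nothing in it is part of the theory itself, it only illustrates the bookkeeping.

```python
# Minimal sketch of the notation above, with made-up numbers.
# U[i][S][T] is the utility U_i(S,T) that individual i, who lives in
# situation S, assigns to situation T.

U = {
    "alice": {"S": {"S": 8.0, "T": 5.0}, "T": {"S": 7.0, "T": 6.0}},
    "bob":   {"S": {"S": 3.0, "T": 9.0}, "T": {"S": 2.0, "T": 9.5}},
}

def total_utility(situation):
    """U_total(S): the sum over all individuals i of U_i(S,S)."""
    return sum(U[i][situation][situation] for i in U)

# The utilitarian choice: pick the situation with the highest total utility.
best = max(["S", "T"], key=total_utility)
print(best, total_utility("S"), total_utility("T"))  # T 11.0 15.5
```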

Things I’ve learned

A first thing I’ve learned about utility functions is that they relate to individuals who have preferences, i.e. sentient beings. Sentient beings have preferences and value things such as their own well-being. They prefer a higher well-being, so the well-being of sentient beings becomes important. In contrast, non-sentient entities do not value anything. For example, we cannot violate the preferences of an insentient computer, no matter how we treat it, because computers do not have subjective preferences. If something matters morally, it should matter for at least one sentient being who values it. In other words, the utility function of a non-sentient entity becomes trivial (i.e. a constant). We automatically take the utilities of non-sentient entities fully into account, no matter what we choose. We can say that we have already maximized the utility functions of non-sentient entities, because there is no choice we can make that increases their utilities. We automatically respect the autonomy and preferences of all non-sentient entities. This means that those entities are not and cannot be discriminated against. Focusing on the preferences of sentient beings is not discriminatory.

Second, I learned that we should value autonomy. Individuals can autonomously decide their own preferences. I started to value autonomy because I learned about the philosophy of effective altruism. Altruism means helping others, doing things that other individuals want or prefer. To avoid egoism, egocentrism, paternalism or chauvinism, the preferences of other individuals are what matters. Altruistically speaking, we should let people decide for themselves what kinds of moral values they prefer. For example, in utilitarian ethics there is a discussion about what kind of quantity we should maximize. Should we maximize subjective experiences such as happiness, as in hedonic utilitarianism? Or maximize desire satisfaction, as in preference utilitarianism? Or is there a list of preferable things such as creativity and friendship, as in objective list utilitarianism? (See the three theories of well-being.) What about the different evaluations of the experiencing self (valuing moment-to-moment happiness or moment utility) versus the remembering self (valuing life satisfaction or remembered utility)? (See Daniel Kahneman’s work on well-being.) What about Robert Nozick’s experience machine that gives you maximum happiness in a virtual reality? What about deathbed promises whose non-compliance will never be experienced? My answer would be that people can decide for themselves what they value, what counts as well-being, how important promises are, and so on. For example, if they prefer only the momentary experienced happiness maximization of hedonic utilitarianism, then we should respect that. But in the end it is about their preferences, about what they want or prefer, so a preference utilitarianism is the most fundamental theory.

This respect for autonomy also means we can basically delete environmental ethics, because ecosystems themselves do not value anything. They do not care about values of an environmental ethic, such as naturalness, integrity or biodiversity. Ecosystems do not have autonomous preferences for naturalness or biodiversity, so we cannot violate the autonomy of ecosystems, even if we destroy nature. That means biodiversity becomes merely instrumentally important, i.e. only when it is useful in the sense that it contributes to the well-being of sentient beings.

Third, I learned that the utility function can be a non-linear function of well-being or happiness. Hence, a utility function does not necessarily equal well-being or happiness. Someone’s utility function can be a concave function of his or her well-being, i.e. with decreasing marginal utility of well-being: the more well-being that person has, the less an extra unit of well-being adds to the utility. If everyone has a concave utility function of well-being, this results in a prioritarian ethic that says that we should increase the well-being of all sentient beings alive in the present and the future, whereby improvements of the worst-off positions (the worst sufferers, the beings who have the worst lives and the lowest well-being) have a strong priority. We should improve the well-being of the worst-off, unless this drastically decreases the well-being of others. As a result, if we have to choose between two situations that have equal total amounts of well-being, the situation with the more equal distribution of well-being should be chosen. This counters an often-heard criticism of utilitarianism, that it doesn’t value justice or equality. People can decide for themselves how concave their utility function is, and if they all choose a very concave function, then this issue of justice or equality becomes very important. A utilitarian theory that respects autonomy does not state in advance how important equality is. The importance of equality or justice is a derived property, determined by the people.
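As a toy illustration (the square-root function and the numbers below are my own arbitrary choices, not something the theory prescribes): with a concave utility of well-being, two distributions with the same total well-being no longer score the same, and the more equal one wins.

```python
import math

# Concave (square-root) utility of well-being: decreasing marginal utility.
def utility(wellbeing):
    return math.sqrt(wellbeing)

equal   = [50, 50]  # two people with equal well-being
unequal = [90, 10]  # same total well-being (100), unequally distributed

print(sum(utility(w) for w in equal))    # ~14.14
print(sum(utility(w) for w in unequal))  # ~12.65 -> the equal distribution is preferred
```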

Fourth, I learned that a utility function is not necessarily only a function of an individual’s well-being. Other values, including moral values, can determine someone’s utility function. For example, someone who prefers a deontological ethic that values the basic right to bodily autonomy (the right not to be used against one’s will as merely a means to someone else’s ends) might have a very different utility function, one that can even be very negative in situations where someone’s basic right is violated. For example, I strongly care about animal rights, so I would prefer a world without animal exploitation in livestock farming, even if in that world my own well-being is lower or I do not even exist, above the actual world where I experience a happy, satisfying life.

In other words: if everyone’s utility function becomes very negative when basic rights are violated, we arrive at a deontological ethic. From the deontological right to bodily autonomy (which corresponds to the mere means principle that says that we should not use someone as merely a means), we can also derive other deontological principles, such as the tolerance for some levels of partiality and the difference between imperfect or less absolute positive duties (to help others) versus perfect or more absolute negative duties (not to harm others). (See the extended mere means principle in Born Free and Equal, chapter 6.6.)

As with a prioritarian or egalitarian ethic (that values justice and equality), a deontological ethic can be derived from a utilitarian ethic, if people value deontological rights and principles. If some people but not everyone values equality, justice or rights, we arrive at a hybrid theory that partially includes those values. But fundamentally it remains a kind of preference utilitarianism, because those values are all based on personal preferences. If only I value equality or justice, imposing my preference for equality on others violates the autonomy of others.

What about people who have discriminatory (e.g. racist or speciesist) values? Are racist judgments allowed in someone’s utility function? The answer is no, if we impose a fundamental restriction to avoid unwanted arbitrariness. The restriction says: if you make a choice, you are only allowed to make that choice if you can give a justifying rule of which you can consistently want that everyone follows that rule, in all possible situations. You can consistently want something only if it is compatible with a consistent set of the strongest things that you want. This restriction, which slightly resembles a Kantian categorical imperative or a golden rule, is probably the most fundamental and important principle in ethics. Without this restriction, not everyone can consistently want an unrestricted utilitarianism. With this restriction, all kinds of discrimination are excluded, because discrimination involves unwanted arbitrariness. This restriction means we have a kind of rule utilitarianism, because the restriction refers to the importance of following rules.

Fifth, I learned that a person is allowed to have a different utility function in each different situation and moment in time. For example, in situation S1 (or at time t1), individual i has a utility Ui(S1,T) for situation T. But in another situation S2 (or at time t2) that individual might have a different utility Ui(S2,T) for situation T. In fact, we do not need to know whether the individual in situation S1 and the individual in situation S2 are the same person; we could as well have written Uj(S2,T) for another individual j in situation S2. This avoids the problem of personal identity over time and across situations. Are you the same person as your alter ego ten years ago, if you have different preferences now? Are you the same person as your alter ego in a different possible world, where you would have different preferences due to different experiences and circumstances? We don’t need to know the answers to these questions. All that matters is the total utility of a situation, and this is the sum of everyone’s utility in that situation for that situation (i.e. Ui(S,S)) over all moments of time.

Sixth, I learned that someone’s utility function is uniquely determined only up to adding a constant and multiplying by a positive scalar, and that this offers elegant solutions to two problems of utilitarianism. As pointed out by John Harsanyi, John von Neumann and others, the total utility can be written as a sum of affine transformations of individual utilities: Utotal equals the sum over all individuals of aiUi+bi, where ai is a positive constant (scalar), Ui is the utility function of individual i and bi is a constant. The values of ai and bi are not determined. This seems to be very problematic, as the aggregation of everyone’s utility functions into a total utility of the whole population seems to become arbitrary.
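A toy calculation (made-up numbers, with the constants bi set aside until the discussion of critical levels below) shows why the undetermined scalars ai are problematic: an equally admissible rescaling of one individual’s utilities flips which situation maximizes the total.

```python
# Two individuals, two situations; U[i][T] stands for U_i(T).
U = {"alice": {"S": 1.0, "T": 0.0},
     "bob":   {"S": 0.0, "T": 2.0}}

def total(situation, a):
    # Weighted sum of a_i * U_i(situation); the b_i terms are left out here.
    return sum(a[i] * U[i][situation] for i in U)

print(total("S", {"alice": 1.0, "bob": 1.0}),
      total("T", {"alice": 1.0, "bob": 1.0}))  # 1.0 2.0 -> T wins
print(total("S", {"alice": 3.0, "bob": 1.0}),
      total("T", {"alice": 3.0, "bob": 1.0}))  # 3.0 2.0 -> S wins
```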

The problem with the scalar multiplication (selecting the values of ai) relates to the problem of interpersonal comparability of utility. How can we compare the happiness levels of different individuals? Is my painful experience equally painful or bad as yours? If I say that my utility for this situation is 10, does that correspond with a value of 10 for you? This problem of interpersonal comparability can be solved by variance normalization. This method goes as follows. Consider the preferences of an individual in situation S for all possible situations T. These preferences are the utilities Ui(S,T) for all possible T. Now we can calculate the variance Vi(S) of these utilities over all possible T. The standard deviation SDi(S) is the square root of this variance. The scalar values ai can now be set equal to 1/SDi(S). This means all utilities are normalized to a variance equal to 1. There are other possible normalizations, but variance normalization is in some way special. For example, Owen Cotton-Barratt proved that under certain assumptions, variance normalization is the only weighted sum method that is immune to strategic voting.
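A rough sketch of this normalization step, assuming each individual’s utilities over the considered situations are given as a simple list and that all situations are weighted equally (that uniform weighting is my own simplification):

```python
import statistics

# Each individual's utilities U_i(S,T) over the considered situations T.
# Bob uses a much larger raw scale than Alice; the numbers are invented.
utilities = {
    "alice": [2.0, 4.0, 6.0],
    "bob":   [0.0, 50.0, 100.0],
}

def variance_normalized(us):
    sd = statistics.pstdev(us)       # SD_i(S): square root of the variance
    return [u / sd for u in us]      # multiply by a_i = 1 / SD_i(S)

for name, us in utilities.items():
    print(name, [round(u, 2) for u in variance_normalized(us)])
# alice [1.22, 2.45, 3.67]
# bob   [0.0, 1.22, 2.45]
# After rescaling, both lists have variance 1, so Bob's larger raw scale
# no longer dominates the sum.
```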

What about the undefined parameters bi? They offer an interesting solution to the problems in population ethics. If everyone has the same level bi = -c, we arrive at critical level utilitarianism, where c is a critical level of utility. This theory says that someone’s utility contributes positively to the total utility of the population only when it is higher than this critical level. But to respect autonomy, everyone can determine his or her own parameter bi, i.e. his or her own critical level. This theory is called variable critical level utilitarianism.

The reason this solves many problems in population ethics is that the parameters bi depend on the situation. The total utility Utotal(S) of the population for situation S can be written as the sum of ai(S)Ui(S,S)+bi(S) over all individuals. If an individual exists in situation S, the parameter bi(S) can be non-zero. But if this individual does not exist in situation T, the parameter bi(T) (and of course also the utility Ui(T,T)) equals 0.
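Putting the pieces together, here is a small sketch of how one situation would be scored under such a variable critical level rule; the population, the (already normalized) utilities and the self-chosen critical levels are invented for illustration.

```python
# Score a situation S under a variable critical level rule.
# b_i(S) = -c_i if individual i exists in S; non-existing individuals
# contribute nothing (their b_i and utility are 0 in that situation).

people = [
    # (exists in S, normalized utility U_i(S,S), self-chosen critical level c_i)
    (True,  5.0, 2.0),   # contributes 5 - 2 = +3
    (True,  1.0, 3.0),   # utility below own critical level: contributes -2
    (False, 0.0, 4.0),   # does not exist in S: contributes 0
]

def total_relative_utility(population):
    return sum(u - c for exists, u, c in population if exists)

print(total_relative_utility(people))  # 1.0
```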

The parameters bi always lie within a range between a lowest and a highest safe value. No-one would prefer a negative critical level. Suppose someone took a negative critical level c = -bi = -10. The contribution to the total utility is the relative utility Ui(S,S) - c = Ui(S,S) + 10 (relative with respect to the critical level). This relative utility can be positive even if Ui(S,S) is negative, as long as it is higher than -10. So a person with a negative utility of -4 would still contribute positively (+6) to the total utility. Therefore, a critical level of 0 is the lowest safe value.

Similarly, if someone took a very high critical level c (much higher than Ui(S,S)), the relative utility Ui(S,S) - c is negative and hence the contribution to the total utility becomes negative, even if Ui(S,S) is positive and very high. Then it would have been better if that person did not exist, even though that person has a positive utility Ui(S,S). In other words: if everyone took a very high critical level, we should stop procreating, because adding new people would negatively contribute to the total utility. Of course, if we have a preference for procreation and we cannot procreate, our utility for the non-procreation situation is lower than for the procreation situation. We have to compare this decrease of our utility from non-procreation with the negative relative utilities of the potential future people. The maximum safe critical level that still guarantees procreation and the existence of future people is therefore determined by our preference for procreation. Respecting the autonomy of (future) people, everyone can choose his or her own maximum safe critical level. Choosing a higher critical level becomes dangerous, because one risks making a contribution to the total utility that is so negative that a less preferred situation (e.g. a situation where one does not even exist) should be chosen.
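To put numbers on this reasoning (the utility loss from not procreating and the future person’s utility below are pure assumptions for illustration): the procreation situation remains preferred as long as the future person’s critical level does not exceed his or her utility plus our utility loss from non-procreation.

```python
# Illustrative comparison of procreation vs. non-procreation.
parents_loss_if_no_child = 2.0   # drop in the existing people's utility without the child
child_utility = 6.0              # the future person's utility if he or she exists

def procreation_preferred(child_critical_level):
    with_child    = child_utility - child_critical_level  # child's relative utility
    without_child = -parents_loss_if_no_child             # only the parents' loss counts
    return with_child > without_child

for c in [0.0, 6.0, 8.0, 10.0]:
    print(c, procreation_preferred(c))
# True for levels below 8 (= 6 + 2); at 8 the situations tie, and above 8
# the non-procreation situation scores higher.
```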

In population ethics there are several theories, and we face the problem of which theory to choose. Our approach avoids this problem, because people can decide for themselves their own preferred critical levels. We can choose our own critical levels somewhere between the lowest safe value and the highest safe value. If we all choose the lowest safe value, we arrive at total utilitarianism in population ethics. This means we all accept the very repugnant conclusion, where a situation of maximally miserable people (with very negative utilities) plus a huge population of extra people with slightly positive utilities (e.g. lives barely worth living) is preferred over a situation where the miserable people become maximally happy and the extra huge population does not exist. If we all choose the same critical level between the lowest and highest safe values, we arrive at critical level utilitarianism. If we all choose the highest safe value, we arrive at a kind of negative utilitarianism, which also comes close to person-affecting utilitarianism and antifrustrationism (see e.g. Arrhenius’ Future Generations dissertation). In reality, different people might prefer different critical levels, so we arrive at a hybrid theory which I call variable critical level utilitarianism.

What moral philosophers have to do now

If our variance normalized variable critical level rule preference utilitarianism solves and avoids many problems in moral philosophy, what is left to do for moral philosophers (and moral psychologists)? Here are some suggestions.

  • Help people construct rational utility functions. In particular, help people clarify their own moral values: how important are values like rights, justice or equality to them? What kind of well-being do they value? What other values do they have and how can we accurately define them? People’s preferences are not always consistent or clear. For example, sometimes they have incomplete preferences (that A is neither preferred nor dispreferred nor equal to B) or intransitive preferences (that A is better than B, which is better than C, which is better than A).
  • Find out the moral preferences (utility functions) of people. For example, how many people choose the maximum safe critical level? How many people value deontological rights and how strongly do they value them?
  • Study the flexibility of utility functions. How easy is it to change someone’s utility function? For example, can you make someone prefer another critical level or another definition of well-being?
  • Estimate the utility functions of sentient beings (e.g. babies, mentally disabled humans, non-human animals), who are not able to clearly express their preferences.
  • Find out what we have to do when we cannot reliably estimate someone’s utilities. E.g. what about insects and fetuses?
  • Find out easy but reliable methods to aggregate everyone’s relative utilities. Calculating and adding up everyone’s variance normalized relative utilities for all possible situations, including the far future, might be far too data intensive, so we need easy rules of thumb. Compare it with physics, in particular thermodynamics and the statistical mechanics of many-particle systems, which avoid the numerous complex interactions and properties of all the particles at the microstate level. We need a thermodynamics of moral philosophy.

Summary

In summary, we see that a utilitarian theory that maximally respects autonomy – in particular one where everyone can determine his or her own utility function – solves many problems in ethics. First, if we take a version of preference utilitarianism, we avoid discussions about what is valuable, what counts as well-being and what people should value. As the utilities in preference utilitarianism are not necessarily linear functions of only well-being, we can take into account preferences for justice, equality and deontological rights. So if people have preferences for such moral theories and values, we can derive a prioritarian ethic as well as deontological principles (the mere means principle, the difference between perfect and imperfect duties, the difference between positive and negative rights). We can also avoid problems related to personal identity through time and situations. Second, if we take a version of rule utilitarianism, we avoid immoral unwanted arbitrariness (e.g. a preference for situations involving discrimination) in our utility functions. Third, if we take variable critical level utilitarianism, we avoid many discussions in population ethics. Fourth, if we take variance normalized utilitarianism, we solve the problem of interpersonal comparability of utility.

 
