Why I became a utilitarian

Abstract

In this article I explain how a specific utilitarian theory (variance normalized variable critical level rule preference utilitarianism, which says we have to choose the situation that maximizes the sum of everyone’s variance normalized, self-determined relative utilities) avoids or solves many problems in moral philosophy (e.g. about personal identity and population ethics) and incorporates many moral values and theories (e.g. prioritarianism, justice, equality and deontological principles).

For an easier introduction, see ‘On the interpersonal comparison of well-being’.

Introduction

In the past, I developed a pluralistic ethical system combining several principles from a utilitarian-consequentialist ethic (dealing with the value of well-being), a deontological ethic (dealing with basic rights and the value of bodily autonomy) and an environmental ethic (dealing with the value of biodiversity). However, in recent years, I shifted towards a utilitarian ethic because of new insights I developed about utility functions. These utility functions are more important and useful than I expected.

John von Neumann and Oskar Morgenstern proved that under certain assumptions about rationality, the preferences of every individual can be represented by a utility function. This utility function assigns a real number to every option (every possible situation or choice that we can make). The higher this number, the stronger the preference for the corresponding option. An individual i in situation S has a preference or utility for situation T given by the utility function Ui(S,T). The utility for the actual situation S is Ui(S,S); this measures how strongly an individual in situation S prefers situation S. The total preference or utility Utotal(S) of the whole population for situation S is the sum over all individuals of their utilities for situation S. A utilitarian ethic says that we should choose the situation that maximizes the total utility of the population (including all present and future sentient beings).
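To make the notation concrete, here is a minimal sketch in Python (with hypothetical individuals, situations and numbers; it ignores the normalization and critical levels discussed later) of how these utility functions and the total utility could be represented:

```python
# Hypothetical example: utility[i][S][T] stands for U_i(S,T), i.e. how strongly
# individual i, who is in situation S, prefers situation T.
utility = {
    "alice": {"S": {"S": 7.0, "T": 4.0},   # Alice in S prefers S over T
              "T": {"S": 6.0, "T": 5.0}},
    "bob":   {"S": {"S": 2.0, "T": 9.0},   # Bob in S strongly prefers T
              "T": {"S": 1.0, "T": 8.0}},
}

def total_utility(situation, utility):
    """Sum over all individuals of U_i(situation, situation)."""
    return sum(u[situation][situation] for u in utility.values())

# A (naive, unweighted) utilitarian would choose the situation with the
# highest total utility:
best = max(["S", "T"], key=lambda s: total_utility(s, utility))
print(best, total_utility("S", utility), total_utility("T", utility))  # T 9.0 13.0
```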

Things I’ve learned

A first thing I’ve learned about utility functions is that they relate to individuals who have preferences, i.e. sentient beings. Sentient beings have preferences and value things such as their own well-being. They prefer a higher well-being, so the well-being of sentient beings becomes important. In contrast, non-sentient entities do not value anything. For example, we cannot violate the preferences of an insentient computer, no matter how we treat it, because computers do not have subjective preferences. If something matters morally, it should matter for at least one sentient being who values it. In other words, the utility function of a non-sentient entity is trivial (i.e. a constant). We automatically take the utilities of non-sentient entities fully into account, no matter what we choose. We can say that we have already maximized the utility functions of non-sentient entities, because there is no choice we can make that increases their utilities. We automatically respect the autonomy and preferences of all non-sentient entities. This means that those entities are not and cannot be discriminated against. Focusing on the preferences of sentient beings is not discriminatory.

Second, I learned that we should value autonomy. Individuals can autonomously decide their own preferences. I started to value autonomy because I learned about the philosophy of effective altruism. Altruism means helping others, doing things that other individuals want or prefer. To avoid egoism, egocentrism, paternalism or chauvinism, the preferences of other individuals are what matters. Altruistically speaking, we should let people decide for themselves what kinds of moral values they prefer. For example, in utilitarian ethics there is a discussion about what kind of quantity we should maximize. Should we maximize subjective experiences such as happiness, as in hedonic utilitarianism? Or maximize desire satisfaction, as in preference utilitarianism? Or is there a list of preferable things such as creativity and friendship, as in objective list utilitarianism? (See the three theories of well-being.) What about the different evaluations of the experiencing self (valuing moment-to-moment happiness or moment utility) versus the remembering self (valuing life satisfaction or remembered utility)? (See Daniel Kahneman’s work on well-being.) What about Robert Nozick’s experience machine that gives you maximum happiness in a virtual reality? What about deathbed promises whose non-compliance will never be experienced? My answer would be that people can decide for themselves what they value, what counts as well-being, how important promises are, and so on. For example, if they only value the maximization of momentary experienced happiness, as in hedonic utilitarianism, then we should respect that. But in the end it is about their preferences, about what they want or prefer, so preference utilitarianism is the most fundamental theory.

This respect for autonomy also means we can basically delete environmental ethics, because ecosystems themselves do not value anything. They do not care about values of an environmental ethic, such as naturalness, integrity or biodiversity. Ecosystems do not have autonomous preferences for naturalness or biodiversity, so we cannot violate the autonomy of ecosystems, even if we destroy nature. That means biodiversity becomes merely instrumentally important, i.e. only when it is useful in the sense that it contributes to the well-being of sentient beings.

Third, I learned that the utility function can be a non-linear function of well-being or happiness. Hence, a utility function does not necessarily equal well-being or happiness. Someone’s utility function can be a concave function of his or her well-being, i.e. with decreasing marginal utility of well-being: the more well-being that person has, the less an extra unit of well-being adds to the utility. If everyone has a concave utility function of well-being, this results in a prioritarian ethic, which says that we should increase the well-being of all sentient beings alive in the present and the future, whereby improvements of the worst-off positions (the worst sufferers, the beings who have the worst lives and the lowest well-being) have a strong priority. We should improve the well-being of the worst-off, unless this drastically decreases the well-being of others. As a result, if we have to choose between two situations that have equal total amounts of well-being, the situation with the more equal distribution of well-being should be chosen. This counters an often heard criticism of utilitarianism, that it does not value justice or equality. People can decide for themselves how concave their utility function is, and if they all choose a very concave function, then justice or equality becomes very important. A utilitarian theory that respects autonomy does not state in advance how important equality is. The importance of equality or justice is a derived property, determined by the people.
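As a toy illustration (the square-root function below is just one assumed example of a concave utility function; people could choose any concave shape), compare two distributions with the same total well-being:

```python
import math

def concave_utility(wellbeing):
    """A hypothetical concave utility of well-being (decreasing marginal utility)."""
    return math.sqrt(wellbeing)

# Two situations with the same total well-being (10 units), distributed differently:
equal   = [5, 5]   # both individuals at well-being 5
unequal = [9, 1]   # one well-off individual, one badly-off individual

total_equal   = sum(concave_utility(w) for w in equal)    # ~4.47
total_unequal = sum(concave_utility(w) for w in unequal)  # = 4.0

# With concave utilities, the equal distribution yields the higher total utility,
# so gains for the worst-off count for more: a prioritarian/egalitarian result.
print(total_equal > total_unequal)  # True
```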

Fourth, I learned that a utility function is not necessarily a function of an individual’s well-being alone. Other values, including moral values, can determine someone’s utility function. For example, someone who prefers a deontological ethic that values the basic right to bodily autonomy (the right not to be used against one’s will as merely a means to someone else’s ends) might have a very different utility function, one that can even become very negative in situations where someone’s basic right is violated. For example, I strongly care about animal rights, so I would prefer a world without animal exploitation in livestock farming, even if my own well-being were lower in that world or even if I did not exist in it, over the actual world in which I experience a happy, satisfying life.

In other words: if everyone’s utility function becomes very negative when basic rights are violated, we arrive at a deontological ethic. From the deontological right to bodily autonomy (which corresponds to the mere means principle that says that we should not use someone as merely a means), we can also derive other deontological principles, such as the tolerance of some levels of partiality and the difference between imperfect or less absolute positive duties (to help others) versus perfect or more absolute negative duties (not to harm others). (See the extended mere means principle in Born Free and Equal, chapter 6.6.)

As with a prioritarian or egalitarian ethic (which values justice and equality), a deontological ethic can be derived from a utilitarian ethic, if people value deontological rights and principles. If some people, but not everyone, value equality, justice or rights, we arrive at a hybrid theory that partially includes those values. But fundamentally it remains a kind of preference utilitarianism, because those values are all based on personal preferences. If only I value equality or justice, imposing my preference for equality on others would violate their autonomy.

What about people who have discriminatory (e.g. racist or speciesist) values? Are racist judgments allowed in someone’s utility function? The answer is no, if we impose a fundamental restriction to avoid unwanted arbitrariness. The restriction says: if you make a choice, you are only allowed to make that choice if you can give a justifying rule of which you can consistently want that everyone follows that rule, in all possible situations. You can consistently want something only if it is compatible with a consistent set of the strongest things that you want. This restriction, which slightly resembles a Kantian categorical imperative or a golden rule, is probably the most fundamental and important principle in ethics. Without this restriction, not everyone can consistently want an unrestricted utilitarianism. With this restriction, all kinds of discrimination are excluded, because discrimination involves unwanted arbitrariness. This restriction means we have a kind of rule utilitarianism, because the restriction refers to the importance of following rules.

Fifth, I learned that a person is allowed to have a different utility function in each situation and at each moment in time. For example, in situation S1 (or at time t1), individual i has a utility Ui(S1,T) for situation T. But in another situation S2 (or at time t2) that individual might have a different utility Ui(S2,T) for situation T. In fact, we do not even need to know whether those two individuals are the same person. We could just as well have written Uj(S2,T) for another individual j in situation S2. This avoids the problem of personal identity over time and across situations. Are you the same person as your alter ego ten years ago, if you have different preferences now? Are you the same person as your alter ego in a different possible world, where you would have different preferences due to different experiences and circumstances? We don’t need to know the answers to these questions. All that matters is the total utility of a situation, and this is the sum of everyone’s utility in that situation for that situation (i.e. Ui(S,S)) over all moments of time.

Sixth, I learned that someone’s utility function is uniquely determined up to adding a constant and multiplying by a positive scalar, and that this offers elegant solutions to two problems of utilitarianism. As pointed out by John Harsanyi, John von Neumann and others, the total utility can be written as the sum of affine transformations of individual utilities: Utotal equals the sum over all individuals of aiUi+bi, where ai is a positive constant (scalar), Ui is the utility function of individual i and bi is a constant.  The values of ai and bi are not determined. This seems to be very problematic, as the aggregation of everyone’s utility function into a total utility of the whole population seems to become arbitrary.
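Written out as a display formula, using the notation introduced above (the sum runs over all individuals in the population):

```latex
U_{\text{total}}(S) \;=\; \sum_{i} \bigl( a_i \, U_i(S,S) + b_i \bigr), \qquad a_i > 0 .
```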

The problem with the scalar multiplication (selecting the values of ai) relates to the problem of interpersonal comparability of utility. How can we compare the happiness levels of different individuals? Is my painful experience as painful or bad as yours? If I say that my utility for this situation is 10, does that correspond with a value of 10 for you? This problem of interpersonal comparability can be solved by variance normalization. This method goes as follows. Consider the preferences of an individual in situation S for all possible situations T. These preferences are the utilities Ui(S,T) for all possible T. Now we can calculate the variance Vi(S) of these utilities over all possible T. The standard deviation SDi(S) is the square root of this variance. The scalar values ai can now be set equal to 1/SDi(S). This means all utilities are normalized to a variance equal to 1. There are other possible normalizations, but variance normalization is in some way special. For example, Owen Cotton-Barratt proved that under certain assumptions, variance normalization is the only weighted sum method that is immune to strategic voting.
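A minimal sketch of the idea, with hypothetical numbers and a deliberately tiny set of possible situations (in reality the set of situations T is huge): each individual’s utilities are divided by their standard deviation, so ai = 1/SDi(S).

```python
import statistics

def variance_normalize(utilities_over_T):
    """Rescale one individual's utilities U_i(S,.) so that their variance equals 1.

    utilities_over_T: the list of U_i(S,T) over all possible situations T.
    """
    sd = statistics.pstdev(utilities_over_T)  # the standard deviation SD_i(S)
    return [u / sd for u in utilities_over_T]

# Two individuals who use the utility scale differently: Bob's raw numbers
# are ten times Alice's, although their preference orderings are identical.
alice = [1.0, 2.0, 3.0]
bob   = [10.0, 20.0, 30.0]

# After normalization their utilities become directly comparable:
print(variance_normalize(alice))  # [1.22..., 2.45..., 3.67...]
print(variance_normalize(bob))    # the same values as Alice's
```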

What about the undefined parameters bi? They offer an interesting solution to the problems in population ethics. If everyone has the same level bi = -c, we arrive at critical level utilitarianism, where c is a critical level of utility. This theory says that someone’s utility contributes positively to the total utility of the population only when it is higher than this critical level. But to respect autonomy, everyone can determine his or her own parameter bi, i.e. his or her own critical level. This theory is called variable critical level utilitarianism.

The reason this solves many problems in population ethics is that the parameters bi depend on the situation. The total utility Utotal(S) of the population for situation S can be written as the sum of ai(S)Ui(S,S)+bi(S) over all individuals. If an individual exists in situation S, the parameter bi(S) can be non-zero. But if this individual does not exist in situation T, the parameter bi(T) (and of course also the utility Ui(T,T)) equals 0.
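A minimal sketch (with hypothetical individuals, utilities and critical levels, and with the variance normalization above omitted for brevity): each individual who exists in a situation contributes the relative utility Ui(S,S) minus his or her self-chosen critical level, and a non-existing individual contributes nothing.

```python
def total_relative_utility(situation, population):
    """Sum of U_i(S,S) - c_i over the individuals who exist in the situation."""
    total = 0.0
    for person in population:
        if situation in person["exists_in"]:
            total += person["utility"][situation] - person["critical_level"]
        # A non-existing individual contributes 0: b_i(S) = 0 and U_i(S,S) = 0.
    return total

population = [
    {"name": "alice", "exists_in": {"A", "B"}, "critical_level": 2.0,
     "utility": {"A": 6.0, "B": 5.0}},
    # Carol only exists in situation B, with a barely positive utility:
    {"name": "carol", "exists_in": {"B"}, "critical_level": 2.0,
     "utility": {"B": 1.0}},
]

# In B, Carol's utility (1.0) lies below her critical level (2.0), so her
# existence contributes negatively and situation A comes out better:
print(total_relative_utility("A", population))  # 6.0 - 2.0 = 4.0
print(total_relative_utility("B", population))  # (5.0 - 2.0) + (1.0 - 2.0) = 2.0
```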

The critical levels (and hence the parameters bi) should always lie within a range between a lowest and a highest safe value. No-one should prefer a negative critical level. Suppose someone took a negative critical level c = -bi = -10. The contribution to the total utility is the relative utility Ui(S,S) - c = Ui(S,S) + 10 (relative with respect to the critical level). This relative utility can be positive even if Ui(S,S) is negative, as long as it is higher than -10. So a person with a negative utility of -4 would still contribute positively to the total utility, which means that adding such a miserable person would seem to make the situation better. Therefore, a critical level of 0 is the lowest safe value.

Similarly, if someone took a very high critical level c (much higher than Ui(S,S)), the relative utility Ui(S,S)-c is negative and hence the contribution to the total utility becomes negative, even if Ui(S,S) is positive and very high. Then it would have been better if that person had not existed, even though that person has a positive utility Ui(S,S). In other words: if everyone took a very high critical level, we should stop procreating, because adding new people would contribute negatively to the total utility. Of course, if we have a preference for procreation and we cannot procreate, our utility for the non-procreation situation is lower than for the procreation situation. We have to compare this decrease of our utility from non-procreation with the negative relative utilities of the potential future people. The maximum safe critical level that still guarantees procreation and the existence of future people is therefore determined by our preference for procreation. Respecting the autonomy of (future) people, everyone can choose his or her own maximum safe critical level. Choosing a higher critical level becomes dangerous, because one risks contributing so negatively to the total utility that a less preferred situation (e.g. a situation in which one does not even exist) should be chosen.

In population ethics there are several theories, and we face the problem of which theory to choose. Our approach avoids this problem, because people can decide their own preferred critical levels for themselves. We can choose our own critical levels somewhere between the lowest safe value and the highest safe value. If we all prefer the lowest safe value, we arrive at total utilitarianism in population ethics. This means we all accept the very repugnant conclusion, where a situation of maximally miserable people (with very negative utilities) plus a huge population of extra people with slightly positive utilities (e.g. lives barely worth living) is preferred over a situation where the miserable people become maximally happy and the extra huge population does not exist. If we all take the same critical level between the lowest and highest safe values, we arrive at critical level utilitarianism. If we all take the highest safe value, we arrive at a kind of negative utilitarianism, which also comes close to person-affecting utilitarianism and antifrustrationism (see e.g. Arrhenius’ Future Generations dissertation). In reality, different people might prefer different critical levels, so we arrive at a hybrid theory which I call variable critical level utilitarianism.

What moral philosophers have to do now

If our variance normalized variable critical level rule preference utilitarianism solves and avoids many problems in moral philosophy, what is left to do for moral philosophers (and moral psychologists)? Here are some suggestions.

  • Help people construct rational utility functions. In particular, help people clarify their own moral values: how important are values like rights, justice or equality to them? What kind of well-being do they value? What other values do they have and how can we accurately define them? People’s preferences are not always consistent or clear. For example, sometimes they have incomplete preferences (that A is neither preferred nor dispreferred nor equal to B) or intransitive preferences (that A is better than B, which is better than C, which is better than A).
  • Find out the moral preferences (utility functions) of people. For example, how many people choose the maximum safe critical level? How many people value deontological rights and how strongly do they value them?
  • Study the flexibility of utility functions. How easy is it to change someone’s utility function? For example, can you make someone prefer another critical level or another definition of well-being?
  • Estimate the utility functions of sentient beings (e.g. babies, mentally disabled humans, non-human animals), who are not able to clearly express their preferences.
  • Find out what we have to do when we cannot reliably estimate someone’s utilities. E.g. what about insects and fetuses?
  • Find out easy but reliable methods to aggregate everyone’s relative utilities. Calculating and adding up everyone’s variance normalized relative utilities for all possible situations, including the far future, might be far too data intensive, so we need easy rules of thumb. Compare it with physics, in particular the study of thermodynamics and the statistical mechanics of many-particle systems, avoiding the numerous complex interactions and properties of all the particles at the microstate level. We need a thermodynamics of moral philosophy.

Summary

In summary, we see that a utilitarian theory that maximally respects autonomy (in particular, one where everyone can determine his or her own utility function) solves many problems in ethics. First, if we take a version of preference utilitarianism, we avoid discussions about what is valuable, what counts as well-being and what people should value. As the utilities in preference utilitarianism are not necessarily linear functions of well-being alone, we can take into account preferences for justice, equality and deontological rights. So if people have preferences for such moral theories and values, we can derive a prioritarian ethic as well as deontological principles (the mere means principle, the difference between perfect and imperfect duties, the difference between positive and negative rights). We can also avoid problems related to personal identity through time and across situations. Second, if we take a version of rule utilitarianism, we avoid immoral unwanted arbitrariness (e.g. a preference for situations involving discrimination) in our utility functions. Third, if we take variable critical level utilitarianism, we avoid many discussions in population ethics. Fourth, if we take variance normalized utilitarianism, we solve the problem of interpersonal comparability of utility. So we end up with a variance normalized variable critical level rule preference utilitarianism.

 


5 comments on Why I became a utilitarian

  1. Pingback: On the interpersonal comparability of well-being | Stijn Bruers, the rational ethicist

  2. It seems to me that this assumes both 1) that we would all choose to assign a finite value to all conscious states/preferences, and 2) that the utility of positive experiences/preference satisfaction can counter-balance the utility of negative experiences/preference frustration.
    Both are assumptions I would strongly question:
    https://magnusvinding.com/2018/09/03/suffering-focused-ethics/
    https://magnusvinding.com/2018/09/03/the-principle-of-sympathy-for-intense-suffering/

• stijnbruers says:

      Thanks for your reaction.
      1) Allowing infinite values will be very problematic, something that no-one can consistently want. If the disvalue of the most extreme suffering was infinite, we could not say that two persons in most extreme suffering (or two days in most extreme suffering) is worse than one person (or one day) in worst extreme suffering. If you are allowed to have infinite utilities, then so am I, and adding or subtracting infinities is not well-defined, so we end up with an ‘anything goes’ ethic.
      2) I indeed believe that utility of positive experiences can counter-balance the utility of negative experiences. I’ve been on holiday, which was fantastic, but it cost me a few months of stressful, boring work to save enough money for the trip. I have paid for this one month trip, but I’d definitely not pay for being asleep or unconscious for one month. So the value of happiness during the trip was really positive (not merely an absence of suffering). Furthermore, I also consider the past year as positive overall, so the positive experiences trump the negative ones in this case. I believe one unit of positive utility counter-balances one unit of negative utility. I would say that the size of disvalue (negative utility) of the most horrible suffering that we can imagine might be much bigger than the size of value of the most terrific happiness that we can imagine. But someone else might value more extreme happiness higher than less extreme suffering, and who are we to say that that is wrong?
      3) Utility is not necessarily a linear function of happiness and suffering. Utility can be a concave function, resulting in a prioritarian utilitarianism, giving a priority to advancing the worst-off. Furthermore, we have to look at relative utilities, i.e. utilities relative to a reference value, such as in critical level utilitarianism where everyone has the same reference value equal to the critical level. I believe everyone can choose his or her own relative utility function, i.e. how utility is a function of happiness and suffering, and how high the reference value is. So I prefer variable critical level utilitarianism. If everyone chooses a high reference value (critical level), we end up with a kind of negative utilitarianism.
      4) It seems your ethic results in antinatalism, where no-one should be allowed to procreate, because the suffering of not being allowed to procreate seems to be finite, less than the potential extreme suffering of at least one future sentient being. I disagree with such antinatalism. Imagine someone faced a dilemma: either experiencing extreme suffering (like the examples you mentioned) and all newborn sentient beings will be happy (no more suffering in the future), or no extreme suffering for that person but infertility for all sentient beings (also no more future suffering). I can imagine someone prefers the extreme suffering and a continued existence of happy sentient beings (even if the suffering is so bad that one wishes to be dead or one would have preferred not to exist). For this person, extreme suffering can be trumped by positive utilities in the future. We should not say that this person has the wrong preferences or the wrong utility function, because that would violate autonomy to choose one’s own utility function.

• Magnus Vinding says:

        Thanks for your reply. 🙂

        “1) Allowing infinite values will be very problematic, something that no-one can consistently want. If the disvalue of the most extreme suffering was infinite, we could not say that two persons in most extreme suffering (or two days in most extreme suffering) is worse than one person (or one day) in worst extreme suffering.”

        This is not true. You can represent moments of extreme suffering along one axis that is orthogonal to other axes along which you may represent other, yet incommensurably less valuable things. In this way, there is nothing problematic or difficult about saying that, say, two days of extreme suffering is worse than one. The subject of value lexicality, or superiority, is one that there is an entire literature on; for a nice introduction, as well as some elaboration on the point I just made, I recommend the following post: https://foundational-research.org/value-lexicality/

        “2) I indeed believe that utility of positive experiences can counter-balance the utility of negative experiences. I’ve been on holiday, which was fantastic, but it cost me a few months of stressful, boring work to save enough money for the trip. […] So the value of happiness during the trip was really positive (not merely an absence of suffering).”

        Playing the devil’s advocate: Might you have felt more regret afterwards if you had slept rather than gone on the trip (a trip which also gave you something to tell others about, which you would otherwise have missed out on)?

        Yet the more critical reply I would give to your overall claim relates to this:

        “I would say that the size of disvalue (negative utility) of the most horrible suffering that we can imagine might be much bigger than the size of value of the most terrific happiness that we can imagine. But someone else might value more extreme happiness higher than less extreme suffering, and who are we to say that that is wrong?”

        The important question I want to put forth is this: Can any amount of positive experiences of yours, or indeed of anyone, justify imposing extreme suffering (e.g. of the kind described in the second essay above) on someone else?

        More simply: would you disagree with David Pearce’s claim that “no amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz.”? Or the similar sentiment expressed here: https://www.youtube.com/watch?v=4OWl5nTctYI

        “3) Utility is not necessarily a linear function of happiness and suffering. Utility can be a concave function, resulting in a prioritarian utilitarianism, giving a priority to advancing the worst-off.”

        Yet still allowing the worst off to suffer a lot provided that others gain a sufficiently large amount of pleasure or happiness?

        “So I prefer variable critical level utilitarianism. If everyone chooses a high reference value (critical level), we end up with a kind of negative utilitarianism.”

        And it seems to me that someone can choose the opposite, and deem their own pleasure, if they get enough of it, worth the extreme suffering of others. That is what I would strongly resist, cf. the section “Creating Happiness at the Cost of Suffering Is Wrong” in the first essay linked above.

        “[4)] It seems your ethic results in antinatalism”

        In some ideal sense, yes, but not necessarily in practice, cf.

        https://www.smashwords.com/books/view/543094

        https://reducing-suffering.org/strategic-considerations-moral-antinatalists/

        “I can imagine someone prefers the extreme suffering and a continued existence of happy sentient beings (even if the suffering is so bad that one wishes to be dead or one would have preferred not to exist).”

        By definition, a state of suffering so extreme that one would not accept it at any price is not preferred or accepted *in the moment it is experienced*. And I am arguing that other moments’ preference for pleasure should not trump this, or indeed any other, given moment’s interest in not experiencing such extreme suffering. I am saying we should prioritize the evaluations of this experience moment the highest rather than the other experience moments, which essentially, to the extent they agree to imposing this suffering on the unfortunate experience moment, victimize this most unfortunate experience moment — against her/his consent — for the sake of their own pleasure.

        “For this person, extreme suffering can be trumped by positive utilities in the future. ”

        Again, not in the moment of horror. This view of “a person” cannot ultimately be maintained — or so I would argue, cf. the link below.

        “We should not say that this person has the wrong preferences or the wrong utility function, because that would violate autonomy to choose one’s own utility function.”

        I would argue that the most important autonomy to respect is other experience moments’ preference not to experience extreme suffering. And so I would argue it is actually the experience moments who impose this upon another experience moment who violate respect for autonomy in the most relevant sense. I think our common sense notion of respect for personal autonomy is highly instrumentally useful and worth adhering to, yet, ultimately, I think it breaks down upon a closer examination of the nature of personal identity (cf. https://www.smashwords.com/books/view/719903), and that becomes especially relevant in cases like the one we are discussing here.

        I have attempted to defend a view on normative ethics similar to Brian’s here:

        https://magnusvinding.com/2018/09/03/the-principle-of-sympathy-for-intense-suffering/

        (See also more general arguments for suffering-focused ethics here: https://magnusvinding.com/2018/09/03/suffering-focused-ethics/)

        I would question your claims about this being a matter of a value function being “hacked”. What is this underlying value function that is supposedly hacked? Is it “that which our faculty of reason would deem most valuable upon reflection”? If so, why would this value function not end up viewing the reduction of extreme suffering as being of the greatest value? In the case of my own faculty of reason, that is what it points to.

        I should also note that my view differs from Tomasik’s in that I have a realist view on meta-ethics (cf. https://www.smashwords.com/books/view/719903; in one sense, you may say it is also emotivist, in that it ultimately is all about the value of emotions, or experiences more broadly), and I agree with you that we should aspire to be guided by reason, i.e. that which seems most reasonable all things considered (cf. https://magnusvinding.com/2018/07/09/the-endeavor-of-reason/).

      • stijnbruers says:

        “You can represent moments of extreme suffering along one axis that is orthogonal to other axes along which you may represent other, yet incommensurably less valuable things. In this way, there is nothing problematic or difficult about saying that, say, two days of extreme suffering is worse than one. The subject of value lexicality, or superiority, is one that there is an entire literature on; for a nice introduction, as well as some elaboration on the point I just made, I recommend the following post: https://foundational-research.org/value-lexicality/”
        Ok, I agree. Or you could use transfinite ordinal numbers to measure the badness of the extreme suffering. This still allows for addition, indeed.
        Thinking more about it, I would consider levels of suffering as a continuum. You can start with one pin prick, then two,… all the way to extreme suffering. If this is the case, I expect a decreasing marginal badness of suffering. For example, we can easily detect the difference between 1 and 2 needles in our arm, but not between 100 and 101 needles. Adding more needles does not linearly increase the pain and suffering. Like all other senses and judgments (e.g. the decreasing marginal utility of money, the Weber-Fechner law of a logarithmic stimulus-perception relationship,…), I would expect that suffering also obeys such a law of decreasing marginal badness. That means adding more and more suffering would not allow us to move into a transfinite regime. To move beyond infinity, we would either need a discontinuous badness function or an increasing marginal badness of suffering (suffering on the horizontal axis, badness on the vertical axis, and a convex badness function with a vertical asymptote at a certain positive level of suffering). So I remain skeptical about this transfinite regime of extreme suffering. At least I think a continuous, non-lexical, non-transfinite badness function of suffering is not unreasonable.

        “Playing the devil’s advocate: Might you have felt more regret afterwards if you had slept rather than gone on the trip (a trip which also gave you something to tell others about, which you would otherwise have missed out on)?”
        I strongly prefer the trip. There are other examples: suppose tonight I could watch some nice movies instead of going to sleep. Suppose that if I watched the movies, I would not feel tired tomorrow; I would be as rested as if I had slept well all night. I would prefer to watch the movies and would consider sleep a waste of time. That means even watching a movie can be better than non-experience.

        “The important question I want to put forth is this: Can any amount of positive experiences of yours, or indeed of anyone, justify imposing extreme suffering (e.g. of the kind described in the second essay above) on someone else?”
        I can imagine that people would answer yes to this question, and I’m not able to give reasonable arguments why they would be wrong (even if I or you personally disagree with them). Why would positive (finite or transfinite) numbers always be weaker than negative numbers? Why would positive experiences always be weaker than negative ones? The symmetry of total utilitarianism is not less reasonable than the asymmetry of negative utilitarianism. Even if we prefer a kind of negative utilitarianism ourselves, who are we to say that the total utilitarians are wrong?

        “More simply: would you disagree with David Pearce’s claim that “no amount of happiness or fun enjoyed by some organisms can notionally justify the indescribable horrors of Auschwitz.”? Or the similar sentiment expressed here: https://www.youtube.com/watch?v=4OWl5nTctYI”
        Suppose you sacrificed all the positive experiences you ever had and will have, to spare Bonny from suffering from cancer. If that were good (a moral duty), we can move a step further: now that Bonny no longer has cancer, she could sacrifice all her experiences to spare someone else from suffering, and so on. In the end, we all sacrifice all our positive experiences, such that Bonny (and everyone else) does not have any negative or positive experiences left. It is as if Bonny were unconscious or permanently asleep. So would you sacrifice all your positive experiences in order to painlessly euthanize everyone? Again, a ‘no’ seems reasonable to me.

        “Yet still allowing the worst off to suffer a lot provided that others gain a sufficiently large amount of pleasure or happiness?”
        According to prioritarian utilitarianism, yes indeed.

        “And it seems to me that someone can choose the opposite, and deem their own pleasure, if they get enough of it, worth the extreme suffering of others.”
        If they would also deem the pleasure of others worth the extreme suffering of themselves, i.e. if they are willing to face that suffering for themselves, can we claim that they are wrong? Can we claim that they should always give priority to decreasing their own extreme suffering above the happiness of others? I think answering these questions with a ‘no’ is reasonable.

        “”[4)] It seems your ethic results in antinatalism” In some ideal sense, yes, but not necessarily in practice, cf. https://www.smashwords.com/books/view/543094”
        I think an unnuanced or naive negative utilitarianism does result in anti-natalism and in universal extinction. Imagine an anti-natalist engineer invents an AI-robot that is not able to feel extreme suffering (perhaps it can only experience joy) but is able to destroy whole planets and autonomously fly to other planets to destroy them. That engineer builds such a robot; the robot destroys the whole earth and then starts looking for other planets to destroy. Suppose the death of sentient beings by destruction of their home planets is not worse than the death that those sentient beings would have experienced anyway if the planets were not destroyed. Suppose it will take us a few more generations before we could invent another AI-robot that would be able to abolish all extreme suffering without destroying planets and life. And suppose in the next few generations there are at least some individuals who will experience extreme suffering. Most likely no-one (especially not the negative utilitarians) would feel extra extreme suffering at the idea that a robot will destroy all planets. I.e. we may dislike it, but we would not feel extreme suffering at the idea that no new life forms will be born. Now we can do the negative utilitarian calculation, and the result will be that the planet-destroying robot minimizes extreme suffering (waiting a few generations before we have the other robot would mean extreme suffering among at least some members of the next generations). It would become highly immoral (in fact the most immoral thing to do, if extreme suffering is infinitely bad) if we prevented that engineer from launching the AI-robot. Now imagine that some people (e.g. total utilitarians) say we should prevent this robot from destroying planets. Again, I think that moral judgment is not unreasonable, and we cannot give more reasonable arguments why they are wrong. Would you say that what that anti-natalist engineer does is in fact the best thing one could do, that he is a moral genius?

        “By definition, a state of suffering so extreme that one would not accept it at any price is not preferred or accepted *in the moment it is experienced*.”
        None of the examples of suffering that you gave necessarily implies this definition of extreme suffering. With ‘extreme suffering’, I meant the kinds of suffering mentioned in your concrete examples, which do not imply ‘not accepting it at any price’. I can imagine someone – a kind of very altruistic Jesus figure – who is willing to undergo such extreme suffering in order to improve the well-being of sufficiently many other people. So this Jesus figure was willing to accept his own extreme suffering for a price, namely the happiness of others.

        “I would argue that the most important autonomy to respect is other experience moments’ preference not to experience extreme suffering. And so I would argue it is actually the experience moments who impose this upon another experience moment who violate respect for autonomy in the most relevant sense.”
        What if you were the only negative utilitarian, and everyone else was say a total utilitarian? Would you argue that only negative utilitarians are allowed to impose their moral theory on others?

        “I would question your claims about this being a matter of a value function being “hacked”. What is this underlying value function that is supposedly hacked? Is it “that which our faculty of reason would deem most valuable upon reflection”? If so, why would this value function not end up viewing the reduction of extreme suffering as being of the greatest value? In the case of my own faculty of reason, that is what it points to.”
        This part I didn’t understand.
