My cause prioritization

Summary

The search for the most important causes is essential for an effective altruist who wants to do the most good. Here I present my cause prioritization, starting from a moral theory that deals with problems in population ethics (involving future generations). Given our current knowledge in welfare biology, I argue that healthy humans might have the best lives worth living. However, most humans consume animal products and therefore contribute to the existence of exploited animals with lives not worth living. As a result, promoting veganism, animal rights and antispeciesism is a first cause area with top priority. The moral theory of maximizing relative preferences also gives a strong priority to reducing catastrophic risks, in particular S-risks (suffering risks) where futures are created that contain huge numbers of sentient beings with lives not worth living. Artificial intelligence is unique in the sense that it can create the biggest S-risks. Therefore, a second cause area with top priority is AI-safety research, in particular solving the value alignment problem such that AI-machines have the correct values to avoid S-risks.

 

Population ethics

For effective altruists who want to do the most good, cause prioritization is crucial. To find the most important causes, we first have to deal with population ethics, the field that studies the goodness of moral choices when those choices determine who will exist in the future. This is important because we must also be concerned about people in the far future: people who do not yet exist, and who may never exist if we choose differently.

My starting point is the maximum self-determined relative preferences principle. Suppose we choose a specific situation S. In that situation, there are a number of individuals who exist or will exist in the future. Each of those individuals has his or her own relative preference: the preference for situation S relative to a reference preference that the individual has in situation S. This preference can be a complex function of everything the individual values, such as well-being or happiness. For example, if everyone's preference function is a concave function of well-being, we get a prioritarian theory that says we should improve everyone's well-being, giving priority to the worst-off people who have the lowest levels of well-being.

Suppose an individual exists in situation S and, in this situation, prefers situation S with a preference or utility equal to 100 utility units. This individual can also choose his or her own reference preference (hence the self-determination), which can be (but need not be) the preference for another situation, such as the most preferred situation M. That individual in situation S might have a preference for his or her most preferred situation equal to 1000 utility units. The self-determined relative preference of that individual in situation S then equals -900 utility units. In this case, the relative preference is negative, which means it measures a kind of complaint: in situation S, the individual complains with a strength of 900 utility units that situation M was not chosen. Similarly, the individual can choose as reference the empty situation E where no-one exists, for which he or she has a preference of 0 utility units. The relative preference then equals 100 utility units. This value is positive, which means it measures a kind of gratitude: in situation S, the individual is grateful with a strength of 100 utility units that situation S instead of E was chosen.
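To make the bookkeeping explicit, the example can be put in symbols (the notation u, r and R_i is my own shorthand, not part of the principle itself):

```latex
% Relative preference of individual i in the chosen situation S,
% given a self-determined reference situation R_i:
\[
  r_i(S) = u_i(S) - u_i(R_i)
\]
% Worked example from the text, with u(S) = 100, u(M) = 1000, u(E) = 0:
\begin{align*}
  \text{reference } M:\quad r(S) &= 100 - 1000 = -900 \quad \text{(a complaint)}\\
  \text{reference } E:\quad r(S) &= 100 - 0 = 100 \quad \text{(a gratitude)}
\end{align*}
```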

The maximum relative preferences principle states that we should choose the situation where the sum of everyone's relative preferences, measured in utility units, is maximal. This principle unifies different theories or views in population ethics. The reference preferences of most people gravitate towards two important possibilities (or combinations of them): 0 or a conditional maximum. All individuals can determine their own reference preferences, but what would the implications be if everyone chose the same reference preference?
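As a minimal sketch of this selection rule (the data structures and names below are my own, purely for illustration), one can compute the sum of relative preferences for each candidate situation and pick the maximum. The reference is modeled as a function of both the person and the situation under evaluation, which leaves room for the self-determined, possibly conditional references discussed below:

```python
# Sketch of the maximum relative preferences principle (illustrative only).
# prefs[person][situation] = that person's preference for the situation,
# in utility units. exists[situation] = the set of persons who exist or
# will exist in that situation.

def total_relative_preference(situation, prefs, exists, reference):
    """Sum of everyone's self-determined relative preferences in `situation`.

    `reference(person, situation)` returns the situation the person takes
    as his or her reference; it may depend on the situation under
    evaluation. A person who does not exist in the situation contributes
    0 (no complaint, no gratitude).
    """
    return sum(
        prefs[p][situation] - prefs[p][reference(p, situation)]
        for p in exists[situation]
    )

def best_situation(situations, prefs, exists, reference):
    """Choose the situation whose sum of relative preferences is maximal."""
    return max(
        situations,
        key=lambda s: total_relative_preference(s, prefs, exists, reference),
    )
```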

First, suppose everyone's reference preference is 0, which corresponds to the empty situation where no-one exists. In this case, the maximum relative preferences principle becomes a kind of sum utilitarianism, also known as the total view, where we simply maximize the total of preference satisfaction. Suppose that in situation 1 a number of maximally happy people exist, with a total gratitude of 1000 utility units. In situation 2, those people have maximally miserable lives, with a total complaint of -1000 utility units (which means their lives are not worth living), but there is a huge population of extra people, each with a small gratitude of 1 utility unit. If this second population is large enough, the total of relative preferences in situation 2 becomes higher than in situation 1, which means situation 2 should be selected. The sum of the small gratitudes of the extra people can trump the complaints of the miserable people. To some, this seems counter-intuitive: a kind of sadistic conclusion that prefers a situation where some people have miserable lives.
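The arithmetic is worth making explicit: with N extra people, each contributing a gratitude of 1 utility unit, situation 2 outranks situation 1 as soon as the extra population exceeds 2000 people:

```latex
\[
  \underbrace{-1000 + N \cdot 1}_{\text{total in situation 2}}
  \;>\;
  \underbrace{1000}_{\text{total in situation 1}}
  \quad\Longleftrightarrow\quad
  N > 2000 .
\]
```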

Second, suppose everyone's reference preference is their conditional maximum. That means everyone has a complaint, and our maximum relative preferences principle becomes a minimum complaint theory where we have to choose the situation that minimizes the total complaint. This is a kind of negative utilitarianism, also known as a person affecting view. It is closely related, but not identical, to suffering-focused ethics, antifrustrationism and critical level utilitarianism in population ethics. The important difference with those related theories lies in the conditionality of the maximum reference preference. Suppose you have a preference of 100 utility units for situation S and 1000 utility units for situation M, and I have the reverse preferences: 100 for M and 1000 for S. In both situations S and M, at least one person has a complaint. The only situation that minimizes complaints is the empty situation E where we don't exist. However, we both have positive preferences in situations S and M. So I can decide not to complain in situation M. It is as if in situation M, I take that situation as my reference preference, which means my self-determined relative preference becomes 0 instead of -900, and situation M can be chosen.
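Running this two-person example through the sketch above shows both readings (the numbers are the ones from the text; the helper names are hypothetical):

```python
# The two-person example: situations S and M, plus the empty situation E.
# Everyone has a preference of 0 for E, and no-one exists in E.
prefs = {
    "you": {"S": 100, "M": 1000, "E": 0},
    "me":  {"S": 1000, "M": 100, "E": 0},
}
exists = {"S": {"you", "me"}, "M": {"you", "me"}, "E": set()}
situations = ["S", "M", "E"]

def most_preferred(person, situation):
    # Unconditional maximum as reference: S and M each total -900,
    # while the empty situation E totals 0, so E gets selected.
    return max(prefs[person], key=lambda s: prefs[person][s])

print(best_situation(situations, prefs, exists, most_preferred))  # -> E

def conditional(person, situation):
    # Conditional reference: in situation M, "me" takes M itself as the
    # reference (deciding not to complain), so the relative preference
    # becomes 0 instead of -900. M then also totals 0 and can be chosen.
    if person == "me" and situation == "M":
        return "M"
    return most_preferred(person, situation)

print(best_situation(situations, prefs, exists, conditional))  # -> M (tied with E)
```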

The total view and the person affecting view are the two most dominant views in population ethics. The first theory roughly says that we have to make happy people, and the second says that we have to make people happy. This difference is important when we face existential risks. Suppose that in one possible future, there will be thousands of generations with billions of people. In a second possible future, a global catastrophe kills everyone who currently exists, so there will be no-one in the future. That catastrophe of course harms the current generation of people who die. But is the non-existence of the possible future people also a harm? According to the total view, the harm is immensely big, because if the people in the future would have positive preferences (lives worth living), their total gratitude would be enormous: much higher than all the possible complaints and gratitudes of the current generation.

So according to the total view, avoiding this existential risk has by far the highest priority (at least if the possible people in the future have positive preferences for their existence). But according to the person affecting view, the non-existing people in the second future are not harmed and do not complain. For a person who does not exist, the relative preference is always 0, so there is no complaint (and no gratitude). The future where people do not exist is neither bad nor good for those non-existing people, because those non-existing people are not affected. That is why this theory is called the person affecting view. Hence, the person affecting view gives a higher priority to the preferences of the existing population (the people who can be affected because they exist or will exist in all the situations that we can choose). For the person affecting view, the top priority is avoiding negative relative preferences. In particular, this means avoiding the existence of individuals who have lives not worth living.

As each individual can determine his or her own reference preference, which basically means that each individual can determine which population ethics theory should be applied to him or her, we end up with a complex combination of the total view and the person affecting view. The gratitudes of the people who will exist in the future will have some weight. (There may be individuals who are indifferent between the population ethical views or who do not have a clear reference preference. In that case, we are allowed to determine their reference preferences for them.)

Welfare biology

Because at least some people choose a conditional maximum as their reference preference, we have to give some weight to the person affecting view in population ethics. In that case, we have a priority to avoid the existence of individuals with lives not worth living. Here we face the problem of wild animal suffering. It is possible that some animals in nature have lives not worth living, because their lives are full of negative experiences due to hunger, diseases, injuries, parasites and predators. Animals with an r-selection reproductive strategy face a particular problem: they have a lot of offspring (the population has a high rate of reproduction, hence the name 'r-selection'), and only a few of them survive long enough to reproduce themselves. Most of those animals' lives are very short and probably miserable. We are not likely to see the majority of those animals, because they die and are eaten quickly.

A better reproductive strategy in terms of well-being is K-selection: having few offspring with long lives and high survival rates. If a life is long, it is more likely to be positive, because it contains proportionally fewer negative experiences of hunger or deadly diseases. Only humans are very close to perfect K-selection: the average fertility rate of a woman is 2.5 children, and this rate is decreasing and expected to reach 2 children in the second half of this century. When it reaches 2 children per woman, and when all children survive until they reproduce, the human population becomes stable. Every human can have a full life. (As lifespan increases, the fertility rate can drop below 2 children per woman.)

According to the person affecting view, we have to give priority to avoiding r-selection and promoting K-selection. Perhaps with genetic manipulation (e.g. gene drives), we can turn every population into a K-selection population (where female animals have on average two offspring) and make sure that all animals have long, healthy lives. But for the moment, only humans are about to reach the ideal of K-selection reproduction.

Healthy humans have other advantages: they have complex preferences and strong personal identities over time, which means they can have potentially high levels of lifetime well-being when their preferences are satisfied. So it is possible that humans can have larger relative preferences than non-human animals. Most humans can also clearly communicate their preferences: it is easier to determine the levels of well-being of humans who can self-consciously think and speak than the levels of well-being of non-human animals who can only communicate their preferences in very indirect ways through behavior. Estimating the well-being or relative preferences of wild animals is very difficult and may require accurate brain scans. We can be very confident that the lives of healthy humans are worth living, but not confident at all that the life of an average wild animal is worth living.

The above implies that we may give priority to saving and helping humans. This preference for healthy humans (increasing the relative number of healthy humans) is not speciesism, because the basic criteria used to derive it (e.g. the level of personal identity over time, the level of communication and the level of K-selection) do not refer to species membership. The above discussion did not use the word 'species' at all. Given our current state of knowledge, a preference for healthy humans is most likely to satisfy the maximum relative preferences principle.

Pros and cons of human population growth

As explained above, helping humans means increasing K-selection in the world: the more individuals who belong to a K-selection population, the better. However, there are also problems with human population growth. More humans means more competition for scarce resources, more people who can invent dangerous technologies, more greenhouse gas emissions and a higher likelihood of dangerous viruses spreading. These things increase existential risks. But it can also mean more mutually beneficial situations through trade and cooperation, more inventions of good technologies and a higher likelihood of resistance against dangerous viruses.

However, there is one very big disadvantage of giving priority to humans: most humans consume animal products. Buying animal products gives an incentive to breed animals who have lives not worth living in, for example, factory farms. Fighting poverty and promoting economic development might therefore increase animal suffering: a $1,000 increase in per capita GDP in the poorest countries implies an increased consumption of 1.7 kg of meat per person per year. Saving the life of a human omnivore means a consumption of about 30 kg of meat.

It is difficult to estimate the total costs and benefits of further human population growth. Given the consumption of animal products, I tend towards the conclusion that decreasing human population growth is valuable, but only if it is done in a way that has other co-benefits. Avoiding unwanted pregnancies through family planning is the only strategy that has a lot of co-benefits in terms of women's rights, the health of newborn children, environmental impact reduction and poverty reduction. The benefit-cost ratio of family planning is high. This means that family planning may also be consistent with the total view in population ethics, even if fewer happy people might come into existence. Finally, by reducing the fertility rate, family planning is a means to reach perfect K-selection. Therefore, I give a low priority to family planning, supporting organizations such as Marie Stopes International.

Cause area: veganism and antidiscrimination

As helping humans involves a risk of increasing animal suffering, I give a high priority to promoting veganism, animal rights and antispeciesism. According to some thought experiments, we can conclude that most animals in agriculture and aquaculture have lives not worth living, so creating those lives violates both the person affecting view and the total view in population ethics. Promoting veganism is a more neglected area than improving human health and well-being.

Furthermore, veganism also has many co-benefits in terms of improved human health: fewer chronic diseases due to healthier diets, less health impact from climate change due to lower greenhouse gas emissions, less malnutrition due to lower food prices for the poorest people, and fewer health risks from pollution, zoonotic viruses and antibiotic resistant bacteria.

Veganism also facilitates spreading the value of antidiscrimination. Speciesism is an example of discrimination. As long as people consume animal products, cognitive dissonance prevents them from valuing animals as equal to humans. When they eat vegan, this cognitive dissonance diminishes and they become more open to the value of antispeciesism. The interspecies model of prejudice predicts that a decrease in speciesism results in a decrease in racism, i.e. a decrease in prejudice against other groups of people. Antispeciesism is also necessary to start scientific research on wild animal suffering and to find safe and effective means to intervene in nature to improve wild animal well-being. And finally, antispeciesism becomes important when it comes to the development of artificial general intelligence and superintelligence. If we create superintelligent AI-machines and implement our own speciesist goals in them, even more animals can be exploited by AI-machines far into the future.

The cause area of veganism is also relatively neglected and tractable, which means effective altruists have a lot of high-impact opportunities in this area. Effective vegan advocacy, perhaps with deep canvassing, is promising. But clean meat, and more generally tissue engineering, appears to be very promising as well. With these technologies, we can create animal products without using animals. Tissue engineering might also be a crucial technology for reducing wild animal suffering, as it can provide a food alternative for predators. The technology can also be used to extend life and to replace a lot of animal experimentation. Therefore, I support the Good Food Institute and, to a lesser degree, the Methuselah Foundation.

Catastrophic risks

There are several possible extinction risks (X-risks) where everyone dies: asteroid impacts, supervolcano eruptions, pandemic viruses, runaway global warming, global nuclear war, dangerous nanotechnology. According to the total view of population ethics, extinction of sentient and intelligent life is a tragedy, because it means a lot of future preference satisfaction (well-being, happiness) is lost. Hence, extinction prevention (X-risk reduction) gets a top priority.

From a person affecting view, extinction is less bad, because with extinction, non-existent future beings cannot complain and wild animals with lives not worth living will no longer be born, so future complaints will be avoided. Extinction is only bad for those of the current generations who value a continued existence in the far future, and especially for the last generation, because most extinction scenarios involve suffering when everyone dies.

But there is a class of catastrophic risks that is even worse than X-risks: S-risks or suffering risks, where the future contains huge populations of sentient beings with lives full of misery. This is worse than extinction, because an S-risk is terrible both from the total view and from the person affecting view.

An example of an S-risk is space colonization that exports wild animal suffering and livestock farming: the number of animals with lives not worth living will multiply when other planets are colonized. Before we start with space colonization, we should first adopt veganism and antispeciesist values, so that we do not export and multiply animal suffering.

Pros and cons of AI-development

Next to tissue engineering (creating organic bodies), another breakthrough technology of this century is artificial intelligence (creating intelligent minds). With further developments in artificial intelligence, we can better solve the problems of wild animal suffering and human suffering. The potential positive impact of AI is huge. But this technology is also uniquely risky.

First, AI generates an X-risk. Superintelligent AI-machines are more powerful than humans. If the values of these AI-machines are not aligned with the values of humans, AI-machines may outcompete humans. This is the important value alignment problem in AI-safety research. Developing safe AI is crucial, because we will never be smart enough to control the first superintelligent machines once they are smarter than us.

But AI is unique because it also creates S-risks. AI might speed up space colonization, exporting the exploitation of sentient beings to other planets. AI might use humans and animals as slaves, keeping newborn sentient beings in misery. And worst of all: AI might run virtual reality simulations containing lots of sentient beings. The number of sentient beings who suffer in those simulated worlds can be huge.

Just as intelligent humans could dominate sentient animals, superintelligent AI-machines can dominate intelligent sentient beings, both in the real world and in simulated worlds. Just as the domination of sentient animals by intelligent humans led to a vast increase in the number of exploited animals with lives not worth living, the domination of real and simulated intelligent people by superintelligent AI-machines can result in a vast increase in the number of exploited people with lives not worth living. The S-risk of AI might be a vastly amplified version of the S-risk of perpetual livestock farming. For the animals, livestock farming is an Eternal Treblinka. But for the future generations, AI-machines might create a new, bigger Eternal Treblinka. Both from a total view, but especially from a person affecting view (a downside-focused ethics), such S-risks from AI are the worst possible outcomes, and we have a top priority to avoid such risks.

Cause area: AI-safety and value alignment

The above brings us to the second cause area: AI-safety. A first strategy is to slow down AI-development research. This involves improving international cooperation and improving institutions to better regulate AI research. However, AI has potentially huge benefits, and really slowing down research on a global scale is difficult. There is a collective action problem: if we slow down our AI-research, we have to make sure that everyone else also slows down their research, otherwise other AI-researchers can gain a dangerous advantage. Hence, slowing down research is less feasible or tractable, and I give a lower priority to this strategy.

A second strategy therefore might be to speed up AI-safety research, in particular solving the value alignment problem: how can we implement good values in AI-algorithms? This gets a top priority, because this area is also highly neglected compared to other cause areas. This strategy doesn’t suffer from a collective action problem: if we learn from our research how to make AI safer, everyone else can learn from us and adopt our safety measures.

Here we also see a link with the abovementioned cause area: promoting values such as antidiscrimination. We should not implement forms of discrimination such as speciesism or substratism in AI-machines. Substratism is a kind of discrimination where beings with one type of substrate (e.g. electronic computers) are considered more important than beings with other substrates (e.g. organic brains). AI-machines should not discriminate against organic life forms or simulated beings. If we keep discriminating against animals while we develop AI-machines, what chance do we have that those machines will not discriminate against others?

To improve AI-safety research, I support MIRI and the Future of Humanity Institute.
