Reducing existential risks or wild animal suffering?

What are the most important focus areas if you want to do the most good in the world? Should we focus on the current generation or the far future? On human welfare or animal welfare? These are the fundamental cause prioritization questions of effective altruism. The standard heuristic is to look for the biggest problems that are the most neglected and the easiest to solve. If we do this exercise, two focus areas become immensely important: reducing existential risks and reducing wild animal suffering. But which of the two deserves our top priority?

X-risks

An existential risk (X-risk) is a catastrophic disaster arising from nature (e.g. an asteroid impact, a supervirus pandemic or a supervolcano eruption), from technologies (e.g. artificial superintelligence, synthetic biology, nanotechnology or nuclear weapons) or from human activities (e.g. runaway global warming or environmental degradation) that could end all of civilization or intelligent life on Earth.

If we manage to avoid existential risks, there can be flourishing human or intelligent life for many generations to come, able to colonize other planets and multiply by the billions. The number of sentient beings with long, happy, flourishing lives in the far future can be immense: a hundred thousand billion billion billion (10^32) humans, including a million billion (10^16) humans on Earth, according to some estimates. In a world where an existential risk occurs, all those potentially happy people will never be born.

WAS

Wild animal suffering (WAS) is the problem created by the starvation, predation, competition, injuries, diseases and parasites that we see in nature. There are a lot of wild animals alive today: e.g. 10^13 – 10^15 fish and 10^17 – 10^19 insects, according to some estimates. It is possible that many of those animals have lives not worth living: more or stronger negative than positive experiences, and hence an overall negative well-being. Most animals follow an r-selection reproductive strategy: they have a lot of offspring (the population has a high rate of reproduction, hence the name ‘r-selection’), and only a few of them survive long enough to reproduce themselves. Most of those animals’ lives are very short and therefore probably miserable. We are not likely to see most of those animals, because they die and are eaten quickly. When we see a happy bird singing, ten of its siblings died within a few days after hatching. When the vast majority of newborns die, we can say that nature is a failed state, unable to take care of the well-being of its inhabitants.

Due to the numbers (billions of billions), the suffering of wild animals may be a bigger problem than all human suffering from violence, accidents and diseases (a few billion humans per year), and all human caused suffering of domesticated animals (a few hundred billion per year).

Population ethics

What is worse: all the suffering, today and in the future, of wild animals who have miserable lives? Or the non-existence of a huge number of people in the far future who could have had beautiful lives? To answer this, we need to address one of the most fundamental questions in ethics: what is the best population ethical theory? Population ethics is the branch of moral philosophy that deals with choices that influence who will exist and how many individuals will exist.

A promising population ethical theory is variable critical level utilitarianism. Each sentient being has a utility function that measures how strongly that individual prefers a situation. That utility can be a function of happiness and all other things valued by that individual. If your utility is positive in a certain situation, you have a positive preference for that situation. The more you prefer a situation, the higher your utility in that situation. If a person does not exist, that person has a utility level of zero.

The simplest population ethical theory is total utilitarianism, which says that we should choose the situation that has the highest total sum of everyone’s utilities. However, this theory has a very counter-intuitive implication, called a sadistic repugnant conclusion (a combination of the sadistic conclusion and the repugnant conclusion in population ethics). Suppose you can choose between two situations. In the first situation, a million people exist and have maximally happy lives, with maximum utilities. In the second situation, those million people have very miserable lives, with extremely negative levels of utility. But in that situation, there also exist new people with utilities slightly above zero, i.e. lives barely worth living. If we take the sum of everyone’s utilities in that second situation, and if the number of those extra people is high enough, the total sum becomes bigger than the total of utilities in the first situation. According to total utilitarianism, the second situation is better, even if the already existing people have maximally miserable lives and the new people have lives barely worth living, whereas in the first situation everyone is maximally satisfied, and no-one is miserable.
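
A minimal numerical sketch may make this concrete (the utility levels and population sizes below are made-up, purely illustrative numbers):

```python
# Sadistic repugnant conclusion under total utilitarianism
# (all utility levels are made-up, illustrative numbers).
n_existing = 1_000_000
u_max, u_misery, u_barely = 100, -100, 1

# Situation 1: a million maximally happy people.
total_1 = n_existing * u_max

# Situation 2: the same million people in misery, plus many extra
# people with lives barely worth living.
n_extra = 300_000_000
total_2 = n_existing * u_misery + n_extra * u_barely

print(total_1)            # 100000000
print(total_2)            # 200000000
print(total_2 > total_1)  # True: total utilitarianism prefers situation 2
```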

To avoid this conclusion, we can change the utilitarian theory, for example by using a reference utility level as a critical level. Instead of adding utilities, we add relative utilities, where the relative utility of a person is his or her utility minus the critical level. The critical level of a non-existing person is zero. This population ethical theory is critical level utilitarianism, and it can avoid the sadistic repugnant conclusion: if the critical level is higher than the small positive utilities of the new people in the second situation, the relative utilities of those extra people are all negative. The sum of all those relative utilities never becomes positive, which means the total relative utility of the first situation is always higher than that of the second situation, and so the first situation is preferred.
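
Continuing the same illustrative numbers, a critical level just above the ‘barely worth living’ utility blocks the conclusion:

```python
# Critical level utilitarianism scores each person by (utility - critical level)
# instead of raw utility. Same illustrative numbers as above.
n_existing, u_max, u_misery, u_barely = 1_000_000, 100, -100, 1
critical = 2  # any critical level above u_barely suffices

rel_total_1 = n_existing * (u_max - critical)

def rel_total_2(n_extra):
    # Each extra person contributes (1 - 2) = -1, so adding more of them
    # only drags situation 2 further down.
    return n_existing * (u_misery - critical) + n_extra * (u_barely - critical)

print(rel_total_1)                         # 98000000
print(rel_total_2(300_000_000))            # -402000000
print(rel_total_2(10**12) < rel_total_1)   # True, for any number of extra people
```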

If all critical levels of all persons in all situations are the same, we have a constant or rigid critical level utilitarianism, but this theory still faces some problems. We can make the theory more flexible by allowing variable critical levels: not only can everyone determine his or her own utility in a specific situation, everyone can also choose his or her critical level. The preferred critical level can vary from person to person and from situation to situation.

A person’s critical level always lies within a range, between his or her lowest and highest preferred levels. The lowest preferred critical level is zero: if a person chose a negative critical level, that person would accept a situation in which he or she has a negative utility, such as a life not worth living. Accepting a situation that one would not prefer is basically a contradiction. The highest preferred critical level varies from person to person. Suppose we can decide to bring more people into existence. If they choose a very high critical level, their utilities fall below this critical level, and hence their relative utilities become negative. In other words: it is better that they do not exist. So if everyone chose a very high critical level, it would be better that no-one exists, even if people could have positive utilities (but negative relative utilities). This is a kind of naive negative utilitarianism: everyone’s relative utility becomes a negative number, and we have to choose the situation that maximizes the total of those relative utilities. It is naive because the maximum lies at the situation where no-one exists (where all relative utilities are zero instead of negative). If people do not want that situation, they have chosen a critical level that is too high. If everyone instead chooses their highest preferred critical level, we end up with a better kind of negative utilitarianism, which avoids the conclusion that non-existence is always best. It is a quasi-negative utilitarianism, because the relative utilities are no longer always negative: they can sometimes be (slightly) positive, in order to allow the existence of extra persons.
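
A rough sketch of how these choices could play out, with hypothetical utilities and chosen critical levels (assuming, as an interpretation, that the highest preferred critical level lies just below a person’s own utility):

```python
# Variable critical level utilitarianism: each person chooses his or her own
# critical level. Sketch with hypothetical (utility, chosen critical level) pairs.
def total_relative_utility(people):
    return sum(u - c for u, c in people)

# Quasi-negative choice: critical levels just below each person's utility,
# so a happy life contributes only a small positive relative utility...
happy = [(80, 79.9), (60, 59.9), (90, 89.9)]

# ...while a miserable life (negative utility, critical level at least 0)
# contributes a large negative relative utility.
miserable = [(-50, 0)]

print(total_relative_utility(happy))              # ~0.3: adding happy lives barely matters
print(total_relative_utility(happy + miserable))  # ~-49.7: one miserable life dominates
```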

X-risks versus WAS

Now we come to the crucial question: if variable critical level utilitarianism is the best population ethical theory, what does it say about our two problems of existential risks and wild animal suffering?

If everyone chose their lowest preferred critical level, we end up with total utilitarianism, and according to that theory, the potential existence of many happy people in the far future becomes dominant. Even if the probability of an existential risk is very small (say one in a million in the next century), reducing that probability is of the highest importance if so many future lives are at stake. However, we have seen that total utilitarianism implies a sadistic repugnant conclusion that will not be accepted by many people. This means those people decrease their credence in this theory.

If people want to move safely away from the sadistic repugnant conclusion and other problems of rigid critical level utilitarianism, they should choose a critical level infinitesimally close to (but still below) their highest preferred levels. If everyone does so, we end up with a quasi-negative utilitarianism. According to this theory, adding new people (or guaranteeing the existence of future people by eliminating existential risks) becomes only marginally important. The prime focus of this theory is avoiding the existence of people with negative levels of utility: adding people with positive utilities is barely important, because their relative utilities are small. But adding people with negative utilities is always bad, because the critical levels of those people are always positive, and hence their relative utilities are always negative and often large in magnitude.

However, we should not avoid the existence of people with negative utilities at all costs. Simply decreasing the number of future people (avoiding their existence) in order to decrease the number of potential people with miserable lives is not a valid solution according to quasi-negative utilitarianism. Suppose there will be one sentient being in the future who will have a negative utility, i.e. a life not worth living, and the only way to avoid that negative utility is that no-one in the future exists. However, the other potential future people strongly prefer their own existence: they all have very positive utilities. In order to allow for their existence, they could lower their critical levels such that a future with all those happy future beings and the one miserable individual is still preferred. This means that according to quasi-negative utilitarianism, the potential existence of one miserable person in the future does not imply that we should prefer a world where no-one will live in the future. However, what if a lot of future individuals (say a majority) have lives not worth living? The few happy potential people would have to decrease their own critical levels below zero to allow their existence. In other words: if the number of future miserable lives is too high, a future without any sentient beings would be preferred according to quasi-negative utilitarianism.

If everyone chooses a high critical level such that we end up with a quasi-negative utilitarianism, we should give more priority to eliminating wild animal suffering than to eliminating existential risks, because lives with negative utilities are probably most common among wild animals, and adding lives with positive well-being is only minimally important. In an extreme case where most future lives would be unavoidably very miserable (i.e. if the only way to avoid this misery is to avoid the existence of those future people), avoiding an existential risk could even be bad, because it would guarantee the continued existence of this huge misery. Estimating the distribution of utilities in future human and animal generations becomes crucial. But even if most future lives would be miserable with current technologies, it may still be possible to avoid that future misery with new technologies. Hence, developing new methods to avoid wild animal suffering becomes a priority.

Expected value calculations

If total utilitarianism is true (i.e. if everyone chooses a critical level equal to zero), and if existential risks are eliminated, the resulting increase in total relative utility (of all current and far-future people) is very big, because the number of future people is so large. If quasi-negative utilitarianism is true (i.e. if everyone chooses their maximum preferred critical level), and if wild animal suffering is eliminated, the resulting increase in total relative utility of all current and near-future[1] wild animals is big, but perhaps smaller than the increase from eliminating existential risks under total utilitarianism, because the number of current and near-future wild animals is smaller than the number of potential far-future people with happy lives. This implies that eliminating existential risks is more valuable, given the truth of total utilitarianism, than eliminating wild animal suffering, given the truth of quasi-negative utilitarianism.

However, total utilitarianism seems a less plausible population ethical theory than quasi-negative utilitarianism, because it faces the sadistic repugnant conclusion. This implausibility means it is less likely that everyone chooses a critical level of zero. Eliminating existential risks would be most valuable if total utilitarianism were true, but its expected value is lowered by the low probability that total utilitarianism is true. The expected value of eliminating wild animal suffering could therefore become higher than the expected value of eliminating existential risks.
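
The structure of this expected value argument can be sketched in a few lines; all credences and values below are hypothetical placeholders, and the point is only that the ranking can flip with the credence in total utilitarianism:

```python
# Expected value under moral uncertainty (all credences and values are
# hypothetical placeholders; only the structure of the argument matters).
def expected_values(p_total, v_xrisk_if_total, v_was_if_quasi):
    """p_total is the credence in total utilitarianism; the remaining
    credence goes to quasi-negative utilitarianism."""
    ev_xrisk = p_total * v_xrisk_if_total
    ev_was = (1 - p_total) * v_was_if_quasi
    return ev_xrisk, ev_was

# The value of x-risk reduction if total utilitarianism is true dwarfs the
# value of WAS reduction if quasi-negative utilitarianism is true, but a low
# credence multiplies it down.
print(expected_values(0.10, 10**6, 10**4))   # (100000.0, 9000.0): x-risk wins
print(expected_values(0.001, 10**6, 10**4))  # (1000.0, 9990.0): WAS wins
```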

But still, even if the fraction of future people who would choose a critical level of zero is very low, the huge number of future people means that guaranteeing their existence (i.e. eliminating existential risks) remains very important.

The interconnectedness of X-risks and WAS

There is another reason why reducing wild animal suffering might gain importance over reducing existential risks. If we reduce existential risks, more future generations of wild animals will be born. This increases the likelihood that more animals with negative utilities will be born. For example: colonizing other planets could be a strategy to reduce existential risks (e.g. blowing up planet Earth would not kill all humans if we could survive on other planets). But colonization of planets could mean introducing ecosystems and hence introducing wild animals, which increases the number of wild animals and increases the risk of more future wild animal suffering. If decreasing existential risks means that the number of future wild animals increases, and if this number becomes bigger and bigger, the non-existence of animals with negative utilities (i.e. the elimination of wild animal suffering) becomes more and more important.

On the other hand, if an existential risk kills all humans, but the non-human animals survive, and if humans could have been the only hope for wild animals in the far future by inventing new technologies that eliminate wild animal suffering, an existential risk might make it worse for the animals in the far future. That means eliminating existential risks might become more important when eliminating wild animal suffering becomes more important.

So we have to make a distinction between existential risks that would kill all humans and animals, and existential risks that would kill only those persons who could potentially help future wild animals. The second kind of existential risk is bad from the perspective of wild animal suffering, so eliminating it is important for eliminating wild animal suffering in the far future.

Victimhood

The difference between total utilitarianism (prioritizing the elimination of existential risks) and quasi-negative utilitarianism (prioritizing the elimination of wild animal suffering), can also be understood in terms of victimhood. If due to an existential risk a potential happy person would not exist in the future, that non-existing person cannot be considered as a victim. That non-existing person cannot complain against his or her non-existence. He or she does not have any experiences and hence is not aware of being a victim. He or she does not have any preferences in this state of non-existence. On the other hand, if a wild animal has a negative utility (i.e. a miserable life), that animal can be considered as a victim.

Of course, existential risks do create victims: the final generation of existing people would be harmed and would not want the extinction. But the number of people in that last generation will be relatively small compared to the many generations of many wild animals who can suffer. So if the status of victimhood is especially bad, wild animal suffering is worse than existential risks, because the problem of wild animal suffering creates more victims.

Neglectedness

Both existential risk reduction and wild animal suffering reduction are important focus areas of effective altruism, but reducing wild animal suffering seems to be more neglected. Only a few organizations work on reducing wild animal suffering: Wild-Animal Suffering Research, Animal Ethics, Utility Farm and the Foundational Research Institute. On the other hand, there are many organizations working on existential risks, both generally (e.g. the Centre for the Study of Existential Risk, the Future of Humanity Institute, the Future of Life Institute, the Global Catastrophic Risk Institute and 80,000 Hours) and specifically (working on AI safety, nuclear weapons, global warming, global pandemics,…). As wild animal suffering is more neglected, it has a lot of room for more funding. Based on the importance-tractability-neglectedness framework, wild animal suffering deserves a higher priority.

Summary

In the population ethical theory of variable critical level utilitarianism, there are two extreme critical levels that correspond with two dominant population ethical theories. If everyone chooses the lowest preferred critical level (equal to zero), we end up with total utilitarianism. If everyone chooses the highest preferred critical level, we end up with quasi-negative utilitarianism. According to total utilitarianism, we should give top priority to avoiding existential risks, such that the existence of many future happy people is guaranteed. According to quasi-negative utilitarianism, we should give top priority to avoiding wild animal suffering, such that the non-existence of animals with miserable lives (negative utilities) is guaranteed (though not always simply by decreasing or eliminating wild animal populations, and not necessarily at the cost of wiping out all life).

The value of eliminating existential risks when everyone chooses the lowest preferred critical level would probably be higher than the value of eliminating wild animal suffering when everyone chooses the highest preferred critical level. But total utilitarianism is less likely to be our preferred population ethical theory because it faces the sadistic repugnant conclusion. This means that the expected value of eliminating wild animal suffering could be bigger than the expected value of eliminating existential risks. These calculations become even more complex when we consider the interconnectedness of the problems of existential risks and wild animal suffering. For example, decreasing existential risks might increase the probability of the existence of more future wild animals with negative utilities. But eliminating some existential risks might also guarantee the existence of people who could help wild animals and potentially eliminate all future wild animal suffering with new technologies.

Finally, wild animal suffering deserves a higher priority because this focus area is more neglected than existential risks.

[1] We cannot simply add the relative utilities of far-future wild animals, because that would presume that existential risks are avoided.


Effective altruism and the law of diminishing marginal effect

If you hold a weight of 10 grams in your left hand and 20 grams in your right hand, you will feel a difference, but if you hold a weight of 1.01 kilograms in your left hand and 1.02 kilograms in your right, you probably won’t feel a difference, even though the real difference in weight is again 10 grams. This is an example of the Weber-Fechner law: a change in stimulus (such as the pressure of a weight on your hand) gives a smaller change in perception (such as the feeling of the weight) when the stimulus is larger. The stimulus-perception relation is often a logarithmic function. Other examples are the loudness of sounds (measured on a logarithmic decibel scale), the brightness of stars (measured on a logarithmic stellar magnitude scale) and the number of objects: you can immediately see a clear difference between 10 objects and 20 objects, but not between 1010 objects and 1020 objects. Another example is the price of a product: if in the supermarket you can choose between a product of 10 euro and an equivalent product of 11 euro, you are likely to choose the cheaper product, saving 1 euro. But now suppose you are buying a car and you can choose between a car of 4711 euro and one of 4721 euro. Now you hardly care about the difference, even though it is ten times bigger: you could save 10 euro by buying the cheaper car.

All our senses and subjective judgments obey this law of diminishing marginal effect. A marginal effect is the change in effect (e.g. a subjective valuation, estimation or perception) that results from a unit change in an objective, measurable variable (e.g. weight, sound amplitude, amount of light, number of objects). The effect is a function of the objective variable, but not a linear one: the law of diminishing marginal effect says that the function is concave, such as a logarithmic or square root function.
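
A minimal sketch of the weight example, assuming a logarithmic stimulus-perception function:

```python
import math

# Weber-Fechner sketch: perceived intensity as a logarithm of the stimulus
# (stimulus in grams, as in the weight example above).
def perceived(stimulus):
    return math.log(stimulus)

# The same absolute difference of 10 grams...
print(perceived(20) - perceived(10))      # ~0.69: clearly noticeable
print(perceived(1020) - perceived(1010))  # ~0.01: too small to notice
```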

This law of diminishing marginal effect has important implications for effective altruism, where the goal is to do good and help others as effectively as possible. Here are three implications.

1.      Poverty reduction and diminishing marginal utility of money

If you are very poor, getting an extra amount of money will strongly increase your happiness. But if you already earn a lot of money, you probably won’t even notice a higher income. Consider someone like Bill Gates, who earns about 114 dollars per second. If he found a 100-dollar bill on the street, would he bother bending over to pick it up? His wealth is about 100,000 times that of an average person in a developed country, which means that for him, buying a house feels like buying a loaf of bread does to us.

In contrast, someone in extreme poverty is about 100 times poorer than us. For that poor person, finding 1 dollar on the street feels like finding a 100-dollar bill for us, in terms of increased happiness. That is why an organization like GiveDirectly can be highly effective in improving well-being and promoting happiness by giving the poorest people an unconditional cash transfer.

These are examples of the law of diminishing marginal utility of money, also known as Gossen’s law. Utility measures how valuable or preferable something is. The marginal utility of money measures how much we value or prefer an extra unit of money (an extra dollar). If we already have or earn a lot of money, our preference (measured by an increase in happiness or satisfaction) for an extra dollar diminishes. As a result of the law of diminishing marginal utility of money, increasing the income levels of the poorest people should get priority: if rich people give some money to the poorest, the happiness of the rich people doesn’t decrease much, but the happiness of the poorest people increases a lot.
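
A sketch of why such transfers raise total happiness, assuming a logarithmic utility-of-income function and hypothetical incomes:

```python
import math

# Gossen's law sketch: utility as a concave (here logarithmic) function of
# income; the incomes and transfer amount are hypothetical.
def utility(income):
    return math.log(income)

rich, poor, transfer = 100_000, 1_000, 100

loss_rich = utility(rich) - utility(rich - transfer)  # ~0.001
gain_poor = utility(poor + transfer) - utility(poor)  # ~0.095

print(gain_poor / loss_rich)  # ~95: the poor gain vastly more than the rich lose
```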

2.      Suffering reduction and diminishing marginal suffering

The law of diminishing marginal utility (e.g. of money) results in a prioritarian ethic, where we must give strong priority to improving the position of the worst-off: the poorest humans, or more generally the sentient beings that suffer the most. Avoiding extreme suffering gets priority. However, there is also a law of diminishing marginal suffering. For example, we can easily detect the difference between 1 and 2 needles in our arm, but not between 101 and 102 needles. Adding more needles does not linearly increase the pain and suffering. Also, adding one prison day to a jail sentence of ten years is less painful for the prisoner than adding one prison day to a jail sentence of one week. As a result, we can sometimes overestimate the badness of extreme suffering. Or, equally possible, we can sometimes underestimate the badness of less extreme suffering. This means that avoiding extreme suffering should not always get absolute priority. Sometimes avoiding the less extreme suffering of many people can be more important than avoiding the extreme suffering of one person.
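
As an illustration, assume felt pain is a concave (square root) function of needle count, and that felt pain may be summed across persons; both are modeling assumptions, not claims from the text:

```python
import math

# Diminishing marginal suffering sketch: felt pain as a concave (here square
# root) function of an objective harm measure such as the number of needles.
def felt_pain(needles):
    return math.sqrt(needles)

print(felt_pain(2) - felt_pain(1))      # ~0.41: clearly worse
print(felt_pain(102) - felt_pain(101))  # ~0.05: barely worse

# Consequence: the mild suffering of many can outweigh one extreme case
# (if felt pain may be summed across persons).
print(felt_pain(100))     # 10.0: one person with 100 needles
print(50 * felt_pain(1))  # 50.0: fifty people with 1 needle each
```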

3.      Saving lives and scope neglect

The most important and far-reaching implication of the law of diminishing marginal effect for an effective altruist is the problem of scope neglect. Looking at the donations given to charities, we see that most people have a diminishing marginal willingness to pay to prevent harm or save lives. The difference in willingness to pay to save 2 lives instead of 1 is higher than the difference to save 102 instead of 101 lives. Some studies indicate a logarithmic relationship between the willingness to pay and the size of the prevented harm. However, for an effective altruist, this should be a linear function: saving an extra life when 101 lives are already saved is not less valuable than saving an extra life when only one person was already saved. The moral value of saving an extra life does not depend on the other lives. An effective altruist should try to avoid scope neglect. This has two important implications.
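
Before turning to those implications, a small sketch contrasts the two valuation functions, using a hypothetical donor whose willingness to pay is logarithmic:

```python
import math

# Scope neglect sketch: a hypothetical donor's willingness to pay grows
# logarithmically with lives saved, while the moral value is linear.
def willingness_to_pay(lives):
    return 100 * math.log(1 + lives)

def moral_value(lives):
    return lives  # each extra life counts the same

for lives in (1, 10, 100, 1000):
    print(lives, round(willingness_to_pay(lives)), moral_value(lives))
# 1000 lives elicit only about 10x the donation that 1 life does, not 1000x.
```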

  1. Linearity of the marginal utility of resources

When it comes to our own preferences and our own consumption, we have a diminishing marginal utility of resources such as money and time. The more money we have, the less valuable an extra euro becomes. The more food we can consume, the less valuable an extra loaf of bread becomes. The more leisure time we have, the less valuable an extra hour becomes. But with resources such as money and time, we can help others. The amount of good done is a linear function of the amount of help (e.g. the number of lives saved, or the amount of harm avoided). Therefore, when it comes to helping others, we should have a linear instead of a diminishing marginal utility of resources.

This requires a new way of thinking. If we buy the cheapest product in the supermarket, we can donate the money saved to an effective charity. But the same goes if we buy the cheapest car. If we reorganize our work to become more efficient and save one day on a small project that takes a week, we have an extra day to do good. But the same goes if we can save one day on a big project that takes a year. Hence, an effective altruist should try to make his or her marginal utility function more linear. This requires some effort, because we are not familiar with this kind of thinking. We are used to thinking in relative instead of absolute numbers, in ratios instead of differences. We spontaneously think that a 1% saving of time on a long project is as good as a 1% saving on a small project, that a 1% saving of costs on an expensive product is as good as a 1% saving on a cheap product, that a 1% reduction of mortality in a big catastrophe is as good as a 1% reduction in a small disaster. But when it comes to doing good, differences rather than ratios are what matters. Easily saving 1 euro when buying a car that costs a few thousand euros is as good as saving 1 euro on a loaf of bread, if this 1 euro goes to a charity.

  2. Risk neutrality

Next to marginal utility linearity, a second implication of avoiding scope neglect is risk neutrality. When it comes to our own preferences and consumption, we are risk averse. Imagine you can play a game: you toss a fair coin; if it is heads, you get 100 euro, and if it is tails, you must pay me 100 euro. On average the expected profit of playing the game is 0 euro, the same as not playing. If you are risk averse, you will avoid playing this game, because you want to avoid the risk of losing 100 euro. This risk aversion is (partially) a consequence of the law of diminishing marginal utility of money. Suppose you already have 100 euro. The difference in utility between 100 euro (when you don’t play the game) and 0 euro (when you play and lose 100 euro) is bigger than the difference in utility between 200 euro (when you play and win 100 euro) and 100 euro (when you don’t play). That means that utility is a concave function of money: the more money you have, the less valuable an extra euro becomes.
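
A sketch of this explanation, assuming a square root utility-of-wealth function:

```python
import math

# Risk aversion from concave utility: a fair coin flip for 100 euro has zero
# expected profit, but negative expected *utility* (square root utility assumed).
def utility(wealth):
    return math.sqrt(wealth)

wealth = 100
u_no_play = utility(wealth)                              # 10.0
u_play = 0.5 * utility(wealth + 100) + 0.5 * utility(wealth - 100)

print(u_no_play, u_play, u_no_play > u_play)  # 10.0 ~7.07 True: don't play
```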

Risk aversion also plays a role when it comes to saving lives, as was demonstrated by Kahneman and Tversky with their Asian disease problem. Suppose there is a new Asian disease that will kill 600 people. There are two vaccines. With vaccine A, 200 people will be saved; with vaccine B, there is a 1/3 probability that 600 people will be saved and a 2/3 probability that no-one will be saved. With both vaccines, the expected number of people saved is 200. If people must choose between vaccine A and vaccine B, a majority prefers vaccine A, because it gives certainty that 200 people are saved. This demonstrates risk aversion.

However, as Kahneman and Tversky also demonstrated, there is a framing effect. The above description of the vaccines was in terms of positive effects, i.e. saving lives. Another framing (choice of words) is possible, in terms of losses or deaths. With vaccine A 400 people will die, with vaccine B, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die. When people face the Asian disease problem with these words, a majority prefers vaccine B. In other words: people become risk seeking (a preference to gamble), because they want to avoid the certainty of 400 people dying.

The Asian disease problem is like the game of tossing a coin. Suppose heads means someone will save 100 lives, tails means someone will kill 100 people. When framed this way, most people prefer not playing the game. Now we can reframe it: of a population of 200 people, heads means no-one will die, tails means everyone will die, and not playing the game means 100 people will die. According to this framing, not playing the game becomes less attractive, because it results in a certain death of 100 people.

If we want to avoid this irrational framing effect, and if doing good implies a linear marginal utility, then when it comes to saving lives, the most rational decision-making attitude is risk neutrality. This also requires a new way of thinking. When starting a risky new project to help others, switching careers to do more good, or investing money to donate the profits to charities, we should avoid our spontaneous tendency towards risk aversion. Effective altruists should take more risks if the expected value is higher. We should invest more dynamically and riskily instead of safely and defensively. We should try riskier scientific research, i.e. research with more uncertain results, if there is a probability of obtaining highly useful results such that the expected benefits are very high.

Consider a project A that will definitely save 10 lives, and another project B that has a 90% probability of saving no-one and a 10% probability of saving 110 lives. The expected value (the expected number of lives saved) is 11 for project B, which is 10% higher than the value of project A. If all effective altruists make such riskier choices, i.e. if everyone chooses the riskier project, 10% more lives are saved. Of those effective altruists, 9 out of 10 will help no-one, but 1 out of 10 will save 110 lives. For an effective altruist it doesn’t matter who saves lives, as long as the most lives are saved.
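
A quick simulation of this collective effect, using the hypothetical projects above:

```python
import random

# Risk-neutral altruism sketch: many altruists each choose between project A
# (save 10 lives for sure) and project B (10% chance of saving 110 lives).
random.seed(0)
n_altruists = 100_000

lives_a = n_altruists * 10
lives_b = sum(110 if random.random() < 0.10 else 0 for _ in range(n_altruists))

print(lives_a)  # 1000000
print(lives_b)  # ~1100000: about 10% more lives saved in total
```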


What if everyone was human?

Speciesism is discrimination based on species, where species membership is identified with moral community membership (i.e. where species membership determines someone’s moral status and rights). Like all kinds of discrimination, speciesism is a kind of unwanted arbitrariness: the victims of discrimination cannot want their arbitrary exclusion from the moral community. The exclusion is arbitrary, because there is no justifying rule to select species instead of one of the many other possible groups (e.g. races, genera, families, biological orders,…), and the precise boundary of a species is not well-defined and therefore inherently arbitrary. We cannot always determine who belongs to a species. If you consider all your ancestors, you cannot always tell who was human. There was no human ancestor whose parents were clearly non-human.

If we are not willing to avoid all kinds of unwanted arbitrariness, we thereby acknowledge that unwanted arbitrariness is morally permissible, and we can no longer give valid arguments why we ourselves should not become victims of unwanted arbitrariness. Therefore, we must avoid all kinds of unwanted arbitrariness, including speciesism.

The best way to avoid speciesism in our moral judgments is to consider everyone as a human being, where ‘everyone’ includes every being that has personal experiences and preferences. Some humans are not able to talk with words or walk on their two hind legs. They are smaller, have more hair, a longer nose, bigger ears, another skin color, a tail. We normally call them dogs, but let’s call them humans, such that we consider them as humans. Some humans have wings to fly, some humans have an IQ below 50, some humans have a very good sense of smell, some can run fast, some can breathe under water, some are not able to understand moral rules, some are not able to understand the law or the far future, some require special diets,… One could even consider plants, computers and other non-sentient objects as humans, but because these objects lack any consciousness, they are as conscious as non-existent humans, so it doesn’t matter if we do not consider them as humans.

Now let’s look at the world full of humans. What do we see? We see small humans entering the bodies of bigger humans (parasitism). We see some humans hunting and killing many other humans (predation). We see some humans dying of starvation if they are not able to take body parts (e.g. muscle tissue) of other humans. We see some humans breeding, slaughtering and eating other humans merely for their taste pleasure (livestock farming). And we see that the vast majority of humans have brothers and sisters that are suffering and dying at a very early age (the so-called r-selection reproductive strategy of wild animals, where those animals have many offspring that have very short lives). That is what we see with antispeciesist glasses. Most of those humans are mentally disabled, which means they do not have any understanding of ethics or mathematics. Luckily, a minority of humans have high levels of IQ, are able to understand ethics and are able to invent new ways to improve the well-being of other humans. Unfortunately, those smart humans do not always see the world through antispeciesist eyes, although they are able to understand that unwanted arbitrariness is unwanted. So it’s time for those humans to put on antispeciesist glasses, to see everyone as a human, with their own capacities, desires and experiences. Then they will figure out what they’ll have to do with all the other humans in the world.


The anti-experimentation bias

People should not be used as test objects, which means using someone against his or her will in an experiment is morally problematic and would require very strong justification. However, a lot of people are also reluctant to perform medical, economic or social experiments even when the people involved (the test subjects) are not treated against their will, or when they can consistently want the experiment to be done (i.e. when the experiment is in line with their strongest moral values and goals). This is an example of an anti-experimentation bias.

Consider randomized controlled trials in development economics or in medicine. The population is randomly divided into a treatment group and a non-treatment control group. The treatment group gets a treatment (e.g. a development intervention or a new drug). If we see different outcomes between those groups, we can conclude that the treatment has a causal effect. Such randomized controlled trials are often considered immoral. Of course, if we already know that the treatment works and has good consequences, and if we could give the treatment to the control group as well, then having a non-treatment control group means we would withhold an effective cure from those people. That would indeed be immoral.

However, in many cases we do not yet know whether the treatment will work, and in many cases we do not have enough resources (time, money) to give everyone the treatment. In that case the experiment is no longer immoral. Consider the distribution of antimalarial bed nets as a development project. Those bed nets are costly, so it is impossible to give bed nets to all the people in all the villages in all poor tropical countries. Now suppose we want to study the effectiveness of this intervention by doing a randomized controlled trial (RCT). A lot of development organizations dislike such RCTs, because they do not want to treat poor people as test objects. However, what they don’t realize is that they are always doing an experiment. An RCT requires two things: randomization and a control group. But every project with insufficient resources (i.e. not enough resources to cover everyone with the treatment) has a control group. Some villages in poor tropical countries are in the treatment group and receive bed nets; other villages are in the control group and do not receive bed nets due to insufficient funding (perhaps they receive another intervention, which allows us to compare the effectiveness of bed nets with that of the other intervention). What happens in practice is that development organizations only look at the results of the treatment group. They do not compare those results with the control group, because they neglect the results of the control group. This is a waste of data. If we already have a control group, there is no justification for refusing to look at the control group data.
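
A sketch of such an RCT with hypothetical villages and malaria rates; the point is that the comparison between the two groups comes almost for free once the assignment is randomized:

```python
import random

# RCT sketch: with too few bed nets for every village, random assignment turns
# the unavoidable control group into usable evidence. All numbers hypothetical.
random.seed(1)
villages = [f"village_{i}" for i in range(100)]
random.shuffle(villages)

treatment = villages[:40]  # only 40 bed-net allocations available
control = villages[40:]    # exists anyway, due to limited funding

# Hypothetical malaria rates measured after a year (lower under treatment):
rate = {v: random.gauss(0.20 if v in treatment else 0.30, 0.05) for v in villages}

def average(group):
    return sum(rate[v] for v in group) / len(group)

print(round(average(treatment), 3), round(average(control), 3))
# Comparing the two averages estimates the causal effect of the bed nets;
# ignoring the control data throws this information away.
```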

What about the randomization? How do development organizations decide who should get the treatment? Which villages should get the bed nets? We can target the poorest, most affected populations, but even then, we often do not have enough resources to cover that whole population. Consequently, there is always a kind of arbitrariness in selecting the treatment group. Development organizations sometimes try to give arguments for their selection of the treatment group, but this is often nothing more than a rationalization. If this arbitrariness is unavoidable, one might as well go for complete randomization, to make the scientific experiment more robust.

Instead of wasting the data of the results of the control group and rationalizing the choice of the treatment group, we could learn much more valuable information about the effectiveness of interventions if we accept that we have a lot of opportunities for doing full experiments. Not doing the full experiments might be harmful, because we would stick to less effective interventions.

In other areas, too, we often have an anti-experimentation bias. Suppose we have new ideas about economic policies (e.g. a basic income), democratic reforms (e.g. futarchy, approval voting or epistocracy), education reforms, social projects, agricultural practices, or other areas. One of two things can happen: we are not sure whether the new idea is effective, or we feel very confident that the idea will work. In the former case, we are often reluctant to do experiments because of a status quo bias: we want to keep the current situation. That means our experiment has no treatment group. In the latter case, we are often overconfident, which means we want the new idea to be implemented everywhere immediately. That means our experiment has no control group. In both cases we are missing the opportunity to do experiments that give us valuable information about the effectiveness of the new ideas.

Considering all new reforms as experiments has another advantage: we remain flexible and open minded. We allow ourselves to learn from the results and to adjust or abandon the new idea if it turns out to be ineffective. An example is organic agriculture: producing and buying organic food was important, because we could learn from this new agricultural practice. However, as it turns out, organic farming appears to be an experiment with a negative result: after decades of farming and hundreds of studies, organic farming still does not appear to be significantly better for the environment or our health. That doesn’t mean that buying organic food was a bad or ineffective choice. If there were no consumers, there would not have been the experiment. However, it is important to learn from experiments. If people do not consider new practices such as organic farming as experiments because of their anti-experimentation bias, they tend to be too rigid or dogmatic in their beliefs. People might keep supporting organic farming in an unjustified belief that it is effective.

In summary, we often have an anti-experimentation bias, which means we do not consider new practices as experiments, although they are. Sometimes we have a status quo bias, which means we are doing an experiment where the control group covers the whole population (no treatment group). Sometimes we have an overconfidence bias, which means we are doing an experiment where the control group is absent. And in the cases where we have a control group due to resource constraints (limited time and money), we are reluctant to look at the data from the control group. The anti-experimentation bias has two negative consequences. First, we do not learn valuable new information about the effectiveness of the new practice, because the experiment has a bad design (either a 100% control group, a 0% control group, or a loss of data from the control group). And second, even when we can learn valuable information, we are reluctant to change our minds about the effectiveness of the new practice.


Conflict of interest bias

In discussions about controversial topics such as climate change, vaccine safety, environmental sustainability, pesticide toxicity, chronic Lyme disease, male privilege or homeopathic therapies, we often hear the argument that scientific studies are biased due to financial conflicts of interest of the researchers. The accusation of conflicts of interest is used to discredit studies, but we have to be careful to avoid a conflict of interest bias: an unjustifiable asymmetry where we see the conflicts of interest of the opponent but not the conflicts of interest of those people holding our own views. This conflict of interest bias is a version of the disconfirmation bias, where we are more critical and distrustful towards those people or studies that disconfirm our prior beliefs.

The most extreme example of a conflict of interest bias is probably the case of chronic Lyme disease. There is no evidence that chronic Lyme disease is caused by a persistent bacterial infection that can be treated with long-term antibiotic therapy. Medical associations such as the Infectious Diseases Society of America (IDSA) advise against long-term antibiotic treatment, because antibiotics are ineffective in this case and a long-term therapy is expensive and can be harmful. However, Connecticut Attorney General Richard Blumenthal accused the IDSA of undisclosed financial conflicts of interest held by several IDSA Lyme disease panelists (he did not name those panelists, nor did he clarify the kind of conflicts of interest). There is no evidence for such undisclosed conflicts of interest. Blumenthal and chronic Lyme disease pressure groups reject the guidelines of medical associations such as the IDSA, using the accusation of conflicts of interest as a weapon to discredit their opponents. However, they are not so critical about the possible conflicts of interest of people holding their own views. In the case of antibiotic therapy against chronic Lyme disease, one could equally say that patients involved in those pressure groups have a financial conflict of interest when they argue for insurance coverage of long-term antibiotic therapy. And of course, Big Pharma (the pharmaceutical industry) could have a conflict of interest in trying to sell antibiotics to chronic Lyme disease patients. How could the IDSA panelists profit financially by recommending against treating patients with antibiotics?

The case of antibiotics brings us to a second example of conflict of interest bias: homeopathy. Take for example many organic livestock farmers: they often refuse to give their sick animals antibiotics, claiming that those antibiotics are harmful and merely serve the profits of the pharmaceutical industry. Instead, those farmers use homeopathic therapies for their animals. However, there is a scientific consensus that homeopathy is less effective than antibiotics in treating bacterial infections (at most, homeopathy has a placebo effect). Moreover, a lot of studies that demonstrate the effectiveness of homeopathy have conflicts of interest: for example, researchers were paid by companies that sell homeopathic products.

The case of organic farming brings us to a third example of conflict of interest bias. Proponents of organic food claim that a lot of scientific studies that indicate that organic food is not better for our health and the environment, were performed by scientists who had conflicts of interest with the non-organic agricultural industry (e.g. with companies like Monsanto). Those proponents overestimate the conflicts of interest of the counterparty and they underestimate the conflicts of interest of their own party. A lot of scientific studies that claim that organic food is better for our health and the environment, or that non-organic genetically modified crops are unsafe, were performed by scientists who had conflicts of interest with the organic agricultural sector. Some infamous names include: Charles Benbrook (had undisclosed conflicts of interest: worked at the Organic Center and research was funded by Whole Foods, Organic Valley, United Natural Foods, Organic Trade Association and others), Gilles-Eric Séralini (consultant of Sevene Pharma that sells homeopathic antidotes against pesticides), Judy Carman (her anti-GMO research was funded by Verity Farms and published in a journal sponsored by the Organic Federation of Australia) and the Rodale Institute (a research institute that has a commercial interest in organic farming by selling organic products). These (often undisclosed) conflicts of interest are at least as bad as the conflicts of interest of e.g. Monsanto selling GMOs and pesticides. Imagine how environmentalists would react if proponents of GMOs came up with studies that had similar conflicts of interest with Monsanto.

The case of Monsanto brings us to a fourth example of conflict of interest bias. Monsanto sells the herbicide glyphosate, so of course it wants to deny that glyphosate is toxic or carcinogenic. Opponents of glyphosate warn that studies showing the safety of glyphosate are biased due to the close ties between scientists and the pesticide industry. However, there are potential financial conflicts of interest among the opponents of glyphosate as well. For example, farmers who developed non-Hodgkin lymphoma and who support the environmentalist cause against glyphosate aim for compensation fees from Monsanto.

Speaking about compensation fees, a fifth serious example of conflict of interest bias can be seen in the antivaccination movement. Opponents of vaccines often claim that studies demonstrating the safety and effectiveness of vaccines are invalid because they are supposedly influenced by the pharmaceutical industry that wants to profit from selling vaccines. However, a lot of members of the antivaccination movement also have conflicts of interest: they want compensation fees from the pharmaceutical companies for the damages they allegedly incurred from vaccines. One influential person in the antivaccination movement is Andrew Wakefield, the author of an infamous study about an alleged link between measles vaccines and autism. Wakefield’s research about the MMR-autism connection contained undisclosed conflicts of interest, because he was paid by lawyers who were suing for vaccine injuries. Of course, Wakefield’s studies could help those lawyers in the lawsuits against the vaccine producers. As with the other examples of conflict of interest bias, antivaccination activists focus on the conflicts of interest of their opponent (the pharmaceutical industry) and deny or minimize the conflicts of interest on their own side (Andrew Wakefield).

A sixth example is the problem of male privilege. Feminists often accuse men who are critical of feminist positions of having a conflict of interest, in particular a male privilege that they want to protect. However, if male privilege leads to a bias amongst men because they are privileged, it also leads to a bias amongst women. If men want to protect their privilege and are therefore less reliable or credible in some matters, we can just as well say that women want to achieve privilege and are therefore also less reliable in those matters. Everyone can be said to have a conflict of interest: those who have power want to keep it, those who do not have power want to achieve it. It is not obvious why the latter would have a weaker conflict of interest and would be more credible.

What about climate change? It is well known that many deniers of anthropogenic climate change have financial conflicts of interest with the fossil fuel industry. But deniers sometimes claim that believers can have two kinds of conflicts of interest. First, climate scientists could have been paid by the low carbon, clean energy industries (e.g. nuclear power and renewable energy sectors), and second, climate scientists could be spreading doomsday scenarios of global warming as a means to ask for more government funding for more research, to secure their jobs. The former conflict of interest with the clean energy industry is expected to be very weak, because that industry is much smaller than the fossil fuel industry, and it becomes less and less likely because fossil fuel companies are investing more and more in nuclear power and renewable energies. The latter conflict of interest is unlikely, because it results in a huge conspiracy theory where all climate scientists have to mislead the governments. That requires an impossible coordination among scientists and a huge effort to keep the truth secret. A few decades ago, climate scientists warned about global cooling and a new ice age. That would have been a better story for securing more future government funds for research, because in that story, humans are not responsible for the climate catastrophes, which means that this story would not have met opposition from huge industries like the fossil fuel industry.

Suppose both believers and disbelievers, proponents and opponents, have conflicts of interest. What should we do then? We can no longer use the easy strategy of looking for conflicts of interest and discrediting all studies that have them. Luckily, another easy strategy remains: check whether there is a scientific consensus. And we can rely on the more difficult strategy of looking at the content of the scientific studies instead of the backgrounds of the authors.

The most important lesson we can learn from the conflict of interest bias is that we have to be fair in our judgments and acknowledge that people (scientists) who hold our own views can also have conflicts of interest. That means we should tolerate some level of conflict of interest. For example, it is important that environmental organizations have the most reliable scientific knowledge, and therefore those organizations should invite scientists to give advice or to speak at environmentalist conferences. In order to attract enough top scientists, it might be an effective, necessary and therefore good idea to pay those scientists consultancy and speaking fees. Does that mean that those top scientists are no longer allowed to sit on governmental scientific panels or advisory boards, due to their financial conflicts of interest with the environmental organizations? Of course, those organizations would not complain about the panel memberships of those scientists. If those top scientists are not allowed on the panel, the government risks ending up with a small panel of only a few scientists with lower levels of expertise. Now suppose those scientists had similar conflicts of interest with industry (e.g. giving paid presentations at conferences sponsored by the industry). Then the environmental organizations would object. That is unfair. Furthermore, it is also important that industry can rely on good advice from scientists: the scientists have the knowledge, the companies have the capital, and both are necessary to produce good products. So we should be more tolerant or nuanced towards some conflicts of interest.

Scientists are not only susceptible to financial conflicts of interest, but also to all kinds of cognitive biases. How reliable is a scientist who warns against a synthetic chemical product, if that scientist has a naturalness bias and is a member of an environmentalist organization? How reliable is a scientist who favors a new therapy against an untreatable disease if that scientist has a family member with that disease?

Luckily, the scientific method (with peer reviewed research, statistical methods to detect biases, other scientists testing and retesting hypotheses,…) is the best strategy we have to avoid those biases of individual scientists. Instead of focusing on the biases of an individual scientist, we should look at the broader scientific picture, the validity of studies, the statistics, the meta-analyses, the scientific consensus views, the positions of scientific academies…


The problem of counting persons and conscious experiences

Introduction

All else equal, saving two people is better than saving one person, two hours of pain is worse than one hour of pain and an election candidate that has two votes from two people is more likely to win the election than a candidate that has one vote from one person. In all important moral theories, counting people (e.g. votes) and conscious experiences (e.g. pain) matters. But this counting is not always easy or straightforward, and perhaps in the future it will become even more difficult. Two examples demonstrate the problem of counting consciousness.

The temporal counting problem

Different sentient beings can have different brain processing speeds. Consider vision: humans can see at most about 60 flashes of light per second; showing flashes at a higher frequency results in seeing a continuous light. The flicker fusion rate measures how fast a light has to be switched on and off before one sees it as a continuous light. A fly has a flicker fusion rate about four times higher than a human’s, which means a fly can see roughly 250 images or flashes per second. This explains why it is so difficult to swat a fly: a fly sees everything in slow motion, four times slower than we do.

Perhaps not only vision, but also our conscious experiences have a maximum frequency. What is the smallest time interval that we can experience? Suppose an experience of pain is turned on and off. Suppose at this moment you do not feel pain, a second later you feel pain, another second later the pain is gone. That means every second you can have a different conscious experience. But what if we increase the frequency? At this moment you do not feel pain, a millisecond later there is a pinprick. Another millisecond later the needle is removed, and so on. Now you might feel a slight, continuous pain instead of different pain pulses, which means you cannot consciously distinguish milliseconds.

Suppose the flicker fusion rate of your consciousness is 60 experiences per second, as with vision. It is as if you have an internal clock whose hand rotates full circle in 60 steps per second, and every position of the hand corresponds to a different conscious state. You can have at most 60 different conscious experiences per second. But some insects may have faster internal clocks: in one real second, they can have 250 different conscious experiences. If you experience pain for one second, you actually have 60 conscious states of pain. But if insects can feel pain and they feel pain for one second at this higher brain speed, that corresponds to 250 conscious states of pain. It is as if you experienced about four seconds of pain.
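
This arithmetic can be made explicit in a minimal sketch, assuming (purely for illustration) that consciousness ticks at a fixed clock rate; the rates below are the hypothetical numbers from the text, not established measurements:

```python
# Temporal counting sketch: the number of instantaneous conscious
# states is the duration multiplied by the (assumed) clock rate.

def conscious_states(duration_s: float, clock_rate_hz: float) -> float:
    """Number of distinct instantaneous conscious states in a time span."""
    return duration_s * clock_rate_hz

HUMAN_RATE = 60    # assumed conscious 'flicker fusion rate' of a human
INSECT_RATE = 250  # assumed rate of a fast insect brain

human_states = conscious_states(1.0, HUMAN_RATE)    # 60 states
insect_states = conscious_states(1.0, INSECT_RATE)  # 250 states

# One real second of insect pain contains as many pain states as
# 250 / 60, or roughly 4.2, seconds of human pain.
equivalent_human_seconds = insect_states / HUMAN_RATE
print(human_states, insect_states, round(equivalent_human_seconds, 1))
```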

Perhaps the tiny brains of insects indicate that the intensity of their pain is lower than the intensity of pain experienced by animals with larger brains. But if their brains are faster, they experience pain in slow motion, meaning that a second of pain appears to last longer for insects. Is one second of intense pain for a human as bad as one second of equally intense pain for an insect with a faster brain?

In the future, conscious artificially intelligent computer programs and whole brain emulations of humans may become possible. Suppose we program your whole brain on a computer. If we run this brain emulation, the computer might generate the same conscious experiences that you are having right now: the computer program becomes conscious. Suppose that emulated person commits a crime and receives a sentence of one year of imprisonment. Now suppose we run the computer program ten times faster. Perhaps the emulated person then has the same experience as you would have sitting in prison for ten years.
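
The same clock-rate assumption expresses the speed-up intuition; the tenfold speedup is the thought experiment's number, not a benchmark of any real system:

```python
# Speed-up sketch for a whole brain emulation, assuming subjective
# duration scales with the clock speed of the substrate.

def subjective_years(real_years: float, speedup: float) -> float:
    """Subjective time experienced by an emulation running 'speedup'
    times faster than a biological brain."""
    return real_years * speedup

sentence = 1.0  # one real year in prison
speedup = 10.0  # emulation runs ten times faster (assumption)
print(subjective_years(sentence, speedup))  # 10.0 subjective years
```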

The temporal counting problem asks: within a given time interval, how many conscious experiences (or temporally distinct persons or conscious minds) are there? If time is a continuum, can we say that one second involves an infinite number of instantaneous experiences? Or are there only 60 experiences for a human, 250 for a fly, and a billion for a brain emulation that runs a billion times faster than its corresponding biological brain?

The spatial counting problem

The possibility of whole brain emulations and conscious artificial intelligence also raises questions about copying minds. It is easy to make copies of computer programs: we can have two different computers (two pieces of hardware) that run the same software. When making a copy of a computer program, we add extra hardware that allows us to run the copied program, and this extra hardware is spatially separated from the original computer. Like a computer, your brain is hardware and your mind or consciousness is the software. Asking how many conscious minds or persons there are then becomes like asking how many Microsoft Windows programs there are: the number of spatially separated Windows computers (the hardware) or the one Windows program (the software)?

The possibility of copying minds makes counting persons or conscious experiences very difficult. Here is a thought experiment. Suppose a person, Alex, is sitting in a room. Now we make a full copy of that room, including a copy of Alex's brain. In the copied room there is a person with the exact same brain structure as Alex, with the exact same experiences (the same visual stimuli, the same shape of chair he sits in, the same sounds, and so on). Most people believe there are now two persons, call them AlexA and his copy AlexB, even if those two persons have the same experiences. The persons are different because they could have had different experiences: it is possible to cause pain to AlexB without causing pain to AlexA. This means that AlexB has a right to vote, and so does AlexA. Two people, two votes. It also means that saving both AlexA and AlexB is better than saving only AlexA.

Suppose we could enlarge the skulls of both persons to twice their original size, and stretch the neurons in their brains accordingly. Suppose the neurons keep firing in the same way. If brains generate consciousness and conscious experiences are uniquely determined by the neural firing pattern, the experiences generated by the stretched-out brains remain the same. As the skull of AlexA is now larger, there is room for extra neurons. Suppose we put all the neurons of AlexB inside the skull of AlexA, placed in their original order such that their firings remain the same.

How many persons or conscious experiences are now generated inside the skull of AlexA? All we did was move AlexB's brain into AlexA's skull, and moving a brain does not delete consciousness, so we can say there are now two people inside AlexA's skull. But suppose we now merge one neuron of AlexA with the corresponding copied neuron of AlexB. Both neurons were already lying close to each other, so suppose we replace them with one big neuron that again fires in the exact same way as the original neurons of AlexA and AlexB. Then we merge a second neuron of AlexA with its copy from AlexB, and so on. After we have merged all neurons, we end up with one big brain that has the exact same neural firings and patterns as the original brain of AlexA. Again, if brains generate consciousness and conscious experiences are uniquely determined by the neural firing pattern, the big brain has the same experience as the original brain.

How many people are there inside AlexA's skull now? If you believe there are still two persons, you could equally say that inside your head, at this moment, your brain generates two conscious experiences and there are two persons. After all, we can reverse the process: split all your neurons in half along their length and disentangle them into two separate brains. Furthermore, we could include a third copy, AlexC, so we could as well say that there are three persons inside AlexA's skull. Any number is possible. The number of persons becomes ill-defined, unless you answer that the one big brain in AlexA's skull generates only one consciousness. That means that somewhere along the line, as neurons are merged, the two persons really unite into one person.

An analogy with playing cards

An analogy with playing cards might clarify the above thought experiment. Just like the question of how many persons or conscious experiences there are, we can ask how many aces there are. There are at least three possible answers. At the most abstract or conceptual level, there is only one ace, corresponding to the idea of an ace: the first card. At the functional level, there are four aces, one for each of the four suits (clubs, hearts, spades and diamonds). At the most concrete or material level, there are thousands of aces: if there are a thousand decks of cards in the world and four aces per deck, there are 4000 aces.

The material level corresponds to the hardware; the conceptual and functional levels correspond to the software. However, the number of aces at the material level (the hardware) is not well defined. I could take an ace of spades and cut it in half. Does this mean there is now one more ace in the world? You can easily say yes. But you can also say no, because the two pieces of the ace of spades still serve the same function in a card game such as solitaire. Now suppose I cut all 52 cards in half. We are then able to play two different games of solitaire: use the left halves of the cards for one game and the right halves for the other. Note that to play two different games of solitaire, we did not have to cut the joker cards in half.

The analogy between playing cards and brains goes as follows. The number of decks of cards corresponds to the number of brains. One deck of cards playing one game of solitaire corresponds to one brain generating one conscious mind. The set of four aces in one deck corresponds to the set of neurons that generate consciousness in one brain. The other 48 cards correspond to other crucial neurons (e.g. optic nerve fibers), where 'crucial' means that they too must be split before we can generate two different persons. The remaining cards (the jokers) correspond to non-crucial neurons and body cells: they do not have to be split in order to create two minds.

The importance of causality

In the above thought experiment, we considered a process in which we started with two people (two conscious minds) who had the exact same experiences and ended up with one person. What determines this transition? How can we tell whether one or several persons are present? This is a crucial question in ethics. The answer has to do with causality. In the initial state with AlexA and AlexB, there are two people, because their two brains are causally independent, even when they happen to have the same neural firing patterns. We can give AlexB a different visual stimulus than AlexA, and from that moment the brain of AlexB will have a different neural firing pattern and hence generate a different experience than AlexA. On the other hand, when both brains are inside AlexA's skull and more and more neurons are merged, it eventually becomes logically impossible to give AlexB's brain a different neural firing pattern that generates a different conscious experience.

This is comparable to the decks of cards. Suppose we start with two complete decks, each playing solitaire, and suppose we play the same game with both: the same starting positions and the same choices. The two games develop in parallel. We can say that these are two games of solitaire (just like there are two persons AlexA and AlexB), because even if the games develop in the same way, we could make other choices such that the games start to diverge. Suppose we take a joker from one deck and glue it to the corresponding joker from the other deck, just like we merged one non-crucial neuron of AlexA with the corresponding neuron copy of AlexB. We can still play two different games of solitaire. However, once we merge (glue together), for example, the two kings of diamonds of the two decks, we are no longer able to play two different games. From that moment we only have one game, just like we have one consciousness in AlexA's skull after merging two crucial neurons.

If it is impossible to generate two different experiences at the same time in your brain, your brain generates only one consciousness. If a copy of your brain can generate a different experience, that copied brain creates a different person, even if you both happen to have the exact same experiences. The same goes for emulated brains on computers: if the hardware systems are sufficiently different that we can cause two different experiences, there are two persons, even if they happen to have the exact same experiences. If the hardware systems are so entangled that we cannot generate two different experiences, there is only one person present. And the same goes for temporal differences in consciousness. If it is possible to generate a painful experience a second after a non-painful experience, there are two different conscious experiences, separated in time by one second, just like two hardware systems can be separated in space by one kilometer. Even if at this moment you experience the exact same thing as a second ago, we can still say that the two instantaneous minds (your consciousness now and your consciousness a second ago) are different. However, if it is impossible to cause a painful experience within a millisecond after a non-painful experience, because your brain is too slow to register such rapid differences, those two instantaneous minds are the same and count as one.

Final remarks

In my ethical theory of variable critical level utilitarianism, I argued that we should maximize the sum of everyone's normalized relative utilities, where those utilities or preferences measure how strongly a person prefers a situation that we can choose. Here we also have to deal with counting consciousness, because we have to take the sum over different persons, so we have to answer the question of when two persons (two conscious minds) are different. In my view, each individual, instantaneous conscious mind can have a different utility function and hence a different normalized relative utility for the state that s/he experiences. If a human has 60 different instantaneous conscious minds per second, we have to take the sum over all 60 utilities during that second. If a fly has 250 minds during that second, the sum includes 250 utilities per second for that fly.
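
A minimal sketch of how the temporal counting problem enters the utilitarian sum, assuming constant utility per instantaneous state; the clock rates and utility values are illustrative assumptions, not measured quantities:

```python
# Each instantaneous conscious mind contributes one (normalized
# relative) utility term, so faster minds contribute more terms
# per real second.

def summed_utility(clock_rate_hz: float, utility_per_state: float,
                   duration_s: float) -> float:
    """Sum of utilities over all instantaneous minds in a time span."""
    return clock_rate_hz * duration_s * utility_per_state

# One second of equally intense pain (utility -1 per state):
human_total = summed_utility(60, -1.0, 1.0)  # -60
fly_total = summed_utility(250, -1.0, 1.0)   # -250
print(human_total, fly_total)
```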

Another bizarre consequence of consciousness being the result of information processing from neural firings in the brain is that other very complex systems, such as the air in a large room, can perform the same information processing. The patterns of neural firings can be translated into streams of bits (ones and zeros) of information processing. If a room contains enough molecules, the positions and movements of the air molecules can also be described by an equally long stream of bits, and both streams can be mapped onto each other. For example, the sequence ‘011001000…’ representing your conscious experience generated by your brain can be mapped into the sequence ‘111011011…’ representing the air, by a very long rule: “the first bit 0 turns into 1, the second bit 1 stays 1, the third bit stays the same,…” This means that the information content of your brain (and hence the consciousness), and of anyone else’s brain with the same complexity, can be mapped into the information content of a sufficiently large room of air molecules. (Technically speaking, the information entropy of the room should be at least as large as the information entropy of your brain.) So you could say that the air in that room also generates a consciousness, and in fact generates all kinds of conscious experiences at once. This is a kind of panpsychism: consciousness is everywhere (in all sufficiently informationally complex systems) and in all forms; there is an infinite number of conscious minds.

However, the fact that the air in the room generates conscious experiences is not morally relevant, because we are not causally able to influence those experiences. If I want to give you a happy experience by playing your favorite song, I can causally influence your brain via sound waves that enter your ear and send signals to your brain. If I claim that the positions and movements of air molecules in the room also correspond with a process that generates your conscious experiences, you could ask me to do the same with the air in the room: make it happy. But that is impossible for me. Perhaps I would have to follow a very complex procedure, such as: “Shift molecule number 6479532 a little to the right, send molecule 362541115 towards molecule 65893547 if it is in the upper left corner, send those 5 upwards moving molecules downwards, wait five milliseconds and then…” That is not as simple as pushing the play button on a media player to generate the right sound waves travelling towards your ears. Furthermore, such a procedure is arbitrary, because there is no way to uniquely derive the procedure or determine that it really works.
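
The arbitrariness of such a mapping can be shown with a toy example: for any two equally long bit streams, a bit-for-bit rule always exists, because the rule can simply be read off from the two streams themselves (here as a flip/keep mask). The bit strings are the made-up examples from the text:

```python
# A mapping rule always exists between equally long bit streams,
# which is why it carries no explanatory weight: it is derived
# from the target rather than discovered independently.

def mapping_rule(brain_bits: str, air_bits: str) -> str:
    """Derive the per-bit rule (1 = flip, 0 = keep) turning one
    bit stream into the other."""
    assert len(brain_bits) == len(air_bits)
    return "".join(str(int(b) ^ int(a)) for b, a in zip(brain_bits, air_bits))

def apply_rule(bits: str, rule: str) -> str:
    """Apply a flip/keep mask to a bit stream."""
    return "".join(str(int(b) ^ int(r)) for b, r in zip(bits, rule))

brain = "011001000"
air = "111011011"
rule = mapping_rule(brain, air)  # '100010011': flip, keep, keep, ...
assert apply_rule(brain, rule) == air
print(rule)
```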

In summary: only the conscious experiences that we can causally influence matter morally.


Being rational about nuclear power

Disclaimer: in the past I took part in several actions against nuclear power. This article does not reflect the opinions of the environmental organizations I am and was involved in.

Effective environmentalism deals with the irrationalities in the environmental movement. These irrationalities are often caused by emotional attachments and include inaccurate beliefs that result in the choice of ineffective means to reach environmentalist ends (including values like health, safety, sustainability, intergenerational justice and biodiversity). Examples of irrationalities in the environmental movement include the naturalness bias, support for organic food and resistance to genetic modification. Sometimes environmentalist campaigns can backfire and cause more harm than good (such as the campaign to ban glyphosate). Effective environmentalism has some resemblance to ecomodernism.

In this article I want to present another irrationality of the environmental movement: its resistance to nuclear energy. The message is that campaigning against nuclear power is ineffective and sometimes counterproductive. There are much more effective campaigns, such as campaigns to support plant-based diets or economic measures like a green tax shift or a cap-auction-trade system for greenhouse gas emissions. The effectiveness of environmental organizations would improve if they stopped their nuclear campaigns and instead focused on more effective solutions.

The case in favor of nuclear power rests on the following two arguments.

  • Nuclear power causes fewer deaths from pollution and accidents than almost all other energy sources. Several sources mention that the deathprint of nuclear is much lower than the deathprint of fossil fuels, and even lower than the deathprint of most renewable energy sources such as solar and wind. The deathprint measures the number of deaths per kWh of electricity produced from a life cycle perspective, just like the environmental footprint measures the environmental impact per kWh. For example, over the past decades a trillion kWh of nuclear energy caused fewer than 100 human deaths, whereas the same amount of electricity from renewable sources (solar, wind and hydro) caused between 100 and 2000 human deaths, and from fossil fuels between 4000 and 100,000 deaths from air pollution and accidents (deaths from climate change and possible future accidents not included). Animal deaths (e.g. birds and bats killed by wind farms) are not included in these deathprint statistics. As a consequence, even if a ban on nuclear energy resulted in a large shift towards renewable energy sources, a small residual shift towards fossil fuels such as gas and coal (as happened in e.g. Japan after the Fukushima nuclear power station accident) would result in more deaths overall (see the numerical sketch after this list). By replacing fossil fuels, some researchers estimate, global nuclear power has prevented an average of 1.84 million air pollution-related deaths and has the potential to prevent another 7 million deaths in the future.
  • Nuclear power has one of the lowest carbon footprints of all energy sources. Several life cycle analyses (e.g. a meta-analysis of low carbon technologies by Ricardo-AEA 2013, values from the IPCC, and the UK Parliamentary Office of Science and Technology 2011) show that nuclear emits less than 10 grams of CO2 per kWh: comparable to wind energy, about 10 times lower than photovoltaic (solar) energy and around 100 times lower than fossil fuel energy (gas and coal). As a consequence, even if a ban on nuclear energy resulted in a large shift towards renewable energy sources, a small residual shift towards fossil fuels such as gas and coal would result in more greenhouse gas emissions overall and hence more future harm from climate change.
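
Here is the numerical sketch of the replacement argument mentioned above: even if a nuclear phase-out is mostly covered by renewables, a small residual shift to fossil fuels can dominate the death toll. The deathprints are the order-of-magnitude figures from the text (deaths per trillion kWh); the 90/10 replacement mix is a made-up assumption:

```python
# Back-of-the-envelope comparison of deaths per trillion kWh when
# nuclear is kept versus replaced by a renewables/fossil mix.

DEATHS_PER_TRILLION_KWH = {
    "nuclear": 100,      # upper bound from the text
    "renewables": 1000,  # mid-range of 100 - 2000
    "fossil": 50000,     # mid-range of 4000 - 100,000
}

def phaseout_deaths(renewable_share: float) -> float:
    """Deaths per trillion kWh after replacing nuclear with a mix
    of renewables and fossil fuels."""
    fossil_share = 1.0 - renewable_share
    return (renewable_share * DEATHS_PER_TRILLION_KWH["renewables"]
            + fossil_share * DEATHS_PER_TRILLION_KWH["fossil"])

print(DEATHS_PER_TRILLION_KWH["nuclear"])  # keep nuclear: ~100 deaths
print(phaseout_deaths(0.90))               # 90% renewables: 5900 deaths
```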

For non-experts, the case in favor of nuclear power is strengthened by extra supporting arguments from authority.

  • A majority of scientists are in favor of building more nuclear power plants. According to a Pew Research Center survey of a representative sample of scientists connected to the American Association for the Advancement of Science (AAAS), 65% of scientists favor building more nuclear power plants.
  • A lot of effective altruists are in favor of nuclear power. An example of an effective altruist organization in favor of nuclear power is Founders Pledge. Effective altruists use reason and scientific evidence to do the most good. As they are altruists, they have no personal (e.g. financial) conflicts of interest with any economic sector, including the nuclear power sector. The difference with a lot of environmentalists is that effective altruists use a lot of critical thinking and are highly aware of their own cognitive biases. They consistently look for scientific evidence and solid arguments and try to avoid fallacies and erroneous judgments. When it comes to very unlikely risks such as the risk of nuclear accidents, we have cognitive biases (e.g. the availability heuristic and hindsight bias) that make us overestimate the likelihood of those risks. Also, environmental organizations that have invested a lot in antinuclear campaigns are susceptible to the sunk cost fallacy, and as a consequence less willing to abandon those campaigns. Effective altruists, on the other hand, are more willing to change their minds and abandon ineffective actions.

An example of how a fearful, irrational reaction to nuclear energy can cause more harm is population relocation after nuclear accidents.

  • Population relocation after the nuclear reactor accidents of Chernobyl and Fukushima was largely unjustifiable and may have caused more harm than good, resulting in more deaths and a lower Life Quality Index. According to a 2017 study in the journal Process Safety and Environmental Protection, for a majority of communities in the 20 km relocation zone, evacuation after the Fukushima nuclear power station accident in 2011 resulted in more premature deaths (due to increased levels of stress, physical and mental exhaustion, increased suicide rates, elderly people needing nursing care, and decreased medical care from evacuating hospitals) than if everyone had stayed home. The stress of moving resulted in an estimated 1600 premature deaths in the first 3 years after the Fukushima accident. In 8 of the 12 evacuated communities in Fukushima, this corresponds to an average loss of life expectancy of one month per relocated person, more than the health risk from radiation exposure if people had stayed home. Moreover, looking at cost-effectiveness in terms of the J-value (the ratio of the actual sum of money spent on protection against a health risk to the maximum that it is reasonable to spend if the quality of life of those affected, measured by the Life Quality Index, is not to be compromised; see the sketch after this list), relocation was unjustified for 75% of the 335,000 people relocated after Chernobyl and for all of the 160,000 people relocated after Fukushima. In other words: the Life Quality Index of the relocated people is lower than if they had stayed home. The environmental movement may have contributed to this overreaction and excessive fear.
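
A minimal sketch of the J-value test described in the list above: a protective measure is deemed unjustified when J > 1, i.e. when the money actually spent exceeds the maximum that is reasonable given the Life Quality Index. The spending figures below are hypothetical placeholders, not the study's actual numbers:

```python
# J-value: ratio of actual protection spending to the maximum
# spending that does not compromise the Life Quality Index.

def j_value(actual_spend: float, max_reasonable_spend: float) -> float:
    """J > 1 means the measure harms overall life quality."""
    return actual_spend / max_reasonable_spend

# Hypothetical example: spending 4 billion where at most 1 billion
# is reasonable gives J = 4, so the measure is unjustified.
print(j_value(4e9, 1e9))    # 4.0 -> unjustified
print(j_value(0.5e9, 1e9))  # 0.5 -> justified
```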

Finally, the arguments against nuclear power are weak.

  • The nuclear waste problem is small. First, the hazardous nuclear waste produced by a person using nuclear energy (25 ml per year) is more than a thousand times smaller than the non-nuclear hazardous waste that person produces (around 80 kg per year). If we also count air pollution and greenhouse gases as hazardous waste, nuclear energy produces much less hazardous waste than fossil fuels. Second, the existing amount of nuclear waste is much larger than the newly produced waste: the nuclear industry has already produced more than 60,000 tonnes of used nuclear fuel and adds about 2000 tonnes per year. Risks (and the costs of avoiding risks) may not increase linearly with the amount of waste. Compare it with a bank that has a vault with gold. There is a security risk that the gold gets stolen, comparable to the risk that nuclear waste escapes from the storage sites. If it requires one guard to protect a vault containing one ton of gold, it does not necessarily require two guards if the vault contains two tons. The first units of gold (or nuclear waste) may generate the highest risks and marginal security costs. If there is a decreasing marginal risk, and if there is already a lot of nuclear waste, adding an extra 3% of waste increases the risk by less than 3% (see the sketch after this list). This means that the extra risk (and the extra, marginal security cost of avoiding risk) of an additional unit of nuclear waste may become comparatively small. Third, future technologies and new generations of nuclear power plants might be able to process the nuclear waste (this is at least theoretically possible according to the laws of physics). Fourth, keeping nuclear energy would influence society in such a way that different people will be born in the future than under a ban on nuclear energy. Those other people owe their lives to nuclear energy: without it, they would not have been born. From a certain population ethical point of view, one could say that if those people have lives worth living, they cannot complain about our decision to keep nuclear energy, even if they are confronted with our nuclear waste risks, and that makes keeping nuclear energy more permissible. Furthermore, keeping nuclear energy results in more economic growth, which allows for more scientific research and wealth accumulation, and hence more economic wealth, technological inventions and scientific knowledge for future generations, which makes it more likely that their lives will be worth living (and may even be better than ours).
  • The effect of civilian nuclear power on the risk of nuclear weapons proliferation is unclear. The data suggest there is little evidence that civilian nuclear power programs increase the likelihood that countries pursue weapons. There are even some arguments that civilian nuclear power might decrease the risks from nuclear weapons. First, nuclear power plants can use uranium and plutonium from nuclear weapons and therefore help with nuclear disarmament; nuclear power is a safe (more controlled) way to dismantle atomic bombs, so to speak. Second, civilian nuclear power might have countervailing political effects that limit the probability of proliferation. International conventions on civilian nuclear power increase the likelihood that a parallel nuclear weapons program is detected and attracts outside non-proliferation pressure. If a country with civilian nuclear power starts to produce nuclear weapons, it risks non-proliferation sanctions. Due to those trade sanctions, it becomes more difficult for that country to import nuclear fuels. As the country is economically dependent on nuclear power, these sanctions might be so economically damaging that the country prefers to avoid them by abolishing its weapons program. Finally, civilian nuclear power is not necessary to acquire nuclear weapons.
  • Future (third and fourth) generations of nuclear power technologies, such as molten salt thorium reactors, are safer, more cost-effective and more sustainable. They produce nuclear waste that remains radioactive for a shorter time, they yield more than 100 times the energy of current nuclear power, they use more abundant and easily accessible nuclear fuels, they can burn existing nuclear waste, and they are less susceptible to nuclear accidents (no meltdowns).
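
Here is the sketch of the decreasing-marginal-risk argument mentioned in the waste item above, under the toy assumption that total risk grows with the square root of the stockpile (any concave risk function would make the same point; the concavity itself is an assumption, not an established fact). The stockpile figures are the ones from the text:

```python
import math

def total_risk(waste_tonnes: float) -> float:
    """Toy concave risk model: risk grows sublinearly with stockpile."""
    return math.sqrt(waste_tonnes)

existing = 60_000.0  # tonnes of used fuel already produced (from text)
added = 2_000.0      # tonnes added per year (from text)

marginal = total_risk(existing + added) - total_risk(existing)
relative_increase = marginal / total_risk(existing)
print(f"{added / existing:.1%} more waste")  # 3.3% more waste
print(f"{relative_increase:.1%} more risk")  # ~1.7% more risk
```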
