The anti-experimentation bias

People should not be used as test objects, which means that using someone in an experiment against his or her will is morally problematic and requires a very strong justification. However, many people are also reluctant to perform medical, economic or social experiments even when the people involved (the test subjects) are not treated against their will, or when they can consistently want the experiment to be done (i.e. when the experiment is in line with their strongest moral values and goals). This is an example of an anti-experimentation bias.

Consider randomized controlled trials in development economics or in medicine. The population is randomly divided into a treatment group and a non-treatment control group. The treatment group gets a treatment (e.g. a development intervention or a new drug). If we see different outcomes between the two groups, we can conclude that the treatment has a causal effect. Such randomized controlled trials are often considered immoral. Of course, if we already know that the treatment works and has good consequences, and if we could also give the treatment to the control group, then having a non-treatment control group means withholding an effective cure from those people. That would indeed be immoral.
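
To make the logic concrete, here is a minimal sketch of an RCT in Python. All numbers (village count, baseline malaria rates, the size of the treatment effect) are made up purely for illustration; the point is only that random assignment lets a simple difference in mean outcomes estimate the causal effect.

```python
import random
import statistics

# Illustrative sketch of a randomized controlled trial (RCT).
# All figures below are hypothetical, not real data.

random.seed(42)

n_villages = 200
villages = list(range(n_villages))

# Randomization: each village has an equal chance of ending up in the
# treatment group (receives bed nets) or the control group (does not).
random.shuffle(villages)
treatment_group = set(villages[: n_villages // 2])

def simulate_malaria_rate(treated):
    """Hypothetical outcome: malaria cases per 1000 people.
    The assumed true effect of bed nets is a reduction of 15 cases."""
    baseline = random.gauss(60, 10)
    effect = -15 if treated else 0
    return baseline + effect + random.gauss(0, 5)

treated_outcomes = [simulate_malaria_rate(True) for v in villages if v in treatment_group]
control_outcomes = [simulate_malaria_rate(False) for v in villages if v not in treatment_group]

# Because assignment was random, the difference in mean outcomes is an
# unbiased estimate of the causal effect of the intervention.
estimated_effect = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated effect on malaria rate: {estimated_effect:.1f} per 1000")
```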

However, in many cases we do not yet know whether the treatment will work, and in many cases we do not have enough resources (time, money) to give everyone the treatment. In those cases the experiment is no longer immoral. Consider the distribution of antimalarial bed nets as a development project. Those bed nets are costly, so it is impossible to give bed nets to all the people in all the villages in all poor tropical countries. Now suppose we want to study the effectiveness of this intervention by doing a randomized controlled trial (RCT). A lot of development organizations dislike such RCTs, because they do not want to treat poor people as test objects. What they fail to realize, however, is that they are always doing an experiment. An RCT requires two things: randomization and a control group. But every project with insufficient resources (i.e. not enough resources to cover everyone with the treatment) already has a control group. Some villages in poor tropical countries are in the treatment group and receive bed nets; other villages are in the control group and do not receive bed nets due to insufficient funding (perhaps they receive another intervention, which allows us to compare the effectiveness of bed nets with that of the other intervention). What happens in practice is that development organizations only look at the results of the treatment group. They do not compare those results with the outcomes in the control group; they simply neglect the control group. This is a waste of data. If we already have a control group, there is no justification for refusing to look at the control group data.

What about the randomization? How do development organizations decide who gets the treatment? Which villages should get the bed nets? We can target the poorest, most affected populations, but even then we often do not have enough resources to cover that whole population. Consequently, there is always some arbitrariness in selecting the treatment group. Development organizations sometimes try to give arguments for their selection of the treatment group, but this is often little more than rationalization. If this arbitrariness is unavoidable, one might as well opt for full randomization, which makes the scientific experiment more robust.
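
As a sketch of what that would look like in practice: if the budget only covers some of the eligible villages anyway, the unavoidable arbitrary choice can simply be replaced by a random draw. The village names and budget figure below are hypothetical.

```python
import random

# Budget-constrained random selection: the unfunded villages are not just
# "left out", they form a ready-made control group for later evaluation.
# Village names and the budget figure are hypothetical.

eligible_villages = [f"village_{i:03d}" for i in range(150)]  # poorest, most affected villages
budget_covers = 60                                            # nets for only 60 villages (assumed)

random.seed(1)
treatment_villages = random.sample(eligible_villages, budget_covers)
treated = set(treatment_villages)
control_villages = [v for v in eligible_villages if v not in treated]

print(len(treatment_villages), "villages treated,",
      len(control_villages), "villages in the implicit control group")
```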

Instead of wasting the control group data and rationalizing the choice of the treatment group, we could learn much more valuable information about the effectiveness of interventions if we accepted that we have many opportunities for doing full experiments. Not doing the full experiments might be harmful, because we would stick to less effective interventions.

We often have an anti-experimentation bias in other areas as well. Suppose we have new ideas about economic policies (e.g. a basic income), democratic reforms (e.g. futarchy, approval voting or epistocracy), education reforms, social projects, agricultural practices, or other areas. One of two things can happen: either we are not sure whether the new idea is effective, or we feel very confident that it will work. In the former case, we are often reluctant to do experiments because of a status quo bias: we want to keep the current situation. That means our experiment has no treatment group. In the latter case, we are often overconfident, which means we want the new idea to be implemented everywhere immediately. That means our experiment has no control group. In both cases we miss the opportunity to do experiments that give us valuable information about the effectiveness of the new ideas.

Considering all new reforms as experiments has another advantage: we remain flexible and open-minded. We allow ourselves to learn from the results and to adjust or abandon the new idea if it turns out to be ineffective. An example is organic agriculture: producing and buying organic food was important, because we could learn from this new agricultural practice. However, organic farming appears to be an experiment with a negative result: after decades of farming and hundreds of studies, organic farming still does not appear to be significantly better for the environment or our health. That does not mean that buying organic food was a bad or ineffective choice: if there had been no consumers, there would have been no experiment. But it is important to learn from experiments. If people do not consider new practices such as organic farming as experiments, because of their anti-experimentation bias, they tend to become too rigid or dogmatic in their beliefs. People might keep supporting organic farming in the unjustified belief that it is effective.

In summary, we often have an anti-experimentation bias: we fail to consider new practices as the experiments they are. Sometimes we have a status quo bias, which means we are running an experiment in which the control group covers the whole population (no treatment group). Sometimes we have an overconfidence bias, which means we are running an experiment in which the control group is absent. And in the cases where we do have a control group due to resource constraints (limited time and money), we are reluctant to look at the data from that control group. The anti-experimentation bias has two negative consequences. First, we do not learn valuable new information about the effectiveness of the new practice, because the experiment has a bad design (either a 100% control group, a 0% control group, or a loss of data from the control group). Second, even when we can learn valuable information, we are reluctant to change our minds about the effectiveness of the new practice.
