Exploiting the cognitive biases of altruists

How do you become an effective altruist? The answer is simple: by exploiting the cognitive biases of altruists. Learn about cognitive biases, see how they make altruists choose ineffectively, and then do what those biased altruists fail to do: make the altruistic choices that avoid those biases. People perform many altruistic deeds, but due to these biases they miss many opportunities to do more good. So here is a list of the most important biases that make altruists less effective.

Arbitrary categorization. We can categorize a set into a hierarchy of subsets, subsubsets of those subsets, subsubsubsets, and so on. For example, the set of all problems can be divided into subproblems, subsubproblems and so on. The group of all suffering patients can be divided into subgroups, subsubgroups and so on. In their cause prioritization, looking for the problem to focus on or the patients to help, altruists have a tendency to arbitrarily (without giving good reasons or following a rule) pick a level in the hierarchy of categories (e.g. the subsubsets) and then arbitrarily pick a specific element (e.g. a specific subsubset) at that level. This way of prioritizing leads to choices of problems and groups of patients where one has less impact. For example, altruists tend to focus on countries and prefer helping people in their own country. Or they tend to focus on groups of patients with a specific disease, or on individuals belonging to a specific species.

Advice to become more effective: perform your cause prioritization by first looking at the total set of all suffering (including all diseases and all causes of suffering), or the total group of everyone (including foreigners, non-human animals, future generations,…). Then categorize this total set into subsets and pick the subset where you can have the biggest positive impact. That choice of subset is no longer arbitrary, because it follows the rule of maximizing impact.
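To make this concrete, here is a toy sketch (the cause areas and impact numbers are made up purely for illustration) of prioritization that starts from the total set and picks a subset by an explicit rule instead of an arbitrary preference:

```python
# Hypothetical cause areas (subsets of the total set of all suffering) with
# made-up estimates of impact per extra unit of resources.
estimated_impact = {
    "global_extreme_poverty": 9.0,
    "farmed_animal_welfare": 12.0,
    "local_shelter_support": 2.0,
}

# The rule: pick the subset with the biggest estimated positive impact,
# rather than an arbitrarily chosen level or element of the hierarchy.
chosen_cause = max(estimated_impact, key=estimated_impact.get)
print(chosen_cause)  # farmed_animal_welfare
```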


Ingroup bias. As a result of arbitrary categorization, people have a tendency to prefer helping individuals of their own subgroup: people in their own country, individuals of their own species, or people who share their interests (e.g. patients with the same disease).

Advice to become more effective: help those individuals who are not considered ingroup members by other altruists. The problems faced by those individuals are more neglected.


Zero risk bias (related to the pseudocertainty effect and the Allais paradox). People have a stronger incentive to eliminate a small risk completely than to slightly reduce a bigger risk, even if reducing the bigger risk makes the world safer overall (in terms of reducing the aggregate of all risks). This zero risk bias is also a consequence of arbitrary categorization, because one can divide the total set (aggregate) of all risks into subsets of subrisks, subsubrisks,… and then focus on eliminating one specific subrisk.

Advice to become more effective: perform your cause prioritization by first looking at the total set of all risks. Then categorize this total set into subsets (subrisks) and pick the subset where you can have the biggest impact, in the sense of the strongest reduction of the aggregate of all risks.
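A toy calculation (with made-up probabilities of two risks assumed to be independent) shows how eliminating a small risk can reduce the aggregate of all risks less than slightly reducing a bigger risk:

```python
# Two independent risks with hypothetical probabilities.
risks = {"small_risk": 0.01, "large_risk": 0.10}

def aggregate_risk(risks):
    """Probability that at least one of the (assumed independent) risks occurs."""
    p_none = 1.0
    for p in risks.values():
        p_none *= 1 - p
    return 1 - p_none

option_a = dict(risks, small_risk=0.0)   # eliminate the small risk completely
option_b = dict(risks, large_risk=0.07)  # only shave 3 points off the large risk

print(aggregate_risk(risks))     # ~0.109 (baseline)
print(aggregate_risk(option_a))  # ~0.100
print(aggregate_risk(option_b))  # ~0.079: safer overall, yet no risk is zero
```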


Risk aversion (related to the certainty effect). People prefer safer bets with lower expected rewards over high-risk, high-reward bets. This means the value function is not linear in gains: winning one extra dollar is worth less when you have already gained a lot of money. But when it comes to helping others, the value function should be linear in gains: saving an extra life is equally valuable, no matter how many lives have already been saved. As a result of risk aversion, altruists prefer safe but low-impact actions with certain but small positive results over high-risk, high-impact actions that have a higher expected impact. When a group of effective altruists chooses high-risk, high-impact actions, most of those altruists have bad luck and cause no impact, but a lucky minority causes a lot of positive impact. This group as a whole does more good than a group of risk-averse altruists in which each member is sure to cause a little bit of good. For the effective altruists it does not matter who the lucky winners in the group are; they only care about the strategy that results in the most good overall.

Advice to become more effective: be risk-neutral when it comes to doing good and go for the high-risk, high-impact actions, because these are more neglected by other, risk-averse altruists.
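A minimal expected-value sketch (with hypothetical numbers) illustrates why a risk-neutral altruist would take the risky bet:

```python
# Hypothetical options: a guaranteed small impact versus a long shot with
# higher expected impact, valued on a linear "lives saved" scale.
safe_bet  = {"probability": 1.00, "lives_saved": 10}
risky_bet = {"probability": 0.01, "lives_saved": 2000}

def expected_lives_saved(bet):
    return bet["probability"] * bet["lives_saved"]

print(expected_lives_saved(safe_bet))   # 10.0
print(expected_lives_saved(risky_bet))  # 20.0
# A large group of altruists all taking the risky bet saves roughly twice as
# many lives in total, even though most individual members end up saving none.
```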


Loss aversion (related to the reflection effect and the framing effect). People have a stronger incentive to avoid losses than to obtain gains. Losses and gains are measured relative to a reference situation. If one takes as reference the situation where everyone dies, every life saved is a gain. But if one takes the situation where everyone lives, everyone not saved is a loss. When you are loss averse, your value function depends on a reference situation and is not symmetric in gains and losses: the positive value of gaining one unit of goodness is, in absolute terms, smaller than the negative value of losing one unit of goodness. This is irrational, because it depends on the framing of the problem, i.e. on the arbitrary choice of the reference situation. When N lives are at stake, framing a situation in terms of saving M of those lives (a guaranteed gain of M) should be valued the same as framing it in terms of letting N-M people die (a guaranteed loss of N-M people).

Advice to become more effective: be gain-loss neutral when it comes to doing good and consider actions that can cause losses when the expected net benefit (gains minus losses) is positive. For example, do not apply the precautionary principle too strictly to new technologies that have an expected net positive impact.
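To see how the framing changes a loss-averse valuation even though the outcome is identical, here is a toy sketch (the numbers and the factor by which losses are over-weighted are assumptions):

```python
N = 600  # lives at stake (hypothetical)
M = 200  # lives saved for certain

def loss_averse_value(outcome, reference, loss_weight=2.0):
    """Value an outcome relative to a reference point, over-weighting losses."""
    diff = outcome - reference
    return diff if diff >= 0 else loss_weight * diff

# The same outcome (200 survivors) under two framings:
gain_frame = loss_averse_value(M, reference=0)  # "200 people are saved" -> +200
loss_frame = loss_averse_value(M, reference=N)  # "400 people die"       -> -800
print(gain_frame, loss_frame)
# A gain-loss neutral valuation would give the same answer in both frames.
```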


Status quo bias. People tend to be conservative and prefer the status quo. They are reluctant to accept e.g. human cognitive enhancement, life extension or interventions in nature to improve wild animal welfare, because they prefer the current situation (the current cognitive level, life expectancy, natural processes,…).

Advice to become more effective: do the reversal test (ask whether you would also oppose a change in the opposite direction, e.g. a decrease in cognitive ability or life expectancy) to check whether your preference for (in)action depends on a status quo bias. Be less reluctant about e.g. genetic cognitive enhancement, life extension (fighting aging) and interventions in nature for wild animal welfare.


Scope neglect. In their cause prioritization, altruists often do not consider the size of the problem they focus on. On the contrary, there is often a positive correlation between the scope of a problem and its neglectedness (even among problems that are equally tractable, i.e. equally feasible to reduce). The bigger the problem, the less attention it gets. Local, relative poverty in rich countries gets more attention than global, extreme poverty. Shelter animal suffering gets more attention than livestock animal suffering, which in turn gets more attention than wild animal suffering.

Advice to become more effective: consider the scope of the problem and choose the biggest problems.


Identifiable victim effect. People prefer to help patients they know personally or victims they can identify. They choose to support a campaign that helps a specific patient, become a foster parent of a specific child, or adopt a specific dog.

Advice to become more effective: choose to support campaigns and actions where you cannot identify or personally know the patients or victims who benefit from your help. Those actions are often more neglected.


Identifiable problem effect. People have a stronger incentive to take preventive or precautionary measures when the potential problem is more clearly identifiable (e.g. in terms of place and time). This can result in taking too many preventive measures in one area (to prevent a small, identifiable risk) and not enough in another area (to prevent a big, unidentifiable risk). Unidentifiable risks are risks where you cannot know whether your preventive measures decreased or avoided the risk. This lack of identification or knowledge means that those risks are more neglected.

Advice to become more effective: choose to invest in preventive measures against unidentifiable risks, even if we will never know whether those measures made a difference.


Availability heuristic. People focus on problems that easily come to mind, e.g. because of media attention (e.g. terrorism, natural disasters).

Advice to become more effective: focus on problems that get less media attention and donate to lesser-known organizations (instead of e.g. disaster relief).


Groupthink (group conformity bias, bandwagon effect). People often follow other group members and adopt their beliefs. This can result in collective beliefs that are less accurate. For example, most people on the left of the political spectrum are in favor of organic food and fair trade, and most people in the environmental movement are against GMOs and nuclear power, even when those positions are not effective in terms of doing good.

Advice to become more effective: be less concerned about what other altruists believe. Use critical thinking and scientific evidence instead.


A/B effect (anti-experimentation bias). People are reluctant to treat interventions as experiments in which data from the group that receives the intervention (group A) are compared with data from the control group that does not (group B).

Advice to become more effective: do more experiments (e.g. randomized controlled trials) to estimate the effectiveness of interventions. Such scientific research into effectiveness is often neglected.
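A minimal sketch of such a comparison, using simulated (made-up) outcome data for an intervention group and a control group:

```python
import random
import statistics

random.seed(0)
# Simulated outcomes: the intervention group (A) is assumed to do slightly better.
group_a = [random.gauss(1.2, 1.0) for _ in range(100)]  # received the intervention
group_b = [random.gauss(1.0, 1.0) for _ in range(100)]  # control group

effect_estimate = statistics.mean(group_a) - statistics.mean(group_b)
print(effect_estimate)
# A real randomized controlled trial would also report uncertainty,
# e.g. a confidence interval or a significance test on this difference.
```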


Hyperbolic discounting. People are often inconsistent in how they discount future well-being and suffering. For example, the difference between helping someone now and giving the same kind of help next year is considered greater than the difference between helping someone in 100 years and giving the same kind of help a year later (i.e. in 101 years). In general, this results in too much discounting of the future.

Advice to become more effective: consider the far future (longtermism).
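The inconsistency can be made explicit with a small sketch (the discount parameters are arbitrary assumptions) contrasting hyperbolic discounting with exponential, time-consistent discounting:

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount factor for help delivered t years from now."""
    return 1.0 / (1.0 + k * t)

def exponential(t, r=0.05):
    """Exponential (time-consistent) discount factor."""
    return (1.0 - r) ** t

# How much value is lost by a one-year delay, now versus in 100 years:
print(hyperbolic(1) / hyperbolic(0))        # 0.50: a delay now halves the value
print(hyperbolic(101) / hyperbolic(100))    # ~0.99: the same delay barely matters later
print(exponential(1) / exponential(0))      # 0.95
print(exponential(101) / exponential(100))  # 0.95: the ratio stays constant
```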


Confirmation bias (related to overconfidence). People often have more confidence in a belief than can be justified by evidence or reason, and they consider new information selectively, in a way that affirms their prior beliefs. A confirmation bias is at play when altruists are unwilling to accept negative evidence against their project or idea. As a result, altruists can hold inaccurate beliefs, which leads them to select ineffective means to help others.

Advice to become more effective: consider all new information in an impartial way, avoid strong feelings about your beliefs, and update your confidence levels according to new evidence.
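Updating a confidence level can be made concrete with Bayes' rule; here is a minimal sketch with made-up probabilities:

```python
def update_confidence(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior confidence in a belief after observing the evidence (Bayes' rule)."""
    p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false
    return prior * p_evidence_if_true / p_evidence

prior = 0.8  # initial confidence that your project is effective
# A study result that would be unlikely if the project worked, likely if it did not:
posterior = update_confidence(prior, p_evidence_if_true=0.1, p_evidence_if_false=0.6)
print(posterior)  # 0.4: the honest response is to lower your confidence
```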


Commitment bias (sunk cost fallacy). People have a tendency to keep investing in a project when they have already put a lot of effort into it, even if new evidence shows that the project is much less effective than other opportunities.

Advice to become more effective: dare to quit projects, change jobs, do something else, be flexible, and consider projects as learning experiences. Avoid overly strong emotional attachments to current projects.
