The identifiable victim effect is a cognitive bias (or moral illusion) whereby the incentive to help someone is stronger when the helper can identify the person in need. For example, people donate more money to a charity when they know the identity (e.g. the name) of the person being helped than to a charity that helps anonymous victims, where donors do not know whom they help.
Here I hypothesize that a similar cognitive bias exists when it comes to preventing or solving problems: the identifiable problem effect. For example, if you drive extremely fast and recklessly, you know that you risk an accident. You can make a good guess about the place and time of the accident: the busier the place or the more slippery the road, the more likely the accident; the faster you drive, the sooner it happens. Being able to guess where and when an accident could occur gives you a strong incentive to drive more slowly and safely.
However, a preventive car engine check is different. If you don't check the engine, you cannot guess where and when an accident could occur. It is even possible that the engine is fine and you will never have an accident; in that case, checking the engine would have been superfluous. If you do check the engine and no accident happens, you will never know whether the check prevented one. You cannot identify the avoided accident or tell when and where it could have taken place, because perhaps there would have been no accident at all.
The identifiable problem effect is the hypothesis that people have a stronger incentive to take preventive measures when the potential problem is more clearly identifiable (e.g. in terms of place and time). This effect is a cognitive bias, because it can result in taking too many preventive measures in one area and not enough in another.
This identifiable problem effect can have serious consequences, for example in the area of existential risks: risks that could derail society, kill (almost) everyone, or simply wipe out humanity. Some existential risks are identifiable. Take climate change: if greenhouse gas emissions keep rising, we can estimate when temperatures will increase to levels that endanger life. We already see temperatures rising, and we have computer simulations to estimate what will happen when and where.
Compare climate change with other existential risks, such as a global pandemic, a nuclear war, or new technologies such as unsafe artificial superintelligence. We can invest in preventive measures, just as with the car engine check, but in doing so we will never know whether we actually prevented a catastrophe, let alone when it would have happened. The potential problem of a future pandemic supervirus is an unidentifiable problem: we have no idea whether, when, or where that virus will strike and kill us all. It could be that the problem never occurs. Preventive measures can therefore seem futile or superfluous. We will not know whether we made progress in solving the problem, because the problem is unidentifiable.
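The point can be made concrete with a toy expected-value calculation. The sketch below uses entirely hypothetical numbers (probabilities, harms, and costs are illustrative assumptions, not estimates from this text): two risks have identical expected harm and identical preventive measures, and differ only in whether the problem is identifiable.

```python
# Toy expected-value comparison of preventive measures.
# All numbers are hypothetical, chosen only for illustration.

def expected_benefit(p_catastrophe, harm, risk_reduction, cost):
    """Expected net benefit of a measure that cuts the probability
    of a catastrophe by the fraction `risk_reduction`."""
    return p_catastrophe * harm * risk_reduction - cost

# Identifiable risk (e.g. climate change): we can estimate when and where.
identifiable = expected_benefit(
    p_catastrophe=0.10, harm=1e12, risk_reduction=0.5, cost=1e10)

# Unidentifiable risk (e.g. a pandemic supervirus): same expected numbers,
# but we would never observe the counterfactual catastrophe.
unidentifiable = expected_benefit(
    p_catastrophe=0.10, harm=1e12, risk_reduction=0.5, cost=1e10)

# A rational planner should value both measures equally; the identifiable
# problem effect predicts people will nevertheless favor the first.
assert identifiable == unidentifiable
print(identifiable)  # 40000000000.0
```

Under these assumed numbers the two preventive measures are exactly equally valuable in expectation, which is what makes any systematic preference for the identifiable one a bias rather than a rational trade-off.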
So we have two types of existential risks: identifiable and unidentifiable ones. Due to the identifiable problem effect, people tend to underinvest in preventive measures against the second type. Research in artificial intelligence safety, or global coordination to prevent pandemics and nuclear wars, is more neglected than e.g. climate change mitigation. This neglect could be very dangerous. It also means that investments in preventive measures against unidentifiable existential risks are highly valuable, even if we will never know whether those measures made a difference.