David Ropeik is an international consultant in risk perception and risk communication, and an Instructor in the Environmental Management Program at the Harvard University Extension School. He is the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts and principal co-author of RISK: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You. He blogs for the Huffington Post and Psychology Today, and has written guest blogs for Scientific American, Climate Central, and Big Think. He founded the program “Improving Media Coverage of Risk,” was an award-winning journalist in Boston for 22 years, and was a Knight Science Journalism Fellow at MIT.
You are reading a piece in Nature, so you are probably fairly well-educated, and there is a better than even chance that you fancy yourself a fact-based thinker and reasonably rational. Meaning no disrespect, but that assumption is fanciful, at least when it comes to the perception of risk. Ambrose Bierce was right when he defined the brain as “the organ with which we think we think.” Research from diverse fields, and countless examples from the real world, have convincingly established that our perceptions of risk are an inextricable blend of fact and feeling, reason and gut reaction, cognition and intuition. No matter what the hard risk sciences may tell us the facts are about a risk, the social sciences tell us that our interpretation of those facts is ultimately subjective.
While this system has done a good job getting us this far along evolution’s winding road, it also gets us into trouble because sometimes, no matter how right our perceptions feel, we get risk wrong. We worry about some things more than the evidence warrants (vaccines, nuclear radiation, genetically modified food), and less about some threats than the evidence warns (climate change, obesity, using our mobiles when we drive). That produces what I have labeled The Perception Gap, the gap between our fears and the facts, which is a huge risk in and of itself.
The Perception Gap produces dangerous personal choices that hurt us and those around us (declining vaccination rates are fueling the resurgence of nearly eradicated diseases). It causes the profound health harms of chronic stress (for those who worry more than necessary). And it produces social policies that protect us more from what we’re afraid of than from what in fact threatens us the most (we spend more to protect ourselves from terrorism than heart disease)…which in effect raises our overall risk.
We do have to fear fear itself…too much or too little. So we need to understand how our subjective system of risk perception works, in order to recognize and avoid its pitfalls. Surprisingly few people are aware of how much we know about this system. (I’ve tried to summarize that knowledge in my book, How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.) Here’s a mad dash through the literature on risk perception:
• Neuroscience by Joseph LeDoux et al. has discovered neural pathways that ensure we respond to risky stimuli subconsciously and instinctively, before cognition kicks in. In the ongoing risk response that follows, the wiring and chemistry of the brain also ensure that instinct and affect (feelings) play a significant role, sometimes the primary role, in how we perceive and respond to danger. Put simply, the brain is built to subconsciously feel first and consciously think second, and to feel more and think less.
• The research of Daniel Kahneman et al. has discovered a mental toolbox (as Gerd Gigerenzer puts it) of heuristics and biases that we use to quickly make sense of partial information, turning a few facts into the full picture on which we base our judgment. These mental shortcuts operate subconsciously, outside (and often before) conscious reasoning. This research further confirms that we are far more Homo naturalis than Homo rationalis.
• The Psychometric Paradigm research of Paul Slovic et al. has revealed a suite of psychological characteristics that make risks feel more frightening, or less, the facts notwithstanding. These ‘risk perception factors’ include, for example, whether a risk is imposed or taken voluntarily, how much dread it evokes, and how much we trust those responsible for protecting us.
• Recent research on the theory of Cultural Cognition by Dan Kahan et al. has found that we shape our views on risks to agree with those of the groups with which we most strongly identify, based on our group’s underlying feelings about how society should operate. We fall into four general groups according to the sort of social organization we prefer, defined along two continua, represented as a grid. We all fall somewhere along these two continua, depending on the issue.
Individualists prefer a society that maximizes the individual’s control over his or her life. Communitarians prefer a society in which the collective group is more actively engaged in making the rules and solving society’s problems. (Individualists deny environmental problems like climate change because such problems require a ‘we’re all in this together’ communal response. Communitarians see climate change as a huge threat in part because it requires a social response.) Along the other continuum, Hierarchists prefer a society with rigid structure and class and a stable, predictable status quo, while Egalitarians prefer a society that is more flexible, that allows more social and economic mobility, and that is less constrained by ‘the way it’s always been’. (Hierarchists deny climate change because they fear the response means shaking up the free market–fossil fuel status quo. Shaking up the status quo is music to the ears of Egalitarians, who are therefore more likely to believe in climate change.)
—
That the perception of risk is inescapably subjective is disconcerting for those who place their faith in the ultimate power of Pure Cartesian “I think, therefore I am” Reason. But the robust evidence summarized above makes clear that:
1. Risk perception is inescapably subjective
2. No matter how well educated or informed we may be, we will sometimes get risk wrong, producing a host of profound harms.
3. In the interest of public and environmental health, we need a more holistic, and more realistic, approach to what risk means. Societal risk management has to recognize the risk of risk misperception, the risk that arises when our fears don’t match the evidence, the risks of The Perception Gap.
Letting go of our naïve fealty to perfect reason will allow us to recognize and understand these hidden dangers. Once brought to light, the harms to society from declining vaccination rates, the lost benefits of genetically modified food, the morbidity and mortality and societal costs of obesity – these risks and many more can be studied and quantified and managed with the same tools we already use to manage the risks from pollution or crime or disease. The challenge is not how to manage the risks of the Perception Gap. The challenge is to rationally let go of our irrational belief in the mythical God of Perfect Reason, and use what we know about the psychology of risk perception to more rationally manage the risks that arise when our subjective risk perception system gets things dangerously wrong.
Further Reading:
The neuroscience of risk perception – LeDoux, J. The Emotional Brain, Simon and Schuster, 1996
Heuristics and biases – Kahneman, D., Slovic, P. & Tversky, A. Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982
The Psychometric Paradigm ‘risk perception factors’ – Slovic, P. The Perception of Risk, Earthscan, 2000
An interesting article, but maybe Mr Ropeik is himself being subjective in some of his risk assessments? For example, in saying we worry about genetically modified food more than we should, how does he know that?
Speaking for myself, I do not worry too much about the effects of eating such foodstuffs but I do worry considerably about the impact on the environment of releasing such plants into the wild. The record of humankind’s previous involvements with biological introductions both deliberate and inadvertent is nothing short of disastrous, and surely more than justifies a high level of assessed risk, does it not?
I suppose these subjective assessments, however inaccurate they may be, do at least support his central hypothesis including at least two of the three numbered points above 🙂