The rise of anomalistic psychology – and the fall of parapsychology?

Professor Chris French is the Head of the Anomalistic Psychology Research Unit in the Psychology Department at Goldsmiths, University of London. He is also a Fellow of the British Psychological Society and of the Committee for Skeptical Inquiry, and a member of the Scientific Advisory Board of the British False Memory Society. His main current area of research is the psychology of paranormal beliefs and anomalous experiences. He frequently appears in the media casting a skeptical eye over paranormal claims. He edited The Skeptic magazine for more than a decade and sometimes writes for the Guardian’s online science pages.

Ever since records began, people have reported strange experiences that appear to contradict our conventional scientific understanding of the universe. These have included reports that appear to support the possibility of life after death, such as near-death experiences, ghostly encounters and apparent communication with the dead, as well as claims by various individuals that they possessed mysterious powers such as the ability to read minds, see into the future, obtain information from remote locations without the use of the known sensory channels, or move objects by willpower alone. Such accounts are accepted as veridical by most of the world’s population in one form or another, and claims relating to miraculous healing, alien abduction, astrological prediction and the power of crystals are also accepted by many. Belief in such paranormal claims is clearly an important aspect of the human condition. What are we to make of such accounts from a scientific perspective?

Should we accept at least some of these claims more or less at face value? That is to say, should we accept that extrasensory perception (ESP), psychokinesis (PK), and life after death are all real? Parapsychologists have systematically investigated such phenomena for around 130 years but have so far failed to convince the wider scientific community that this is the case. The eminent scientists and intellectuals who founded the Society for Psychical Research in 1882 were convinced that, with the tools of science at their disposal, they would settle the issue one way or another within a few years. Clearly, that has not happened. Instead, parapsychology has been characterised by a series of ‘false dawns’ during which it has been declared that at last a technique has been developed which can reliably show under well-controlled conditions that paranormal effects are real. With time, however, the technique falls out of favour as subsequent research fails to replicate the initially reported effects and methodological shortcomings become apparent.

The latest candidate for such a ‘false dawn’ is a series of relatively straightforward experiments reported by Daryl Bem in the prestigious Journal of Personality and Social Psychology. In eight of the nine experiments, involving more than a thousand participants in total, Bem reported significant results suggesting that human beings are able in some way to sense events before they happen. For example, the study which produced the largest effect size appeared to show that participants are able to recall more words if they rehearse them than if they do not – even if the rehearsal does not take place until after recall has been tested! As so often happens, these controversial findings received widespread coverage in the mainstream science media. However, subsequent attempts at replication have failed, including a study involving three independent replication attempts carried out by Richard Wiseman (University of Hertfordshire), Stuart Ritchie (University of Edinburgh), and myself (Goldsmiths, University of London).

If paranormal forces really do not exist, how are we to explain the widespread belief in them and the sizeable minority of the population who claim to have had direct personal experience of paranormal phenomena? One possible answer is that there are certain events and experiences which may appear to involve paranormal phenomena but which can in fact be fully explained in non-paranormal, usually psychological, terms. This is the approach adopted by anomalistic psychologists. In general, anomalistic psychologists attempt to explain such phenomena in terms of known psychological effects such as hallucinations, false memories, the unreliability of eyewitness testimony, placebo effects, suggestibility, reasoning biases and so on. It is noteworthy that anomalistic psychologists have, in just a few decades, produced many examples of replicable effects that adequately explain a range of ostensibly paranormal phenomena.

Anomalistic psychology is definitely on the rise. Not only is it now offered as an option on many psychology degree programmes, it is also an option on the most popular A2 psychology syllabus in the UK. Every year more books and papers in high-quality journals are published in this area, and more conferences and symposia relating to topics within anomalistic psychology are held. There is no doubt that anomalistic psychology is flourishing.

And what of parapsychology? The health of this discipline is somewhat harder to assess but apart from the occasional ray of hope offered by the latest false dawn, the situation does not look encouraging for parapsychologists. Funding for such research is inevitably more difficult to obtain in times of economic uncertainty. Scarce research funding will be invested in areas where the probability of success is high – and the history of parapsychology shows all too clearly that studies in this area often involve huge investments of time and resources and produce nothing in return. Without a genuine breakthrough in the near future, can parapsychology survive for much longer? Without psychic powers, it’s difficult to know but I certainly would not bet on it.

The multitasking mind

Cross-posted with permission of OUPblog.


This week’s guest blogger is Dario Salvucci, a professor of computer science and psychology at Drexel University, and author with Niels Taatgen of The Multitasking Mind. Dr. Salvucci has written extensively in the areas of cognitive science, human factors, and human-computer interaction, and has received several honors including a National Science Foundation CAREER Award.

If the mind is a society, as philosopher-scientist Marvin Minsky has argued, then multitasking has become its persona non grata.

In polite company, mere mention of “multitasking” can evoke a disparaging frown and a wagging finger. We shouldn’t multitask, they say – our brains can’t handle multiple tasks, and multitasking drains us of cognitive resources and makes us unable to focus on the critical tasks around us. Multitasking makes us, in a word, stupid.

Unfortunately, this view of multitasking is misguided and undermines a deeper understanding of multitasking’s role in our daily lives and the challenges that it presents.

The latest scientific work suggests that our brains are indeed built to efficiently process multiple tasks. According to our own theory of multitasking called threaded cognition, our brains rapidly interleave small cognitive steps for different tasks – so rapidly (up to 20 times per second) that, for many everyday situations, the resulting task behaviors look simultaneous. (Computers similarly interleave small steps of processing to achieve multitasking between applications, like displaying a new web page while a video plays in the background.) In fact, under certain conditions, people can even exhibit almost perfect time-sharing – doing two tasks concurrently with little to no performance degradation for either task.
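The computer analogy can be made concrete with a toy example. The sketch below is my own illustration in Python – not an implementation of the threaded cognition theory itself – and the task names are hypothetical. It interleaves small steps of two tasks in round-robin fashion, so both appear to progress at once even though only one step ever runs at a time:

```python
# A minimal sketch of interleaved ("threaded") processing, loosely analogous to
# the computer multitasking described above. Illustrative only -- this is not
# Salvucci and Taatgen's threaded cognition model.

def task(name, steps):
    """A task broken into many small processing steps."""
    for i in range(steps):
        yield f"{name}: step {i + 1}"

def interleave(*tasks):
    """Round-robin through tasks, executing one small step of each in turn."""
    queue = list(tasks)
    while queue:
        current = queue.pop(0)
        try:
            print(next(current))   # run one small step of this task
            queue.append(current)  # requeue it so the other tasks get a turn
        except StopIteration:
            pass                   # this task is finished; drop it

# Two "tasks" look simultaneous because their small steps alternate rapidly,
# even though only one step is executing at any given moment.
interleave(task("steer the car", 3), task("plan the route", 3))
```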


The brain’s ability to multitask is readily apparent when watching a short-order cook, a symphony conductor, or a stay-at-home mom in action. But our brains also multitask in much subtler ways: listening to others while forming our own thoughts, walking around town while avoiding obstacles and window-shopping, thinking about the day while washing dishes, singing while showering, and so on.

Multitasking is not only pervasive in our daily activities, it actually enables activities that would otherwise be impossible with a monotasking brain. For example, a driver must steer the vehicle, keep track of nearby vehicles, make decisions about when to turn or change lanes, and plan the best route given current traffic patterns. Driving is only possible because our brains can efficiently interleave these tasks. (Imagine the futility of only being able to steer, or plan a route.)

So how has multitasking earned such a negative reputation? In large part, this reputation stems from unrealistic expectations. The brain’s multitasking abilities – like all our abilities – come with limitations: when performing one task, the addition of another task generally interferes with the first task. For many everyday tasks, the interference is negligible or unimportant: your singing may affect your showering, or thinking about your day may affect your dish-washing, but likely not so much that you notice or care.

Other tasks, though, require every ounce of attention and can push past the limits of our multitasking abilities. In driving, the essential subtasks are demanding enough; additional subtasks – texting, dialing, even talking on a phone – increase these demands, and when controlling a 3000-pound vehicle at 65 miles per hour, even these minimal additional demands may lead to unacceptable risks.

Still other tasks do not have safety implications per se, yet most would consider them important enough that multitasking in those contexts is undesirable. A student in class is already multitasking in listening to the teacher, processing ideas, and taking notes. If this student is checking Facebook at the same time, this extra subtask drains mental effort away from the more critical subtasks and dilutes the learning experience.

The problem with multitasking thus lies not in our brain’s inability to multitask efficiently, but in our own priorities and decision-making. When we choose to multitask, we are deciding – consciously or not – to accept degraded performance on one or more of the tasks involved. And when we still choose to multitask when it is undesirable (as in the classroom) or unacceptable (as in driving), we should hold ourselves accountable for these decisions. So if you walk into a pole or wreck your car while texting, don’t blame your brain; blame yourself.

A Happy Revolution


Dr Nattavudh (Nick) Powdthavee is a behavioural economist in the Department of Economics at Nanyang Technological University, Singapore, and is the author of The Happiness Equation: The Surprising Economics of Our Most Valuable Asset. He obtained his PhD in the economics of happiness from the University of Warwick. Discussions of his work have appeared in over 50 major international newspapers in the past five years, including the New York Times and the Guardian, as well as in the Freakonomics and Undercover Economist blogs.

It’s not often in our lifetime that we can almost hear the intellectual tide turning. The year was 1993. The main perpetrators were Andrew Oswald and Andrew Clark, two British economists who, in October that year, organised the world’s first ever economics of happiness conference at the London School of Economics and Political Science. Posters advertising the event were put up weeks in advance. A hundred chairs were put out in the famous Lionel Robbins Building, waiting to be filled by many of the world’s greatest minds. The meeting, the organisers thought, was going to be revolutionary for economic science. Perhaps it was even going to be historic, not so dissimilar to the meeting held a few months earlier in Cambridge where British mathematician Andrew Wiles presented his proof of Fermat’s Last Theorem to a few hundred academics.

Imagine their disappointment when only eight people turned up on the day*. It was official: the world’s first ever economics of happiness conference had been a complete and utter failure.

Fast forward eighteen years to 2011. Happiness is currently one of the hottest topics in world politics and economic research. The British Prime Minister David Cameron has set out a plan to measure and improve people’s happiness – or, in his preferred compound term, “general well-being”. The French president Nicolas Sarkozy has already launched an inquiry into happiness, commissioning Nobel Prize winners Joseph Stiglitz and Amartya Sen to look at how policies focused on Gross Domestic Product (GDP) sometimes trample over the government’s other goals, such as sustainability and work-life balance. There are now over two hundred thousand economics papers on the World Wide Web written exclusively on “happiness”, “life satisfaction”, or “subjective well-being”.

How did we get here so fast, in just under two decades?

Of course, one of the early issues that people have with the economics of happiness (and you’d be forgiven if you yourself laughed at the idea) is that happiness is hardly a measurable concept. This is a big deal for economists, who like to call themselves quasi-scientists (in that they mainly deal with objectively measurable data such as income and inflation rates). If what people say about the way they are feeling is subjective by definition, how can it be analysed and quantified?

This issue, I feel, has now been resolved almost entirely. Working alongside scientists, psychologists have been able to provide objective confirmation that what people say about their own happiness does indeed provide useful information about their true inner well-being. For instance, self-rated happiness has been shown to correlate significantly with the duration of “Duchenne”, or genuine, smiles a person gives during a day, as well as with the quality of memory, blood pressure, brain activity, and even heart rate. More remarkably, scientists have been able to show that how happy we feel about our lives today has important predictive power over whether or not we will still be alive forty or fifty years from now. Put simply, we really do mean what we say.

The last two decades have also seen a substantial rise in the number of newly available data sets which are impossibly large by previous standards. And by applying appropriate statistical tools to these randomly drawn samples, researchers are able to explore whether or not the determinants of individuals’ happiness (normally captured by asking individuals to rate their happiness as “1. not too happy”, “2. pretty happy”, or “3. very happy”) are the same in America as they are in Great Britain, South Africa, and China (which they are, thus lending further credence to the idea that such answers should be taken seriously).

So, what are the interesting results happiness economists have discovered so far? Well, for a start, happiness is U-shaped in age. On average, we are likely to be happier with our life at the younger and older points in our life-cycle, with the minimum occurring somewhere around the mid-40s. Money buys little happiness, whilst other people’s money tends to make us feel unhappy with ours. The big negatives in our life include, for example, unemployment and ill health. Yet these negative experiences hurt us less subjectively if we happen to know a lot of other unemployed people (or, in the case of ill health, other people with the same illness as ours). Marriage and friendships are extremely valuable, although there is little statistical evidence to suggest that children make parents any happier than their non-parent counterparts. And more recently, happiness economists have been able to put dollar, pound, or euro values on the happiness (or unhappiness) from seemingly priceless experiences or life events that come with no obvious market value, such as time spent with friends, getting married, losing one’s job, and even different types of bereavement.
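For readers who like to see the machinery, here is a minimal sketch of the kind of regression that lies behind the U-shape result. It is my own illustration in Python, using synthetic data rather than any real survey, and the variable names are hypothetical: life satisfaction is regressed on age and age squared, and the bottom of the U sits where the fitted curve turns, at -b1 / (2 * b2).

```python
# Toy illustration of the "U-shape in age" finding: regress life satisfaction
# on age and age squared, then locate the turning point at -b1 / (2 * b2).
# Synthetic data only -- not the actual surveys used by happiness economists.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(18, 80, size=5_000)

# Synthetic satisfaction scores with a dip built in around age 45 (illustrative).
satisfaction = 0.001 * (age - 45) ** 2 + rng.normal(0, 0.2, size=age.size)

# Ordinary least squares with a quadratic in age: satisfaction ~ b0 + b1*age + b2*age^2
X = np.column_stack([np.ones_like(age), age, age ** 2])
b0, b1, b2 = np.linalg.lstsq(X, satisfaction, rcond=None)[0]

# A U-shape means b1 < 0 and b2 > 0; the curve bottoms out at -b1 / (2 * b2).
turning_point = -b1 / (2 * b2)
print(f"Estimated low point of the age-satisfaction curve: {turning_point:.1f} years")
```

The monetary valuations mentioned above rest on a broadly similar trick: compare the estimated well-being effect of a life event (say, unemployment) with the estimated effect of income, and ask how much extra income would be needed to offset that event.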

It’s difficult to forecast how important this kind of work will be in the political arena in the forthcoming century. It’s possible that future governmental policies may shift entirely from the pursuit of wealth towards more non-materialistic goals as a result of these findings. We may even witness the replacement of GDP with a more general well-being index such as GNH (Gross National Happiness) altogether, although this is probably unlikely to happen. However, one thing’s for sure: economics, the dismal science, will never be the same again.

*Of those eight, five were speakers especially invited to speak at the conference by the organisers.

Risk perception


David Ropeik is an international consultant in risk perception and risk communication, and an Instructor in the Environmental Management Program at the Harvard University Extension School. He is the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts and principal co-author of RISK: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You. He blogs for the Huffington Post and Psychology Today, and has written guest blogs for Scientific American, Climate Central, and Big Think. He founded the program “Improving Media Coverage of Risk”, was an award-winning journalist in Boston for 22 years, and was a Knight Science Journalism Fellow at MIT.

You are reading a piece in Nature, so you are probably fairly well-educated, and there is a better than even chance that you fancy yourself a fact-based thinker and reasonably rational. Meaning no disrespect, but that assumption is fanciful, at least when it comes to the perception of risk. Ambrose Bierce was right when he defined the brain as “the organ with which we think we think.” Research from diverse fields, and countless examples from the real world, have convincingly established that our perceptions of risk are an inextricable blend of fact and feeling, reason and gut reaction, cognition and intuition. No matter what the hard risk sciences may tell us the facts are about a risk, the social sciences tell us that our interpretation of those facts is ultimately subjective.

While this system has done a good job getting us this far along evolution’s winding road, it also gets us into trouble because sometimes, no matter how right our perceptions feel, we get risk wrong. We worry about some things more than the evidence warrants (vaccines, nuclear radiation, genetically modified food), and less about some threats than the evidence warns (climate change, obesity, using our mobiles when we drive). That produces what I have labeled The Perception Gap, the gap between our fears and the facts, which is a huge risk in and of itself.

The Perception Gap produces dangerous personal choices that hurt us and those around us (declining vaccination rates are fueling the resurgence of nearly eradicated diseases). It causes the profound health harms of chronic stress (for those who worry more than necessary). And it produces social policies that protect us more from what we’re afraid of than from what in fact threatens us the most (we spend more to protect ourselves from terrorism than heart disease)…which in effect raises our overall risk.

We do have to fear fear itself…too much or too little. So we need to understand how our subjective system of risk perception works, in order to recognize and avoid its pitfalls. Surprisingly, few people are aware of how much we know about this system. (I’ve tried to summarize that knowledge in my book, How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.) Here’s a mad dash through the literature on risk perception:

• Neuroscience research by Joseph LeDoux et al. has discovered neural pathways that ensure that we respond initially to risky stimuli subconsciously/instinctively, before cognition kicks in. And in the ongoing risk response that follows, the wiring and chemistry of the brain also ensure that instinct and affect (feelings) play a significant role, sometimes the primary role, in how we perceive and respond to danger. Simplistically, the brain is designed to subconsciously feel first and consciously think second, and to feel more and think less.

• The research of Daniel Kahneman et al. has discovered a mental toolbox (as Gerd Gigerenzer puts it) of heuristics and biases we use to quickly make sense of partial information and turn a few facts into the full picture of our judgment. These mental shortcuts occur subconsciously, outside (and often before) conscious reasoning. This research further confirms that we are far more Homo Naturalis than Homo Rationalis.

• The Psychometric Paradigm research of Paul Slovic et al. has revealed a suite of psychological characteristics that make risks feel “more” frightening, or less, the facts notwithstanding. These ‘risk perception factors’ include:

[Figure: table of risk perception factors]

• Recent research on the theory of Cultural Cognition by Dan Kahan et al. has found that our views on risks are shaped to agree with those of the groups we most strongly identify with, based on our group’s underlying feelings about how society should operate. We fall into four general groups according to the sort of social organization we prefer, defined along two continua and represented as a grid. We all fall somewhere along these two continua, depending on the issue.

[Figure: the cultural cognition grid]

Individualists prefer a society that maximizes the individual’s control over his or her life. Communitarians prefer a society in which the collective group is more actively engaged in making the rules and solving society’s problems. (Individualists deny environmental problems like climate change because such problems require a ‘we’re all in this together’ communal response; Communitarians see climate change as a huge threat in part because it requires a social response.) Along the other continuum, Hierarchists prefer a society with rigid structure and class and a stable, predictable status quo, while Egalitarians prefer a society that is more flexible, that allows more social and economic mobility, and is less constrained by ‘the way it’s always been’. (Hierarchists deny climate change because they fear the response means shaking up the free-market, fossil-fuel status quo. Shaking up the status quo is music to the ears of Egalitarians, who are therefore more likely to believe in climate change.)

That risk perception is inescapably subjective is disconcerting for those who place their faith in the ultimate power of Pure Cartesian “I think, therefore I am” Reason. But the robust evidence summarized above makes clear that:

1. Risk perception is inescapably subjective.

2. No matter how well educated or informed we may be, we will sometimes get risk wrong, producing a host of profound harms.

3. In the interest of public and environmental health, we need a more holistic, and more realistic, approach to what risk means. Societal risk management has to recognize the risk of risk misperception, the risk that arises when our fears don’t match the evidence, the risks of The Perception Gap.

Letting go of our naïve fealty to perfect reason will allow us to recognize and understand these hidden dangers. Once brought to light, the harms to society from declining vaccination rates, the lost benefits of genetically modified food, the morbidity and mortality and societal costs of obesity – these risks and many more can be studied and quantified and managed with the same tools we already use to manage the risks from pollution or crime or disease. The challenge is not how to manage the risks of the Perception Gap. The challenge is to rationally let go of our irrational belief in the mythical God of Perfect Reason, and use what we know about the psychology of risk perception to more rationally manage the risks that arise when our subjective risk perception system gets things dangerously wrong.

Further Reading:

The neuroscience of risk perception – LeDoux, J., The Emotional Brain, Simon and Schuster, 1996.

Heuristics and Biases – Kahneman, D., Slovic, P. & Tversky, A., Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982.

The Psychometric Paradigm ‘risk perception factors’ – Slovic, P., The Perception of Risk, Earthscan, 2000.

Cultural Cognition.

It just doesn’t feel right

This week’s guest blogger is Simon Laham, PhD, a social psychologist and a Research Fellow and Lecturer in Psychological Sciences at the University of Melbourne, Australia. His work focuses on the psychology of morality.

Matthew is playing with his new kitten late one night. He is wearing only his boxer shorts, and the kitten sometimes walks over his genitals. Eventually, this arouses him and he begins to rub his bare genitals along the kitten’s body. The kitten purrs and seems to enjoy the contact.

What do you think about this? Morally right or wrong? Well, if you’re like most people, you think that Matthew’s behavior is not only pretty disgusting, but morally condemnable.

But now ask yourself why you think it’s wrong. No one is harmed here, after all; Matthew is having fun and it seems that the kitten isn’t too bothered. What about germs? Well, let’s say that the kitten has just been bathed and there is no chance of Matthew catching anything. Still wrong?

When psychologist Jonathan Haidt presented participants in one of his studies with scenarios just like this (depicting harmless, but norm-violating behaviors, such as masturbating with frozen chickens and eating road kill), he found that many people relentlessly insisted that such behaviours were “just wrong,” even though they couldn’t muster any convincing justifications. These participants sat, “morally dumbfounded,” as Haidt put it, asserting simply that “it just feels wrong.”

When prodded, people’s moral foundations tend to wobble a little bit. Although many of us like to think that our moralities are firmly grounded in principles – thou shalt not kill, love thy neighbour as thyself – and that moral judgments spring from the logical application of such principles, it just so happens that many of our moral judgments aren’t driven by the rational, deliberative contemplation of moral rules at all. Rather they are driven by intuitions. We witness an action, experience an intuitive flash of disgust, or anger, for example, and, as a result, deem the action morally wrong. Matthew isn’t violating any lofty moral law with his kitten rubbing, he’s just doing something disgusting, and, thus, wrong.

Just where do these intuitions come from? It’s quite likely that they have an evolutionary basis. Put simply, we feel disgusted or angry about behaviors that somehow compromised the reproductive success of our evolutionary ancestors.

Take incest as an example. Those ancestors of ours who happened to have felt disgust at incest would have been less likely to commit it, and thus more likely to have produced viable offspring, passing on their incest-condemning genes to future generations. Certain moral intuitions conferred reproductive advantages in the past; those are the moral intuitions we feel today.

It’s quite sobering to realise that your moral outlook is shaped not by appeal to higher reason, but by the contingencies of your evolutionary history. Still more sobering, however, are results from other research which suggests that opinions about important moral questions are influenced by a raft of other, thoroughly irrelevant factors.

Consider this: if I had happened to write the Matthew scenario above in Chiller font or Blackadder ITC font or some other difficult-to-read font, chances are you would have found it even more morally wrong than you did originally. Some work from my own lab shows that when people have a difficult time processing a stimulus (because, for example, it’s hard to read), they are more likely to think it’s morally wrong than if they have an easy time processing it. The idea here is that “disfluent” processing feels negative, and this negativity seeps into our moral judgments, making us harsher moral critics.

Or consider this question: What entities in the world deserve our moral consideration? Apes? Dogs? Fetuses? This is not a trivial question. Your answers will form the basis of your attitudes towards vegetarianism, abortion, or animal experimentation, among other pressing moral issues. Yet even here we see the subtle influence of moral irrelevancies. When people are asked to generate a list of such morally worthy entities by selecting candidates from a longer list, they end up with fewer candidates than people asked to cross unworthy entities off a longer list. The size of your moral community, in other words, depends on how you happen to be asked to populate it.

The list of subtle shapers of moral judgment goes on: show people a clip from Saturday Night Live and they are more likely to make utilitarian judgments; have them make judgments in a dirty room, littered with used tissues and pizza boxes, and they become harsher moral judges; expose people to “fart spray” and they are less likely to endorse marriage between first cousins…

It should give you pause to realize that your judgments of right and wrong – be they about euthanasia, incest, abortion, or kitten masturbation – are subject to a range of non-rational gut feelings or intuitions rather than under the control of deliberative, rational reasoning processes. The belief that our moral compasses are guided by a set of well-thought-out principles that we consciously and painstakingly apply to each new situation is simply inconsistent with the empirical evidence. This belief fails to capture the complexity of moral judgment, and it ignores the now well-documented fact that our judgments of right and wrong are driven largely by intuitive and often irrelevant factors that reside largely outside of our awareness.