What The Computer Says About Who We Are


Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology, University of Warwick. His next book, Humanity 2.0: What It Means to Be Human Past, Present and Future, is out with Palgrave Macmillan in September 2011.

You can tell a lot about the sort of creature we think we are by the value we place on the things we make. In October 2010, The Economist staged an online debate on the most important technological innovation of the 20th century. The challengers: the digital computer and artificial fertiliser. Perhaps unsurprisingly, the computer won by a margin of 3-to-1. Why was this result not surprising? After all, artificial fertiliser is arguably the invention most responsible for a fourfold growth in the world’s population over the past century, as well as for cutting the proportion of people suffering from malnutrition by at least two-thirds. It would be difficult to think of another product of human ingenuity that has had such deep and lasting benefits for so many people. Even if it is true that in absolute terms there are more people living in poverty now than the entire population of the earth in 1900, the success of artificial fertilisers has kept alive the dream that all poverty is ultimately eradicable.

Yet, the computer won – even though its development has tracked, and in some cases amplified, global class divisions. Indeed, it is becoming increasingly common to speak of ‘knows’ and ‘know-nots’ in the way one spoke of ‘haves’ and ‘have-nots’ fifty years ago. Nevertheless, over the ten days of debate it became clear that the computer was bound to win because, for better or worse, we identify more strongly with the extension than with the conservation of human potential. Underlying this distinction is a fundamental ambivalence that human beings have always had towards the bodies of their birth. The fact that, compared with other animals, we take such a long time to reach adulthood has led philosophers through the ages to muse that we are by nature premature beings who need to go beyond ourselves to complete our existence.

Whether we call this prosthetic extension ‘culture’, ‘technology’ or, in Richard Dawkins’ case, the ‘extended phenotype’, it suggests that we are not fully human until, or unless, our biological bodies are somehow enhanced. The computer captures that desire in a twofold sense: it provides both a model for how to think of ourselves in such an enhanced state and an environment in which to realize it. A new book by the media theorist David Berry, The Philosophy of Software, explores the implications of this development in terms of such computer-based technologies as iPhones and iPads, which increasingly constitute the human life-world. Bluntly put, the more time people spend interacting with high-tech gadgets, the more grounds there are for claiming that what the previous generation called ‘virtual reality’ is becoming the actual reality in which people define themselves.

Seen in this light, it is not surprising that an invention that ‘merely’ keeps alive our normal biological bodies – such as artificial fertiliser – should be ranked decidedly lower than the computer in terms of importance. Back in the 1960s, the economist Thomas Schelling argued that you can tell the value people place on their own lives by the amount they are willing to pay to secure them. Whether the relevant sense of ‘security’ is defined in terms of healthcare, life insurance, development aid or military budgets, one would be left with an open verdict on just how much people value the indefinite maintenance of the bodies of their birth. If we identify people’s preferences with what they do rather than what they say, it would seem that beyond a certain point people would prefer to forgo security in favour of the freedom (and risk) to explore alternative possible modes of existence – for which the computer, again for better or worse, provides the technological exemplar.

Science owes much to both Christianity and the Middle Ages

This week’s guest blogger is James Hannam. He has a PhD in the History and Philosophy of Science from the University of Cambridge and is the author of The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution (published in the UK as God’s Philosophers: How the Medieval World Laid the Foundations of Modern Science).

The award of the Templeton Prize to the retired president of the Royal Society, Martin Rees, has reawakened the controversy over science and religion. I have had the pleasure of meeting Lord Rees a couple of times, including when my book God’s Philosophers (newly released in the US as The Genesis of Science) was shortlisted for the Royal Society science book prize. I doubt he has welcomed the fuss over the Templeton Foundation, but neither will he be particularly perturbed by it.

Few topics are as open to misunderstanding as the relationship between faith and reason. The ongoing clash of creationism with evolution obscures the fact that Christianity has actually had a far more positive role to play in the history of science than commonly believed. Indeed, many of the alleged examples of religion holding back scientific progress turn out to be bogus. For instance, the Church has never taught that the Earth is flat and, in the Middle Ages, no one thought so anyway. Popes haven’t tried to ban zero, human dissection or lightning rods, let alone excommunicate Halley’s Comet. No one, I am pleased to say, was ever burnt at the stake for scientific ideas. Yet all these stories are still regularly trotted out as examples of clerical intransigence in the face of scientific progress.

Admittedly, Galileo was put on trial for claiming that the Earth goes around the sun as a matter of fact, rather than as just a hypothesis, as the Catholic Church demanded. Still, historians have found that even his trial was as much a case of papal egotism as of scientific conservatism. It hardly deserves to overshadow all the support that the Church has given to scientific investigation over the centuries.

That support took several forms. One was simply financial. Until the French Revolution, the Catholic Church was the leading sponsor of scientific research. Starting in the Middle Ages, it paid for priests, monks and friars to study at the universities. The Church even insisted that science and mathematics should be a compulsory part of the syllabus. And after some debate, it accepted that Greek and Arabic natural philosophy were essential tools for defending the faith. By the seventeenth century, the Jesuit order had become the leading scientific organisation in Europe, publishing thousands of papers and spreading new discoveries around the world. Some cathedrals were even designed to double up as astronomical observatories, to allow ever more accurate determination of the calendar. And of course, modern genetics was founded by a future abbot growing peas in the monastic garden.

But religious support for science took deeper forms as well. It was only during the nineteenth century that science began to have any practical applications. Technology had ploughed its own furrow up until the 1830s, when the German chemical industry started to employ its first PhDs. Before then, the only reasons to study science were curiosity and religious piety. Christians believed that God created the universe and ordained the laws of nature. To study the natural world was to admire the work of God. This could be a religious duty, and it inspired science when there were few other reasons to bother with it. It was faith that led Copernicus to reject the ugly Ptolemaic universe; that drove Johannes Kepler to discover the constitution of the solar system; and that convinced James Clerk Maxwell he could reduce electromagnetism to a set of equations so elegant they take the breath away.

Given that the Church has not been an enemy of science, it is less surprising to find that the era most dominated by Christian faith, the Middle Ages, was a time of innovation and progress. Inventions like the mechanical clock, glasses, printing and accountancy all burst onto the scene in the late medieval period. In the field of physics, scholars have now found medieval theories about accelerated motion, the rotation of the earth and inertia embedded in the works of Copernicus and Galileo. Even the so-called “dark ages” from AD 500 to AD 1000 were actually a time of advance after the trough that followed the fall of Rome. Agricultural productivity soared with the use of heavy ploughs, horse collars, crop rotation and watermills, leading to a rapid increase in population.

It was only during the Enlightenment that the idea took root that Christianity had been a serious impediment to science. Voltaire and his fellow philosophes opposed the Catholic Church because of its close association with France’s absolute monarchy. Accusing clerics of holding back scientific development was a safe way to make a political point. The cudgels were later taken up by T. H. Huxley, Darwin’s bulldog, in his struggle to free English science from any sort of clerical influence. Creationism did the rest of the job of persuading the public that Christianity and science are doomed to perpetual antagonism.

Nonetheless, today, science and religion are the two most powerful intellectual forces on the planet. Both are capable of doing enormous good, but their chances of doing so are much greater if they can work together. The award of the Templeton Prize to Lord Rees is a small step in the right direction.

The Genesis of Science: How the Christian Middle Ages Launched the Scientific Revolution is available now.

Shortlisted for the Royal Society Science Book Prize

“Well-researched and hugely enjoyable.” New Scientist

“A spirited jaunt through centuries of scientific development… captures the wonder of the medieval world: its inspirational curiosity and its engaging strangeness.” Sunday Times

“This book contains much valuable material summarised with commendable no-nonsense clarity… James Hannam has done a fine job of knocking down an old caricature.” Sunday Telegraph

Risk perception


David Ropeik is an international consultant in risk perception and risk communication, and an Instructor in the Environmental Management Program at the Harvard University Extension School. He is the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts and principal co-author of RISK: A Practical Guide for Deciding What’s Really Safe and What’s Really Dangerous in the World Around You. He blogs for the Huffington Post and Psychology Today, and has written guest blogs for Scientific American, Climate Central, and Big Think. He founded the program “Improving Media Coverage of Risk,” was an award-winning journalist in Boston for 22 years, and was a Knight Science Journalism Fellow at MIT.

You are reading a piece in Nature, so you are probably fairly well-educated, and there is a better than even chance that you fancy yourself a fact-based thinker and reasonably rational. Meaning no disrespect, but that assumption is fanciful, at least when it comes to the perception of risk. Ambrose Bierce was right when he defined the brain as “the organ with which we think we think.” Research from diverse fields, and countless examples from the real world, has convincingly established that our perceptions of risk are an inextricable blend of fact and feeling, reason and gut reaction, cognition and intuition. No matter what the hard sciences of risk may tell us the facts are, the social sciences tell us that our interpretation of those facts is ultimately subjective.

While this system has done a good job of getting us this far along evolution’s winding road, it also gets us into trouble, because sometimes, no matter how right our perceptions feel, we get risk wrong. We worry about some things more than the evidence warrants (vaccines, nuclear radiation, genetically modified food), and about some threats less than the evidence warns (climate change, obesity, using our mobiles while we drive). That produces what I have labeled the Perception Gap, the gap between our fears and the facts, which is a huge risk in and of itself.
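To make the Perception Gap concrete, here is a minimal sketch in Python of how one might quantify it as the difference between a perceived risk score and an evidence-based one. All the numbers are invented placeholders, not real statistics; this illustrates the idea, not a method from the book.

```python
# A minimal sketch of the Perception Gap: the difference between how much
# we worry about a hazard and how dangerous the evidence says it is.
# All scores below are invented placeholders on an arbitrary 0-100 scale.

perceived_risk = {"terrorism": 80, "vaccines": 60, "heart disease": 30, "obesity": 25}
evidence_risk = {"terrorism": 10, "vaccines": 5, "heart disease": 90, "obesity": 75}

for hazard, feared in perceived_risk.items():
    gap = feared - evidence_risk[hazard]  # positive: we worry too much
    direction = "worry too much" if gap > 0 else "worry too little"
    print(f"{hazard:14s} gap = {gap:+4d}  ({direction})")
```

A positive gap marks the over-feared hazards; a negative gap marks the under-feared ones, which is exactly the pattern described above.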

The Perception Gap produces dangerous personal choices that hurt us and those around us (declining vaccination rates are fueling the resurgence of nearly eradicated diseases). It causes the profound health harms of chronic stress (for those who worry more than necessary). And it produces social policies that protect us more from what we’re afraid of than from what in fact threatens us the most (we spend more to protect ourselves from terrorism than from heart disease), which in effect raises our overall risk.

We do have to fear fear itself – both too much of it and too little. So we need to understand how our subjective system of risk perception works, in order to recognize and avoid its pitfalls. Surprisingly, few people are aware of how much we know about this system. (I’ve tried to summarize that knowledge in my book, How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.) Here’s a mad dash through the literature on risk perception:

• Neuroscience research by Joseph LeDoux et al. has discovered neural pathways that ensure we respond to risky stimuli subconsciously and instinctively at first, before cognition kicks in. And in the ongoing risk response that follows, the wiring and chemistry of the brain also ensure that instinct and affect (feelings) play a significant role, sometimes the primary role, in how we perceive and respond to danger. Put simply, the brain is designed to subconsciously feel first and consciously think second, and to feel more and think less.

• The research of Daniel Kahneman et al. has discovered a mental toolbox (as Gerd Gigerenzer puts it) of heuristics and biases that we use to quickly make sense of partial information and turn a few facts into the full picture of our judgment. These mental shortcuts operate subconsciously, outside (and often before) conscious reasoning. This research further confirms that we are far more Homo Naturalis than Homo Rationalis.

• The Psychometric Paradigm research of Paul Slovic et al. has revealed a suite of psychological characteristics that make risks feel more, or less, frightening, the facts notwithstanding. These ‘risk perception factors’ include:

[Table: the risk perception factors]

• Recent research on the theory of Cultural Cognition by Dan Kahan et al. has found that our views on risks are shaped to agree with those of the groups we most strongly identify with, based on each group’s underlying feelings about how society should operate. We fall into four general groups according to the sort of social organization we prefer, defined along two continua that can be represented as a grid; where we fall along those continua can depend on the issue.

[Figure: the Cultural Cognition grid – an Individualist–Communitarian continuum crossed with a Hierarchist–Egalitarian continuum]

Individualists prefer a society that maximizes the individual’s control over his or her life. Communitarians prefer a society in which the collective group is more actively engaged in making the rules and solving society’s problems. (Individualists deny environmental problems like climate change because such problems require a ‘we’re all in this together’ communal response; Communitarians see climate change as a huge threat in part because it requires a social response.) Along the other continuum, Hierarchists prefer a society with rigid structure and class and a stable, predictable status quo, while Egalitarians prefer a society that is more flexible, that allows more social and economic mobility, and that is less constrained by ‘the way it’s always been’. (Hierarchists deny climate change because they fear that responding to it means shaking up the free-market, fossil-fuel status quo. Shaking up the status quo is music to the ears of Egalitarians, who are therefore more likely to believe in climate change.)
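As a rough illustration of how the grid works, here is a minimal sketch in Python that places a person in one of the four groups from positions on the two continua. The -1 to +1 scale and the zero cut-offs are my own assumptions for the sketch; Kahan’s actual instrument uses survey items, not a formula like this.

```python
# A sketch of the Cultural Cognition grid: two continua crossed to give
# four groups. The scales and cut-offs are arbitrary assumptions.

def cultural_group(individualism: float, hierarchy: float) -> str:
    """Place a person on the grid.

    individualism: -1.0 (Communitarian) .. +1.0 (Individualist)
    hierarchy:     -1.0 (Egalitarian)   .. +1.0 (Hierarchist)
    """
    group = "Individualist" if individualism > 0 else "Communitarian"
    grid = "Hierarchist" if hierarchy > 0 else "Egalitarian"
    return f"{group} {grid}"

print(cultural_group(0.7, 0.4))    # Individualist Hierarchist
print(cultural_group(-0.3, -0.8))  # Communitarian Egalitarian
```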

That risk perception is inescapably subjective is disconcerting for those who place their faith in the ultimate power of Pure Cartesian “I think, therefore I am” Reason. But the robust evidence summarized above makes clear that:

1. Risk perception is inescapably subjective.

2. No matter how well educated or informed we may be, we will sometimes get risk wrong, producing a host of profound harms.

3. In the interest of public and environmental health, we need a more holistic, and more realistic, approach to what risk means. Societal risk management has to recognize the risk of risk misperception, the risk that arises when our fears don’t match the evidence, the risks of the Perception Gap.

Letting go of our naïve fealty to perfect reason will allow us to recognize and understand these hidden dangers. Once they are brought to light, the harms to society from declining vaccination rates, the lost benefits of genetically modified food, the morbidity, mortality and societal costs of obesity – these risks and many more can be studied and quantified and managed with the same tools we already use to manage the risks from pollution or crime or disease. The challenge is not how to manage the risks of the Perception Gap. The challenge is to rationally let go of our irrational belief in the mythical God of Perfect Reason, and to use what we know about the psychology of risk perception to more rationally manage the risks that arise when our subjective risk perception system gets things dangerously wrong.

Further Reading:

The neuroscience of risk perception – LeDoux, J., The Emotional Brain, Simon and Schuster, 1996

Heuristics and biases – Kahneman, D., Slovic, P. & Tversky, A., Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982

The Psychometric Paradigm ‘risk perception factors’ – Slovic, P., The Perception of Risk, Earthscan, 2000

Cultural Cognition.

Risk Intelligence


This week’s guest blogger is Dylan Evans, an author and academic at University College Cork, Ireland. He lectures in behavioural science and is the author of numerous books including Emotion and Placebo.

President Obama recently criticized American spy agencies for failing to predict the spreading unrest in the Middle East. Now a new study is attempting to discover what makes a good forecaster.

Volunteers are being recruited for a multi-year, web-based study of people’s ability to predict world events. The study is sponsored by the Intelligence Advanced Research Projects Activity (IARPA). One aim of the study is to discover whether some kinds of personality are better than others at making accurate predictions. The researchers hope to recruit a diverse panel of participants who are interested in offering predictions about events and trends in international relations, social and cultural change, business and economics, public health, and science and technology.

The Forecasting World Events Project is part of a multi-year research program investigating the accuracy of individual and group predictions about global events and trends, with the aim of advancing the science of forecasting. Last year I carried out some similar research. In December 2009, I set up a prediction game in which people were asked to estimate the chances of various developments in politics and business around the world in the coming year.

During the first few months of 2010, over 200 people who had already taken our basic risk intelligence test (which asked people to estimate the likelihood of general-knowledge statements) estimated the probability of each prediction. Over the rest of the year, whenever any of the predictions came true or was proved false, my colleague Benjamin Jakobus entered the details into the system. At the end of the year, we had enough data to calculate the participants’ risk intelligence.
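The post does not spell out how the risk intelligence score is computed, but tests of this kind are typically scored on calibration: a well-calibrated person’s “70% likely” statements should turn out true about 70% of the time. Here is a minimal sketch of that idea in Python; the binning into 10% steps and the 0–100 scaling are my assumptions, not necessarily the formula Evans and Jakobus used.

```python
# A sketch of calibration-style scoring for a risk intelligence test.
# Estimates are binned to the nearest 10%; for each bin we compare the
# stated probability with the observed frequency of true outcomes.

from collections import defaultdict

def risk_score(estimates, outcomes):
    """estimates: probabilities in [0, 1]; outcomes: True/False results.
    Returns 0-100, where 100 means perfect calibration (assumed scaling)."""
    bins = defaultdict(list)
    for p, came_true in zip(estimates, outcomes):
        bins[round(p, 1)].append(came_true)
    # Mean absolute gap between stated probability and observed frequency.
    gaps = [abs(p - sum(hits) / len(hits)) for p, hits in bins.items()]
    return 100 * (1 - sum(gaps) / len(gaps))

estimates = [0.9, 0.9, 0.7, 0.7, 0.7, 0.3, 0.1]
outcomes = [True, True, True, False, True, False, False]
print(round(risk_score(estimates, outcomes), 1))  # 86.7 for this toy data
```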

The big question was whether these scores would correlate with those derived from the general-knowledge version of the test. If they did, that would suggest that the cognitive tasks involved in estimating the likelihood of general-knowledge statements are basically the same as the skills required to estimate the probability of future events. In other words, if people tended to get similar scores on both types of test, it would support the view that risk intelligence is a single general-purpose ability to deal with uncertainty that can be applied equally to reasoning about the past, present and future. If, on the other hand, people tended to get very different scores in the two tests, this might suggest that risk intelligence is more domain-specific, so a person could be risk smart in one area and risk stupid in another.

[Graph: scores on the prediction test plotted against scores on the general-knowledge test]

As you can see from the graph, the results were not impressive; the correlation between the scores on the two tests was only 0.21. This means there is questionable value in administering a general-knowledge version of the risk intelligence test to someone in an attempt to discover his or her skill at forecasting. It may still be a useful approach when selecting the best hundred forecasters from a pool of a thousand applicants, since even low correlations can be useful when dealing with large groups. For individuals, however, it would appear that the only way to measure forecasting ability is to collect probability estimates about actual future events; the general-knowledge type of risk intelligence test will not serve as a proxy.
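For reference, the figure quoted above is a standard Pearson correlation coefficient. Here is a minimal sketch of computing it for two sets of test scores; the scores below are invented for illustration, not the study’s data.

```python
# Pearson correlation between scores on the general-knowledge and
# prediction versions of the test. The sample scores are invented.

from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

general = [62, 75, 58, 80, 70, 66]     # invented scores
prediction = [55, 60, 72, 65, 74, 58]  # invented scores
print(round(pearson_r(general, prediction), 2))
```

A correlation of 0.21 means the general-knowledge score explains only about 4% of the variance in forecasting scores (r squared), which is why it is so weak a proxy for any individual.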