Should you join a start-up company after academia?

A career in a start-up company is more than just risk, discovers Idil Cazimoglu.

This piece was one of two winners of the Science Innovation Union writing competition, Oxford.

“Risky.”

My housemate, now in the final year of his PhD, had a one-word answer to my question “Would you consider working in a start-up company after you graduate?”

Intrigued, I posed this question to fellow PhD students in various disciplines over the following weeks, and received similar answers including “I don’t want to live in uncertainty,” “No job security,” “Academia is more stable,” and, memorably, “I’d rather go bungee jumping.”


Not everything launches so smoothly


Why building a start-up is probably your most sensible career path

Your PhD has given you the perfect tool set to start a high-tech company, and it’s nothing to do with your technical skill, says Mark Hammond.

In stark contrast to the proliferation of web-based start-ups led by young founders, science-based start-ups have typically remained the domain of seasoned professors, spinning out breakthrough technology built on years of research. This is changing rapidly, and it is now more viable than ever to start a science-based company straight out of a PhD. In fact, it might just be one of the most sensible career paths you can take.

Students at Imperial College's Maker's meetup present ideas and get feedback


Risk perception


David Ropeik is an international consultant in risk perception and risk communication, and an Instructor in the Environmental Management Program at the Harvard University Extension School. He is the author of How Risky Is It, Really? Why Our Fears Don't Always Match the Facts, principal co-author of RISK: A Practical Guide for Deciding What's Really Safe and What's Really Dangerous in the World Around You, and blogs for Huffington Post and Psychology Today, with guest posts for Scientific American, Climate Central, and Big Think. He founded the program "Improving Media Coverage of Risk," was an award-winning journalist in Boston for 22 years, and was a Knight Science Journalism Fellow at MIT.

You are reading a piece in Nature, so you are probably fairly well-educated, and there is a better than even chance that you fancy yourself a fact-based thinker and reasonably rational. Meaning no disrespect, but that assumption is fanciful, at least when it comes to the perception of risk. Ambrose Bierce was right when he defined the brain as “the organ with which we think we think.” Research from diverse fields, and countless examples from the real world, have convincingly established that our perceptions of risk are an inextricable blend of fact and feeling, reason and gut reaction, cognition and intuition. No matter what the hard risk sciences may tell us the facts are about a risk, the social sciences tell us that our interpretation of those facts is ultimately subjective.

While this system has done a good job getting us this far along evolution’s winding road, it also gets us into trouble because sometimes, no matter how right our perceptions feel, we get risk wrong. We worry about some things more than the evidence warrants (vaccines, nuclear radiation, genetically modified food), and less about some threats than the evidence warns (climate change, obesity, using our mobiles when we drive). That produces what I have labeled The Perception Gap, the gap between our fears and the facts, which is a huge risk in and of itself.

The Perception Gap produces dangerous personal choices that hurt us and those around us (declining vaccination rates are fueling the resurgence of nearly eradicated diseases). It causes the profound health harms of chronic stress (for those who worry more than necessary). And it produces social policies that protect us more from what we’re afraid of than from what in fact threatens us the most (we spend more to protect ourselves from terrorism than heart disease)…which in effect raises our overall risk.

We do have to fear fear itself…too much or too little. So we need to understand how our subjective system of risk perception works, in order to recognize and avoid its pitfalls. Surprisingly, few people are aware of how much we know about this system. (I've tried to summarize that knowledge in my book, How Risky Is It, Really? Why Our Fears Don't Always Match the Facts.) Here's a mad dash through the literature on risk perception:

• Neuroscience by Joseph LeDoux et al. has discovered neural pathways that ensure we respond initially to risky stimuli subconsciously and instinctively, before cognition kicks in. And in the ongoing risk response that follows, the wiring and chemistry of the brain also ensure that instinct and affect (feelings) play a significant role, sometimes the primary role, in how we perceive and respond to danger. Put simply, the brain is designed to subconsciously feel first and consciously think second, and to feel more and think less.

• The research of Daniel Kahneman et al. has discovered a mental toolbox (as Gerd Gigerenzer puts it) of heuristics and biases that we use to quickly make sense of partial information and turn a few facts into the full picture of our judgment. These mental shortcuts occur subconsciously, outside (and often before) conscious reasoning. This research further confirms that we are far more Homo Naturalis than Homo Rationalis.

• The Psychometric Paradigm research of Paul Slovic et al. has revealed a suite of psychological characteristics that make risks feel more frightening, or less, the facts notwithstanding. These 'risk perception factors' include:

[Figure: table of the psychometric risk perception factors]

• Recent research on the theory of Cultural Cognition by Dan Kahan et al. has found that we shape our views on risks to agree with those of the groups we most strongly identify with, based on each group's underlying feelings about how society should operate. We fall into four general groups according to the sort of social organization we prefer, defined along two continua and represented as a grid; where we each fall along the two continua can vary with the issue.

[Figure: the Cultural Cognition grid, with Individualist/Communitarian and Hierarchist/Egalitarian axes]

Individualists prefer a society that maximizes the individual's control over his or her life. Communitarians prefer a society in which the collective group is more actively engaged in making the rules and solving society's problems. (Individualists deny environmental problems like climate change because such problems require a 'we're all in this together' communal response. Communitarians see climate change as a huge threat in part because it requires a social response.) Along the other continuum, Hierarchists prefer a society with rigid structure and class and a stable, predictable status quo, while Egalitarians prefer a society that is more flexible, that allows more social and economic mobility, and that is less constrained by 'the way it's always been'. (Hierarchists deny climate change because they fear the response means shaking up the free-market, fossil-fuel status quo. Shaking up the status quo is music to the ears of Egalitarians, who are therefore more likely to believe in climate change.)

That risk is inescapably subjective is disconcerting for those who place their faith in the ultimate power of Pure Cartesian "I think, therefore I am" Reason. But the robust evidence summarized above makes clear that:

1. Risk perception is inescapably subjective.

2. No matter how well educated or informed we may be, we will sometimes get risk wrong, producing a host of profound harms.

3. In the interest of public and environmental health, we need a more holistic, and more realistic, approach to what risk means. Societal risk management has to recognize the risk of risk misperception, the risk that arises when our fears don’t match the evidence, the risks of The Perception Gap.

Letting go of our naïve fealty to perfect reason will allow us to recognize and understand these hidden dangers. Once brought to light, the harms to society from declining vaccination rates, the lost benefits of genetically modified food, the morbidity and mortality and societal costs of obesity – these risks and many more can be studied and quantified and managed with the same tools we already use to manage the risks from pollution or crime or disease. The challenge is not how to manage the risks of the Perception Gap. The challenge is to rationally let go of our irrational belief in the mythical God of Perfect Reason, and use what we know about the psychology of risk perception to more rationally manage the risks that arise when our subjective risk perception system gets things dangerously wrong.

Further Reading:

The neuroscience of risk perception – LeDoux J, The Emotional Brain, Simon and Schuster, 1996

Heuristics and Biases – Kahneman, D., Slovic, P. & Tversky, A. Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982

The Psychometric Paradigm ‘risk perception factors’ – Slovic P, The Perception of Risk, Earthscan 2000

Cultural Cognition.

Risk Intelligence


This week’s guest blogger is Dylan Evans, an author and academic at University College Cork, Ireland. He lectures in behavioural science and is the author of numerous books including Emotion and Placebo.

President Obama recently criticized American spy agencies for failing to predict the spreading unrest in the Middle East. Now a new study is attempting to discover what makes a good forecaster.

Volunteers are being recruited for a multi-year, web-based study of people’s ability to predict world events. The study is sponsored by the Intelligence Advanced Research Projects Activity (IARPA). One aim of the study is to discover whether some kinds of personality are better than others at making accurate predictions. The researchers hope to recruit a diverse panel of participants who are interested in offering predictions about events and trends in international relations, social and cultural change, business and economics, public health, and science and technology.

The Forecasting World Events Project is part of a multi-year research program investigating the accuracy of individual and group predictions about global events and trends, with the aim of advancing the science of forecasting. Last year I carried out some similar research. In December 2009 I set up a prediction game in which people were asked to estimate the chances of various developments in politics and business around the world in the coming year.

During the first few months of 2010, over 200 people who had already taken our basic risk intelligence test (which asked people to estimate the likelihood of statements about general knowledge) estimated the probability of each prediction. Over the rest of the year, whenever any of the predictions came true or proved false, my colleague Benjamin Jakobus entered the details into the system accordingly. At the end of the year, we had enough data to calculate the participants' risk intelligence.
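The scoring idea here (collect a person's probability estimates, wait for the outcomes, then measure how well the two line up) can be sketched as a simple calibration score. This is an illustrative reconstruction, not the published test's actual formula; the ten-bin grouping and the 0–100 scaling are my assumptions:

```python
from collections import defaultdict

def risk_intelligence_score(estimates, outcomes):
    """Calibration-style score on a 0-100 scale.

    estimates: probabilities (0.0-1.0) a person assigned to each prediction
    outcomes:  1 if the prediction came true, 0 if it came false
    """
    # Group estimates into ten probability bins (0-10%, 10-20%, ...)
    bins = defaultdict(list)
    for p, hit in zip(estimates, outcomes):
        bins[min(int(p * 10), 9)].append((p, hit))

    # In each bin, compare the mean stated probability with the
    # observed frequency of true outcomes; collect the gaps.
    gaps = []
    for entries in bins.values():
        mean_p = sum(p for p, _ in entries) / len(entries)
        freq = sum(hit for _, hit in entries) / len(entries)
        gaps.append(abs(mean_p - freq))

    # Perfect calibration -> 100; large gaps pull the score toward 0.
    return 100 * (1 - sum(gaps) / len(gaps))
```

A perfectly calibrated forecaster, whose "70% likely" predictions come true about 70% of the time, scores near 100; a systematically overconfident one scores much lower.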

The big question was whether these scores would correlate with those derived from the general-knowledge version of the test. If they did, that would suggest that the cognitive tasks involved in estimating the likelihood of general knowledge statements are basically the same as the skills required to estimate the probability of future events. In other words, if people tended to get similar scores on both types of test, it would support the view that risk intelligence is a single general-purpose ability to deal with uncertainty that can be applied equally to reasoning about the past, present and future. If, on the other hand, people tended to get very different scores in the two tests, this might suggest that risk intelligence is more domain-specific, so a person could be risk smart in one area and risk stupid in another.

[Figure: scatter plot of scores on the prediction test against scores on the general-knowledge test]

As you can see from the graph, the results were not impressive; the correlation between the scores on the two tests was only .21. This means there is questionable value in administering a general knowledge version of the risk intelligence test to someone in an attempt to discover his or her skill at forecasting. It may be a useful approach when selecting the best hundred forecasters from a pool of a thousand applicants, since even low correlations can be useful when dealing with large groups. For individuals, however, it would appear that the only way to measure forecasting ability is to collect probability estimates about actual future events; the general knowledge type of risk intelligence test will not serve as a proxy.
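The agreement between the two tests is simply the Pearson correlation of the paired scores, which is easy to compute directly (a minimal sketch; the study itself presumably used standard statistical software):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance of the pairs, divided by the product of the
    # standard deviations (the 1/n factors cancel).
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

With r = 0.21, a person's general-knowledge score explains only about 4% of the variance (r squared) in their forecasting score, which is why the test can help rank a large pool of applicants but says little about any one individual.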