This week’s guest blogger is Dylan Evans, an author and academic at University College Cork, Ireland. He lectures in behavioural science and is the author of numerous books including Emotion and Placebo.
President Obama recently criticized American spy agencies for failing to predict the spreading unrest in the Middle East. Now a new study is attempting to discover what makes a good forecaster.
Volunteers are being recruited for a multi-year, web-based study of people’s ability to predict world events. The study is sponsored by the Intelligence Advanced Research Projects Activity (IARPA). One aim of the study is to discover whether some kinds of personality are better than others at making accurate predictions. The researchers hope to recruit a diverse panel of participants who are interested in offering predictions about events and trends in international relations, social and cultural change, business and economics, public health, and science and technology.
The Forecasting World Events Project is part of a multi-year research program investigating the accuracy of individual and group predictions about global events and trends, with the aim of advancing the science of forecasting. Last year I carried out some similar research. In December 2009 I set up a prediction game in which participants were asked to estimate the chances of various developments in politics and business around the world in the coming year.
During the first few months of 2010, over 200 people who had already taken our basic risk intelligence test (which asked people to estimate the likelihood of statements about general knowledge) estimated the probability of each prediction. Over the rest of the year, whenever any of the predictions came true or false, my colleague Benjamin Jakobus entered the details in the system accordingly. At the end of the year, we had enough data to calculate the participants' risk intelligence.
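The scoring formula is not spelled out here, but risk intelligence scores of this kind are typically calibration-based: stated probabilities are compared with how often the corresponding predictions actually came true. The following sketch illustrates that idea; the bucket width and the 100-point rescaling are my own assumptions for illustration, not the formula used in the study.

```python
# Sketch of a calibration-based score from probability estimates.
# Each (estimate, outcome) pair is a stated probability (0-100) and
# whether the prediction came true (1) or false (0).
# Bucket width and scaling are illustrative assumptions, not the
# study's actual scoring rule.

def calibration_score(responses, bucket_width=10):
    buckets = {}  # bucket midpoint -> (sum of outcomes, count)
    for estimate, outcome in responses:
        mid = (estimate // bucket_width) * bucket_width + bucket_width / 2
        s, n = buckets.get(mid, (0, 0))
        buckets[mid] = (s + outcome, n + 1)
    # Mean absolute gap between stated probability and observed
    # frequency in each bucket, rescaled so perfect calibration = 100.
    gaps = [abs(mid / 100 - s / n) for mid, (s, n) in buckets.items()]
    return round(100 * (1 - sum(gaps) / len(gaps)), 1)

# A well-calibrated respondent: events rated 95% come true 19 times in 20.
print(calibration_score([(95, 1)] * 19 + [(95, 0)]))  # 100.0
```

A respondent who says "90%" for events that only happen half the time would score much lower, regardless of how knowledgeable they are; calibration rewards knowing the limits of one's own knowledge.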
The big question was whether these scores would correlate with those derived from the general-knowledge version of the test. If they did, that would suggest that the cognitive tasks involved in estimating the likelihood of general knowledge statements are basically the same as the skills required to estimate the probability of future events. In other words, if people tended to get similar scores on both types of test, it would support the view that risk intelligence is a single general-purpose ability to deal with uncertainty that can be applied equally to reasoning about the past, present and future. If, on the other hand, people tended to get very different scores on the two tests, this might suggest that risk intelligence is more domain-specific, so a person could be risk smart in one area and risk stupid in another.
As you can see from the graph, the results were not impressive; the correlation between the scores on the two tests was only .21. This means there is questionable value in administering a general knowledge version of the risk intelligence test to someone in an attempt to discover his or her skill at forecasting. It may be a useful approach when selecting the best hundred forecasters from a pool of a thousand applicants, since even low correlations can be useful when dealing with large groups. For individuals, however, it would appear that the only way to measure forecasting ability is to collect probability estimates about actual future events; the general knowledge type of risk intelligence test will not serve as a proxy.
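For readers who want to check such a result against their own data, the figure in question is just Pearson's correlation coefficient over paired scores. A minimal sketch (the score lists below are made up for illustration, not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical general-knowledge vs. prediction-test scores:
gk_scores = [73, 60, 81, 55, 90, 68, 77]
pred_scores = [70, 72, 65, 58, 80, 61, 66]
print(round(pearson_r(gk_scores, pred_scores), 2))
```

A correlation of 0.21 means the general-knowledge score explains only about 4% of the variance in forecasting scores (r squared), which is why it helps when ranking a large pool of applicants but tells you little about any one individual.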
It's quite interesting to learn about risk intelligence. My score is 73/100 on the RQ test at the link you provided in the article, but I couldn't find the general knowledge version of the test you mentioned. So does this score have a standing on its own, and if it does, what does it say?
The link provided in the article points to the general knowledge version of the test. The prediction version is no longer online. Your score of 73 does have a standing on its own; it indicates above-average risk intelligence. More information would be provided by the shape of your calibration curve.
If you have any more questions, please let me know and I can get in touch with Dylan again.
Thank you, Laura, for the feedback. I would also be interested to know whether the time one takes to complete the test has any influence on the outcome. I might have taken a little over 5 minutes to finish all 50 questions (the website says it takes around 5 minutes to answer). I'm just curious whether the time taken says anything about a spontaneous ability to assess risks, or whether it reflects educated guesses and calculated risks.