Soapbox Science

Test, learn, adapt – a scientific approach to public policy

Dr. Prateek Buch is a research scientist and public engagement professional working to involve patients and the public in a world-leading research programme to develop gene and cell therapies for vision loss. Prateek is also a Liberal Democrat, the acting Director of the Social Liberal Forum (socialliberal.net), and a member of the executive of Science is Vital, through which he advocates evidence-based policies and promotes the importance of science in public policy. He tweets @prateekbuch and blogs (occasionally) about science and politics at teekblog.blogspot.com.

Most people are in favour of evidence-based policy. Or, to put it another way, very few people advocating or implementing public policy would admit to being in favour of evidence-free policy based on mere guesswork, hunches and ideology. So if evidence-based policy is what we want, and we want it now (there’s the beginnings of a campaign slogan in there somewhere), what’s the best way to make it happen? According to a Cabinet Office paper by Laura Haynes and Owain Service of the Behavioural Insights Team, co-authored by Dr. Ben Goldacre and Prof. David Torgerson, the best way is to conduct randomised controlled trials (RCTs) that pit interventions against current best practice.

They propose a nine-step framework for RCTs of public policy that they call ‘test, learn, adapt’, and as a scientist with a keen interest in public policy it’s an approach that I, and many others like me, wholeheartedly endorse. Any similarity with the title of my own obscure blog about science and politics (consider, evaluate, act) is pure coincidence (as any robust study would no doubt prove), but there is much in the paper that gives voice to the kind of approach many of us wish were more widespread in public policy-making.

Debunking myths that opponents of public policy trials often cite (with debunker-extraordinaire Goldacre on board we would hardly expect less), the authors make the case for well-designed, controlled and randomised trials between similarly matched groups of people to assess whether a policy is as effective as claimed – the ‘test’ element of ‘test, learn, adapt’. The authors emphasise the importance of pre-determining what is to be measured, adequately randomising comparable groups, and powering the study to be big enough to provide meaningful data; following these steps helps to ensure an ethical, robust trial.
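
To make the ‘test’ step concrete, here is a minimal sketch – not from the Cabinet Office paper, and with hypothetical participant names and outcome rates – of two of the design steps the authors stress: randomly assigning comparable participants to an intervention arm or a current-best-practice control arm, and calculating how many participants each arm needs to detect a given effect with adequate statistical power.

```python
import math
import random
from statistics import NormalDist

def randomise(participants, seed=2024):
    """Assign participants at random, half to the intervention and half to
    control (current best practice), before any outcomes are observed."""
    rng = random.Random(seed)  # fixed seed so the allocation is reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

def sample_size_per_arm(p_control, p_intervention, alpha=0.05, power=0.80):
    """Per-arm sample size for comparing two proportions with a two-sided
    test, using the standard normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p_control + p_intervention) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_control * (1 - p_control)
                                      + p_intervention * (1 - p_intervention))) ** 2
    return math.ceil(numerator / (p_intervention - p_control) ** 2)

# Hypothetical example: to detect a rise in a back-to-work rate from 30% to
# 35% at 5% significance and 80% power, each arm needs roughly 1,400 people.
print(sample_size_per_arm(0.30, 0.35))          # ~1377 per arm
print(randomise([f"participant_{i}" for i in range(8)]))
```

The power calculation makes the authors’ point in numbers: a trial too small to detect the effect it is looking for cannot provide meaningful data, and is arguably unethical to run at all.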

They also provide examples of well-conducted trials in business, international development and even government policy that have proved informative and effective, helping us learn what works. In doing so the paper makes a solid case for more extensive use of the RCT, for so long the gold standard for showing that medical interventions are effective.

The final element of ‘test, learn, adapt’ is arguably the most important, and at the same time may prove the most troublesome for politicians habituated to a culture of certainty and, often, to committed ideology. The Cabinet Office paper uses theoretical and real-world examples of Department for Work and Pensions (DWP) trials seeking to determine the effectiveness of various ‘back-to-work’ programmes for the unemployed. However, recent experience in this exact domain shows that even when studies are well designed and provide data that we can learn from, adapting public policy to their findings can prove a step too far for Ministers who have reasons other than evidence for pursuing a given pet policy.

Convincing policy-makers to test their hypotheses (intervention X will improve outcome Y in Z ways), and to learn from those tests, will prove less of a barrier to a more scientific approach to public policy than getting them to adapt their stance to align with the evidence. This argument is well made in Mark Henderson’s outstanding book The Geek Manifesto, from which it is clear that politics is based more on policy-based evidence than evidence-based policy – something that my brief and somewhat peripheral experience of Westminster policy-making affirms. Henderson sets out some interesting potential solutions, including an independent Office of Scientific Responsibility, and an intriguing suggestion first floated by Tim Harford that would ‘encourage U-turns made in the face of the evidence’: the awarding of an ‘annual prize to the politician who has been most usefully wrong and admitted to it’, to which Henderson adds the need for ways to ‘name and shame the evidence abusers’. Such a carrot-and-stick approach may help show our elected representatives that the electorate would prefer them to base their policies on evidence, and to bear Keynes in mind (when the facts change…), rather than to insist on pre-conceived and often biased notions that the evidence disproves.

The ‘test, learn, adapt’ paper sets out how the Cabinet Office would like to see evidence generated and used in the pursuit of ‘what works’ in public policy – here’s hoping their Whitehall and Westminster colleagues test this approach, learn from it, and adapt their policy-making to incorporate its findings.

Comments

    b k said:

    Am I the only one sick of acronyms? The author of this is correct. How about applying what we know now with some common sense and rolling with it?
