Chief Scientific Adviser to the European Commission discusses evidence-based policy and nurturing and supporting a European scientific culture

"The policy world very much mirrors what we do in science today."  Image: (c) European Union

“The policy world very much mirrors what we do in science today.” Image: (c) European Union

Professor Anne Glover joined the European Commission as Chief Scientific Adviser to the President in January 2012, and is the first person to hold this position.

In this role she advises the President on any aspect of science and technology, liaises with other science advisory bodies of the Commission, the Member States and beyond, coordinates science and technology foresight, and promotes the European culture of science to a wide audience, conveying the excitement and relevance of science to non-scientists. She also chairs the recently established Science & Technology Advisory Council of the President.

Prior to her current appointment she was Chief Scientific Adviser for Scotland from 2006 to 2011. Professor Glover holds a Personal Chair of Molecular and Cell Biology at the University of Aberdeen, where she has spent most of her academic career and where her research group pursues areas ranging from microbial diversity to the development and application of whole-cell biosensors (biological sensors) for environmental monitoring and the investigation of how organisms respond to stress at a cellular level.

Professor Glover holds several honorary doctoral degrees and is an elected Fellow of the Royal Society of Edinburgh, the Society of Biology, the Royal Society of Arts and the American Academy of Microbiology. Professor Glover was recognised in March 2008 as a Woman of Outstanding Achievement in the UK and was awarded a CBE for services to Environmental Science in the Queen’s New Year Honours list 2009.

When Professor Anne Glover finished her five-year term as Chief Scientific Adviser for Scotland, the biologist was lauded for not only raising the visibility of science in Scotland and the UK, but for further increasing the role of scientific evidence in the policy-making process.

These fruitful five years led her to the challenging and geographically diverse role of Chief Scientific Adviser to the European Commission (EC), which she leaves at the end of 2014 after three years in the position. As the first scientist ever tasked with independently advising the politicians and policy-makers governing more than 500 million people across 28 member states, she had no easy assignment.

Transitions: From Science to Politics – Julian Huppert

Career paths are not always straightforward. Choosing a scientific vocation can involve challenging and unanticipated decisions, often with no guide to follow. Some scientists hop from the lab bench into industry while others progress up the academic research ladder. Still others leave research behind to explore science communication, teaching, setting up their own business or working in technical roles outside the lab.

While a love of science can lead to varied and fulfilling careers, it may be lonely trying to evaluate the next step to take. Recently, initiatives such as “This is what a scientist looks like” and the #IamScience discussions, have shone a bright light on scientific career trajectories. In our latest Soapbox Science series, we focus on some interesting examples of scientific career transitions. We will hear from different contributors, all of whom use their scientific background in their current jobs, asking each of them the same questions: how did you decide on your career path, what are your motivations, and what does the future hold?

Julian Huppert is a Liberal Democrat politician in the United Kingdom and has been a Member of Parliament for Cambridge since 2010. He studied at Cambridge University, completing a BA and then a PhD in Biological Chemistry at Trinity College. He was elected a Junior Research Fellow of Trinity in 2004 and became a fellow of Clare College in 2009. He is also a Member of the Royal Society of Chemistry (MRSC) and the Institute of Physics (MInstP).

A New Era of Science Funding – Part 4: Speaking up in support of federally funded research

Over the years science funding has changed significantly. In the past, funding would have been obtained through private benefaction or from wealthy individuals. Today, researchers are usually funded by a mixture of grants from government agencies, non-profit foundations and institutions. However, with the increasing popularity of social media and the internet, methods used to obtain money may be undergoing a shift. New routes linking funding sources with scientists are being increasingly explored. Tighter budgets and struggling economies are driving a need for new ways of funding and social media is proving to be invaluable in raising awareness of projects and linking like-minded people more effectively.

In this special Soapbox Science series, we focus on the new ways in which science groups and individuals are obtaining funding and how projects such as Petridish, Tekla Labs, Kickstarter and the #scifundchallenge may change the future of scientific research.

Dr. Thon holds joint appointments within the hematology division at Brigham and Women’s Hospital, and Harvard Medical School in Boston, and is an American Society of Hematology Scholar. Dr. Thon received his doctorate from the University of British Columbia, Canada, under Dr. Dana Devine where he worked closely with Canadian Blood Services for the improvement of the processing and storage of blood platelets. As a post-doctoral fellow in Dr. Joseph Italiano’s lab, Dr. Thon’s research now focuses on the cytoskeletal mechanics and signaling pathways leading to platelet formation. This research has set the groundwork for the development of biological model systems that will be used to (1) study the process of platelet release under physiologically relevant conditions, (2) develop bio-mimetic systems to generate useable numbers of clinically viable platelets for infusion, and (3) establish representative ex vivo models of human bone marrow and surrounding blood vessels to test drugs and develop treatments for thrombocytopenia.

The day-to-day rigors of academic biomedical research are difficult to appreciate, and it is necessary that scientists share their perspective of the knowledge market with politicians and government representatives who are burdened with making difficult decisions on our behalf. Unlike the airline industry which also does research and development (R&D) to create safer, lighter and more efficient airplanes, academic medicine does not build R&D into the pricing of its services. This is because biomedical research is a surprisingly random process which depends on chance observations, unexpected results and seemingly unrelated outcomes. As a result, downstream applications of research are almost impossible to predict at the outset and necessitate an altogether different model of cost recovery. To subsidize national biomedical research endeavors, projected costs are spread among citizens in the form of taxes, and distributed to multiple academic institutes as operating grants. Investments in research lead to licensed technologies which create jobs and revenues far in excess of the grants that support them, with every dollar invested in academic biomedical research generating two dollars in economic growth (Murphy K., Topel, R. The economic value of medical research, 1999).

A country’s biomedical advancement and innovation is thus tied to its investment in academic research. Funding of research comes entirely from government and private donors, and is as value-based, bottom-up and pork- and crony-free as it gets. In North America approximately two-thirds of academic biomedical research is supported through federal funding agencies such as the National Institutes of Health (NIH) and the Canadian Institutes of Health Research (CIHR). The mainstays of NIH/CIHR support are grants made to individual investigators for reasonably broad research projects, and researchers compete for these funds through a rigorous process of peer review. Nevertheless, the lack of sustained growth for both the NIH and CIHR has forced success rates for primary operating grants to drop significantly over the last decade, to approximately 12% (NIH R01) and 15% (CIHR Operating Grant) in Fiscal Year 2011. This means that only a very small percentage of outstanding applications for research projects are being actively supported to tackle the multitude of health needs in these countries. As a result, a majority of highly-rated research proposals will not be funded, opening the field for countries like Germany, India and China, which boast funding rates of 47% and higher, to take the lead.

Relatively flat funding rates in North America have meant that universities, hospitals and research institutes have been forced to implement hiring freezes of PhDs into faculty positions, effectively stranding their scientists in temporary, low-paying jobs with limited prospects of advancement. Not only does this risk exporting our scientists abroad, but private industry’s reliance on biomedical research, both in terms of scientific innovation and the researchers they help train, means industry will follow suit. Canadians call this the ‘brain drain.’

Indeed, 80% of PhDs in North America will not become professors (Fuhrmann et al., Improving graduate education to support a branching career pipeline: Recommendations based on a survey of doctoral students in the basic biomedical sciences, 2011). For the majority of these scientific investigators, the inability to secure a faculty position has meant languishing in a series of post-doctoral positions supported by grant-funded professors who increasingly find themselves with limited resources. The average age of independence in research is now in the mid-40s, a testament to the bleak prospects facing young scientists. Given the state of academic funding, it is not surprising that many investigators have chosen to transition into more secure professions like teaching, medicine or law. The loss hurts our competitiveness in biomedical research and forces industry abroad.

Given our current economy, it is imperative that efforts to improve the nation’s fiscal stability be grounded in the long-term competitiveness of industries we currently lead, and that we leverage our expertise in medical science and capacity to do high-tech research. This does not need to come from increased government spending alone. Whereas academic medicine cannot build R&D into the pricing of its services, universities profit directly from tuition fees, patents and personal endowments. Since these revenues are derived from faculty teaching loads, the scientific success of their investigators, and the established reputation of their research programs, faculty support must be factored into departmental operating budgets. For American institutions, the Canadian system represents a more sustainable model in this regard, and universities on both sides of the border should be required to assume greater responsibility for investigator salaries and administrative support, freeing up tax dollars to directly support research innovation. Likewise, tax breaks for private donations to federal funding agencies would reduce their dependence on tax-payer dollars and incentivize industry investment in national research programs. Finally, limiting the number of federal awards issued per investigator (most of which are held by senior faculty) would open up more funding opportunities to help support young investigators and significantly lower the age of independence.

Science is a marathon, and if we fall behind now, while we lead health innovation in the world, the cost of recovering our position, in light of emerging economies with which we compete, will become progressively more expensive. Sustained increases in NIH/CIHR funding are critical to maintain North America’s innovation engines at a crucial time for research and the economy, and most importantly, improve the health and well-being of our nations. Now is the time for scientists to advocate most strongly for national investment in biomedical research. Senators, congressmen, and members of parliament are the decision-makers you elect to represent you – write to them. You can go here and enter your zip code (in the United States), or here and enter your postal code (in Canada), to access your representative.

While the argument for the government to prioritize an industry where the number of clinical advances, drug developments and cures is proportional to total research investment is not a difficult case to make – make it. Addressing these concerns forces the issue to light, and commits politicians to publicly defensible positions for which they can subsequently be held accountable. Government agencies cannot lobby for themselves and policy makers do not share your unique perspective. Our health, economy, and the future of scientific progress are at stake.

To find out more about science funding you can read this special Nature News feature,  Finding philanthropy: Like it? Pay for it.

Climate change and extreme events

Dr Andy Russell is a climate science lecturer in the Institute for the Environment at Brunel University. His research focuses on how severe storms develop in Europe and Antarctic climate dynamics. He blogs his thoughts on weather and climate issues and tweets as @dr_andy_russell.

Despite the recent controversies regarding the Intergovernmental Panel on Climate Change (IPCC), the international effort to summarise the state of the climate and make projections about its likely condition by the end of the 21st Century rumbles on.

As I type, I have a massive chapter for the next full Assessment Report (due to be published in 2014) sitting on my desk to review and a couple of analysis routines churning their way through terabytes of climate model data. There’ll be hundreds of other people around the world focussed on similar things. The aim is to produce the 5th series of Assessment Reports since the IPCC was formed in 1988 to help decision makers, well, make decisions.

But the IPCC has been up to other things recently as well. In November 2011 it published a Special Report Summary for Policymakers on “Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation” (or SREX; the full report will be published in February 2012). Understanding how extreme events might change in the future is really important as it’s these things that will really impact people: heat waves, flash floods, hurricanes, droughts and inundation related to sea level rise. This is far more useful to know than the quite abstract concept of global mean temperature change. This report looks like an advance in IPCC procedure as it involved a far more integrated approach than usual IPCC outputs, having authors from climate science, impacts and adaptation backgrounds as well as disaster risk management experts.

Although it sounds obvious, one of the key conclusions of SREX was that the impact of extreme climatic events is greatest where vulnerability is highest. On the ground, this has manifested itself as higher fatality levels in developing nations and higher economic losses in developed countries. There’s a lot to think about here in terms of how developing nations move forward and how developed nations approach things sustainably to reduce exposure. That’s not really my area though.

From a scientific point of view, they also point out that analysing extremes is relatively difficult as they are rare and data from around the world are not always up to the job. That said, this depends a lot on the particular “extreme” being investigated – this has always struck me as slightly odd about the climate extremes community in that the only common theme is the statistics and not the science behind the phenomena.

Looking to projections, the IPCC SREX assigns its highest confidence assessment (“virtually certain”) to increases in temperature extremes by 2100. This is because such extremes are pretty much a direct response to the radiation changes forced by atmospheric greenhouse gas emissions. Everything else is a slightly messier consequence of the temperature changes, and these other fields vary much more amongst the 12 different models used in this analysis, making their projections uncertain. However, it also looks likely that heavy precipitation events will increase in certain regions and that the maximum winds associated with tropical cyclones will increase whilst their total number will likely decrease.

Oddly enough, the emissions pathway that we take in the future (the IPCC analyses different sets of projections based on different socioeconomic and technological development assumptions) has little impact on extreme events in the next 30 years or so – they don’t appear to have an impact until the latter half of the 21st Century when inter-model variability masks most of the climate signal anyway. This highlights how making projections of extreme events is a difficult game. In that spirit, here are two of the key problems as I see them relating to my area of research on severe storms in Europe:

Loading the dice or getting new dice?

If we assume that climatic quantities have a normal distribution (which isn’t always the case, especially with precipitation) then you can view the extremes as the tails at either end of the distribution, e.g. hot or cold. So climate change could be viewed as like loading dice: you start rolling more sixes (or getting more hot days). However, when the climate regime changes this analogy breaks down as, instead of just rolling more sixes, you start needing to roll sevens as climate records are broken (see the figure below). This poses a problem for climate models because, just as a six-sided die isn’t designed to roll a seven, climate models haven’t been designed for (or at least haven’t been verified against) conditions that have never been observed.
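The “loading the dice” effect is easy to see numerically. Here is a hypothetical sketch (the temperatures, spread and thresholds are illustrative assumptions, not taken from the SREX analysis) showing that a modest shift in the mean of a normal distribution multiplies the frequency of what used to be rare extremes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative "daily summer temperature" draws (values are assumptions):
# present climate: mean 20 C; shifted climate: mean 21.5 C, same spread.
present = rng.normal(loc=20.0, scale=2.0, size=100_000)
shifted = rng.normal(loc=21.5, scale=2.0, size=100_000)

# Call anything above the present climate's 99th percentile an "extreme".
threshold = np.percentile(present, 99)

p_present = np.mean(present > threshold)  # ~1% by construction
p_shifted = np.mean(shifted > threshold)  # several times larger

# A small shift in the mean gives a large change in the tail frequency.
ratio = p_shifted / p_present
```

Under these numbers the shifted climate produces roughly five times as many days above the old 99th-percentile threshold, even though the mean has moved by less than one standard deviation.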

We’re gonna need a smaller box.

The second problem is that some important things – like severe storms, tornados and regional and local changes such as river catchment area precipitation changes – are too small for climate models to represent or resolve. The reason for this is that these computer models split the atmosphere (and oceans) into a 3D array of boxes. The important equations are solved in each box and then they pass information to neighbouring boxes as appropriate at each model time step. These boxes usually have horizontal dimensions of around 100-400 km to allow for a convenient computational time. However, storms and tornados work on scales of significantly less than 100 km so there’s no way that the models can tell us anything about these things. This problem is particularly acute in relation to the IPCC SREX as this analysis used a suite of climate model data from a project called CMIP3, which was completed in 2006 for the last IPCC assessment and, therefore, does not use the most up-to-date and highest resolution model data. (The data currently being prepared for the next IPCC Assessment Report called CMIP5 is, however, not yet complete so perhaps this criticism is a bit unfair.)
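The resolution problem above can be made concrete with a back-of-envelope check. A common rule of thumb is that a model needs several grid lengths to represent a feature; the factor of four below is an assumption for illustration, and the feature scales are rough hypothetical values:

```python
def resolvable(feature_km: float, grid_km: float, factor: int = 4) -> bool:
    """Return True if a feature of the given horizontal scale is at least
    as large as the model's effective resolution (factor * grid spacing)."""
    return feature_km >= factor * grid_km

# Finer end of the CMIP3-era grids mentioned above: ~100 km boxes.
grid = 100.0

print(resolvable(1000.0, grid))  # large frontal system  -> True
print(resolvable(50.0, grid))    # severe storm          -> False
print(resolvable(0.5, grid))     # tornado               -> False
```

Even at the optimistic 100 km grid spacing, anything below a few hundred kilometres falls through the grid entirely, which is why severe storms and tornados cannot be represented directly.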

Is this good enough?

So does this mean that analyses using these model data are not useful or reliable? When faced with this question I struggle to get past the fact that, however much they can improve in the future, these models are still the best and only tool we have for making climate projections. Beyond that, we can take comfort in the fact that the very basic physics of climate science is really well understood – even very simple energy balance models can tell us useful things about the effects of increasing atmospheric greenhouse gas changes. What we’re talking about here are the details, albeit very important details, and in that respect our current analyses are consistent with the things that we’re pretty sure of.

The green curve represents the distribution of Swiss summer temperatures from 1864 to 2002. Clearly, 2003 does not align well with that distribution and is an example of an extreme breaking a previous record. This figure has been taken from the IPCC AR4, for more details see here.

Science Online NYC (SoNYC) 3 – Science and the Law

On Wednesday evening, we hosted the third installment of the monthly Science Online NYC (SoNYC) discussion series. The topic for debate this month was “Science and the Law” and the panel featured Nadim Shohdy, Matt Berntsen, Simon Singh (who kindly stayed up late in the UK so that he could join us via Skype) and Dan Vorhaus.

As is our usual format, following short introductory talks from the panelists, we invited attendees present in person at Rockefeller University or watching online to take part in a wider discussion.

On saying the right thing – Libel and the Law

The evening began with an introduction by Simon Singh on his experiences of being sued for libel in England, highlighting some key differences between libel law in the UK and the US. One of the questions from the audience was whether Singh would write the same article again knowing what he knows now. Singh reflected that yes, some things could have been done differently, such as not naming an organisation or slightly softening the tone, but the key issue remained: why should a journalist feel scared to do their job?

Later in the evening, Matt Berntsen began his talk with a two sentence definition of libel laws in the US, which stressed the importance of consent. Translating for the less legally-aware in the audience, Dan Vorhaus explained that if you interview or write a piece about an individual or an organisation and they OK it before you publish it, they cannot later sue you about its contents.

Singh’s libel discussions concluded with some crowd-sourced advice for all bloggers to check out online guidelines by Sense about Science in the UK and Electronic Frontier Foundation in the US about libel laws.

Whose gene is it anyway?

Nadim Shohdy chose to focus on patent law in the US, giving an overview of the Bayh-Dole Act, the US legislation relating to research work funded by the federal government. Its outcome was to give universities (and others) the intellectual property rights to their inventions.

Discussions later in the evening returned to patents, specifically whether or not it should be possible to patent individual genes. A loose explanation of the current situation in the US is that discoveries cannot be patented, whereas inventions can. To date, genes have fallen into the latter category as they are “created” in the lab. However, ongoing legal discussion around the BRCA genes involved in breast cancer may result in changes to the previously accepted status quo.

On the side of the Law…and the scientists

In the final set of comments before everyone moved to the bar to continue the discussions, Dan Vorhaus brought the elephant in the room around Science and Law out onto the stage in an attempt to promote further dialogue. Aware that lawyers are stereotypically seen as unapproachable and “unsexy,” and that, frustratingly, Law often trails some years behind scientific advances, Vorhaus asked how scientists and lawyers could develop closer collaborations. One suggestion included scientists helping lawyers to identify developments that are likely to have policy and/or legal implications. For example, rapidly falling costs of DNA sequencing were clearly going to lead to a wider accessibility and use of personal genomics. However, there is still only one piece of legislation specifically relating to the use of genetic material – other legal decisions are taken by interpreting and re-purposing other areas of Law, a situation which is less than optimal.

To read what people on Twitter were saying about the event, check out our Storify of tweets at the bottom of this post.

Blog posts about the 3rd #sonyc

Do let us know if you blog about the event and we’ll include a round-up of links here.

Photos

Have been added to our Facebook page. Do let us know if you’d like us to link to any of yours.

Live-streaming and video archiving

We do also live-stream each SoNYC event to give as many people as possible the chance to take part in the debate. Check out our livestream channel where the archives of the first two meetings are currently hosted.

Finding out more

There will not be a SoNYC in July as we are taking the month off, so the next event will be held in August. The details of August’s event will be announced soon – keep an eye on the SoNYC twitter account for more details and/or watch the #sonyc hashtag.

If you have a suggestion for a future panel or would be interested in sponsoring one of the events, please get in touch.

This month’s Storify

NB. Please let us know of any mistakes in the recounting of the legal definitions from this event so that we can correct them.

A Happy Revolution

Dr Nattavudh (Nick) Powdthavee is a behavioural economist in the Department of Economics at Nanyang Technological University, Singapore, and is the author of The Happiness Equation: The Surprising Economics of Our Most Valuable Asset. He obtained his PhD in the economics of happiness from the University of Warwick. Discussions of his work have appeared in over 50 major international newspapers in the past five years, including the New York Times and the Guardian, as well as in the Freakonomics and Undercover Economist blogs.

It’s not often in our lifetime that we can almost hear the intellectual tide turning. The year was 1993. The main perpetrators were Andrew Oswald and Andrew Clark, two British economists who, in October that year, organised the world’s first ever economics of happiness conference at the London School of Economics and Political Science. Posters advertising the event were put up weeks in advance. A hundred chairs were set out in the famous Lionel Robbins Building, waiting to be filled by many of the world’s greatest minds. The meeting, the organisers thought, was going to be revolutionary for economic science. Perhaps it would even be historic, not so dissimilar to the meeting held a few months earlier in Cambridge where the British mathematician Andrew Wiles presented his proof of Fermat’s Last Theorem to a few hundred academics.

Imagine their disappointment when only eight people turned up on the day*. It was official: the world’s first ever economics of happiness conference was a complete and utter failure.

Fast forward eighteen years to 2011. Happiness is currently one of the hottest topics in world politics and economic research. The British Prime Minister David Cameron has set out a plan to measure and improve people’s happiness – or, in his compound term, “general well-being”. The French president Nicolas Sarkozy has already launched an inquiry into happiness, commissioning Nobel Prize winners Joseph Stiglitz and Amartya Sen to look at how policies focused on Gross Domestic Product (GDP) sometimes trample over the government’s other goals, such as sustainability and work-life balance. There are now over two hundred thousand economics papers on the World Wide Web written exclusively on “happiness”, “life satisfaction”, or “subjective well-being”.

How did we get here so fast in just less than two decades?

Of course, one of the early issues that people have with the economics of happiness (and you’d be forgiven if you yourself did laugh at the idea) is that happiness is hardly a measurable concept. This is a big deal for economists who like to call themselves quasi-scientists (in that they mainly deal with objectively measurable data such as income and inflation rates). If what people say about the way they are feeling is subjective by definition, how can it be analysed and quantified?

This issue, I feel, has now been resolved almost entirely. Working alongside scientists, psychologists have been able to provide objective confirmation that what people say about their own happiness does indeed provide useful information about their true inner well-being. For instance, self-rated happiness has been shown to correlate significantly with the duration of “Duchenne” or genuine smiles a person gives during a day, as well as with memory quality, blood pressure, brain activity, and even heart rate. More remarkably, scientists have been able to show that how happy we feel about our lives today has important predictive power over whether or not we will still be alive forty or fifty years from now. Put simply, we really do mean what we say.

The last two decades have also seen a substantial rise in the number of newly available data sets that are impossibly large by previous standards. By applying appropriate statistical tools to these randomly drawn samples, researchers are able to explore whether or not the determinants of an individual’s happiness (normally captured by asking individuals to rate their happiness as “1. not too happy”, “2. pretty happy”, or “3. very happy”) are the same in America as they are in Great Britain, South Africa and China (which they are, thus lending further credence to the idea that such answers should be taken seriously).

So, what are the interesting results happiness economists have discovered so far? Well, for a start, happiness is U-shaped in age. On average, we are likely to be happier with our life at the younger and older points in our life-cycle, with the minimum occurring somewhere around the mid-40s. Money buys little happiness, whilst other people’s money tends to make us feel unhappy with ours. The big negatives in our life include, for example, unemployment and ill health. Yet these negative experiences hurt us less subjectively if we happen to know a lot of other unemployed people (or, in the case of ill health, other people with the same illness as ours). Marriage and friendships are extremely valuable, although there is little statistical evidence to suggest that children make parents any happier than their non-parent counterparts. And more recently, happiness economists have been able to put dollar, pound, or euro values on the happiness (or unhappiness) from seemingly priceless experiences or life events that come with no obvious market value, such as time spent with friends, getting married, losing one’s job, and even different types of bereavement.

It’s difficult to forecast how important this kind of work will be in the political arena in the forthcoming century. It’s possible that future governmental policies may shift entirely from the pursuit of wealth towards more non-materialistic goals as a result of these findings. We may even witness the replacement of GDP with a more general well-being index such as GNH (Gross National Happiness), although this is probably unlikely to happen. However, one thing’s for sure: economics, the dismal science, will never be the same again.

*Of those eight, five were speakers especially invited to speak at the conference by the organisers.