Climate Feedback

The climate machine

Olive Heffernan

Last November, I took a trip to Exeter to visit the UK Met Office. The purpose of my visit was to meet with Chris Jones, a climate modeller at the Met Office’s climate-change branch, the Hadley Centre.

Jones is one of a team of scientists who – over the past four years – have devoted much of their time to developing and testing what is arguably the world’s most sophisticated climate model. Known as HADGEM2-ES (short for the Hadley Centre Global Environmental Model, version two, with an added Earth-system component), this labour of love is one of a new generation of models under development that reach far beyond their distant forebears, which represented just the physical elements of the climate, such as air, sunlight and water. ‘Earth system models’ include all that and much more: forests that can shrink or spread as conditions change; marine food webs that react as the oceans grow more acidic with carbon dioxide; and aerosol particles in the atmosphere that interact with greenhouse gases, enhancing or sapping their warming power.


Right now, HADGEM2-ES is gearing up for a major challenge. Over the coming months, it will run a series of climate simulations out to the year 2100 for the next report of the Intergovernmental Panel on Climate Change (IPCC), on the physical-science basis of climate change, due out in 2013.

The scientists – such as Jones – who have developed HADGEM2-ES hope that by representing the Earth system in greater complexity they will be able to simulate the present-day climate with greater realism. This should, in theory, lead to more realistic projections for the future, but many of the climate modellers I spoke to were keen to point out that simulating the climate with more complex models may well lead to greater uncertainty about what the future holds. That’s because including sources of large feedbacks – such as forests that can expand or die, or tundra that can release vast amounts of methane – adds a whole new suite of factors to which the climate can respond.

So, it’s quite likely that the next IPCC report will have much larger error bars on its estimates of future temperature or precipitation, compared with its predecessor, the 2007 Fourth Assessment Report (AR4). Climatologist Jim Hurrell of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, who is heading up development of the NCAR Earth-system model, had this to say:

“It’s very likely that the generation of models that will be assessed for the next IPCC report will have a wider spread of possible climate outcomes as we move into the future.”

So why include more complexity in the model, if it will produce results that are less useful for decision-making? Here, it’s worth remembering that for climatologists, models are not just tools that can give a glimpse of what the future holds; they are also an experimental playground – a replica world on which they can test their knowledge of the climate system. Without the ability to conduct global-scale experiments in the lab or in the field, models are the only tools they have. So while the results from more complex models may, in the short term, be less informative for policy makers and the public, they will help scientists better understand what drives climate change and lead to better simulations in the long term.


According to Lenny Smith of the London School of Economics, making use of the output from these complex models is partly a communication challenge. Says Smith:

“Sure, you can say the newer models are ‘better’, but that does not inform decision makers on which questions these models are fit to base decisions on. The question is where and when are the simulations realistic? At 25 km or 1,000 km? For monthly averages or decadal means? For temperature or rainfall?”

While reporting this feature, I became very curious about the limits to models’ complexity. At some point, adding more complexity slows a model down so much that it is no longer useful. For HADGEM2-ES, simulating one month takes an hour of computing time. More complexity would slow this further, yet climate scientists at the Hadley Centre are already planning its successor – a model that can factor in the nitrogen cycle and methane emissions from thawing permafrost.

At the Hadley Centre, some staff joke that the modellers will eventually try to include panda bears in their simulations. This bit of hyperbole no doubt riles some of the modellers, perhaps because they are only too keenly aware that their creations can never fully represent the real thing.

But where’s the end point, I wondered? And is there a danger of the models becoming too complex? I spoke to Syukuro Manabe, one of the founders of modern climate modelling and a researcher at the Geophysical Fluid Dynamics Laboratory (GFDL), about this in Atlanta last month. “Once in a while you have to stop and think if this complexity is really warranted in light of our ignorance,” said Manabe. But, he added, the key is to look at whether “the detail in the model is balanced with our knowledge of the process”.

That’s exactly how it’s done, according to Hurrell, who says:

“We develop new representations of aspects of the system based on our best understanding of the system, and when we are confident that we can represent that process well, we include it in the model. The point of all of this is not to develop a perfect model. If it was strictly a beauty contest we might say we’re not quite comfortable with that.”

Figuring out what’s important to include is part of the challenge for Jones, who says “there’ll always be some level of detail or complexity that we can’t get in there.”

Read the full feature, which includes a detailed profile of the model, in this week’s Nature [subscription].

Comments

  1. Ryan said:

    This raises some interesting questions about models in general. But you have missed a crucial one: If the latest round of model runs is going to be LESS policy relevant than for AR4, then why should it even be part of the IPCC?

    Most of the policy decisions related to climate policy and adaptation policy won’t need input from GCMs.

    It’s one thing to fund these “experimental playgrounds” as long-term “pie-in-the-sky” projects. It’s quite another to fund them because (we are led to believe) they help people to make decisions.

    Maybe the IPCC should limit its focus to the parts of the science that actually take decision maker needs seriously.


  2. Jenli said:

    This model surely will not include the new findings of Susan Solomon regarding stratospheric water vapor and its influence on global temperatures. If we believe Solomon’s findings, stratospheric water vapor was responsible for a large amount of the warming since 1979, and for about 25% less warming since about 2000. The reason? Unknown! So what about the “predictions” of a model whose algorithms don’t take these facts into account at all?

Comments are closed.