Wednesday, November 14th 2012, 5:40 PM EST
Climate modelers are still trying to create a clockwork climate machine
For half a century, climate scientists have been attempting to simulate the workings of Earth's climate system in computer models. Over that period computers have increased in computational power a million-fold, allowing models to grow in complexity and, if you accept the word of the modelers themselves, accuracy. Today's models may produce more realistic output, but that should not be confused with more accurate output—modern climate models are still unable to accurately predict future fluctuations in Earth's environment. Why this should be so is highlighted in a new paper published in the Journal of Advances in Modeling Earth Systems (JAMES), a publication of the American Geophysical Union. In it, the tuning secrets of those modern-day mystics, climate modelers, are revealed.
Early on, computer modelers were content to capture the rough behavior of isolated parts of the physical world: heat transfer, fluid flow in ocean and atmosphere, and such. These models were created to provide scientists with insight into natural processes where it was impractical to perform real-world experiments. This is a thoroughly reasonable application of computer models. But as computers became larger and more powerful the modelers soon expanded the complexity of their code, aiming eventually to simulate the system of the world. Sadly, the development of these models went hand in hand with the rise of climate change alarmism—the over-hyped and unproven theory that human activity was causing a dangerous warming of the global climate.
A layperson might think that there would be a single correct solution to modeling the interrelated physical processes that comprise Earth's climate machine, but so far that single perfect model that accurately reproduces the real climate has eluded science. So over time, a bevy of climate models have been created, all different, all giving different inaccurate answers. Recognizing the imperfect nature of their creations, modelers have adjusted or tuned their models' properties in various ways in an attempt to more closely match the behavior of Earth's actual climate. In “Tuning the climate of a global model,” Thorsten Mauritsen et al. describe some of the ritual incantations of the arcane art of climate model tuning. The evolution of these tuning tricks is introduced by the authors as follows:
The need to tune models became apparent in the early days of coupled climate modeling, when the top of the atmosphere (TOA) radiative imbalance was so large that models would quickly drift away from the observed state. Initially, a practice to input or extract heat and freshwater from the model, by applying flux-corrections, was invented to address this problem [Sausen et al., 1988]. As models gradually improved to a point when flux-corrections were no longer necessary [Colman et al., 1995; Guilyardi and Madec, 1997; Boville and Gent, 1998; Gordon et al., 2000], this practice is now less accepted in the climate modeling community. Instead, the radiation balance is controlled primarily by tuning cloud-related parameters at most climate modeling centers [e.g., Watanabe et al., 2010; Donner et al., 2011; Gent et al., 2011; HadGEM2 Development Team, 2011; Hazeleger et al., 2012], while others adjust the ocean surface albedo [Hourdin et al., 2012] or scale the natural aerosol climatology to achieve radiation balance [Voldoire et al., 2012]. Tuning cloud parameters partly masks the deficiencies in the simulated climate, as there is considerable uncertainty in the representation of cloud processes. But just like adding flux-corrections, adjusting cloud parameters involves a process of error compensation, as it is well appreciated that climate models poorly represent clouds and convective processes. Tuning aims at balancing the Earth’s energy budget by adjusting a deficient representation of clouds, without necessarily aiming at improving the latter.
Model tuning is an integral part of the model development process, but tuning is not a well-defined term and it is not extensively discussed in the literature. This paper does its best to pull back the veil on model tuning and the implications of the changes that are still necessary to make climate models even get in the neighborhood of the real climate. The authors discuss their tuning process for a specific suite of models: the Max Planck Institute Earth System Model at base resolution (MPI-ESM-LR), which consists of ECHAM6 version 6.0, at T63 spectral resolution with 47 vertical levels, including the JSBACH land model, coupled to the MPIOM ocean model at 1.5 degree resolution with 40 vertical levels.
In the end it all comes down to radiation balance, the energy coming in from the Sun and the energy being radiated back into space. Typical tuning targets include the top of atmosphere (TOA) net, longwave and shortwave fluxes, cloud cover, and cloud liquid water and water vapor paths. “We tune the radiation balance with the main target to control the preindustrial global mean temperature by balancing the TOA net longwave flux via the greenhouse effect and the TOA net shortwave flux via the albedo effect,” the authors state, mentioning that the process may vary among modeling groups. Some of the cloud processes are depicted in the figure below.
The illustration shows some of the major climate-related cloud processes frequently used to tune the climate of the ECHAM model. Stratiform liquid and ice clouds, and shallow and deep convective clouds are represented. The grey curve to the left represents tropospheric temperatures and the dashed line is the top of the boundary layer. Parameters are a) convective cloud mass-flux above the level of non-buoyancy, b) shallow convective cloud lateral entrainment rate, c) deep convective cloud lateral entrainment rate, d) convective cloud water conversion rate to rain, e) liquid cloud homogeneity, f) liquid cloud water conversion rate to rain, g) ice cloud homogeneity, and h) ice particle fall velocity. As you can see, there are a lot of knobs to be tweaked to pull a model's output into line with empirically measured reality.
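Conceptually, the tuning loop is simple: pick a knob, evaluate the model, and adjust until the top-of-atmosphere budget closes. Here is a minimal sketch of that loop for a zero-dimensional energy balance with a single illustrative knob, planetary albedo. The solar constant, effective emissivity, and target temperature are round-number assumptions; real tuning works on full three-dimensional models, not a one-line equation.

```python
# Toy zero-dimensional energy balance: tune planetary albedo so the
# top-of-atmosphere (TOA) net flux is zero at a target temperature.
# An illustrative sketch, not the ECHAM tuning procedure; the constants
# below are round-number assumptions.

SIGMA = 5.670e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0             # solar constant, W m^-2
EMISSIVITY = 0.61       # effective emissivity (crude greenhouse effect)
T_TARGET = 287.5        # target preindustrial global mean temperature, K

def toa_net(albedo, temp=T_TARGET):
    """TOA net flux: absorbed shortwave minus outgoing longwave."""
    absorbed = (S0 / 4.0) * (1.0 - albedo)
    outgoing = EMISSIVITY * SIGMA * temp ** 4
    return absorbed - outgoing

def tune_albedo(lo=0.2, hi=0.4, tol=1e-9):
    """Bisect on the albedo 'knob' until the TOA budget closes."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if toa_net(mid) > 0.0:   # still absorbing too much: raise albedo
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

albedo = tune_albedo()
print(f"tuned albedo = {albedo:.4f}, residual = {toa_net(albedo):.2e} W/m^2")
```

Reassuringly, the knob lands near the observed planetary albedo of roughly 0.3, but that is the point of the critique: the knob was turned until the answer came out right, not derived from cloud physics.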
But does all of this poking and prodding make a model right, or does it just force it to give the answer the researchers desire? Mauritsen et al. sum up the state of the tuner's art this way: “Today, we tune several aspects of the models, including the extratropical wind- and pressure fields, sea-ice volume and to some extent cloud-field properties. By doing so we clearly run the risk of building the models’ performance upon compensating errors, and the practice of tuning is partly masking these structural errors.” Just because a model gives the “right” answers based on historical measurements does not mean the model is faithfully reproducing the natural system. All of those errors added to compensate for inaccuracy can come back later to bite the modelers.
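The compensating-error problem can be made concrete in a few lines: when the tuned quantity is an area-weighted global mean, wildly different regional error patterns are indistinguishable to the tuner. All numbers below are invented for illustration.

```python
# When the tuning target is an area-weighted global mean, regional
# errors that cancel in the mean are invisible to the tuner.
# All numbers are invented for illustration.

def global_mean(regional_net, weights):
    """Area-weighted global mean of regional TOA net fluxes (W/m^2)."""
    return sum(f * w for f, w in zip(regional_net, weights)) / sum(weights)

# Crude area weights for three latitude bands: tropics, midlatitudes, poles.
WEIGHTS = [0.5, 0.35, 0.15]

truth  = [1.0, -0.5, -1.5]       # a hypothetical structurally sound model
biased = [4.0, -4.0, -10 / 3]    # large opposite-signed regional errors

print(global_mean(truth, WEIGHTS), global_mean(biased, WEIGHTS))
```

The two global means agree to machine precision, yet the second model carries multi-watt regional errors hiding each other. A tuner watching only the global budget cannot tell them apart.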
Another problem with this process is that it is based on historical data, measurements taken from an Earth that no longer exists. This is because the Earth system is constantly changing, making climate modeling a bit like shooting at a moving target. But this statistical problem notwithstanding, there are a number of more fundamental computational problems with the model described. One sure sign that your model is not correct is that its results diverge from reality over time. Just such behavior is observed in climate models, something the modelers call “climate drift.”
Successfully modeling Earth's climate as it exists today relies not only on having an accurate physical model of the processes involved and their interactions, but on getting the initial conditions correct. This means capturing the energy already in the system and where it resides—in the atmosphere and the ocean depths for example. Climate modelers recognize this problem and the authors make this comment:
A particular problem when tuning a coupled climate model is that it takes thousands of years for the deep ocean to be equilibrated. In many cases, it is not computationally feasible to redo such long simulations several times. Therefore it is valuable to estimate the equilibrium temperature with good precision long before equilibrium is actually reached. Ideally, one would like to think that if we tune our model to have a TOA radiation imbalance that closely matches the observed ocean heat uptake in simulations where SST’s are prescribed to the present-day observed state with all relevant forcings applied, then the coupled climate model attains a global mean temperature in reasonable agreement with the observed.
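One standard numerical trick for the estimation problem the authors describe, namely recovering the limit of a slowly converging run from early samples, is Aitken's delta-squared extrapolation, which is exact when the approach to equilibrium is a single exponential relaxation. The sketch below applies it to a synthetic one-timescale run; real deep-ocean adjustment mixes many timescales, so this is illustrative only, and all the constants are invented.

```python
# Estimating a model's equilibrium temperature long before equilibrium:
# if the drift toward equilibrium is roughly one exponential relaxation,
# Aitken's delta-squared extrapolation recovers the limit from three
# equally spaced samples. A sketch under that (strong) single-timescale
# assumption; real ocean adjustment mixes timescales.

def aitken(t1, t2, t3):
    """Extrapolated limit of a geometrically converging sequence."""
    return (t1 * t3 - t2 * t2) / (t1 + t3 - 2.0 * t2)

# Synthetic "coupled run": relax toward an unknown equilibrium T_EQ.
T_EQ = 287.0          # the answer the extrapolation should recover (K)
T0 = 290.0            # initial global-mean temperature (K)
RATE = 0.02           # fractional adjustment per model year

temps = [T0]
for _ in range(300):  # only 300 years of a multi-millennial adjustment
    temps.append(temps[-1] + RATE * (T_EQ - temps[-1]))

# Sample at years 0, 100 and 200, then extrapolate.
estimate = aitken(temps[0], temps[100], temps[200])
print(f"estimated equilibrium: {estimate:.3f} K (true {T_EQ} K)")
```

The extrapolation hits the true equilibrium even though the run itself is still tens of millikelvin away, which is exactly the kind of shortcut the quoted passage is reaching for.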
In truth, it is almost impossible to capture the initial state of the climate as it stands today, so, as stated, modelers make some assumptions about the state at some time in the past and get a flying start toward today's state. To do this they run their inexact and possibly ill-tuned models for long periods of time. What happens to a model when run under such conditions is shown below.
Climate drift is indicated by the gray trails. Some models drift considerably, up to 1 K, while other models have fairly low drift during a typical 500-year control run. Clearly, when models are run over long periods of time their results have a tendency to drift, to diverge from the response of the real system (i.e., the climate). There are at least three reasons why this happens:
1. Climate models may not exactly conserve energy, implying that the models are incomplete or unrealistic.
2. The climate sensitivity of the model to the various forcings may not match the real climate system, and the forcings themselves may be erroneous.
3. Local SST biases in the coupled model may influence the atmospheric state—for example cloudiness—and thereby shift the global mean temperature (remember, cloudiness is one of the things that they tweak to adjust model output).
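Reason 1 is easy to demonstrate in miniature: a one-box "climate" whose numerics lose a fraction of a watt per square meter every step drifts steadily away from its control temperature, while an exactly conserving version holds steady. The heat capacity, feedback strength, and leak size below are illustrative guesses, not values from any real model.

```python
# Toy illustration of drift from non-conservation: a model whose
# numerics lose a small flux every step drifts cold, while an exactly
# conserving version holds its control climate. All numbers illustrative.

SECONDS_PER_YEAR = 3.15576e7
HEAT_CAPACITY = 4.0e8     # effective column heat capacity, J m^-2 K^-1
FEEDBACK = 1.2            # restoring feedback, W m^-2 K^-1
T_REF = 287.0             # balanced control temperature, K

def control_run(years, leak=0.0):
    """Step a one-box climate; `leak` is a spurious flux loss (W/m^2)."""
    temp = T_REF
    for _ in range(years):
        net = -FEEDBACK * (temp - T_REF) - leak   # W m^-2
        temp += net * SECONDS_PER_YEAR / HEAT_CAPACITY
    return temp

perfect = control_run(500)             # stays at 287 K
leaky = control_run(500, leak=0.5)     # half a W/m^2 leak -> cold drift
print(f"conserving: {perfect:.2f} K, leaky: {leaky:.2f} K")
```

A leak of only 0.5 W/m² drifts this toy model roughly 0.4 K cold over a 500-year control run, comparable in size to the drifts shown in the figure above.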
The reasons listed above are all mentioned in the paper, so it is obvious that modelers are aware of these limitations of their methodology. I would add that one of the things that modelers do is to correct for drift by resetting values in the simulation to better reflect reality. This is analogous to the hand of God reaching down to keep the world running smoothly, instead of creating a world that runs correctly on its own.
Since doing long settling-in runs is time consuming and does not yield proper results anyway, climate modelers have devised a workaround. Their solution is to do shorter runs and average conditions over a decade or more, trying to reconcile those results with their estimates of TOA energy fluxes. On top of all this, recent research has shown that past estimates of global energy balance have been in error and remain uncertain. This means that the target being used to tune the models is wrong and remains ill-defined. This tuning methodology is a kludge on top of a set of kludges and certainly cannot be empirically justified. All of the fudging about with initial conditions aside, perhaps the most troubling revelation is that climate models “leak” energy.
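The averaging workaround described above can be sketched with synthetic data: generate noisy annual TOA imbalances around an assumed true value and compare individual years against running decadal means. The noise level and the imbalance value are invented for illustration.

```python
# The short-run workaround, schematically: the annual-mean TOA
# imbalance is noisy, so modelers average it over a decade or more
# before comparing against the (itself uncertain) observed estimate.
# The imbalance and noise amplitude below are invented.
import random

random.seed(42)  # deterministic for the example

TRUE_IMBALANCE = 0.6   # assumed underlying TOA imbalance, W m^-2

# 30 years of "model output": truth plus interannual noise of +/- 1 W/m^2.
annual = [TRUE_IMBALANCE + random.uniform(-1.0, 1.0) for _ in range(30)]

def running_decadal_mean(series, window=10):
    """Running mean over `window` consecutive years."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

decadal = running_decadal_mean(annual)
print(f"single-year spread: {min(annual):.2f} .. {max(annual):.2f}")
print(f"decadal-mean spread: {min(decadal):.2f} .. {max(decadal):.2f}")
```

Averaging narrows the spread, but note what it cannot do: if the observed target itself is biased, a tighter average simply converges on the wrong number with more confidence.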
As previously stated, modeling climate is a matter of balancing the energy arriving at Earth and the amount leaving—the sums must be equal. This is directly related to item 1 in the list of reasons for climate drift given above and derives from conservation of energy. To put it bluntly, a model that leaks energy is not correct. Here is a comment from the authors:
If a model equilibrates at a positive radiation imbalance it indicates that it leaks energy, which appears to be the case in the majority of models, and if the equilibrium balance is negative it means that the model has artificial energy sources. We speculate that the fact that the bulk of models exhibit positive TOA radiation imbalances, and at the same time are cold-biased, is due to them having been tuned without account for energy leakage.
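A leak of the kind the authors describe can be diagnosed by straightforward bookkeeping: over any segment of a run, the change in stored energy should equal the time-integrated TOA net flux, and any residual is energy appearing or disappearing inside the numerics. A toy single-reservoir sketch with a deliberately inserted loss; all constants are illustrative.

```python
# Conservation audit: the change in energy stored in the model should
# equal the time-integrated TOA net flux. A positive residual means the
# model leaks energy; a negative one means artificial sources.
# Toy single-reservoir sketch with a deliberate bookkeeping leak.

DT = 3.15576e7            # one model year in seconds
HEAT_CAP = 4.0e8          # reservoir heat capacity, J m^-2 K^-1

def run_and_audit(years, bookkeeping_loss=0.1):
    """Step the reservoir, losing `bookkeeping_loss` W/m^2 of the
    applied flux inside the numerics; return the budget residual."""
    temp = 287.0
    applied_flux_integral = 0.0
    start_energy = HEAT_CAP * temp
    for _ in range(years):
        net = 1.2 * (287.0 - temp) + 0.5          # intended TOA net flux
        applied_flux_integral += net * DT          # what the TOA "sees"
        temp += (net - bookkeeping_loss) * DT / HEAT_CAP  # what is stored
    stored = HEAT_CAP * temp - start_energy
    return applied_flux_integral - stored          # J m^-2 unaccounted for

residual = run_and_audit(500)
print(f"energy unaccounted for after 500 years: {residual:.3e} J/m^2")
```

With the leak switched off the residual collapses to rounding noise, which is exactly the check the quoted passage implies many modeling groups tuned without performing.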
In other words, modelers have been taking fundamentally flawed, unrealistic models and then tuning them to give answers that match historical real-world data. As a result, as error resulting from the bad models accumulates—possibly due to energy leaks amplified by numerical representation and round-off error being propagated as the system iterates—values drift, giving obviously wrong results. Here is how the authors conclude their paper:
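The round-off mechanism mentioned above is easy to reproduce in miniature: repeatedly adding a small increment to a sufficiently large floating-point total silently discards the increments, because they fall below the spacing of representable numbers. This is an illustration of the numerical issue, not a claim about any model's actual code.

```python
# Round-off propagating through an iteration, in miniature: adding a
# small flux increment to a large accumulated total loses it entirely
# once the increment is below half the spacing of representable doubles.
# math.fsum tracks the lost bits exactly for comparison.
import math

TOTAL_ENERGY = 1.0e16          # a large accumulated energy (J)
INCREMENT = 1.0                # a tiny per-step energy input (J)
STEPS = 1_000_000

naive = TOTAL_ENERGY
for _ in range(STEPS):
    naive += INCREMENT         # each add rounds away below the ulp

exact = math.fsum([TOTAL_ENERGY] + [INCREMENT] * STEPS)

print(f"energy 'leaked' by naive accumulation: {exact - naive:.0f} J")
# -> 1000000 J: a million joules vanish without any physics being wrong
```

Real model codes use far more careful numerics than this worst-case loop, but the principle stands: iteration can leak conserved quantities through representation alone.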
Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model. Some of the behavioral changes are desirable, and even targeted, but others may be a side effect of the tuning. The choices we make naturally depend on our preconceptions, preferences and objectives. We choose to tune our model because the alternatives - to either drift away from the known climate state, or to introduce flux-corrections - are less attractive. Within the foreseeable future climate model tuning will continue to be necessary as the prospects of constraining the relevant unresolved processes with sufficient precision are not good.
Note in the paragraph above the authors' frank admission that tuning is done with a specific end in mind. In summary, climate modelers start with flawed, incomplete models, adjust physical processes within those models in unrealistic and unjustified ways to keep the models from diverging from reality, targeting energy flux measurements that are uncertain or erroneous, all to force their computer code to give them the answers that they want.
The Mauritsen et al. paper is a detailed description of the lengths to which climate modelers have gone to force their models to give the desired results. This is because, by their own admission, their models are incapable of giving real results. What is more, the prospects of them being able to do so in the foreseeable future are not good. This is why putting one's faith in computer climate models is a fool's game—unfortunately fools abound in our world.
Be safe, enjoy the interglacial and stay skeptical.