I have resisted, until now, the urge to write about the computer climate models that dominate much of the global warming discourse. That is because it is almost impossible to discuss these models, and their flaws, without getting too technical or wonky.
But it is increasingly clear that the models are the linchpin of the theory of catastrophic man-made global warming. They are not just one piece of the evidence for future catastrophes; they are the only evidence.
How can this be, you say? There are seemingly thousands of studies coming out every week on various aspects of climate. And that is true. But note that I was careful to focus my assertion on “the catastrophe.”
Plenty of the issues that swirl around the climate debate can be proven without resorting to computer modeling, often from direct observation. We know the climate has changed constantly throughout history, and we know temperatures rise and fall (and have mostly risen over the last century). We also know that human-emitted CO2, all things being equal, can warm the Earth as its atmospheric concentration rises.
But what we know from direct observation does not get us to the threatened catastrophe. Direct observation in a laboratory of the greenhouse effect of CO2 leads us to believe that, all things being equal, CO2 will further warm the Earth about 1.2 degrees C for each doubling of its atmospheric concentration (the effect is logarithmic, so the increase of CO2 from 300 to 600 ppm would have the same effect as the increase from 600 to 1200 ppm). This relationship between Earth’s temperature and CO2 concentration is called climate sensitivity, and based on this sensitivity of 1.2 we might expect only about a degree of warming over the next century.
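For readers who want to see the arithmetic, here is a minimal sketch (my own illustration, using the roughly 1.2 degrees-per-doubling figure above) of why the jump from 300 to 600 ppm implies the same warming as the jump from 600 to 1200 ppm:

```python
import math

def warming_from_co2(c_start_ppm, c_end_ppm, sensitivity_per_doubling=1.2):
    """Warming implied by a change in CO2 concentration, assuming a fixed
    amount of warming per doubling. The relationship is logarithmic: each
    doubling adds the same increment, regardless of the starting level."""
    doublings = math.log2(c_end_ppm / c_start_ppm)
    return sensitivity_per_doubling * doublings

# Both spans below are exactly one doubling, so each implies about 1.2 C:
print(warming_from_co2(300, 600))   # 1.2
print(warming_from_co2(600, 1200))  # 1.2
```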
I encourage you to read my previous article on this topic before continuing if these concepts are new to you. Suffice it to say that this lab-measured temperature sensitivity to CO2 of about 1.2 falls well short of the catastrophe we’ve been threatened with in the press. Climate scientists must assume a host of amplifying effects that multiply this sensitivity by three to five times or more to get the scary forecasts that we are used to seeing.
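One common way to express this amplification is to assume that feedbacks return some fraction of any initial warming as further warming, so the total is the sum of a geometric series. The sketch below is my own illustration of that arithmetic; the feedback fractions are hypothetical, not taken from any particular model.

```python
def amplified_sensitivity(base_sensitivity, feedback_fraction):
    """Total sensitivity when feedbacks return `feedback_fraction` of any
    warming as additional warming: 1 + f + f^2 + ... sums to 1 / (1 - f)."""
    assert 0 <= feedback_fraction < 1, "a fraction of 1 or more implies runaway warming"
    return base_sensitivity / (1 - feedback_fraction)

# Turning the ~1.2 C lab figure into the 3.6 to 6 C of the scary forecasts
# requires assuming strongly net-positive feedbacks (fractions illustrative):
print(round(amplified_sensitivity(1.2, 0.67), 1))  # ~3.6
print(round(amplified_sensitivity(1.2, 0.80), 1))  # 6.0
```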
The evidence for these amplifying or “feedback” effects is at best equivocal. In part, this is because isolating and measuring these effects in the real, horrendously complex and chaotic climate is very hard.
Take an example from economics. If a Congressman tells you that his legislation will boost GDP by an extra half percent, is that credible? Probably not. The economy is wildly complex, and even after decades of trying, no one has found (and perhaps no one will ever find) a method for attributing output B solely and directly to input A. That is why estimates of the effect of the Obama stimulus are all over the map. Economics, and climate, seldom offer opportunities for controlled experiments to test the effect of changing a single variable.
As a result, scientists have found no way to directly measure the actual, real-world change in temperature from a change in CO2. Sure, CO2 has increased over the last hundred years, but at the same time solar output, land use, ocean cycles, and a myriad of other drivers of climate and temperature have changed as well.
That is why a lot of climate experimentation occurs within computers rather than via direct observation of natural phenomena. For example, in the last IPCC report, the authors’ conclusion that most of the recent warming had probably been man-made was based mainly on computer study of the period between 1978 and 1998. They ran their models for this period both with and without man-made CO2, and determined that they could only replicate the temperature rise in this period by including man-made CO2 in their models.
Believe it or not, that is the main evidence that global warming catastrophism is based on. Yes, I am sure you can raise all the concerns I have — what if the computer models don’t adequately model the climate? What if they leave out key factors or over-emphasize certain dynamics? Drawing firm conclusions from these models is like assuming you can be a rock star after winning a game of Guitar Hero.
But it is when these models are used to project catastrophic outcomes in the future that they are perhaps the most suspect. Scientists often act as if the projected warming from various CO2 forecasts is just an output of the models — in other words, “we built in a sophisticated understanding of how the climate works and out pops a lot of warming.” And in the details this is true. The timing and regional distribution of the warming tends to be a fairly unpredictable product of the model. But the approximate magnitude of the warming is virtually pre-determined. It turns out that climate sensitivity, the overall amount of warming we can expect from a certain rise in CO2 concentrations, is really an input to most models.
This means that the inputs of the model are set such that a climate sensitivity of, say, 4 degrees per doubling is inevitable. The model might come up with 4.1 or 3.9, but one could have performed a quick calculation on the inputs and found that, even without the model, the answer was already programmed to be close to 4. Rather than real science, the climate models are in some sense an elaborate methodology for disguising our uncertainty: they take guesses at the front end and spit them out at the back end with three-decimal precision. In this sense, the models function much like the light and sound show the Wizard of Oz uses to make himself seem more impressive and to hide his shortcomings from the audience.
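To make that concrete, here is a deliberately trivial “model” of my own construction. Real climate models are vastly more elaborate, and sensitivity is not a literal dial on them; the point of the paragraph above is that the chosen feedback parameters effectively fix it in advance, which this sketch simply makes explicit.

```python
import math

def toy_forecast(sensitivity_input_c_per_doubling, co2_now_ppm, co2_future_ppm):
    """A deliberately trivial 'model': the sensitivity you feed in comes
    straight back out, scaled only by the assumed CO2 path."""
    return sensitivity_input_c_per_doubling * math.log2(co2_future_ppm / co2_now_ppm)

# Set the sensitivity near 4 C per doubling and, for a scenario in which CO2
# doubles, roughly 4 C of projected warming is guaranteed before anything
# "runs" (390 and 780 ppm are illustrative numbers, not a forecast):
print(round(toy_forecast(4.0, 390, 780), 1))  # 4.0
```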
But this raises a question — if the climate sensitivity to CO2 in the models is essentially an arbitrary input set to the modeler’s whims, how do these models replicate history? After all, they are all checked against historic temperature and CO2 data and all the models used by the IPCC do a pretty good job of replicating past temperature history given past CO2 levels.
Nearly a decade ago, when I first started looking into climate science, I began to suspect the modelers were using what I call a “plug” variable. I have decades of experience in market and economic modeling, and so I am all too familiar with the temptation to use one variable to “tune” a model, to make it match history more precisely by plugging in whatever number is necessary to make the model arrive at the expected answer.
When I looked at historic temperature and CO2 levels, it was impossible for me to see how they could be in any way consistent with the high climate sensitivities that were coming out of the IPCC models. Even if all past warming were attributed to CO2 (a heroic assertion in and of itself), the temperature increases we have seen in the past imply a climate sensitivity closer to 1 than to 3 or 5 or even 10 (I show this analysis in more depth in this video).
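The back-of-the-envelope version of that argument, using round numbers of my own choosing (roughly 0.7 degrees of warming as CO2 rose from about 280 to 390 ppm, with every bit of it attributed to CO2), looks like this:

```python
import math

# Illustrative round numbers (my assumptions, not precise figures):
observed_warming_c = 0.7             # rough warming over the past century
co2_then_ppm, co2_now_ppm = 280, 390

doublings = math.log2(co2_now_ppm / co2_then_ppm)     # ~0.48 of a doubling
implied_sensitivity = observed_warming_c / doublings  # degrees C per doubling
print(round(implied_sensitivity, 1))  # ~1.5, far closer to 1 than to 3, 5, or 10
```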
My skepticism was increased when several skeptics pointed out a problem that should have been obvious. The ten or twelve IPCC climate models all had very different climate sensitivities — how, if they have different climate sensitivities, do they all nearly exactly model past temperatures? If each embodies a correct model of the climate, and each has a different climate sensitivity, only one (at most) should replicate observed data. But they all do. It is like someone saying she has ten clocks all showing a different time but asserting that all are correct (or worse, as the IPCC does, claiming that the average must be the right time).
The answer to this paradox came in a 2007 study by climate modeler Jeffrey Kiehl. To understand his findings, we need to understand a bit of background on aerosols. Aerosols are man-made pollutants, mainly combustion products, that are thought to have the effect of cooling the Earth’s climate.
What Kiehl demonstrated was that these aerosols are likely the answer to my old question about how models with high sensitivities are able to accurately model historic temperatures. When simulating history, scientists add aerosols to their high-sensitivity models in sufficient quantities to cool them to match historic temperatures. Then, since such aerosols are much easier to eliminate as combustion products than is CO2, they assume these aerosols go away in the future, allowing their models to produce enormous amounts of future warming.
Specifically, when he looked at the climate models used by the IPCC, Kiehl found they all used very different assumptions for aerosol cooling and, most significantly, he found that each of these varying assumptions was exactly what was required to combine with that model’s unique sensitivity assumptions to reproduce historical temperatures. In my terminology, aerosol cooling was the plug variable.
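A toy version of that bookkeeping, with illustrative numbers of my own rather than anything taken from Kiehl’s paper, shows why a higher-sensitivity model needs more assumed aerosol cooling to land on the same historical record:

```python
import math

def required_aerosol_cooling(sensitivity, co2_start_ppm, co2_end_ppm, observed_warming_c):
    """Cooling that must be attributed to aerosols for a model with the given
    sensitivity to still land on the observed historical warming."""
    co2_warming = sensitivity * math.log2(co2_end_ppm / co2_start_ppm)
    return co2_warming - observed_warming_c

# The higher the sensitivity a model assumes, the more aerosol cooling it must
# also assume to match the same history (all numbers illustrative):
for sensitivity in (1.5, 3.0, 4.5):
    cooling = required_aerosol_cooling(sensitivity, 280, 390, 0.7)
    print(sensitivity, round(cooling, 2))
```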
The problem, of course, is that matching history is merely a test of the model — the ultimate goal is to accurately model the future, and arbitrarily plugging variable values to match history is merely gaming the test, not improving accuracy.
This is why, when run forward, these models seldom do a very credible job of predicting the future. None, for example, predicted the flattening of temperatures over the last decade. And when we look at the results of these models, or at least their antecedents, from twenty years ago, they are nothing short of awful. NASA’s James Hansen famously made a presentation to Congress in 1988 showing his model runs for the future, all of which showed 2011 temperatures well above what we actually measure today.
Climate modelers will argue that their models have gotten better over the last 20 years. But I would argue that they, just like our economic models, still fall well short of accurately modeling tremendously complex processes. Worse, they continue to repeat the mistake of assuming their conclusion, choosing their constants in a way that guarantees certain warming answers. In the last 20 years they may have added many lines of code, but they have added little accuracy. After all, adding a few more special effects to the Wizard of Oz’s light show doesn’t make him a better wizard.