Stop Them, Before They Model Again

Just when it looked like the climate catastrophists had slunk back into well-deserved academic obscurity, a new report in the journal Nature Geoscience has resurrected claims of Earth's impending climatic demise. The new computer climate study says to expect temperature increases of up to 3°C by 2050, confirming or exceeding the predictions made in previous IPCC reports. Can this model-based report be considered any more accurate than previous attempts? Have modeling techniques suddenly improved? Or is this report's appearance in a major scientific journal the signal of a renewed round of scaremongering by eco-alarmists?

In these days of faltering economies and tight government spending, there still seems to be an infinite amount of funding available to promote ever-larger computer-based climate studies. The latest such study, “Broad range of 2050 warming from an observationally constrained large climate model ensemble,” was published online on March 25, 2012. A veritable potpourri of international climate science boffins applied yet another technique to the problem of turning sow's-ear climate model results into silk-purse predictions, all to help bolster the IPCC's flagging fortunes. The paper's abstract explains the work and motivation:

Incomplete understanding of three aspects of the climate system—equilibrium climate sensitivity, rate of ocean heat uptake and historical aerosol forcing—and the physical processes underlying them lead to uncertainties in our assessment of the global-mean temperature evolution in the twenty-first century. Explorations of these uncertainties have so far relied on scaling approaches, large ensembles of simplified climate models, or small ensembles of complex coupled atmosphere–ocean general circulation models which under-represent uncertainties in key climate system properties derived from independent sources. Here we present results from a multi-thousand-member perturbed-physics ensemble of transient coupled atmosphere–ocean general circulation model simulations. We find that model versions that reproduce observed surface temperature changes over the past 50 years show global-mean temperature increases of 1.4–3 K by 2050, relative to 1961–1990, under a mid-range forcing scenario. This range of warming is broadly consistent with the expert assessment provided by the Intergovernmental Panel on Climate Change Fourth Assessment Report, but extends towards larger warming than observed in ensembles-of-opportunity typically used for climate impact assessments. From our simulations, we conclude that warming by the middle of the twenty-first century that is stronger than earlier estimates is consistent with recent observed temperature changes and a mid-range ‘no mitigation’ scenario for greenhouse-gas emissions.

The new trick that these savants applied to an existing climate model is called a perturbed-physics ensemble. Reportedly, the investigators created a large collection of model results (an ensemble) by “perturbing the physics in the atmosphere, ocean and sulphur cycle components, with transient simulations driven by a set of natural forcing scenarios.” Much like tapping a bell with a hammer and observing the vibrations, they tweaked some of the model's parameters and watched what happened to the output. The claim is that, by analyzing a large number of these “perturbed” model runs, conclusions can be drawn regarding the error present in those models. Naturally, given that their results were “broadly consistent” with previous IPCC-generated claptrap, the conclusions reached will surprise no one. Witness the figure below.


Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.
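For those unfamiliar with the jargon, the basic logic of a perturbed-physics ensemble can be sketched in a few lines of Python. To be clear, this is a toy zero-dimensional energy-balance model, not the HadCM3L code used in the study; the parameter names, ranges, forcing ramp and “observed warming” value are all invented for illustration.

```python
# A minimal, illustrative sketch of a perturbed-physics ensemble.
# This is NOT the paper's model: parameter names, ranges and the
# "observed" warming target below are assumptions made for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1961, 2051)
forcing = 0.04 * (years - 1961)              # assumed linear forcing ramp (W/m^2)

def toy_model(lam, c):
    """March a simple energy-balance equation forward one year at a time.
    lam : climate feedback parameter (W/m^2 per K) -- one "physics" knob
    c   : effective ocean heat capacity (W yr/m^2 per K) -- another knob
    """
    T = np.zeros(len(years))
    for i in range(1, len(years)):
        T[i] = T[i - 1] + (forcing[i] - lam * T[i - 1]) / c
    return T

# Build the ensemble: thousands of runs, each with perturbed parameter values.
members = []
for _ in range(2000):
    lam = rng.uniform(0.6, 2.0)              # perturb the feedback strength
    c = rng.uniform(5.0, 30.0)               # perturb the ocean heat uptake
    members.append(toy_model(lam, c))
members = np.array(members)

# Keep only members that roughly reproduce the "observed" warming to 2010,
# then report the spread of their 2050 projections -- the paper's basic logic.
obs_warming = 0.7                            # invented "observed" trend (K)
hist = members[:, years == 2010].ravel()
kept = members[np.abs(hist - obs_warming) < 0.2]
lo, hi = np.percentile(kept[:, -1], [5, 95])
print(f"{len(kept)} members retained; 2050 warming spread: {lo:.1f}-{hi:.1f} K")
```

Note that everything in the sketch, including the spread of the retained 2050 projections, is dictated by the structure of the toy model itself, which is precisely the objection raised below.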

The reason the researchers felt that yet another massive modeling study was needed lies in an honest assessment of the models used to prepare the previous IPCC report, AR4. Recall that the people of the world were asked to accept the output from those modeling runs as a valid prediction of where Earth's future climate was headed. Here is what these scientists are saying about those older model reports:

In the latest generation of coupled atmosphere–ocean general circulation models (AOGCMs) contributing to the Coupled Model Intercomparison Project phase 3 (CMIP-3), uncertainties in key properties controlling the twenty-first century response to sustained anthropogenic greenhouse-gas forcing were not fully sampled, partially owing to a correlation between climate sensitivity and aerosol forcing, a tendency to overestimate ocean heat uptake and compensation between short-wave and long-wave feedbacks. This complicates the interpretation of the ensemble spread as a direct uncertainty estimate, a point reflected in the fact that the ‘likely’ (>66% probability) uncertainty range on the transient response was explicitly subjectively assessed as −40% to +60% of the CMIP-3 ensemble mean for global-mean temperature in 2100, in the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4).

The old models fail to account for the “key properties” that control climate, to the point that the results are so uncertain as to be meaningless. This is unsurprising to those of us familiar with computer modeling in general and climate modeling in particular. “From this evidence it is clear that the CMIP-3 ensemble, which represents a valuable expression of plausible responses consistent with our limited ability to explore model structural uncertainties, fails to reflect the full range of uncertainties indicated by expert opinion and other methods,” the authors conclude. In other words, the older model results are crap.

Yet the AR4 report's conclusions were justified using such twaddle. As the authors state: “In the absence of uncertainty guidance or indicators at regional scales, studies have relied on the CMIP-3 ensemble spread as a proxy for response uncertainty, or statistical post-processing to correct and inflate uncertainty estimates, at the risk of violating the physical constraints provided by dynamical AOGCM simulations, especially when extrapolating beyond the range of behaviour in the raw ensemble.” Violating physical constraints is modeling speak for the program acting in a way that contradicts the laws of physical reality—an indication that the models used do not accurately represent nature.

Still, the reader is asked to accept this new analysis as proving the modeling approach's veracity. “Perturbed-physics ensembles offer a systematic approach to quantify uncertainty in models of the climate system response to external forcing, albeit within a given model structure,” the authors write. That last qualification is key: “within a given model structure.” More plainly put, if your model is wrong, you cannot get good results. So they analyzed a multi-thousand-member ensemble of transient AOGCM simulations from 1920 to 2080 using HadCM3L, a version of the UK Met Office Unified Model, and found that their results stayed within the constraints programmed into the model (what a surprise). Other caveats include: unexpectedly observing little relationship between climate sensitivity and aerosol forcing; difficulty in comparing the control simulation like-for-like to any period in the past, partially blamed on the “paucity of observations” at the start of the twentieth century; and under-sampling the uncertainty in ocean heat uptake arising from ocean physics, because only a single, coarse-resolution ocean model structure was perturbed.

The bottom line on all this statistical and modeling sleight of hand is this: “Assessing goodness-of-fit, which represents a limited expression of model error, requires a measure of the expected error between model simulations and observations due to sampling uncertainty, assuming it is primarily from internally-generated climate variability.” There is absolutely no justification for making that last assumption. All they are measuring is how stable their models are with respect to the output the model would generate if unperturbed. The result has no bearing on whether the model in question accurately represents Earth's actual climate system. This is hand-waving at its most creative.
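The circularity is easier to see in a sketch. The snippet below is not the authors' code; every array and threshold is invented. It only illustrates the logic of judging goodness-of-fit against an “expected error” that is itself estimated from the model's own unforced control run.

```python
# Illustrative only: the yardstick ("expected error") comes from the model's
# own internally generated variability, not from an independent measurement.
import numpy as np

rng = np.random.default_rng(1)
n_years = 50

# Stand-in observations: a modest trend plus noise (invented numbers).
obs = 0.012 * np.arange(n_years) + rng.normal(0.0, 0.1, n_years)

# A long unperturbed control run supplies the variability estimate.
control = rng.normal(0.0, 0.1, 100 * n_years)
segments = control.reshape(-1, n_years)
expected_error = segments.std(axis=1).mean()     # model-derived sampling error

def scaled_misfit(member):
    """RMSE of an ensemble member against the observations, expressed in
    units of the model's own expected (internal-variability) error."""
    rmse = np.sqrt(np.mean((member - obs) ** 2))
    return rmse / expected_error

member = 0.015 * np.arange(n_years) + rng.normal(0.0, 0.1, n_years)
print(f"scaled misfit: {scaled_misfit(member):.2f}")
```

If the yardstick comes from the model, a small misfit tells you only that the model agrees with itself.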

So if this new “study” is not really an improvement on previous computer-driven shams, why is it appearing now? Think of this report as the first salvo in the run-up to the next IPCC report, due out sometime next year. But surely the IPCC has learned its lesson, you say; surely they have figured out that making bogus claims of impending disaster, unsubstantiated by real science, has only led to their own marginalization? Think again. Consider the words of the IPCC's discredited but dogged leader.

“When the IPCC’s fifth assessment comes out in 2013 or 2014, there will be a major revival of interest in action that has to be taken,” said Dr. Pachauri, speaking of the periodic assessments rendered by the group of more than 400 scientists around the world that he leads. “People are going to say, ‘My God, we are going to have to take action much faster than we had planned.’”


Pachauri and friends deciding on the results of their future research.

In the same interview, Pachauri went on to explain that ethics are the “missing dimension” in the climate debate. You see, climate science is not about science at all, it is about social justice. Of course, writing those frightening reports calling for immediate global action is made easier when the conclusions are known before the data have been gathered and analyzed. To hasten delivery of its reports to decision makers worldwide, the IPCC has decided not to let facts interfere with its conclusions. And they have the temerity to call what they do science.

The party line has not changed, nor have the tactics: frighten the general public into enacting radical transnational-socialist plans for restructuring the world (and pick up a bit more government grant money in the process). How much more damage to the reputation of Science can these prognosticating hucksters do? Unfortunately, these climate science narcissists have no conscience and no shame; their arrogance is unbounded and they will not be deterred. Be prepared, everyone: the next round in the global warming saga is just beginning. Someone please, stop them before they model again.

Be safe, enjoy the interglacial and stay skeptical.

That's a really strange error methodology

Well, I do not claim to be a scientist. But one thing I do recall from some of my college classes is how to calculate an error range. As I recall, in an experiment of this sort the error range is typically calculated by summing the possible errors of all the factors involved in the calculation. So what these people are saying is that they can get good results by looking at a bunch of bad results? I suppose it may be true that the real numbers are within the error range of the calculation, but I can't see how the error range could be small enough that the results would have any meaning. I would think the error range would be something like 10 degrees C.

Propagation of errors theory

The expected error, using the propagation of errors theory, is equal to the square root of the sum of the squares of the individual errors in the measurement process. However, the analysis assumes that the errors are random in the + or − sense, which is anything but the case here, when the leader of the IPCC, Dr. Pachauri, has literally, in advance of the model simulations, mandated his group to find results leading in one direction only. Furthermore, these results are not measurements. They are biased estimates.
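As a minimal sketch of that rule, with invented error values and assuming the individual errors really are independent and random:

```python
# Root-sum-of-squares propagation of independent, random errors.
# The three error values are invented for illustration only.
import math

def propagated_error(errors):
    """Combine independent random errors: sqrt of the sum of the squares."""
    return math.sqrt(sum(e ** 2 for e in errors))

print(propagated_error([0.3, 0.4, 1.2]))   # -> approximately 1.3
```

As noted above, the rule only holds when the errors are unbiased and random, which is exactly what is disputed here.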

In 2003, I attended, as an uninvited guest, a seminar on modelling projection results held by the IPCC in Ottawa, Canada. Eighteen widely varying simulations, done in different parts of the world but always showing a positive increase in temperature, were presented. I was appalled when the chairperson stated that his committee had decided to delete 6 of the simulations as outliers and accept the mean of the remaining 12 projections for presentation in the IPCC report.

Don Farley. Gatineau, Quebec