Wednesday, Jan 4, 2012

Conveying truth

I had an interesting exchange with Doug McNeall on Twitter yesterday. Doug is a statistician at the Met Office and an occasional commenter here at BH. We were discussing how scientists convey uncertainty and in particular I asked about a statement made by Julia Slingo in a briefing (warning 10Mb!) to central government in the wake of Climategate:

Globally, 17 of the warmest years on record have occurred in the last 20 years.

This statement was made without any caveats or qualifications.

If I recall correctly, I've posted on the briefing paper before, so for today I just want to concentrate on this one statement. I think Slingo's words represent very poor communication of science since they do not convey any uncertainties and imply to the reader that the statement actually means something. There is, of course, a possibility that it signifies nothing at all.

By this I mean that the occurrence of the 17 warmest years on record could have happened by chance. Doug and I agree that this is a possibility, although we differ on just how much of a possibility. Doug assesses the chances as being very slim based on comparison of the temperature record to climate models. I don't see a problem in this per se, but I think that by introducing models into the assessment, certain things have to be conveyed to the reader: the models' poor performance in out-of-sample verification, our lack of knowledge of clouds, aerosols and galactic cosmic rays, and the possibility of unknown unknowns being obvious ones. Doug reckons our knowledge of clouds and aerosols is adequate to determine that the temperature history of recent decades is out of the ordinary. This is not obvious to me, however.
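
To make the "by chance" question concrete, here is a minimal sketch (in Python, purely illustrative, and certainly not what the Met Office did) of the probability of the 17 warmest years all falling in the last 20, under the simplest possible null: a roughly 160-year record whose years are exchangeable, with no trend and no persistence.

    from math import comb

    n_years, k, m = 160, 17, 20   # assumed record length, "k warmest", recent window
    # If the years are exchangeable (no trend, no persistence), the positions of
    # the k warmest years form a uniformly random k-subset of the n_years positions,
    # so the chance that all of them land in the last m years is:
    p = comb(m, k) / comb(n_years, k)
    print(p)   # of order 1e-20: effectively impossible under this null

Under that null the statement would be anything but chance, so the real argument is over whether the null is any good: once persistence in the climate system (or a trend from any cause whatever) is allowed for, the figure is no longer negligible, which is exactly where the disagreement with Doug lies.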

But more than that, the very fact that we are having to introduce models into the equation at all needs to be conveyed to the reader. Were our knowledge of temperature history better, we would be able to show, based on purely empirical measurements, that the temperature was doing something different in recent decades. That we cannot do so needs to be conveyed to the reader, I would say.

My challenge to you, dear readers, is to convey in, say, four sentences, the state of the science in this area. (We will take it as given that it is reasonable for Slingo to convey the basic statement about recent temperatures that she has chosen to do. If you feel otherwise, feel free to make your case in the comments.)


Reader Comments (125)

What record?
Whose record?
Jim's record?
Not my record!

Jan 4, 2012 at 2:54 PM | Unregistered CommenterJohn Silver

Martin, I had forgotten this:

"10 December 2009

We, members of the UK science community, have the utmost confidence in the observational evidence for global warming and the scientific basis for concluding that it is due primarily to human activities. The evidence and the science are deep and extensive. They come from decades of painstaking and meticulous research, by many thousands of scientists across the world who adhere to the highest levels of professional integrity. That research has been subject to peer review and publication, providing traceability of the evidence and support for the scientific method."

ANYBODY who signed this disgraceful document should hang their head in shame!

Jan 4, 2012 at 3:04 PM | Unregistered CommenterRoger Longstaff

Bill et al are absolutely right - "global average" in temperature terms is utterly without meaning and miscreants can make it anything they want it to be. Remember that the average American has one tit and one ball and the average Belgian has less than two legs!

Vernon E

Jan 4, 2012 at 3:24 PM | Unregistered CommenterVernon E

My question would be what is the statistical probability of this happening by chance?

Let’s take a sample of 100, to make calculations easy. Each sample is assigned at random a value from 1 to 100.

Now arbitrarily divide the 100 samples into two subsets of 50 samples each. Each subset, statistically, will contain 25 highs and 25 lows. The odds of this are 50%.

Now divide one of the subsets in half again so that there are 25 samples. Each of these subsets, statistically, will contain 12 highs and 12 lows. The odds are 50%.

In a random sample of 25 from 100, there is a 50% chance of having 12 highs, or 12 lows.

From this it follows that in a random sample of 75 from 300, there is a 50% chance of having 36 highs, or 36 lows.

Thus in a random sample of 37 from 150, there is a 50% chance of having 18 highs or 18 lows.

The odds of having 18 highs in a random sample of 37 from 150 are 50%.

The odds of having 18 highs from a random sample of 20 from 150 are about half of that, or 25%.

The odds are 1 in 4, 25%, that having 17 of the warmest years of the last 150, occurring in the last 20, is purely random.

Jan 4, 2012 at 3:27 PM | Unregistered CommenterRedbone
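
A back-of-envelope chain of halvings like the one above is easy to check by brute force. A minimal, purely illustrative sketch in Python (the i.i.d. "shuffle" null - no trend, no year-to-year persistence - is itself an assumption):

    import random

    def simulate(n_years=150, k=17, m=20, trials=100_000):
        """Fraction of shuffled (trend-free, persistence-free) records in which
        the k warmest years all fall within the most recent m years."""
        hits = 0
        for _ in range(trials):
            temps = [random.random() for _ in range(n_years)]
            warmest = sorted(range(n_years), key=lambda i: temps[i])[-k:]
            if min(warmest) >= n_years - m:
                hits += 1
        return hits / trials

    print(simulate())

Run as written this comes out indistinguishable from zero rather than 25%, which suggests the halving argument is far too generous for a genuinely random record; the later points about autocorrelation are a different, and much stronger, route to the "it may not mean much" conclusion.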

Roger Longstaff Jan 4, 2012 at 3:04 PM


ANYBODY who signed this disgraceful document should hang their head in shame!

Yes, it was absolutely disgraceful. At the time I was astounded that the CHIEF SCIENTIST of the Met Office should initiate such a thing.

Jan 4, 2012 at 3:42 PM | Unregistered CommenterMartin A

Temperature is a measurement at a particular point in space and time, much like the measurement of an RF field. As a consequence, the very term 'global average temperature' actually has no meaning.

Jan 4, 2012 at 3:51 PM | Unregistered CommenterMingy

The original post stated:

"Were our knowledge of temperature history better, we would be able to show based on purely empirical measurements that the temperature was doing something different in recent decades."

I don't quite understand what is being said here. Maybe I misunderstand, but however long the temperature record, it doesn't say anything about what is causing the change. All natural variability has a cause of one sort or another. You might think of it as noise, but one man's noise is another man's signal.

Suppose the next decade shows steady warming. What would those 'purely empirical measurements' show about the cause? Would they prove it was or was not the sun or cosmic rays?

Suppose for the sake of argument that we had excellent proxies going back 1,500 years which showed the Medieval warming as a mild bump and another two decades of warming until 2030. Without theory and calculations based on theory what would that show?

If, in 2100, we have 'purely empirical measurements' of global warming of 2 degrees above the nineteenth century then without physical understanding that would still not say anything about causes.

Could someone clarify?

Jan 4, 2012 at 3:51 PM | Unregistered CommenterJK

It all reminds me of the man who drowned in a pond, the average depth of which was 6 inches.

Jan 4, 2012 at 3:52 PM | Unregistered CommenterTom Mills

The 49 warmest years in the record occurred in the last century. My cherry beats your cherry, Slingo! (Oooh, might that be rude?)

Jan 4, 2012 at 4:30 PM | Unregistered CommenterRich

I think it's interesting the lengths folks go to to keep the "est" tag. At the turn of the millennium, you had a nice linear curve (particularly with satellite data going back only 22 years) for temperature increases. Then we had a plateau. So in the interest of accuracy while still making a convincing argument, the talking point turned to the warmest decade on record for the 2001-2010 time period. With warming in the next decade off to a slow start for 2011, the talking point has started to encompass two decades. I guess it's easier for a linear thinker than having to explain a plateau.

Jan 4, 2012 at 4:58 PM | Unregistered CommenterSean

With a little assist from Mencken (he said it so well),

"On a planet that is estimated to be over 4 billion years old, to imply that humans measurement of temperatures in the last 160 years is some sort of record, despite the geological evidence to the contrary, is anthropocentric at best, and anti-human at worst.

Those advocating the loudest for action on Climate Change now, were the same voices advocating the banning of DDT, the sterilization of the "sub-normal" via Eugenics laws, lobotomizing willful and moody young women to "cure" them (hence the beginnings of the Special Olympics) and are currently leading the charge against cheap, clean energy via Fracking.

The whole aim of Environmental Politics is to keep the population alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.

Their ultimate goal, Agenda 21 fully implemented on a world-wide basis, won't happen within my lifetime, but the long slow shuffle towards global totalitarianism has begun."

Seems rather depressing, once you actually write it all down.

Jan 4, 2012 at 5:09 PM | Unregistered Commentermitchel44

The statement "Globally, the 17 warmest years on record have all occurred in the last 20 years." in the document, is not a fact, it is an artifact. The "Global Temperature" referred to, is the result of the current (arbitrary) parameter settings of the particular model in use. These change over time, which accounts for the "fact" that the 1930s were warmer in the 1970s than they (the '30s) were in the 1990s! This nonsense is unscientific (unless historic periods "cool" over time) but the "evidence" seems to exist in the form of the changing data-sets.

The reason that the keepers of the data need ever more expensive and powerful supercomputers is that it becomes more difficult (less probable) to find a new combination of tweaks which will keep the illusion alive. They must search an ever-expanding multi-dimensional array of values to find the decreasing proportion of "favorable" results.

It's models - all the way down. And, until their models properly use local Temp, Pressure, Humidity, etc. in computing the earth's temperature, they are only fooling themselves.

Or, perhaps not. They may not be trying to fool themselves, only the public and the politicians.

It is said "You can't fool all the people all the time", but then again, you don't need to. In a democracy, you only need to fool half (of those who will vote) plus one.

Jan 4, 2012 at 5:11 PM | Unregistered Commenterdadgervais

Globally, 17 of the warmest years on record have occurred in the last 20 years. Of course the thermometer record reliably dates back only to about 1850. However, comparing the rate of rise and level of temperature with models indicates that both are unusually high. Further evidence supporting this conclusion comes from some studies of palæontological and other temperature proxies.

The '1850' can be made to be an earlier year depending on the brass neck of the speaker.

Jan 4, 2012 at 6:07 PM | Unregistered CommenterEvil Denier

Oh, for the avoidance of doubt, I also believe in fairies at the bottom of the garden, Santa Claus, Roswell and am a convinced 'Birther' and 'Truther'.

Jan 4, 2012 at 6:16 PM | Unregistered CommenterEvil Denier

Globally, 17 of the warmest years on record have occurred in the last 20 years, although a very similar statement could have been made in 1945. How many other times in the past similar rises in temperature have occurred is unknown. It is therefore difficult to attach any importance to this statement.

Jan 4, 2012 at 6:30 PM | Unregistered CommenterJames Evans

1. The instrument record is rather short. It only goes back a couple of centuries.
2. The instruments themselves were invented in one of the coldest periods of the Holocene.
3. The instruments record the recovery from this very cold period of the Holocene, a recovery that has proceeded over time (though not evenly), so more recent periods are more "recovered" than older periods.
4. Both Bob Tisdale and Judith Curry have pointed out that the rises tend to occur in stepwise fashion following El Nino events, so finding that the temperature regime after the most recent step up is the warmest should be expected, and not the least bit surprising, as recovery from the LIA continues.

Jan 4, 2012 at 6:35 PM | Unregistered Commentercrosspatch

Surely, if you are going to talk about 'global average temperatures' you need to ensure that your sampling sites are evenly spatially distributed across the whole globe. Otherwise your data are biased from the start, and the output is effectively bollocks.

Jan 4, 2012 at 7:20 PM | Unregistered CommenterSalopian

This type of question was addressed by Koutsoyiannis and Cohn in the following presentation (click through to presentation, slide #31):

http://itia.ntua.gr/en/docinfo/849/

Because of the nature of variability in climate (fractionally integrated), it is likely that the clustering effect will result in groups of records at one extreme of a "short" time series. The example given above uses 8 or 9 record years in a decade and notes that this is unexceptional. The sums would need to be done for 17 of 20 years, but I suspect that would be equally unexceptional under realistic assumptions of natural variability in climate.

Jan 4, 2012 at 7:25 PM | Unregistered CommenterSpence_UK

I think Slingo's words represent very poor communication of science since they do not convey any uncertainties and imply to the reader that the statement actually means something. There is, of course, a possibility that it signifies nothing at all.

I am in strong agreement with His Eminence on that. Slingo’s words do not, indeed cannot, mean anything without some model to give the (temperature) data context. And yet most people who do not have scientific training would impute meaning.

There are two main classes of models that could be considered: statistical models and general climate-simulation models (GCMs). Regarding statistical models, no one has proposed a model that both fits the data well and shows a significant increase in temperatures (I think Doug McNeall agrees with that). For a non-technical discussion of that issue, see the op-ed piece that I published in the Wall Street Journal last year.

So we are left with GCMs. The claim that GCMs adequately represent the global climate system has not been sufficiently established in the literature. Major issues include clouds, ocean currents, the biosphere, and the effects of solar variation and galactic rays. Doug McNeall’s assertion that clouds are adequately modeled has been disputed most strongly by Lindzen & Choi.

There is also a heuristic argument against current GCM simulations of clouds. In the distant past, temperatures have been warmer and CO2 concentrations have been higher than at present; yet Earth did not experience runaway warming. Contemporary GCMs, however, do have runaway warming under such circumstances, as I understand things. It would be interesting to hear some comments on this from someone highly knowledgeable about GCMs: this point alone, if correct, would seem to invalidate claims that GCMs are adequate.

For the time scales of interest here, ocean currents are also crucial. Climate exists because of heat from the sun. The climate system redistributes that heat: it continually transports heat from the tropics towards the polar regions (some heat is also converted to kinetic energy). The climate system has two main mechanisms for transporting heat: the atmosphere and the oceans. The atmosphere can move quickly, i.e. winds. The oceans move more slowly. The oceans, though, hold much more heat than the atmosphere; e.g. the top few metres of ocean water contain more heat than the entire atmosphere. Over decades, ocean heat transport can significantly change. Those changes can have an effect on global heat redistribution, that is, on climate. Yet models of ocean currents have not been demonstrated to be anywhere near adequate. Indeed, major factors relating to deep ocean currents are poorly understood, at present.

Regarding solar variation, the sun seems to have a particularly direct effect on temperatures in the arctic [Soon, GRL, 2005: Figure 1]. Has any GCM been able to accurately simulate this effect? If not, then we can conclude that GCMs do not adequately simulate the effects of the sun.

I like what Jeremy Harvey wrote at 12:22 PM.

Jan 4, 2012 at 7:58 PM | Unregistered CommenterDouglas J. Keenan

A slightly different take on this from Steve Jones in the Daily Telegraph on Tuesday: "Last year was the second warmest in Britain since records began and CO2 levels have gone up by almost a fifth in the past five decades; but as every well-briefed journalist knows, global warming is a myth put out by charlatans, so there can be no cause for concern." I suppose this is the same Steve Jones who drew up the guidance on science reporting for the BBC.

Jan 4, 2012 at 8:21 PM | Unregistered CommenterMike Fowle

Climate scientists are using models as substitutes for well confirmed physical hypotheses and this fact reveals a fundamental confusion on the part of climate scientists. Well confirmed physical hypotheses constitute the theories found in the hard sciences and each hypothesis describes some natural regularity. By contrast, models produce simulations which are attempts to reproduce some salient features of reality. As reproductions of reality rather than descriptions, models are worthless for prediction or postdiction. A model can be "trained" to reproduce a series of numbers as its output but the fact that the series of numbers appears on one or more lines on a graph of past climate is totally irrelevant to the model's achievement.

Jan 4, 2012 at 8:59 PM | Unregistered CommenterTheo Goodwin

The last 20 years have been the best time to be alive in all of human history.

We can all do things that our grandparents could only dream about.

Jan 4, 2012 at 9:20 PM | Unregistered CommenterJack Hughes

What science?

Jan 4, 2012 at 10:01 PM | Unregistered CommenterDuster

Theo Goodwin Jan 4, 2012 at 8:59 PM.

A model can be "trained" to reproduce a series of numbers as its output but the fact that the series of numbers appears on one or more lines on a graph of past climate is totally irrelevant to the model's achievement.

Yet the ability of their models to reproduce the history used to construct them is exactly what the Met Office (last time I looked at their web site) says is what validates their models. If they truly believe such rubbish, they should be closed down. For that matter, if they do not believe it, whilst presenting it as a valid statement, they should be closed down - and their super computers sold for electronic scrap.

It should be obvious to anyone that, if their models could not even reproduce the data used to construct them, the models would be worthless. But reproducing the training data provides no confirmation of the correctness of the physical model, necessary for predictions. And still less does it show that, even if the physical model is correct, chaotic effects will not dominate predictions further ahead than the immediate future.

Jan 4, 2012 at 10:40 PM | Unregistered CommenterMartin A
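
The point that reproducing the training data proves nothing about predictive skill is easy to illustrate with a toy example (a minimal sketch; the high-order polynomial and the made-up random-walk series are stand-ins for the argument, not a caricature of any particular climate model):

    import numpy as np

    rng = np.random.default_rng(0)
    series = np.cumsum(rng.normal(size=20))   # a made-up, trend-free random walk
    train, test = series[:10], series[10:]

    # A 9th-degree polynomial passes almost exactly through all 10 training points...
    coeffs = np.polyfit(np.arange(10), train, deg=9)
    in_sample = np.polyval(coeffs, np.arange(10))
    out_sample = np.polyval(coeffs, np.arange(10, 20))

    print("worst in-sample error :", float(np.max(np.abs(in_sample - train))))   # very small
    print("worst out-of-sample   :", float(np.max(np.abs(out_sample - test))))   # enormous

The "model" reproduces its training data almost exactly and is still useless for prediction; whether real GCMs are in that position is of course the matter under dispute, not something a toy settles.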

Jan 4, 2012 at 10:40 PM | Martin A

Right. Models can be used for analytic purposes. Some climate scientists might not understand models and might be depending on the programmers. But the programmers know beyond a doubt that the models can neither predict nor postdict.

Jan 4, 2012 at 11:52 PM | Unregistered CommenterTheo Goodwin

It's too early to be certain but I suspect much Northern regional warming has been from the Arctic melt cycle superimposed on the warming 30 years of ENSO.

The 50-70 year Arctic cycle has been written about for 140 years in Russian literature and there are regular reports elsewhere, e.g. an 1817 Admiralty report.

It's nothing to do with CO2. We are just entering the freeze cycle and by 2020 the Arctic will be as cold as in 1900: http://bobtisdale.files.wordpress.com/2011/12/figure-101.png


What goes up is probably what goes down: Slingo is apparently so wedded to the CO2-AGW religion she is blinkered to other, much better established science. Should science and the public service which is the Met. Office be driven by dogma?

Jan 5, 2012 at 8:17 AM | Unregistered Commentermydogsgotnonose

"The last 20 years have been the best time to be alive in all of human history. We can all do things that our grandparents could only dream about."

Can't walk on the moon, can't fly across the pond at Mach 2 sipping a G&T......

Looks like I chose the wrong career,

Jan 5, 2012 at 10:07 AM | Unregistered CommenterRoger Longstaff

I think the statement is misleading and deliberately so.

The intent of the statement is to convey the idea that global warming is continuing in the noughties as projected; however, because of the correlation of GMST within close time-periods it is to be expected regardless of whether warming is continuing, plateauing or declining.

I think the statement would only be meaningful if there were no autocorrelation of GMST.

Jan 5, 2012 at 1:01 PM | Unregistered CommenterGary Moran

John Shade at 12:48 pm yesterday: your fickleness saddens me greatly! Note that I tried to write something that someone who trusted the consensus could sign up to.

Jan 5, 2012 at 1:17 PM | Unregistered CommenterJeremy Harvey

Sorry for getting off topic, but I have to admit to being a little baffled by the sweeping and unqualified attacks on models, such as those from Theo Godwin and Martin A. I don't understand how 'well confirmed physical hypotheses', to use Theo's suggestion, are a viable alternative in this case.

For example, Newton's laws were used to model the trajectory of Apollo to the moon. A numerical calculation was carried out on a computer to predict the trajectory. Many factors were small enough to be neglected and the underlying laws were not exact (neglecting quantum mechanics and relativity).

In the climate system we have Newton's laws in the form of the Navier Stokes equations, Maxwell's equations, thermodynamics, etc. To be sure, the factors that are not modelled are larger and many of the parameterisations are less well tested.

But even if we had more comprehensive understanding, physical principles could only be compared to reality through calculation using a model. What is the alternative?

Is the complaint about models really a shorthand for complaints about parameterisation or complaints about the difficulty of modelling a chaotic system? That would at least be a useful starting point for debate. But I find it hard to take seriously a total dismissal of all modelling.

Jan 5, 2012 at 1:59 PM | Unregistered CommenterJK
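
For readers unsure what "using physical laws through calculation on a computer" looks like in miniature, here is an illustrative sketch in the same spirit as the Apollo example: Newton's gravity stepped forward numerically (the constants are textbook values; the crude Euler integrator is only for brevity).

    import math

    GM = 3.986e14          # Earth's gravitational parameter, m^3/s^2
    x, y = 7.0e6, 0.0      # start 7,000 km from Earth's centre
    vx, vy = 0.0, 7546.0   # roughly the circular orbital speed at that radius
    dt = 1.0               # one-second time steps

    for _ in range(97 * 60):                     # roughly one orbital period
        r = math.hypot(x, y)
        ax, ay = -GM * x / r**3, -GM * y / r**3  # Newton's law of gravitation
        vx, vy = vx + ax * dt, vy + ay * dt      # crude (Euler) update
        x, y = x + vx * dt, y + vy * dt

    print(round(math.hypot(x, y) / 1000), "km from Earth's centre after one orbit")

The physics here is a well-confirmed theory and the program is a model of it; the dispute in the thread is really about how far the climate case resembles this one, given its parameterisations and chaotic behaviour.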

JK: the complaint about the Models isn't the Navier-Stokes momentum solution, it's the false physics used to put in the purported GHG warming.

1. 'Back radiation' heating is plain wrong; as it breaches the 2nd law of thermodynamics, it's a de facto perpetual motion machine.
2. To account for the failure of temperatures to rise as predicted, cloud optical depths in the models are quietly set at double reality and cloud albedo effect cooling, the only purported evidence of high feedback, is almost certainly the wrong sign and the main GW/AGW mechanism.
3. The predicted future temperature rise is exaggerated by ~3.7 times because present GHG warming is claimed to be 33 K when it's really ~9 K.
4. Then these people have the gall to label anyone who criticizes the discipline's elementary scientific failures - people probably better qualified than most in it - as 'deniers'.

Jan 5, 2012 at 3:12 PM | Unregistered Commentermydogsgotnonose

In the UK, CO2 works mostly in the Autumn and to a lesser extent in the Spring. If you check out the CET record and chart the seasonal averages, Summer trend has been flat since 1988, Winter trend is downwards and Autumn trend is upwards, Spring a little less so. Have we got the wrong sort of CO2?

Taking the same series, a press release in 1698, for example, could have said “eight of the coldest years have occurred in the last 15 years”.

Jan 5, 2012 at 6:46 PM | Unregistered CommenterDennisA

Jan 5, 2012 at 1:59 PM | JK

Thanks for your question. A little terminology will clarify the matter. Newton's theory is a physical theory and its several hypotheses are well confirmed as it applies to our solar system. Of course it does not provide the detail or reach of Einstein's theory but it does just fine in our solar system.

Our solar system is a model of Newton's equations. A model is a set of objects that renders true all the individual statements in a physical theory. One can construct computer models of Newton's theory. Some company sells an "observatory" that will project our solar system on the semicircular ceiling that you have constructed just for this purpose and it will predict and postdict planetary movement and such. A model that does prediction and postdiction can exist because of Newton's theory. In other words, the programmers actually used Newton's equations to calculate where all the shiny little dots should appear on the ceiling in the future or past as you dial-up one time or another.

The climate scientists who are creating GCMs, models of Earth's climate, have no set of physical hypotheses that play the role of Newton's equations in our little observatory. All they have are Arrhenius's equations and a lot of data about climate. Arrhenius' equations have never been rigorously formulated for the actual Earth. So they are not well confirmed in Earth's atmosphere. No less important, everyone knows, as Arrhenius knew, that Arrhenius' equations are not enough to explain or predict Earth's climate. In addition, you need the physical hypotheses that govern all the so-called "feedbacks" such as cloud behavior. These physical hypotheses do not exist in any form that could be considered well confirmed. Much empirical research must be done before they can exist.

As for the data, models contain wonderful differential equations that manipulate the data in wonderful ways; however, all of that data manipulation is nothing more than a sophisticated method of extrapolating the future from existing graphs. That is not science. That is a system of hunches.

I hope you now understand the difference between theories and models. Theories describe the natural regularities that make up nature while models reproduce the objects or events that are nature. It is not possible to make predictions from a model. If you had the perfect model of Earth's climate all you would have is Earth's climate. How can you make predictions from that?

Jan 5, 2012 at 6:59 PM | Unregistered CommenterTheo Goodwin

Theo,

Thank you for your reply (and sorry for misspelling your name). I'm not sure what you are referring to by Arrhenius' equations? I would agree with you that there is no rigorous theory of clouds. But I do think that there is a rigorous basis for the primitive equations (see, for example http://en.wikipedia.org/wiki/Primitive_equations ) and the equation of radiative transfer (see for example http://en.wikipedia.org/wiki/Radiative_transfer ) together with data on molecular absorption.

It seems to me that this part of the problem is not so much a lack of rigorous equations as actually solving those equations. For example, it seems to me that if we could accurately solve the primitive equations (or even the full Navier Stokes equations) we would have great insight into the water vapor feedback because we would understand how the water vapor gets into the upper troposphere where it acts most strongly.

In this part of the problem I really don't see a strong distinction between theories and models that you put forward. Many complex mechanical and fluid flow problems require the use of simplifying assumptions to make useful predictions (for example in design of aircraft wings). Of course, assumptions need to be tested. But that is not the same thing as saying that properly motivated simplifications render prediction useless, or as you put it 'nothing more than a sophisticated method of extrapolating the future from existing graphs.'

Perhaps another example could clarify our disagreement. What do you think of numerical weather prediction? I think you would classify that, too, as a model as it contains parameterisations of clouds. The Met Office claims to have steadily, if modestly, improved the performance of their weather forecasts. (see for example http://www.metoffice.gov.uk/research/weather/numerical-modelling/verification/global-nwp-index ) Do you think that weather forecasts are improving, and if so how is that possible?

Jan 5, 2012 at 8:42 PM | Unregistered CommenterJK

JK, I agree with your comment at 8:42 and the previous one. In principle, there is nothing wrong with computational modelling. I should hope not - it is what all my research is about (I am a computational chemist). Lots of sceptics are very dismissive of computer modelling as applied to climate, and I think they then exaggerate the sins of computers to imply that all computer models are bad. The way I see it, some simple mathematical models can be solved using paper and pencil only, and thereby be used to make predictions (in a Popperian sense) and hence expose theories to falsification by experiment. For a theory to be solvable in this way, it usually needs to be for a fairly simple experimental system, with high symmetry, low number of parameters, etc. For more complicated systems, you need a computer to solve the model. Sometimes you can still keep an overall hand-waving overview of how the model works, so that you can 'understand' what the computer is doing using paper and pencil only, even though you cannot exactly reproduce the computer's output. Sometimes - as in modelling the energy budget in the atmosphere - even that starts to get hard, especially for non-experts. That makes the model predictions rather untransparent, which is intellectually unsatisfactory. It also seems to be the case in climate science that that lack of transparency also concerns what goes into the model. Finally - climate science is not a case where it is all that easy to carry out experimental tests of genuine predictions. That makes people doubtful of the quality of the computer models.

Jan 5, 2012 at 9:49 PM | Unregistered CommenterJeremy Harvey

JK writes:

"In this part of the problem I really don't see a strong distinction between theories and models that you put forward. Many complex mechanical and fluid flow problems require the use of simplifying assumptions to make useful predictions (for example in design of aircraft wings)."

That is analytical work. Models are excellent for that. Models are not useless. They are the best tools for investigating the hidden assumptions in one's physical theory.

What climate scientists are doing is creating simulations without the benefit of physical hypotheses. They believe that changing some feature of a model and generating a new simulation will permit them to compare the new simulation against old simulations as a kind of test of their new idea that is expressed in the new simulation. That is delusional. They forget that the new idea in the new simulation only makes sense against the assumption of all the remainder of the model. Therefore, the comparison is actually between two distinct models, not between an old idea and its new replacement in one model that remains the same. Comparing two distinct models obviously gets you nowhere.

Climate scientists believe that rejiggering models will someday cause the salient features of Earth's climate to pop out. That is delusional. The only way is empirical research and the formulation of new physical hypotheses by creative physicists. Physical hypotheses can be tested individually or in small groups against actual reality. This process, known as scientific method, will arrive at the salient features eventually.

As regards your remark that the problem is solving the equations from radiation theory, that is just a byproduct of modelers' delusions. They have solved the equations and they have yielded nothing of value. But the modelers, poor deluded creatures, believe that just one more level of complication in the differential equations will get them there. That is why they say they cannot solve the equations. When you have no actual reality in sight the string of promising differential equations grows indefinitely.

Jan 5, 2012 at 11:02 PM | Unregistered CommenterTheo Goodwin

Theo, you write that modelers 'believe that changing some feature of a model and generating a new simulation will permit them to compare the new simulation against old simulations as a kind of test of their new idea that is expressed in the new simulation. That is delusional. They forget that the new idea in the new simulation only makes sense against the assumption of all the remainder of the model. Therefore, the comparison is actually between two distinct models, not between an old idea and its new replacement in one model that remains the same. Comparing two distinct models obviously gets you nowhere.'

I agree that when you add something to a model it will interact with many other factors, which makes assessing the impact of that change difficult. I also agree that specific new hypotheses need to be investigated first in detail and understood so far as possible in isolation. But in the reality of the climate system new effects do not exist in isolation. How can we understand the impact of a new effect without trying to calculate its interaction, so far as we can, with all the other processes that are going on in the climate system?

You don't mention any of the work that is done comparing models with data. Let me give you some examples, just taken semi-randomly from the current issue of the Journal of Climate. Apologies if you cannot access the papers, but I'm not suggesting you read them; just looking at the abstracts, I would characterize them as confronting models with data:

http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4001.1
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00059.1
http://journals.ametsoc.org/doi/abs/10.1175/JCLI-D-11-00058.1
http://journals.ametsoc.org/doi/abs/10.1175/2011JCLI4092.1

How does this type of work fit in with your picture of what's wrong with climate modeling? Do you think there is just very little of it, or that attempts to compare with data are systematically wrong, or what?

I also agree on the importance of 'empirical research and the formulation of new physical hypotheses by creative physicists' (I would say chemists and biologists, too) which 'can be tested individually or in small groups against actual reality'. But I don't think this is opposed to modeling. Rather, it is the foundation on which complex models need to be built.

Let me give some examples of how I think this works in practice.

- With cosmic rays, Svensmark and others proposed that there may be an effect with cloud seeding. That prompted the CLOUD project at CERN and others to investigate the effect of charged particles. Now people are attempting to understand what effect cosmic rays might have by calculating their impact on climate using models (for example see http://www.agu.org/pubs/crossref/pip/2011GL050058.shtml )

- Biologists have investigated in the lab soil respiration, the rate at which leaves are broken down into CO2 and the way in which this rate is affected by temperature (for one example among many see http://www.sciencemag.org/content/331/6022/1265.3.full.pdf ). Modelers then work on how to incorporate these measurements and insights into an understanding of the climate system as a whole (for example see http://www.springerlink.com/content/qw78723107674774/ )

- A truly vast amount of work has gone into detailed studies of aerosols (this cannot be adequately summed up in a few references but see, for example, http://pubs.acs.org/doi/abs/10.1021/ac103152g , http://www.sciencedirect.com/science/article/pii/S0165993611002159 , http://atmos-chem-phys-discuss.net/11/22301/2011/acpd-11-22301-2011.pdf) Climate modelers subsequently try to draw on this work to understand their effect on climate (see, for example http://elib.dlr.de/70519/1/gmd-4-325-2011.pdf )

Is your complaint basically just that all these attempts are premature? You seem to be suggesting that the individual studies have to reach a certain level of maturity, to become elevated from model to theory, before they can become useful. You hint that this is not just incremental progress, or more of the same, but a qualitative shift. Can you give an example in the climate sciences of some process which has made this transition, or do you perhaps think that just more research along present lines might, in a few decades, form a good foundation for climate models?

You say that modelers 'have solved the equations and they have yielded nothing of value.' Could you give an example of what you mean, as I find it a bit confusing? For example, surely if modelers had solved the equations of fluid flow for the oceans then that would tell us the value of the eddy diffusivity, which is critical to understanding climate sensitivity (see the recent discussion by Science of Doom http://scienceofdoom.com/2011/12/31/measuring-climate-sensitivity-part-three-eddy-diffusivity/ ). Can you point to where the equations have been solved, or explain why solving them would not give us the eddy diffusivity, or point to any such discussion?

You say 'But the modelers, poor deluded creatures, believe that just one more level of complication in the differential equations will get them there. That is why they say they cannot solve the equations. When you have no actual reality in sight the string of promising differential equations grows indefinitely.' Do you mean to say that they keep adding more equations? I don't think modelers see that as the universal key to progress. For example, that is why Isaac Held famously advocates working with simpler as well as more complex models (see http://journals.ametsoc.org/doi/abs/10.1175/BAMS-86-11-1609 ). To stick with the example of eddy diffusivity, I don't think any modeler would say that adding more equations is the key to progress. If this is what you are suggesting climate modelers think, can you point to any examples?

Apologies for the lengthy comment. Feel free to pick up on a particular aspect rather than respond to the whole thing, but some specific examples would be very useful for me in understanding your argument.

Jan 6, 2012 at 1:18 AM | Unregistered CommenterJK

"How can we understand the impact of a new effect without trying to calculate it's interaction, so far as we can, with all the other process that are going on in the climate system?"

There is no connection between either model and reality. The comparison across models gives the illusion that something has remained stable. But there is no stability in either model. Neither connects to reality.

Jan 6, 2012 at 3:09 AM | Unregistered CommenterTheo Goodwin

"You don't mention any of the work that is done comparing models with data."

Comparing models with data is very easy. The model either reproduces the data or it does not. If the model is a total fail in reproducing the data then it is worthless obviously.

If the model can reproduce 90% of the data but months of work show that it cannot reproduce the remainder then it should be abandoned. The purpose of a model is to reproduce the phenomena. Anything less than exact reproduction is not reproduction, no?

But even if programmers have created a model that exactly reproduces existing data, what good is it? The model is a good organizational tool because now I have all my data neatly organized by this model for quick and easy reference. But it has no other value whatsoever. Even a perfect model is worthless for prediction. To make this point clearly and definitively, let's take a very real example. Suppose that I am modeling the US Defense Department's logistics system. In my model I have all items regularly shipped, all origination points, all destination points, all means of transportation, and so on. Let's suppose this model can reproduce the Defense Department's shipping patterns for the last ten years and is accurate up to yesterday. Can I use it to predict the changes in shipping that will occur today? Surely, I do not need to answer that question. All that model can predict is "the same old, same old," which is exactly what any model predicts.

When modelers talk about prediction, they usually mean that they have changed the inputs or the parameterization and produced a simulation that is new in some way. Is it not obvious that all the prediction here is in the changes to the inputs or parameterization and not in the model itself? The predictions are logically independent of the model they are attributed to.

Jan 6, 2012 at 4:20 AM | Unregistered CommenterTheo Goodwin

"You seem to be suggesting that the individual studies have to reach a certain level of maturity, to become elevated from model to theory, before it can become useful."

You have failed to grasp the difference between physical hypotheses and models. Physical hypotheses describe natural regularities. The key word here is "describe." Physical hypotheses are about some aspect of physical reality and the true or really well confirmed physical hypotheses (which make up mature theories, eventually) tell us what that reality is. The key word here is "about." The physical hypotheses are creations of intellect that stand apart from reality and tell us about reality, tell us what reality is.

By contrast, models produce simulations that reproduce some salient features of reality. Simulations do not describe reality and are not about reality and, for those reasons, simulations are neither true nor false. Simulations are complete or not. They give an exact reproduction of reality or they fail as simulations to some degree. Simulations are not creations of intellect; rather, the computer code that generates them is a creation of intellect. However, the computer code does not describe reality and is not about reality. In sum, the value of a simulation depends entirely on whether and to what degree it reproduces reality. Why would you think that a reproduction of reality can be used to predict reality?

Physical hypotheses bear an important logical relationship to the reality that they describe. When combined with statements of initial conditions specifying observable fact, they logically imply observation sentences about future events. These observation sentences are what logicians call "instances" of the natural regularities described by the physical hypotheses. A record of predictions found true makes physical hypotheses well confirmed and makes for them a place in science. Note the centrality of "natural regularities." The purpose of science is to discover the natural regularities that comprise nature.

By contrast, can you specify some logical relationship that exists between reality and a model and its simulations? You cannot because there is none. That is why the usefulness of models in science is limited to analytical work such as discovering hidden assumptions.

Models cannot substitute for well confirmed physical hypotheses. The point is not based on temporary or practical considerations but on the very logic of the two structures.

Jan 6, 2012 at 4:40 AM | Unregistered CommenterTheo Goodwin

"You seem to be suggesting that the individual studies have to reach a certain level of maturity, to become elevated from model to theory, before it can become useful."

You have failed to grasp the difference between physical hypotheses and models. Physical hypotheses describe natural regularities. The key word here is "describe." Physical hypotheses are about some aspect of physical reality and the true or really well confirmed physical hypotheses (which make up mature theories, eventually) tell us what that reality is.The key word here is "about." The physical hypotheses are creations of intellect that stand apart from reality and tell us about reality, tell us what reality is.

By contrast, models produce simulations that reproduce some salient features of reality. Simulations do not describe reality and are not about reality and, for those reasons, simulations are neither true nor false. Simulations are complete or not. They give an exact reproduction of reality or they fail as simulations to some degree. Simulations are not creations of intellect; rather, the computer code that generates them is a creation of intellect. However, the computer code does not describe reality and is not about reality. In sum, the value of a simulation depends entirely on whether and to what degree it reproduces reality. Why would you think that a reproduction of reality can be used to predict reality?

Physical hypotheses bear an important logical relationship to the reality that they describe. When combined with statements of initial conditions specifying observable fact, they logically imply observation sentences about future events. These observation sentences are what logicians call "instances" of the natural regularities described by the physical hypotheses. A record of predictions found true make physical hypotheses well confirmed and make for them a place in science. Note the centrality of "natural regularities." The purpose of science is to discover the natural regularities that comprise nature.

By contrast, can you specify some logical relationship that exists between reality and a model and its simulations? You cannot because there is none. That is why the usefulness of models in science is limited to analytical work such as discovering hidden assumptions.

Models cannot substitute for well confirmed physical hypotheses. The point is not based on temporary or practical considerations but on the very logic of the two structures.

Jan 6, 2012 at 4:40 AM | Unregistered CommenterTheo Goodwin

Hello everyone,

An interesting post, and discussion. I note that people have started pointing to the peer reviewed literature, so I won't say too much more.

On the quote from the briefing: It is a factual statement. If you pull such a statement out of its original context then, well, ... it will seem to be rather devoid of context. Two points later, for example, is some rather important context.

On the point about uncertainty, I would say that Andrew seems to define "model" rather more narrowly than most climate scientists that I know. AOGCMs are an important tool in understanding climate, but are far from being the only strand of evidence that climate change is anthropogenic. I talk more about that in my reply to Doug Keenan's (hi Doug!) article, on my blog:

http://dougmcneall.posterous.com/some-correspondence-with-doug-keenan

Andrew's point about conveying uncertainties and caveats in any statement is well made. There will always be a conflict between listing every possible uncertainty and caveat, and conveying useful information. I think that the briefing, taken as a whole, strikes that balance well.

I would argue that many of the uncertainties that he mentions (cosmic rays etc.) *have* been considered by climate scientists, and have been found to be rather unimportant. We do worry about unknown unknowns, but then you *always* have to worry about them. The point is that scientists will continue to try and find holes in their theories, and their theories *will* be changed as new data arrive and knowledge increases. This will affect their uncertainty estimates.

I'm a Bayesian, and believe that probabilities are constructed as reasonable betting odds in the light of information. I therefore have no problem in having a different probabilistic interpretation of the '17/20 years' statement than Andrew. I just think I've got better information ;)

On skepticism in general, I could write all day, and not be as clear as James Annan on this:

http://julesandjames.blogspot.com/2007/05/why-david-evans-is-wrong-along-with-all.html

regards all,

Doug McNeall

Jan 6, 2012 at 9:51 AM | Unregistered CommenterDoug McNeall
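
For readers not fluent in the jargon, "probabilities as reasonable betting odds in the light of information" simply means that two people can legitimately put different numbers on the same event if they bring different information to it. A toy sketch of that idea (every number here is invented for illustration; this is not any actual Met Office calculation):

    # Posterior probability that recent warmth is "unusual", given the 17-of-20
    # statistic, for observers with the same prior but different likelihoods
    # (i.e. different judgements of how surprising the statistic would be).
    def posterior(prior, p_data_if_unusual, p_data_if_ordinary):
        num = prior * p_data_if_unusual
        return num / (num + (1 - prior) * p_data_if_ordinary)

    # Observer A thinks natural variability is persistent, so the statistic is
    # fairly likely even if nothing unusual is going on.
    print(posterior(0.5, 0.9, 0.5))    # about 0.64
    # Observer B trusts model-based estimates that the statistic would be very
    # unlikely without unusual warming.
    print(posterior(0.5, 0.9, 0.02))   # about 0.98

Same data, same arithmetic, different information, different betting odds; which set of information is "better" is exactly what the thread is arguing about.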

"The point is, that scientists will continue to try and find holes in their theories"

Ah yes...scientists like Phil Jones no doubt ;)

Regards

Mailman

Jan 6, 2012 at 5:01 PM | Unregistered CommenterMailman

Thanks Doug, but whilst Climate Science(TM) indulges activists who not only fail to "continue to try and find holes in their theories" but wilfully obstruct (and quite likely defame) those who attempt to conduct this vital scientific function, then those of us who are trained in the scientific method hold your field in utter contempt.

Jan 6, 2012 at 5:17 PM | Unregistered CommenterSayNoToFearmongers

Jan 6, 2012 at 9:51 AM | Doug McNeall

Second comment on your posted link, kind of says it all really.

James Annan said...

I didn't put this post up as an open invitation for denialist kookery. Go away.

Jan 6, 2012 at 7:07 PM | Unregistered CommenterSandyS

My curiosity is often piqued by those who declare "I am a Bayesian". It reminds me a little of arguments between those who class themselves as "Bayesians" and "Frequentists", debates between whom often look more like an argument over who has the best games console rather than actual useful analysis.

Doug, you may also be aware that Matt Briggs also considers himself to be a Bayesian (and has written books on the subject), yet I believe he draws rather different conclusions than those that you express here. So it appears being a Bayesian is not synonymous with drawing the conclusions that you draw. Perhaps it may be useful to canvass his views on the subject?

Bayes theorem, along with related analysis (networks, classifiers) are really just tools to do a job. They can be useful, even powerful, when applied in the right way - just as frequentist methods can be (frequentists can calculate betting odds as well, you know!).

But I cringe a little when I read the type of analysis presented by Doug above. The idea I think Doug is putting forward (and please, Doug, correct me if I am wrong) is that we can treat these years as throws of a dice or spins of a roulette wheel, and accumulate the odds to show what is happening is far from the ordinary. Yet despite the claim to be a Bayesian - which to my mind is only meaningful once the numbers have been crunched - I see no such analysis (nor the requisite caution about the many assumptions that go into such real-world analysis).

The problem here is that unlike the rolls of a die or the spins of a roulette wheel, climate is autocorrelated, and the type of autocorrelation structure present in natural variability is pivotal to the probability of such an occurrence being unusual. Without accounting for this - whether carried out in a Bayesian or frequentist analysis - results gained are largely worthless. The type of analysis I am referring to is that which I linked in a comment on page 2 of this thread (Jan 4, 2012 at 7:25 PM), which shows 9 of 10 years being records is not out of the ordinary for systems exhibiting Hurst-Kolmogorov dynamics. I'd be surprised if 17 of 20 years would be either (but I have not crunched the numbers).

(And this ignores the bias introduced by the selectivity of the numbers - why 10? why 20?)

Jan 6, 2012 at 8:39 PM | Unregistered CommenterSpence_UK
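
Crunching those numbers properly needs a model with genuine Hurst-Kolmogorov behaviour, which is more than a blog comment can carry, but even a crude stand-in for persistence shows the effect. A minimal illustrative sketch (the random walk is an assumption standing in for long-range dependence, not a claim about how the climate actually behaves):

    import random

    def clustered_records(n_years=150, k=17, m=20, trials=20_000, persistent=True):
        """Fraction of simulated records whose k warmest years all fall in the
        most recent m years. persistent=True uses a random walk as a crude
        stand-in for long-range dependence; False uses independent noise."""
        hits = 0
        for _ in range(trials):
            if persistent:
                series, level = [], 0.0
                for _ in range(n_years):
                    level += random.gauss(0, 1)
                    series.append(level)
            else:
                series = [random.gauss(0, 1) for _ in range(n_years)]
            warmest = sorted(range(n_years), key=lambda i: series[i])[-k:]
            if min(warmest) >= n_years - m:
                hits += 1
        return hits / trials

    print(clustered_records(persistent=False))  # effectively zero
    print(clustered_records(persistent=True))   # many orders of magnitude larger

The point is not the particular numbers, only that the answer swings wildly with the assumed autocorrelation structure, which is why the 17-of-20 statement cannot carry much weight on its own.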

Doug McNeall says:

AOGCMs are an important tool in understanding climate, but are far from being the only strand of evidence that climate change is anthropogenic.

GCMs are not evidence. They are just mathematical constructs and unless they are fully verified and validated, they are worthless. In ten minutes I could construct three GCMs, one which would show the earth is going to cool, one which would show the earth is going to warm and one which would show the earth is going to remain unchanged. All would be equally worthless.

One of the problems we have these days is that too many people, who should know better, think that models are reality.

Jan 6, 2012 at 10:20 PM | Unregistered CommenterPhillip Bratby
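
In the spirit of that remark, here is roughly what such a ten-minute exercise might look like: three toy "models" identical in structure and differing only in an arbitrary tuning parameter (obviously a caricature to make the point about unvalidated models, not a description of how real GCMs are built).

    # Three toy 'global temperature' models, differing only in an arbitrary
    # sensitivity parameter chosen to give the desired answer.
    def toy_model(sensitivity, years=100, forcing=0.02):
        """Made-up temperature anomaly after `years` of a constant forcing."""
        temp = 0.0
        for _ in range(years):
            temp += sensitivity * forcing
        return temp

    for name, s in [("cooling", -1.0), ("flat", 0.0), ("warming", +1.0)]:
        print(name, round(toy_model(s), 2))   # -2.0, 0.0, +2.0

Without independent verification and validation there is nothing to choose between the three, which is the commenter's point.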

Spence_UK

“(And this ignores the bias introduced by the selectivity of the numbers - why 10? why 20?)”

I often have to remind myself that this planet will never relate to manmade metrics.

Homo sapiens rejoice or dismay at a “record” month, day, year, decade, whilst the planet knows of no such metrics.

The new species “homo superbus” is convinced that within their lifespan they have been provided with all the knowledge needed to control the temperature of this planet?

Jan 6, 2012 at 11:02 PM | Unregistered CommenterGreen Sand

This is because it is sooooo much easier to write a computer program, insert any number of "parameters" and tweak them until it generates the desired results than to actually go out there and collect real data in a careful and meaningful manner. This is particularly true when you need to "show results" in order to continue the flow of grant money.

As for Bayesian statistical methods, the degree of belief I have is quite low. As Einstein noted: "I am convinced that He (God) does not play dice."

Jan 6, 2012 at 11:03 PM | Unregistered CommenterDon Pablo de la Sierra

Jan 6, 2012 at 10:20 PM | Phillip Bratby

Well said.

Jan 7, 2012 at 12:37 AM | Unregistered Commenteredward getty
