Monday, Mar 19, 2012
Climate models for politicians

Some weeks ago, I invited readers to improve upon parts of a summary of global warming science, written by Julia Slingo for the benefit of readers in central government. The ground covered was mainly about surface temperatures. At some point I may well write this up into something more formal.

I think it would be interesting to also say something about climate models and their uncertainties and I have been giving this some thought. My knowledge of climate models is somewhat sketchy, so some of my understanding may be incorrect, but here's the ground I think central government really ought to understand:

1. Climate models are based on well-understood physical laws. There is wide agreement that on its own a doubling of CO2 levels would produce an initial warming of around 1degC (a worked version of this calculation appears just below the list).

2. However, the knock-on effects of this initial warming ("the feedbacks") are not well understood, particularly the role of clouds.

3. Most climate models suggest a warming of 2-6degC/century. It is not clear that this range actually covers the full envelope of possibilities, because of uncertainties over the feedbacks.

4. The temperature predictions of climate models cannot be tested in the short-to-medium term; 30 years is required to properly assess their performance.

5. However, climate modellers derive comfort that their models are reasonable approximations of the climate system from a number of observations:

  • their models generally replicate the Earth's temperature history ("hindcasts"), although it should be noted that even models encapsulating very different sensitivities to CO2 can do this, demonstrating that the models are fudged.
  • some models spontaneously reproduce features of the real climate, such as the PDO and El Nino, although not well enough to make such models useful predictive tools.

6. However:

  • when the detail of the "hindcasts" is examined, it is found that the models do not in general replicate regional temperature histories.
  • to the extent that models have had their predictions tested against outcome, their performance has been poor.
  • no model has been shown to be skillful at regional forecasts.
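For the avoidance of mystery, the standard no-feedback calculation behind point 1 fits in a few lines of Python. This is only a back-of-envelope sketch: the 5.35 ln(C/C0) forcing formula and the ~3.2 W/m2 per degC Planck response are textbook values, not output from any actual model.

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        # Standard logarithmic fit for CO2 radiative forcing, in W/m^2
        return 5.35 * math.log(c_ppm / c0_ppm)

    PLANCK_RESPONSE = 3.2   # W/m^2 per degC, the usual no-feedback value

    forcing = co2_forcing(560.0)          # a doubling: 280 -> 560 ppm
    warming = forcing / PLANCK_RESPONSE   # warming before any feedbacks
    print(round(forcing, 2), round(warming, 2))   # ~3.71 W/m^2, ~1.16 degC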

Have I got anything wrong? Have I missed anything? I also wonder if politicians actually need to know about feedbacks and physics and sciencey stuff like that. Don't they just need to know how good the models are?


Reader Comments (81)

My understanding of the IPCC case for having confidence in the predictions of the GCMs can be summarised thus:

All the GCM models (23 I believe) are able to hindcast the climate of the twentieth century with acceptable accuracy, once the actual forcings (CO2, solar variations, volcanoes, aerosols etc) are programmed in. What is more, if the human CO2 emissions are removed, the models fail to show the warming that actually occurred, thus proving that we are responsible for the increase in temperatures. The success of the GCMs in hindcasting is powerful evidence that the forward projections of the same GCMs are valid and reliable.

If anyone thinks this does not accurately reflect the view argued in AR4, I would be interested to know where I am in error.

Assuming I am not, the logic employed appears to have 2 very significant logical flaws. The first is the glaring circularity of the argument that removing (actual) human CO2 emissions should cause the temperature rise to disappear. All the GCMs have been "trained", by the selection of large numbers of parameterised variables from among a far larger pool of plausible alternatives, to replicate what actually did happen, given the forcings that actually happened. To then remove a forcing which did occur, and which is obviously believed by the model makers to be critical, and act as if the resultant divergence from observed reality is evidence of human causation, is circular reasoning in the extreme – just what did they expect would happen, no change to the model outputs?

The second flaw is that all the GCMs are different, and all give divergent results when run into the future, resulting in the IPCC's very wide range of 1 to 6 degrees Celsius expected warming. Logically this would imply that either all except one is wrong, or that they are all wrong. In either case, at least 22, and possibly 23, erroneous GCMs have passed the hindcasting test with flying colours. This surely makes a nonsense of the claim that hindcasting ability is any indication of GCMs' skill at forecasting the future climate.
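Both flaws can be exhibited in miniature. Here is a deliberately contrived toy in Python (invented forcings and parameters, nothing like a real GCM) in which two very different parameter choices produce identical hindcasts:

    import numpy as np

    years = np.arange(1900, 2001)
    f_ghg = 0.010 * (years - 1900)    # invented, steadily rising GHG forcing
    f_aer = -0.006 * (years - 1900)   # invented aerosol cooling rising alongside it

    def toy_model(sensitivity, aerosol_scale):
        # "Temperature" responds linearly to the net forcing
        return sensitivity * (f_ghg + aerosol_scale * f_aer)

    low_sens = toy_model(1.0, 0.0)    # low sensitivity, no aerosol offset
    high_sens = toy_model(2.5, 1.0)   # high sensitivity, strong aerosol offset

    print(np.allclose(low_sens, high_sens))   # True: both "hindcast" the same past
    # Remove the GHG forcing from either and of course the warming vanishes;
    # run them forward with aerosols held level and they diverge sharply.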

Am I missing something? Or are they?

Mar 19, 2012 at 8:06 AM | Unregistered CommenterPeter Wilson

Just a tiny questionette, a thing about which I have asked before with no answer, and which may be silly. Do they publish every run? Or do they have occasional outriders which show results which are 'obviously wrong' and are therefore rejected?

I've done a little modelling in a completely unrelated field. It is soooo easy to tweak a parameter to improve the results, and fool yourself into thinking you haven't cheated.

Mar 19, 2012 at 8:16 AM | Unregistered CommenterRhoda

On 3, how about:

Most models suggest a warming of 1.5 to 4.5 degrees for a doubling of CO2, and a similar warming for the 21st century as a whole. Even this may not cover the full range of possibilities, since future emissions are unknown, the share of these emissions that will stay in the atmosphere is not certain, and feedbacks, especially from clouds, are poorly understood and quantified.
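Numerically, the compounding works like this (the 1.5 to 4.5 range is from above; every other range is invented purely for illustration):

    # Only the 1.5-4.5 sensitivity range comes from the text above
    sens_range = (1.5, 4.5)       # degC per doubling
    airborne_range = (0.8, 1.2)   # airborne-fraction scaling (illustrative)
    emiss_range = (0.7, 1.4)      # future-emissions scaling (illustrative)

    low = sens_range[0] * airborne_range[0] * emiss_range[0]
    high = sens_range[1] * airborne_range[1] * emiss_range[1]
    print(round(low, 2), round(high, 2))   # ~0.8 to ~7.6 degC: far wider than 1.5-4.5 alone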

Mar 19, 2012 at 8:25 AM | Unregistered CommenterDavid Briewer

Perhaps you should just point them to Tamsin's site.. :-)

http://allmodelsarewrong.com/

Mar 19, 2012 at 8:26 AM | Registered Commenterjamesp

Very helpful. And if you were able to post relevant links for each point the value added would be even greater.

In 5., "ancapsulating" should, I think, be "encapsulating".

Mar 19, 2012 at 8:32 AM | Unregistered CommenterRichieRich

"1. Climate models are based on well-understood physical laws. There is wide agreement that on its own a doubling of CO2 levels would produce an initial warming of around 1degC."


Climate models are based on currently accepted physical models. There is a consensus which accepts that a doubling of CO2 COULD produce a 1°C rise in 'global temperatures'. No-one has yet provided an engineering-quality proof that CO2 can raise global temperatures at all.

"some models spontaneously reproduce features of the real climate, such as the PDO and El Nino, although not well enough to make such models useful predictive tools."


But none of the models has ever accurately forecast either the timing or the intensity of ENSO features.

Mar 19, 2012 at 8:40 AM | Unregistered Commenterstephen richards

All the models forecast that greatest warming will be in the upper troposphere at low latitudes.

Since there has been no warming there, this is the real killer. It is the solid, uncontested evidence that none of the models are valid. The predicted hotspot is also the signature of positive temperature feedback by water vapour – which is the bone of contention in the whole debate.

Mar 19, 2012 at 8:45 AM | Unregistered CommenterDoubting Rich

Computer models that can neither be verified nor validated cannot derive any useful or original information on a system. It really is as simple as that. Attempts to verify or validate models by comparing the output of different models are lunacy.

I believe that this statement is as fundamental as, say, the second law of thermodynamics. If it is accepted - and I do not see how it cannot be - I cannot understand why the debate continues and why anybody treats the IPCC seriously.

Mar 19, 2012 at 8:49 AM | Unregistered CommenterRoger Longstaff

Perhaps the simplest observation is that GCMs are the same type of models that are used for forecasting weather :)

That aside, I think you are too generous. GCMs were designed to be little labs for testing interactions in the atmosphere, and even then don't model phenomena on small scales. It isn't clear that GCMs are fit for purpose when it comes to political decision making. For example, they don't do uncertainty, which is a primary interest in politics – other, simpler, more probabilistic approaches are likely to be much more useful for a political debate.

So interesting for arcane areas of climate modeling, not good for forecasting future climates where risks need to be assessed.

Mar 19, 2012 at 8:53 AM | Unregistered CommenterHAS

Given (which appears to be increasingly in doubt) that CO2 doubling will, in the absence of feedbacks, raise the temperature by 1C, I would not be inclined to trust models all of which tell me that the actual increase will be between 2 and 6 times that figure and none of which tell me it will be less.
I would (and do) conclude that the models have been programmed to assume feedback is positive, yet by their own admission modellers don't model clouds well.
Since clouds are composed of water vapour which accounts for 90% ...
And so on!

Mar 19, 2012 at 9:03 AM | Registered CommenterMike Jackson

Here is a very important one, especially for policy makers:

The spread of output from models "hindcasting" provides no information about the range of uncertainty of that output, nor any measure of confidence in the extent to which these models accurately replicate actual climate behaviour.

Likewise, a close agreement in projected/predicted/forecast outcomes does not provide any information about the efficacy of the output, nor of the models themselves. Neither does the spread of projected/predicted/forecast outcomes provide any information about the potential range of actual outcomes, nor any quantification of the extent of uncertainty.

Mar 19, 2012 at 9:10 AM | Unregistered CommenterGeckko

Your point one covers a multitude of, if not sins, difficulties. The basic equations of fluid flow have never been solved exactly; the cobbling together of bits of 'understood' components (and not all are) does not automatically give the same level of understanding of the whole system – quite the reverse; and Lorenz showed quite a while ago that the evolution of both models and system cannot be determined from imperfectly captured initial conditions.

The great hope held out by modellers is that somehow many, many model runs will average out some of the problems and give some insight, perhaps encouraged by, for example, our ability to make a reasonable guess at actuarial outcomes in a large group without being able to do so well for any particular subset of individuals. Therein lies one of the pillars of faith required before you would even consider taking the models seriously as a guide for practical actions.

The predictive performance of the models is, as far as I know, generally accepted to be so awful that even the IPCC have not found a way to exclude this insight from their reports, although they did devise, or take advantage of, the term 'projections' in order to leave model outputs in a respectable place. Perhaps they knew that for the government, for the media, and indeed for most of us, the difference between a 'projection' and a 'prediction' would largely pass us by.

I think you should seek a way to describe the computer models as a possibly useful playground for speculations about the climate system, and as a challenging area for the development of computer science - just as computer games have been and are.

You also need to find a way to explain the difference between such models deployed in a weather forecasting role and the same models deployed for long-range climate prediction (remember the background images in that pitiful but pernicious Nurse documentary, with images of cyclones passing across huge screens and a voiceover to the effect, as I recall, that the models were good at that). I can create a model in a few minutes that will be good at predicting the arrival of a bus I can see in the distance on a familiar route. It won't always be right, but especially if I can keep the bus in sight, I'll do a pretty useful job overall. But I would not try to sell my few lines of code as being any good for situations in which the bus does not yet exist, and for a multitude of routes as yet unspecified.
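Lorenz's point is easy to reproduce. Here is a crude forward-Euler integration of his famous three-variable system in Python, starting two runs that differ in the ninth decimal place:

    import numpy as np

    def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # One crude forward-Euler step of the Lorenz-63 system
        x, y, z = state
        return np.array([x + dt * sigma * (y - x),
                         y + dt * (x * (rho - z) - y),
                         z + dt * (x * y - beta * z)])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-9, 0.0, 0.0])   # perturbed in the ninth decimal place

    for _ in range(3000):                # 30 model time units
        a, b = lorenz_step(a), lorenz_step(b)

    print(np.abs(a - b))   # differences now of the order of the attractor itself:
                           # the two runs have completely decorrelated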

Mar 19, 2012 at 9:11 AM | Registered CommenterJohn Shade

Bish I believe you have to keep it very simple for politicians. You cannot model the climate with any hope of accuracy because it's inherently a chaotic system.

If Richard comes on here I'd like to understand what it is that dampens the positive feedback. In my electrical engineering training we did a lot on feedback, and positive feedback systems are inherently unstable, a la the theory of Venus; so what happens at, say, 3.3C to stabilise the system? It's been puzzling me.

Mar 19, 2012 at 9:16 AM | Unregistered Commentergeronimo

You might point out that the global temperature increase/decrease that people speak of cannot be a global temperature. It is a statistic that has no meaning in physics.

Mar 19, 2012 at 9:18 AM | Unregistered CommenterGrantB

"Have I got anything wrong? Have I missed anything? I also wonder if politicians actually need to know about feedbacks and physics and sciencey stuff like that. Don't they just need to know how good the models are?"
Unfortunately, politicians do not appear to be able to see the wood from the tree rings. The experiences of Spain, Germany and many other countries demonstrate (in my opinion) the futility of windpower.
This "FACT" is not the result of models but rather is empirical. If politicians cannot see that this is a total waste of resources, then discussing models and their abilities would appear to be a waste of breath.
It is not about the science (if it ever was); it is about the political agenda.

Mar 19, 2012 at 9:18 AM | Unregistered Commenterpesadia

Andrew ... this is a really worthwhile project and a very good start, but it might be better to think of it as a guide for intelligent laymen, like the hockey stick illusion. Also, I would not worry about it getting too technical, especially at first, because you can always have an introductory summary that explains it in terms everyone can understand. For example, all these models depend on simplifications of the Navier-Stokes equations to explain fluid flows – this should be introduced and explained, because the NS equations, while elegant descriptors of three-dimensional fluid flow, are technically unsolvable without simplifications. That is why we still use wind tunnels to understand the far simpler effects of novel airfoil designs, notwithstanding the great advances in computational fluid dynamics. Models are utterly incapable of predicting plume effects – some of which, like changes in jet streams or the great spot on Jupiter, can be long lived – or the distribution patterns of radioactive fallout from the Fukushima meltdown. There are similar uncertainties in the related field of heat transfer and in many other technical areas.

There are other pertinent modeling "effects" that are largely ignored. These relate to group psychology. Global climate models, like all scientific theories, are analogies, and analogies are dangerous because they are seductive and can capture the imagination. Their synthetic character can lead to brilliant insights, as Einstein showed so often. But Einstein also tempered his brilliant synthetic insights with the discipline of ruthless analysis. When analogies capture the imagination, the mind turns in on itself and sees what it wants to see. Climate models are particularly dangerous analogies in this regard, because they are massive collective efforts with thousands of equations – many of which embody nonlinear feedback effects – that are very difficult for one mind to understand and analyze completely. Moreover, the collective nature of such complex modeling efforts creates a group-think psychological effect. (I am using the term precisely in the framework described by Irving Janis in his classic work on that subject.) This groupthink effect gets reinforced by the intense competition for the resources needed to finance the modeling effort. In effect the model becomes reality to its practitioners and their observations become observations of concoctions of their own imagination. This effect is clearly evident in the IPCC analyses and conclusions derived from observations of its models' results, in the warlike us-against-them fulminations of Michael Mann, and in the tone of emails hacked from the University of East Anglia server. Indeed, the constant reference to consensus by climate scientists is a reflection of this psychology.

This group-think effect is a huge part of the modeling mentality that cannot be ignored – it affects the fabricators of all complex computer models, whether they are modeling derivatives on Wall Street, weapons' effectiveness in the Pentagon, or the world's climate. This is something I came to appreciate up close and personal in my years as a civil servant in the Office of the Secretary of Defense in the Pentagon.

FYI ... I describe this psychological effect obliquely in two recent articles I wrote for Counterpunch which can be found at these links: here: http://www.counterpunch.org/2012/02/09/climate-science-goes-megalomaniacal/ and here: http://www.counterpunch.org/2012/02/27/lying-for-the-cause/.

I applaud your effort ... this is a very important subject and deserves the same kind of extensive treatment you did on hockey stick.

Mar 19, 2012 at 9:21 AM | Unregistered CommenterChuck Spinney

Hmm, I think you are being too generous to the models.

"Climate models are based on well-understood physical laws."

This is in the category of 'true but misleading'. The physical laws are well understood, but what is not at all understood is which physical processes control the variations in the climate.
(For example, we don't really understand what causes ice ages).
Changes in climate could be caused by variations in the sun's output.
Or they could be natural irregular fluctuations of a chaotic non-linear system.

"There is wide agreement that on its own a doubling of CO2 levels would produce an initial warming of around 1degC."

Wide agreement? What does 'initial' mean here? I think you mean initial in the sense of 'before other factors are considered' but it could be misinterpreted as initial in time.

One important thing that is missed is that the equations describing all the physical processes involved are very complicated and involve a vast range of length- and time-scales that cannot possibly be solved in a computer model. This means that many simplifications have to be made in the model and extremely unrealistic parameter values have to be used to get the computer models to work.
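The scale problem can be put in numbers crudely, just by counting horizontal grid columns over the Earth's surface (ignoring vertical levels, and ignoring that finer grids also force shorter time steps):

    EARTH_SURFACE_KM2 = 5.1e8

    for dx_km in (100.0, 1.0, 0.1):   # typical GCM grid down to cloud scales
        columns = EARTH_SURFACE_KM2 / dx_km ** 2
        print(dx_km, "km grid:", format(columns, ".1e"), "columns")
    # 100 km -> ~5e4 columns; 0.1 km -> ~5e10, a million-fold increase,
    # which is why cloud-scale processes must be parameterised instead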

There is also the question of the failure of models to predict the lack of warming over the last 15 years - I think the "have to wait 30 years" excuse was constructed in response to this failure - or was it?
Note that the 30-years statement is in contradiction with the statement in the last IPCC report, which predicted a rise of 0.2 degrees per decade over the next two decades.

Mar 19, 2012 at 9:21 AM | Registered CommenterPaul Matthews

I recommend the following at Tome22.info

http://tome22.info/Top/Principles.html#id2

If a computer model has been used:
Have all the relevant physical processes and variables been considered in formulating the model?
What assumptions underpin the model?
Are those assumptions realistic and justifiable?
Have any of those assumptions been violated?
Were the initial conditions, boundary conditions and feedbacks valid/plausible?
Has the model been validated, verified, and adequately described (or at least a valid reference given which provides this information)? In other words, does the code actually do what it is intended to do (is it free of errors?) and does it produce results which match reality?
Did the authors adequately describe the algorithms and any filters applied?

Mar 19, 2012 at 9:24 AM | Unregistered CommenterPatrick

You could simply tell them what Kevin Trenberth said about the models in his Nature blog of June 4, 2007. I paraphrase because I can't find the blog at the moment. The models:

Do not consider many things like the recovery of the ozone layer, for instance, or observed trends in forcing agents….

None of the models used by IPCC is initialized to the observed state and none of the climate states in the models corresponds even remotely to the current observed climate.

The state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models. There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond.

The Atlantic Multidecadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today's state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe.

Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.

GCMs assume linearity, which works for global forced variations, but cannot work for many aspects of climate, especially those related to the water cycle ... the science is not done because we do not have reliable or regional predictions of climate.

Trenberth also claims that the IPCC does not make predictions, only proposes scenarios.


Straight from the horse's mouth, as they say.

Mar 19, 2012 at 9:28 AM | Unregistered Commentergeronimo

Hindcasting is used as a measure of how good a model is, yet equally valid models can be produced using a completely different set of parameters, resulting in a completely different future.

There is so much scope for a subjective choice of parameters (e.g. increase CO2 forcing and decrease solar or vice versa) that the model becomes little more than a projection of somebody's opinion.

Mar 19, 2012 at 9:31 AM | Unregistered CommenterTerryS

"Climate models are based on well-understood physical laws" is too generous: what exactly does "based on" mean? We have no reason to believe that the selection of w-u physical laws is complete: in other words, the basis is doubtful. Perhaps "Climate models incorporate some well-understood physical laws"?

Mar 19, 2012 at 9:37 AM | Unregistered Commenterdearieme

@Bish. In England we spell it "skilful", although perhaps "skillful" is now acceptable when used specifically with reference to computer models, rather as "program" (rather than "programme") has long been accepted as the standard usage when referring to computer systems. In other words "skillful" doesn't actually mean "skilful". Who'd have thought it.

Mar 19, 2012 at 9:42 AM | Unregistered Commentersimon abingdon

@paul matthews

'the last IPCC report, which predicted a rise of 0.2 degrees per decade over the next two decades'

Ahh, but when you question the alarmists about the observed lack of this warming, they attempt to have you believe that it refers to an underlying rate of warming, not the observed warming itself – that 0.2C per decade does not mean +0.2C after 10 years and +0.4C after 20 and so on.

So if after 10 years you observe no warming, that is not of itself anything to worry about. Because the warming itself is still going on, just temporarily masked by short-term fluctuations that will soon (unquantified) die away. And then we'll all be sorry.

I am planning a new career as racing tipster 'CAGW Alarmist'. Judging by my trial runs, where all the horses I have tipped have consistently come in the lower half of the field, I am superbly well-qualified for the job.

The only hard part is persuading the aggrieved punters that when I said 'Slow Old Nag' would win the 3:30, his pitiful effort in coming in 6th from 7 and 25 lengths behind the winner was in fact deliberately masking his proper turn of speed and that they should double up their bets on his next outing. They seem to be a tad unconvinced - nay sceptical indeed - of this 'scientific' argument.

Late breaking news. The Guardian have just rung to say that they love my predictive ability and please could I join their environment section as a 'turf forecaster'. I am considering my options...............

Mar 19, 2012 at 9:45 AM | Unregistered CommenterLatimer Alder

The statement that all the models hindcast the past is wrong. The best is the Met Office's. In essence, they all over-predict CO2 warming [Lindzen says it's a factor of 3-5, assuming most recent warming is CO2-AGW] and offset it by using twice the real low-level cloud optical depth as a fixed offset for the baseline CO2, then use imaginarily large aerosol cooling to correct for AGW.

Hansen's latest claim is that the latter correction is exactly the same as AGW to account for over a decade of no warming, and it's because of Chinese aerosol emissions due to more coal burning. Pat Michaels contends that in reality, the Northern hemisphere has warmed and the southern has cooled.

China is in the North, so Hansen is wrong; Michaels' paper has been rejected by peer review, so the consensus remains intact; it stinks. Frankly the models are fraudulent because of the incorrect cloud physics used as the fudge factor to force-fit the politically-correct CO2-AGW.

Mar 19, 2012 at 10:06 AM | Unregistered Commentermydogsgotnonose

Any fair description of climate models should point out that they are configured to support a particular hypothesis - the positive feedback of water vapour resulting from a marginal increase in temperature. It is not the only competing hypothesis! Something you never see pointed out about this hypothesis – though admittedly this has nothing to do with models - is that it is not predicated on increased CO2 per se but on any factor - “forcing” in the nomenclature - that results in a marginal increase in global temperature. That's because it is actually increased temperature, and not increased atmospheric CO2, that drives out additional water vapour from the oceans. To the unbiased scientific mind, this consideration alone disqualifies the official hypothesis because such minuscule increases in temperature happened countless times in earth's history yet the planet remains habitable. Only negative feedback can have this effect.

Any fair description of climate models should state, prominently, that they disregard the competing Svensmark hypothesis. Svensmark's hypothesis has some physical support – in contrast to the positive feedback hypothesis – but like all unestablished hypotheses, many unknowns remain. Models can help in this respect through parametrisation – fudge factors if you will – yet this is neglected.

Mar 19, 2012 at 10:13 AM | Registered Commenterallchangenow

This academic discipline is relatively new and has no track record in creating models that provide accurate long range predictions.

Recent climate lies within the range of natural variability as reconstructed for the current interglacial period (ice cores etc.) and previous interglacials.

There is consensus that we are probably nearing the end of an interglacial period with a prospect of global cooling.

Mar 19, 2012 at 10:18 AM | Unregistered CommenterGordon McKeown

I see nobody answered my question at 8:16. Never mind. Here's another. Did anybody ever model, say, one square metre of the surface, preferably the sea, and compare the results with direct measurement? Do we really know for sure how much of the heat exchange is convection, how much radiation and how much conduction? How much goes up, and how much sideways? How does the ratio of convective to radiative effects vary over the daily cycle? What difference does cloud cover make? If you can't model a square metre, you can't model the earth.

Mar 19, 2012 at 10:37 AM | Unregistered CommenterRhoda

Refer to IPCC 2007 WG1 section 2.9.1. In this section, the authors categorise climate forcings under 16 headings and assess the level of scientific understanding for each heading. For no less than 11 of the 16 categories their assessment is that scientific understanding is either low or very low. How can any reliance be placed on the output of climate models in these circumstances? The obvious danger is that effects which ought to be attributed to one of the lesser understood forcings ( e.g. cosmic rays ) are being attributed to GHGs.

Model hindcasts are rubbish. They are only made to work by incorporating aerosol fudge factors. There are a number of studies purporting to explore the effects of aerosols and, what a surprise, each modelling team adopts the results of whichever study makes their model most closely match the past. Hmm ! Without the fudge factor, the models would overstate the degree of past warming. But future model projections have to have the fudge factor removed otherwise they would not predict scary temperature rises. ( But don't take them out just yet, as we haven't had warming for 15 years.)

Isn't it amazing how the presence of aerosols in the atmosphere just happens to ebb and flow exactly in line with the well-known natural 60-year cycle of warmings and coolings.

Production of aerosols is concentrated in the northern hemisphere where most of the land and industry is based, and aerosols do not have a long enough lifetime in the atmosphere to diffuse evenly over the planet. So, if aerosols are indeed holding down temperature rises then the effect would be most noticeable in the northern hemisphere. Yet observations show that most global warming has been in the north. There has been virtually no warming in the southern hemisphere, based on unadjusted data, during the period of instrumental records.

Doubting Rich has already mentioned that the models predict a tropospheric hotspot which is not present in observations. As DR remarks, this is indeed the killer because it is the signature of positive water vapour feedback. No hotspot means no positive feedback which means warming for 2xCO2 limited to 1 degC which means no crisis.

Lindzen & Choi compared the surface sea temperature changes to changes in the outgoing long wave radiation. If positive feedback were present then an increase in SST would be accompanied by a reduction in OLR, causing more warming. L&C demonstrated that all of the models predicted that this is what would happen, but the actual satellite data showed the opposite. The negative feedback evident from observations indicated that the "with feedbacks" sensitivity to 2xCO2 is only around 0.6 degC.

As the effect of CO2 is logarithmic, to get 1.2 degC change would require 4xCO2 and to get 1.8 degC change would require 8xCO2.
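In code, taking the 0.6 degC per-doubling figure from the paragraph above at face value:

    import math

    def warming(conc_ratio, per_doubling=0.6):
        # degC of warming for a given CO2 concentration ratio,
        # assuming the logarithmic (per-doubling) response described above
        return per_doubling * math.log2(conc_ratio)

    for ratio in (2, 4, 8):
        print(ratio, "x CO2 ->", round(warming(ratio), 1), "degC")
    # 2x -> 0.6, 4x -> 1.2, 8x -> 1.8: each extra 0.6 degC needs a further doubling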

Then there is the work of Ferenc Miskolczi. He showed empirically that there has been virtually no trend in the optical depth of the atmosphere over the last 60 years. In other words, his work indicates that the greenhouse effect is in a state of stable equilibrium. Evidently, as man puts more CO2 into the atmosphere, Mother Nature takes an almost exactly equivalent amount of water vapour out. I believe that atmospheric humidity records for the relevant altitudes do bear this out.

Mar 19, 2012 at 10:57 AM | Unregistered CommenterDavid Lilley (dcfl51)

My knowledge of GCMs is also sketchy, as is my knowledge of how, say, Windows operates. I do know though that I get troubles from Windows occasionally and so do many others, and I know too that I could use simpler, often free systems if it were to really trouble me. I could even just ditch the computer altogether. But when I see destructive policies and attitudes following on from the alarm created in part by zealots leaning on wholly inadequate software as if it were some kind of gospel, I think I am in a different ballpark altogether.

Whereas the IPCC has an interest, as do many others, in presenting GCMs in the best possible light, it would be wiser and more prudent to do the reverse given their potential as weapons for zealots, at least partially realised to date, to do a great deal of harm to society. The standard of evidence for such a role should be very high indeed, and yet it seems to have been very low indeed.

I do believe there is a very urgent need to convey to politicians the very modest expectations we can entertain from GCMs at present in the context of climate prediction. I have seen nothing to convince me that these models deserve to be let out of the laboratory in that role, and yet they stalk the earth as if we were in some kind of virtual Jurassic Park of politics. We need to find a way to get them captured and contained before they do us much more harm.

Mar 19, 2012 at 11:03 AM | Registered CommenterJohn Shade

The link below is a description of the numerical model used for computing the positions and motions of the major planets and 300 of the known asteroids:

http://iau-comm4.jpl.nasa.gov/XSChap8.pdf

Here is an extract (emphasis mine):

For the final (integration) phase of the ephemeris creation process, there are three main ingredients, each of which constitutes a major phase itself:

the equations of motion describing the gravitational physics which govern the dynamical motions of the bodies,

a method for integrating the equations of motion, and

the initial conditions and dynamical constants; i.e., the starting positions and velocities of the bodies at some initial epoch along with the values for various constants which affect the motion (e.g., planetary masses).

It is mainly the accuracy of the third component, the initial conditions and dynamical constants, which determines the accuracy of modern-day ephemerides, since the other two components (the physics and the integration method) are both believed to be sufficiently complete and accurate. The values of the initial conditions and constants are determined by the least squares adjustment of them to the observational data (measurements), and the accuracy of this adjustment, and thus of the ephemerides themselves, depends primarily upon the distribution, variety, and accuracy of the observational data. This crucial part of the ephemeris creation process is given in Section 8.5.

Climate models are in the toy department when compared with this model. The climate modellers wish to do the same as the celestial mechanicians but have neither accurate nor complete physics, let alone initial conditions and constants. They are light years away from getting anything useful.
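For contrast, the whole three-ingredient recipe can be sketched in a few lines of Python for a single planet on an idealised circular orbit – standard constants, a leapfrog integrator, and clean initial conditions:

    import math

    GM_SUN = 1.32712440018e20   # m^3/s^2
    AU = 1.495978707e11         # m
    DT = 3600.0                 # one-hour time step

    def accel(x, y):
        # Ingredient 1: equations of motion (Newtonian gravity from the Sun)
        r3 = (x * x + y * y) ** 1.5
        return -GM_SUN * x / r3, -GM_SUN * y / r3

    # Ingredient 3: initial conditions for a circular orbit at 1 AU
    x, y = AU, 0.0
    vx, vy = 0.0, math.sqrt(GM_SUN / AU)

    # Ingredient 2: a leapfrog (velocity Verlet) integrator, run for one year
    ax, ay = accel(x, y)
    for _ in range(int(365.25 * 24)):
        vx += 0.5 * DT * ax; vy += 0.5 * DT * ay
        x += DT * vx; y += DT * vy
        ax, ay = accel(x, y)
        vx += 0.5 * DT * ax; vy += 0.5 * DT * ay

    print(x / AU, y / AU)   # back close to (1, 0): complete physics and exact
                            # initial data close the orbit after a year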

Discuss.

Mar 19, 2012 at 11:07 AM | Unregistered CommenterBilly Liar

Can anyone tell me about the nature of these models?

I'm guessing there's a fair degree of "recursion" (i.e. using previous outputs to produce newer outputs). As this is imperfect it will result in error swamping the signal rather quickly. The combinatorial space needed to model is vast, so "averaging" over some models won't work either.

GCMs are the most expensive dice in the known universe.

Mar 19, 2012 at 11:12 AM | Unregistered Commenterac1

The article states "1. ..... There is wide agreement that on its own a doubling of CO2 levels would produce an initial warming of around 1degC."

This estimation assumes that "the structure of the atmosphere does not change". Or, in other words, the estimations look ONLY at radiation effects. This assumption has never been justified, and is almost certainly wrong. The lapse rate almost certainly changes as GHGs accumulate in the atmosphere.

Mar 19, 2012 at 11:19 AM | Unregistered CommenterJim Cripwell

My ongoing thinking about an elevator speech goes something like this:

Serious people know the earth has been warming for 200 to 400 years, and that carbon dioxide is a greenhouse gas, and that mankind's activities have contributed some fraction of that warming.

However, there are great scientific disagreements about the role of natural variability, and whether future increases of water vapor will result in net positive or negative feedbacks, and how strong they will be. In other words, important pieces of the climate models are unknown and uncertain - what Georgia Tech's Judith Curry calls the uncertainty monster.

The plateauing of global temperatures for 12 or 15 years despite increases of CO2 every year leads reasonable people to believe the modelers have over-weighted carbon dioxide and under-weighted natural variability. The sky is not falling.

Mar 19, 2012 at 11:29 AM | Unregistered CommenterDon B

The Politicos only need to be aware of one fact about climate models:

GIGO

They might as well roll dice.

Over to you, Richard.

Mar 19, 2012 at 12:04 PM | Unregistered CommenterKon Dealer

You cannot say "demonstrating that the models are fudged" without immediately attracting a "denialist" tag and being ignored.

Equally, "climate modellers derive comfort that their models are reasonable" is condescending and will ensure that skeptics will like it but everyone else will ignore it.

Seriously, until the sceptic side can stick rigorously to language that is almost fawning, it will not even be read by central government, as it will not get past the ubiquitous aides who filter anything that reaches those in power.

Mar 19, 2012 at 12:05 PM | Unregistered Commentersteveta

KISS

All the politicians need to know is the graph of the change in "predictions" you put up recently, with an overlay of Hansen's original 1988 prediction to Congress of a half degree per decade rise http://climate-skeptic.typepad.com/photos/uncategorized/2008/06/23/hansencheck.gif – so that, if the alarmists are to be believed, the world is now 1.2C warmer ;-)

Add in a graph showing medieval warming was higher than now and 5000 BC much warmer without catastrophe.

One picture is worth a thousand words. Particularly with scientifically illiterate politicians.

Or reprint bits out of Crichton's State of Fear.

Mar 19, 2012 at 12:26 PM | Unregistered CommenterNeil Craig

The models you mention do NOT predict climate or climate change. They predict a mystical average global temperature and compare it to historical local temperature numbers that have been adjusted, averaged and smeared. These models cannot be translated into climate change because they do not predict climate change. The latter, climate change, is never defined as it is a regional phenomenon involving many different parameters, temperature being only one. To say that temperature is the sole determinant of climate change is misleading. In fact, there is no standard or uniform method for even measuring climate change. The basic questions that should be addressed are: What is climate change, how do you measure it, and how do you predict it? GCMs do not answer these questions.

And back in the real world, can the models predict when the next ice age will occur?

Mar 19, 2012 at 12:28 PM | Unregistered CommenterDrcrinum

Feedbacks may be negative (dampen the response instead of amplifying it) such that the overall climate sensitivity may be less than 2C. This possibility is basically ignored by the IPCC because they wouldn't be able to hindcast the 20thC; but part of this is that they make assumptions that internal variability is low and will cancel itself out over longer time periods. If these assumptions are false then internal variability might dominate and GCMs would be inherently too sensitive.

Mar 19, 2012 at 12:44 PM | Unregistered CommenterGary Moran

I see there is a book coming out shortly with the promising title 'Merchants of Despair: Radical Environmentalists, Criminal Pseudo-Scientists, and the Fatal Cult of Antihumanism' (author Robert Zubrin, publication date 20th March, acc. to Amazon.com).

I hope it will shed more light on the deployment of GCMs and other computer models as weapons of propaganda by despair merchants over the past 40 years or so. This is a bit O/T, but as well as the primary task of helping politicians see the extremely severe limitations of GCMs used for guidance over climate, there is a secondary one of helping them see how they have been used to influence their opinions, and for what dismal and despicable ends and means.

Mar 19, 2012 at 12:53 PM | Registered CommenterJohn Shade

Drcrinum,

You are correct that temperature is not the be all and end all of climate change. But the greenhouse gas hypothesis states that more CO2 in the atmosphere creates warming. All of the other nasty effects: rising sea levels, melting polar bears, more/less snow, the rainiest drought since records began, etc are all supposed to flow from the raised temperature.

It is the attempt to control CO2 that is raising our fuel bills, jeopardising our ability to meet energy demand, driving the switch of land from food production to biofuels and preventing the third world from using cheap and abundant energy to escape from poverty by economic development.

Therefore, we have to demonstrate to the politicians that more CO2 does not create warming, at least not to any extent that we need to be concerned about. This is why we should concentrate on the models' predictions for temperature.

Mar 19, 2012 at 1:16 PM | Unregistered CommenterDavid Lilley

The models simply reflect their inputs (the 'forcings' in the language of the climate modelers) and are not even expected to be capable of making predictions. The 'hindcasts' similarly include 'forcings' (which vary with time) and so are not predictions (or 'skillful' in the language of the climate modelers).

Hence, the whole exercise is a terrible waste of money - though it may produce some nice graphics.

See http://www.climate-lab-book.ac.uk/2012/on-comparing-models-and-observations/

On this thread - ZT observed that climate models were similar to a detective asking the coroner to tell him the estimated time of death of the victim - and at the same time stipulating that 1am would be correct. Such a line of 'proof' is fine if you are the all knowing detective but not so good if you are in the dock.

(the comment was snipped!)

Mar 19, 2012 at 2:31 PM | Unregistered CommenterZT

BH:

1. Climate models are based on well-understood physical laws.

Hold it right there, Bishop.

This immediately suggests that climate models therefore have some validity. Let's call a spade a spade and halt the exercise at this point.

Roger Longstaff's comment (Mar 19, 2012 at 8:49 AM) seems to me to summarize the only possible realistic view:

Computer models that can neither be verified nor validated cannot derive any useful or original information on a system. It really is as simple as that. Attempts to verify or validate models by comparing the output of different models are lunacy.

I think that this cannot be emphasised strongly enough.

The Met Office's claim that their models can reproduce the data used to construct them, and that this therefore validates them, is, at best, a joke. If a model could not even reproduce the data used to construct it, that would demonstrate outright incompetence in its programmers.

But the opposite simply confirms that its programmers may be less than completely incompetent - not that it correctly models the physical reality and is useful for predicting future outcomes.

Mar 19, 2012 at 2:34 PM | Registered CommenterMartin A

Climate models include ocean circulation models, which are important because the ocean has a very large thermal capacity. The oft referred to "warming in the pipeline" is mainly a result of large assumed rates of ocean heat uptake 'masking' the true level of sensitivity to radiative forcing. Most models calculate heat uptake that is much larger than the measured rate (Argo system)... suggesting that the models simultaneously overstate both sensitivity and ocean heat uptake.

The uncertainty in direct and indirect aerosol effects (based on the IPCC AR4 range) is so large as to make it impossible to declare as incorrect any model which yields a climate sensitivity in the range of ~1.4C per doubling to infinite warming per doubling. Little progress has been made in narrowing the wide diagnosed range from models because there has been little progress in narrowing aerosol influences. And despite the protestations of climate modelers, assumed aerosol effects are nothing more than a grotesque kludge, and one which no rational person should ignore when evaluating the credibility of model projections.

Climate models may have some useful purposes, but projecting long term effects of rising GHG concentrations is not one of them.

Mar 19, 2012 at 3:51 PM | Unregistered CommenterSteve Fitzpatrick

I have worked out how the CAGW scam has been constructed. Climate science claims that at 1 atmosphere, the Earth’s surface radiates at 396 W/m^2, a black body at 16°C in a vacuum.

[2009 Trenberth and Keihl diagram: www.cgd.ucar.edu/cas/Trenberth/trenberth.papers/TFK_bams09.pdf]

161 [LW IN] + 333 ['back radiation'] = 494

494 – 396 ['surface radiation'] = 17 [thermals] + 80 [evapo-transpiration] + 0.9 ['net absorbed']

However, any process engineer knows that at equilibrium, the sum of the heat fluxes from conduction, convection and radiation equals the heat input, so the real sum is:

161 [LW IN] – 0.9 ['net absorbed'] = 17 [thermals] + 80 [evapo-transpiration] + 63 [net IR emission]

The second formulation, without 'back radiation', is correct. The apparent reason why climate science has chosen falsely to claim the vacuum-level IR output is that the real IR emission is coupled with convection. You see this in the Urban Heat Island effect: reduced convection raises temperature to the point at which increased radiation compensates to match the input flux.

To maintain the scam, they must maintain the fiction that radiative output is independent of the UHI because if they ever admitted UHI changes IR, this would increase ‘back radiation’ in competition with CO2-AGW.
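For what it is worth, the sums above do balance either way they are booked. A simple tally in Python, using the comment's own labels:

    # Tally of the two formulations above, using the labels as given
    lw_in, back_rad, surf_rad = 161.0, 333.0, 396.0   # W/m^2
    thermals, evap, net_abs = 17.0, 80.0, 0.9
    net_ir = 63.0

    print(lw_in + back_rad - surf_rad, thermals + evap + net_abs)   # 98.0 vs 97.9
    print(lw_in - net_abs, thermals + evap + net_ir)                # 160.1 vs 160.0
    # Both balance to within rounding; the dispute is over which bookkeeping
    # reflects the physics, not over the arithmetic itself.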

Mar 19, 2012 at 3:52 PM | Unregistered Commentermydogsgotnonose

Missing - yes, the creation and the supervision of the models.

See here. http://blog.squandertwo.net/2006/06/modelling.html

As a former IT bod used to having to comply with ISO9001 to sell our products, it was astonishing to peer into the IT "regime" at UEA exposed by CG1. Clearly there were no specifications, no documentation, no QA, no comments in the code except those exposing the programmer's concern regarding his own abilities. The code itself was execrable – indeed, a project such as this would never even have got started in the private sector. And why are THEY not QA'ed, when all companies selling software to the public sector have to get ISO9001 certification?

Anyway - read the link, Andrew - it would need précising but seems to me to be very much to the point.

Mar 19, 2012 at 4:02 PM | Unregistered CommenterJeremy Poynton

MIT Prof Richard Lindzen's seminar:
Global warming: How to approach the science
(Climate models and the evidence?)

Global warming is “trivially true”
“Skepticism implies doubts about a plausible proposition”

This is summary of the key point from MIT Professor Richard Lindzen’s seminar on global warming on 22nd February 2012 at the House of Commons in committee room 14. (A video and seminar slides are posted on our website www.repealtheact.org.uk)

Prof Richard Lindzen:

“I wish to thank the Campaign to Repeal the Climate Change Act for the opportunity to present my views on the issue of climate change – or as it was once referred to: global warming.

Stated briefly, I will simply try to clarify what the debate over climate change is really about. It most certainly is not about whether climate is changing: it always is. It is not about whether CO2 is increasing: it clearly is. It is not about whether the increase in CO2, by itself, will lead to some warming: it should. The debate is simply over the matter of how much warming the increase in CO2 can lead to, and the connection of such warming to the innumerable claimed catastrophes. The evidence is that the increase in CO2 will lead to very little warming, and that the connection of this minimal warming (or even significant warming) to the purported catastrophes is also minimal. The arguments on which the catastrophic claims are made are extremely weak – and commonly acknowledged as such. They are sometimes overtly dishonest.

Here are two statements that are completely agreed on by the IPCC. It is crucial to be aware of their implications.

1. A doubling of CO2, by itself, contributes only about 1C to greenhouse warming. All models project more warming, because, within models, there are positive feedbacks from water vapor and clouds, and these feedbacks are considered by the IPCC to be uncertain.

2. If one assumes all warming over the past century is due to anthropogenic greenhouse forcing, then the derived sensitivity of the climate to a doubling of CO2 is less than 1C. The higher sensitivity of existing models is made consistent with observed warming by invoking unknown additional negative forcings from aerosols and solar variability as arbitrary adjustments. Given the above, the notion that alarming warming is ‘settled science’ should be offensive to any sentient individual, though to be sure, the above is hardly emphasized by the IPCC.

Carbon Dioxide has been increasing
There is a greenhouse effect
There has very probably been about 0.8 C warming in the past 150 years
Increasing CO2 alone should cause some warming (about 1C for each doubling)
There has been a doubling of equivalent CO2 over the past 150 years

Nothing on the [above] is controversial among serious scientists.
Nothing on the [above] implies alarm. Indeed the actual warming is consistent with less than 1C warming for a doubling.
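Put as a trivial calculation, using only the figures listed above:

    # If ALL of the past 150 years' warming is attributed to the
    # doubling of equivalent CO2 over the same period...
    observed_warming = 0.8   # degC, as stated above
    doublings = 1.0          # doublings of equivalent CO2, as stated above
    print(observed_warming / doublings)   # 0.8 degC per doubling: under the
                                          # 1 degC no-feedback figure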

Unfortunately, denial of the facts on the left has made the public presentation of the science by those promoting alarm much easier. They merely have to defend the trivially true points on the left; declare that it is only a matter of well known physics; and relegate the real basis for alarm to a peripheral footnote – even as they slyly acknowledge that this basis is subject to great uncertainty. We will soon see examples of this by the American Physical Society and by Martin Rees and Ralph Cicerone.

The usual rationale for alarm comes from models. The notion that models are our only tool, even if it were true, depends on models being objective and not arbitrarily adjusted (unfortunately unwarranted assumptions). However, models are hardly our only tool, though they are sometimes useful. Models can show why they get the results they get. The reasons involve physical processes that can be independently assessed by both observations and basic theory. This has, in fact, been done, and the results suggest that all models are exaggerating warming.

The details of some such studies will be shown later.

Quite apart from the science itself, there are numerous reasons why an intelligent observer should be suspicious of the presentation of alarm.

1. The claim of ‘incontrovertibility.’ Science is never incontrovertible.

2. Arguing from ‘authority’ in lieu of scientific reasoning and data or even elementary logic.

3. Use of term ‘global warming’ without either definition or quantification.

4. Identification of complex phenomena with multiple causes as global warming, and even as 'proof' of global warming.

5. Conflation of existence of climate change with anthropogenic climate change.

Some Salient Points:

1. Virtually by definition, nothing in science is ‘incontrovertible’ – especially in a primitive and complex field as climate. ‘Incontrovertibility’ belongs to religion where it is referred to as dogma.

2. As noted, appeal to 'authority' in a primitive and politicized field like climate is of dubious value – it is essential to deal with the science itself. This may present less challenge to the layman than is commonly supposed.

Consider the following example:
Two separate but frequently conflated issues are essential for alarm:
1) The magnitude of warming, and
2) The relation of warming of any magnitude to the projected catastrophe.

1. Questionable data. (Climategate and involvement of all three centers tracking global average temperature anomaly.) This is a complicated ethical issue for several reasons. Small temperature changes are not abnormal and even claimed changes are consistent with low climate sensitivity. However, the public has been misled to believe that whether it is warming or cooling – no matter how little – is of vital importance. Tilting the record slightly is thus of little consequence to the science but of great importance to the public perception.

2. More sophisticated data is being analyzed with the aim of supporting rather than testing models (validation rather than testing). That certainly has been my experience during service with both the IPCC and the National Climate Assessment Program. It is also evident in the recent scandal concerning Himalayan glaciers.

(Note that in both cases, we are not dealing with simple measurements, but rather with huge collections of sometimes dubious measurements that are subject to often subjective analysis – sometimes referred to as ‘massaging.’)

3. Sensitivity is a crucial issue. This refers to how much warming one expects from a given change in CO2 (usually a doubling). It cannot be determined by assuming that one knows the cause of change. If the cause is not what one assumes, it yields infinite sensitivity (a sketch of this in numbers follows this list). This problem infects most attempts to infer climate sensitivity from paleoclimate data.


4. Models cannot be tested by comparing models with models. Attribution cannot be based on the ability or lack thereof of faulty models to simulate a small portion of the record. Models are simply not basic physics.
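Point 3 above is, at bottom, a division, and it is worth seeing how it misbehaves. All the numbers here are illustrative only:

    # Inferred sensitivity = observed change / forcing assumed to have caused it
    observed_warming = 0.8   # degC, illustrative
    for assumed_forcing in (3.7, 1.0, 0.1, 0.001):   # W/m^2, illustrative
        print(assumed_forcing, "W/m^2 ->",
              round(observed_warming / assumed_forcing, 2), "degC per W/m^2")
    # As the share of the change truly due to the assumed cause shrinks toward
    # zero, the inferred sensitivity diverges: the 'infinite sensitivity' above.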

All the above and more are, nonetheless, central to the IPCC reports that supposedly are ‘authoritative’ and have been endorsed by National Academies and numerous professional societies.
Where do we go from here?

Given that this has become a quasi-religious issue, it is hard to tell. However, my personal hope is that we will return to normative science, and try to understand how the climate actually behaves. Our present approach of dealing with climate as completely specified by a single number, globally averaged surface temperature anomaly, that is forced by another single number, atmospheric CO2 levels, for example, clearly limits real understanding; so does the replacement of theory by model simulation. In point of fact, there has been progress along these lines and none of it demonstrates a prominent role for CO2.

It has been possible to account for the cycle of ice ages simply with orbital variations (as was thought to be the case before global warming mania); tests of sensitivity independent of the assumption that warming is due to CO2 (a circular assumption) show sensitivities lower than models show; and the resolution of the early faint sun paradox, which could not be resolved by greenhouse gases, is readily resolved by clouds acting as negative feedbacks.

So far we have approached the science in a somewhat peripheral way. In the remainder of this talk, we will deal with the science more directly.

As already mentioned, it is essential to know climate sensitivity. Model predictions depend on positive feedbacks and not just the modest effect of CO2. However, it is first necessary to understand the climate version of the greenhouse effect.
Real nature of greenhouse effect

All attempts to estimate how the climate responds to increasing CO2 depend on how the climate greenhouse actually works. Despite the concerns with the greenhouse effect that have dominated environmental thinking for almost a quarter of a century, the understanding of the effect is far from widespread.

Part of the reason is that the popular depiction of the effect as resulting from an infrared ‘blanket’ can be seriously misleading, and, as a result, much of the opposition that focuses purely on the radiation is similarly incorrect. The following description is, itself, somewhat oversimplified; however, it is probably adequate for understanding the underlying physics.

First, one must recognize that the troposphere, the layer of the atmosphere in contact with the surface, is a dynamically mixed layer. For a gaseous atmosphere, mixing requires that the resulting atmosphere is characterized by temperature decreasing with altitude. The rate of decrease is approximately 6.5K/km which is sometimes taken as an approximation to the moist adiabatic lapse rate, but the real situation is more complicated. To be sure, in the tropics, the mixing is effected by moist convection, but outside the tropics, the mixing is accomplished mostly by baroclinic eddies. Moreover, the moist adiabat in the tropics does not have a uniform lapse rate with altitude (viz the ‘hot spot’). For our immediate purposes, the important facts are that the lapse rate is positive (not zero or negative), and relatively uniform over most of the globe.

Second, one must recognize that gases within the atmosphere that have significant absorption and emission in the infrared (ie greenhouse gases) radiate to space with a flux characteristic of the temperature of the atmosphere at about one optical depth (measured from space downward). To be sure, this level varies with wavelength, but the average emission level is about 5-6 km above the surface and well within the troposphere.
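
As a quick consistency check on those two numbers (a sketch using standard textbook values that are my assumptions, not figures from the talk): the temperature at the quoted emission level, given the quoted lapse rate, comes out close to the Earth's effective radiating temperature, as it should.

    # Consistency check; T_surface and the 255 K comparison value are
    # standard textbook figures, assumed here rather than taken from the talk.
    T_surface = 288.0   # K, global-mean surface temperature (assumed)
    lapse_rate = 6.5    # K/km, from the talk
    z_emission = 5.5    # km, mid-range of the 5-6 km quoted above

    T_emission = T_surface - lapse_rate * z_emission
    print(T_emission)   # ~252 K, close to the ~255 K effective radiating
                        # temperature of the Earth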

Third, adding greenhouse gases to the atmosphere must elevate the average emission level, and because of the first point, the new emission level is colder than the original emission level. This reduces the outgoing infrared radiative flux, which no longer balances the net incoming solar radiation. Thus, the troposphere, which is a dynamically mixed layer, must warm as a whole (including the surface) while preserving its lapse rate.
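
A back-of-envelope sketch of that third step (my own illustration; the 100 m rise in the emission level is an assumed figure, not one given in the talk):

    # With the lapse rate preserved, the whole mixed layer -- surface
    # included -- must warm enough to restore the original emission
    # temperature at the new, higher emission level.
    lapse_rate = 6.5   # K/km, from the talk
    dz = 0.1           # km, assumed rise of the mean emission level

    dT_surface = lapse_rate * dz
    print(dT_surface)  # 0.65 K of surface warming for this assumed rise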

We see that all the models are characterized by positive feedback factors (associated with amplifying the effect of changes in CO2), while the satellite data implies that the feedback should be negative. Similar results are being obtained by Roy Spencer.

This is not simply a technical matter. Without positive feedbacks, doubling CO2 only produces 1C warming. Only with positive feedbacks from water vapor and clouds does one get the large warmings that are associated with alarm. What the satellite data seems to show is that these positive feedbacks are model artifacts.

This becomes clearer when we relate feedbacks to climate sensitivity (ie the warming associated with a doubling of CO2).
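
The relation itself is not reproduced in the transcript; presumably it is the standard feedback amplification formula, sketched here under that assumption:

    # Standard feedback amplification: dT = dT0 / (1 - f), where dT0 is the
    # no-feedback response (~1 K for doubled CO2, per the talk) and f is the
    # feedback factor.
    dT0 = 1.0  # K, no-feedback warming for a doubling of CO2

    for f in (0.0, 0.25, 0.5, 0.75, 0.9, 0.99):
        print(f, dT0 / (1.0 - f))  # amplified response; blows up as f -> 1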

Note that when f, the feedback factor, approaches +1, the response blows up. Presumably, this is what is meant by a tipping point. For larger values of f, the system is unstable.
The delicate dependence of the amplification on the precise value of the feedback factor – when the feedback factor is greater than about 0.5 – is important in its own right.

The feedback factor is almost certainly not a true constant, since cloud radiative properties depend on aerosols and cosmic rays, among other things. If climate sensitivity were currently large, it is unlikely that over the 4.5 billion years of the Earth's history the feedback factor would never have exceeded one, and then we would not be here discussing this.

Other progress in the science can also be discussed if there is any interest. Our recent work on the early faint sun may prove particularly important. 2.5 billion years ago, when the sun was 20% less bright (compared to the 2% change in the radiative budget associated with doubling CO2), evidence suggests that the oceans were unfrozen and the temperature was not very different from today's. No greenhouse gas solution has worked, but a negative cloud feedback does.
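
For scale, the rough arithmetic behind that 20%-versus-2% comparison (standard textbook values, assumed by me rather than taken from the talk):

    # Compare the solar deficit of a 20% dimmer sun with the forcing from
    # doubled CO2; S0, albedo and the 3.7 W/m^2 figure are textbook values.
    S0 = 1361.0    # W/m^2, present-day solar constant
    albedo = 0.3   # planetary albedo
    absorbed = S0 * (1 - albedo) / 4.0  # ~238 W/m^2 absorbed today

    dF_faint_sun = 0.20 * absorbed      # ~48 W/m^2 deficit for a 20% dimmer sun
    dF_2xCO2 = 3.7                      # W/m^2, canonical doubled-CO2 forcing

    print(dF_2xCO2 / absorbed)          # ~0.016, ie the ~2% quoted above
    print(dF_faint_sun / dF_2xCO2)      # the deficit is roughly 13x that forcing
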
You now have some idea of why I think that there won’t be much warming due to CO2, and without significant global warming, it is impossible to tie catastrophes to such warming. Even with significant warming it would have been extremely difficult to make this connection.

Perhaps we should stop accepting the term 'skeptic'. Skepticism implies doubts about a plausible proposition. Current global warming alarm hardly represents a plausible proposition. Twenty years of repetition and escalation of claims does not make it more plausible. Quite the contrary, the failure to improve the case over 20 years makes it even less plausible, as does the evidence from climategate and other instances of overt cheating. In the meantime, while I avoid making forecasts for tenths of a degree change in globally averaged temperature anomaly, I am quite willing to state that unprecedented climate catastrophes are not on the horizon, though in several thousand years we may return to an ice age." (Lindzen seminar, House of Commons, 22nd Feb 2012)

Mar 19, 2012 at 4:06 PM | Unregistered CommenterFay Tuncay

Lay bod here - do we have reliable models of El Nino & La Nina?

If not, then how can ANY climate model be relied upon in any way?

Mar 19, 2012 at 4:17 PM | Unregistered CommenterJeremy Poynton

I agree with Peter Wilson (Mar 19, 2012 at 8:06 AM). (1) If N>1 models hindcast well but forecast differently, it's silly to argue that "the ability to hindcast is proof that a model can accurately forecast." Maybe one of the models is valid, but its ability to hindcast is a "necessary" but not "sufficient" condition of its goodness. (2) If a model accurately hindcasts events and that accurate hindcasting depends on a "feature"--say a set of measurements like the rate anthropogenic CO2 is added to the atmosphere--then removal of that feature from the model will definitely affect hindcasts and likely affect forecasts. So what? As Peter said: "What did you expect?"
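
A toy demonstration of point (1), my own sketch rather than the commenter's: two "models" tuned to the same synthetic record can hindcast it comparably well yet extrapolate very differently.

    import numpy as np

    # Synthetic 1900-2000 "record": a gentle trend plus a slow cycle.
    years = np.arange(1900, 2001)
    x = (years - 1950) / 50.0  # normalize to [-1, 1] for stable fitting
    record = 0.6 * x + 0.1 * np.sin(2 * np.pi * (years - 1900) / 40.0)

    modest = np.polyfit(x, record, 2)   # a parsimonious "model"
    tuned = np.polyfit(x, record, 12)   # a heavily parameterized "model"

    # Both track the record's course (modest residuals)...
    print(np.abs(np.polyval(modest, x) - record).max())
    print(np.abs(np.polyval(tuned, x) - record).max())

    # ...but their 2050 "forecasts" (x = 2, far outside the fit) diverge,
    # so a good hindcast alone cannot validate a forecast.
    x2050 = (2050 - 1950) / 50.0
    print(np.polyval(modest, x2050), np.polyval(tuned, x2050))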

Regarding what you should present to politicians. In my professional career as an aerospace engineer I would often write proposals to obtain funding for competitive projects. I took a class in proposal writing. According to the instructor, the most important aspect of a competitive proposal was: "Emphasize what is important to the person(s) who will make the yes/no decision." The example he gave was a proposal to obtain the contract for a new fighter plane. If the decision makers were fighter pilots or ex-fighter pilots, your proposal should emphasize what makes your design the neatest thing to fly since the Spitfire. Things like speed, acceleration, fire power etc. If the decision makers were politicians, emphasize how your design will benefit the politician--e.g., construction of your fighter will be performed in the legislator's district and therefore create jobs that will get the legislator re-elected.

At the time I thought the instructor was nuts: making the technical (scientific) case for your design was surely the better strategy. Over a 35-year professional career I came to the conclusion that for small efforts the technical-argument approach was better, but for large efforts "stroking the decision maker" was the better approach. Mostly this was true because for small efforts the decision maker was often a technical person and could understand the technical arguments. For large efforts, the technical issues were too numerous and complex for the decision maker to understand; thus any technical jargon was likely to sound reasonable to the decision maker--in any event it was extremely unlikely that the decision maker could properly evaluate the technical issues. Thus, although it goes against my sense of "what is right", I suggest that when interacting with politicians you emphasize what is important to them--something like: "If you go forward with the green agenda, when the lights go out you'll be out of a job."

Mar 19, 2012 at 5:02 PM | Unregistered CommenterReed Coray

Where can I access Slingo's "summary of global warming science"?

Mar 19, 2012 at 5:17 PM | Unregistered Commentermescalero

"1. Climate models are based on well-understood physical laws. There is wide agreement that on its own a doubling of CO2 levels would produce an initial warming of around 1degC."

You must not assume that this statement has scientific meaning. If they mean that the physical laws are what we find exposited in textbooks on climate science then they are saying nothing. No one thought that climate models are based on alchemy.

To make a statement that is meaningful for science, they must show the laws in the rigorous formulations that are used for work in climate science.

For example, Arrhenius' Laws have received textbook exposition but none of them have been rigorously formulated and tested in Earth's atmosphere. And even Arrhenius agreed that his Laws require supplementation by Laws that describe the various feedback mechanisms. At this time, the Laws governing feedbacks do not exist and cannot be in the models.

The 1degC figure was pulled out of a hat and has no scientific backing at all. In any case, the actual figure for the actual Earth cannot be stated until there are well confirmed hypotheses about the feedbacks.

In short, if climate models are based on well understood physical laws then that set of laws is inadequate to the task for which climate models are used.

Mar 19, 2012 at 5:20 PM | Unregistered CommenterTheo Goodwin
