Discussion > We are wasting our time; all of it.

Radical Rodent, since I was talking about the public, both uses of the word apply. But you're right: the scientists should be impartial, but aren't.

May 23, 2015 at 11:04 AM | Unregistered CommenterTinyCO2

Martin, try this image of the 12 month running average: https://tamino.files.wordpress.com/2015/05/gissmovave.jpeg It doesn't look so much like a pause in the context of the latest data. Instead it just looks like a constant rise with 1998 as an outlier. I wasn't interested in climate back then, but I imagine it must have been quite a shock to those watching.

Tiny/Radical, 'interest' goes to the heart of what we were talking about. The people from whom you take your cues (mostly unknowingly, which is the beauty of it) definitely have an interest in preventing action on climate. Scientists on the other hand are mainly just interested in the subject - that is why they are scientists. If it wasn't climate it would be something else. The public are by and large uninterested or ignorant, but they are definitely not disinterested. They have a profound interest in the climate remaining favourable but they also have a more pressing interest in day to day survival - it should surprise no one that the latter overrides the former.

Micky,

"I've spent 15 odd years having to not assume data is correct"

The funny thing there is that the only people I've seen claiming that the temp data is correct are those "skeptics" who insist we should be using the raw data only. No adjustments, use the raw data, the raw data is the truth. A cursory look at a random selection of station data will tell anyone that the data is not all OK. So if we want to use it we need to deal with that in one way or another.

But say we choose to discard the indices altogether as beyond correction. What then? Are we driving blind? If we can't use indices constructed from thousands of thermometers around the world, then we must be even less sure about any proxy data that tells us about historical temperatures. The proxies are vague point estimates compared to the broad cover from the indices. So we know little about past temperatures and we don't know recent temperatures. But we have the physics of greenhouse gases and the knowledge that CO2 concentrations are rising steadily. That alone should make us alarmed and tells us we need to make some changes even before we start looking at other observable changes to the planet.

May 23, 2015 at 1:57 PM | Unregistered CommenterRaff

"Answer the question, please. A scientist should be able to give a better answer than hunter." EM

Or perhaps you could start your own discussion since the subject of this one is 'ARE we wasting time' not 'CAN we waste time'.

While most of us drift off topic when the relevant issues have been adequately discussed, you and Raff don't even start. He tossed off a lie about coming here in his spare time away from persuading the undecided about AGW. Funny how I don't believe he spends any time at all. What's your excuse? Why not wave your graph at the disinterested instead of us?

May 23, 2015 at 10:19 AM | Unregistered CommenterTinyCO2

It's quite flattering that we "3%" have so much influence and power, isn't it?

May 23, 2015 at 3:29 PM | Unregistered Commentermichael hart

Raff "the only people I've seen claiming that the temp data is correct".

No you haven't. You've seen them pick holes in how the adjustments are made, especially when they're made decades after the event. That GISS, HadCRUT4, RSS and UAH can differ even on the most recent measurements indicates the result is something of a guess. If it's a guess in 2015, it was a bloody dart board in 1850. Any interest in the RAW data is to demonstrate how much fiddling (necessary or not) has gone on. The temperature is climate science's most basic product. It's the thing that everything relies on, and in engineering terms it's crap. Phil Jones losing the original raw data was just another clue to how slapdash it all is. That doesn't mean the end product isn't vaguely right. However it's fed into highly complex software to predict the unpredictable. GIGO.

michael hart, it would be flattering if the other side weren't so... good at failure. And like all true losers, it always has to be someone else's fault. Cue the shadowy forces that are using us as their pawns.

May 23, 2015 at 4:57 PM | Unregistered CommenterTinyCO2

Raff - the fundamental reason why we are unlikely ever to agree is because there is no model that can be validated as being correct for what is going on. So we can go round forever in circles finding analysis that seems to show a trend or analysis that seems to show no recent trend.

If you take the view that anything that persists for more than 12 months is genuinely there, then it makes sense to use a 1-year running average as a (slightly crappy but usable) lowpass filter to remove things that persist for less than 12 months. In that case, trends lasting more than 12 months get shown up, as in the graph you point to.

If you take the view that anything that persists for not more than 5 years will be due solely to random fluctuations unconnected with any trend caused by a permanent change in the composition of the atmosphere, then it makes sense to use a 5-year running average.
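
To make the filtering point concrete, here is a minimal sketch in Python (synthetic, invented numbers only, so purely illustrative) of how the choice of window length decides which features survive the smoothing:

```python
# Illustrative only: a synthetic monthly "anomaly" series (invented numbers),
# smoothed with 12-month and 60-month running means.  The short window keeps
# anything that persists for more than a year; the long window keeps only
# features that persist for more than about five years.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
months = pd.date_range("1970-01", periods=540, freq="MS")   # 45 years of monthly data
trend = 0.015 * np.arange(540) / 12        # slow, steady rise
noise = rng.normal(0, 0.15, 540)           # month-to-month "weather" noise
spike = np.zeros(540)
spike[330:345] = 0.4                       # a 15-month excursion, 1998-style
series = pd.Series(trend + noise + spike, index=months)

smooth_12 = series.rolling(12, center=True).mean()   # 1-year running mean
smooth_60 = series.rolling(60, center=True).mean()   # 5-year running mean

print(smooth_12.loc["1997":"1999"].round(3))   # excursion clearly visible
print(smooth_60.loc["1997":"1999"].round(3))   # excursion largely smoothed away
```

Neither choice is validated; each simply encodes a prior view of what counts as signal.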

Come up with a statistical model (even a very simple one) that can be validated in some way, even just by a very convincing argument, and people like me might be convinced.

In the absence of a validated model, everyone will choose a model that seems to agree with their intuitive understanding and preconceptions of what is going on.

As I've tried to point out, applying statistical tests to something whose statistical properties are unknown, other than from the very short observed record, just produces xxxxxx. [Can't think of a suitable word but I'm sure you understand what I am trying to say. xxxxxx = something without any real meaning.]

EM has the problem that he often seems to think that complicated, poorly understood systems can be reduced to simple and precise formulas.

EM loves the radiative forcing log formula and seems to think that it applies with precision, even though it comes only from finding parameters that match a log formula to a few numerical results from radiative model computations. Somebody pointed out that you can find parameters that fit a square root formula to the results just as well.
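
To illustrate the curve-fitting point (and this is only a sketch: the 'data' below are generated from the conventional 5.35 ln(C/C0) expression itself, not from any radiative model output, so it demonstrates the fitting ambiguity rather than any physics):

```python
# Illustrative only: over the limited range of CO2 concentrations of interest,
# a one-parameter square-root form can be least-squares fitted so that it sits
# close to the usual logarithmic form.  The "target" values here come from the
# log formula itself (an assumption for illustration), not from model runs.
import numpy as np

C0 = 280.0
C = np.linspace(280, 560, 50)            # ppm, pre-industrial up to a doubling
forcing_log = 5.35 * np.log(C / C0)      # W/m^2, the conventional log fit

basis = np.sqrt(C) - np.sqrt(C0)         # square-root form, zero at C0
b = np.sum(forcing_log * basis) / np.sum(basis ** 2)   # least-squares scale factor
forcing_sqrt = b * basis

worst = np.max(np.abs(forcing_sqrt - forcing_log))
print(f"fitted b = {b:.3f}, worst disagreement = {worst:.3f} W/m^2 over the range")
```

A few numerical points over a narrow range simply do not pin down the functional form.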

EM also likes to talk about "95% confidence intervals" when the probability distributions involved are unknown - so that confidence intervals cannot meaningfully be computed. Unless you use some sort of non-parametric analysis - and then perhaps you can come up with valid, but very much wider, confidence intervals than you get by assuming a distribution - eg Gaussian.

Yes EM - I know that you think that "three standard deviations = 95%" or something like that but that is simplistic stuff that your teachers should have been disciplined for telling you. And that you should be ashamed of for having unquestioningly accepted.

Bottom line: Instead of flashing up regression lines etc as if that would convince us of something, find a statistical model for what is happening and show some solid reasons why it is valid. If you can convince us that the model is valid, then we can meaningfully discuss what happens when you do analysis using the model.

May 23, 2015 at 5:01 PM | Registered CommenterMartin A

Raff

I can see the direction you are coming from. I'm not saying only use raw data. What I'm saying is that if you are to use corrections or adjustments they have to be tested. Not modelled and asserted but tested. At least if the data is to be used for policy and real world applications.

It's fine to apply assumptions for science as it allows scientific debate to proceed. But it's a different story if billions of pounds rests on a few tenths of a degree. Then you better know what you are talking about.

May 23, 2015 at 5:31 PM | Registered CommenterMicky H Corbett

Tiny, no you are wrong. There are people who object to adjustments per se. Euan Mearns is an example - he even complained to Roy Spencer that the new UAH 6 adjusts the raw data. Go figure.

Martin, I think we can agree that we are unlikely to agree. The problem is that we don't understand climate in enough detail to know exactly what moves global temperature. But we know a lot, like the fact that volcanoes tend to depress temps and that El Nino tends to raise temps. Take those influences out of the temperature record and there is no 'pause'. Tamino did a post on this some time back, well before this year's increase in temps. If you are interested I'll find it - but my guess is that you don't consider him a credible source.

Micky, the homogenization algorithms are indeed tested. Victor Venema, who is well known for work on homogenization, was involved in a study benchmarking homogenization algorithms - see http://variable-variability.blogspot.com/2012/01/new-article-benchmarking-homogenization.html

May 23, 2015 at 6:20 PM | Unregistered CommenterRaff

Removing alleged effects of volcanoes and El Nino to see what the trend would look like is a bit like -

let's assume the volcano effect is x, and the El Nino effect is y, subtract them from the existing trend and what do we get?

Well it depends upon how big your x and y values are.

How big do you want to make them?

The right amount to remove the pause..............

So many circles, so little knowledge.

May 23, 2015 at 7:40 PM | Unregistered CommenterSteve Richards

Raff

That isn't testing. Testing is characterising the measurement process, the variations in that process and the equipment used. It accounts for discontinuous bias and is used for tolerance calculations.

Does Victor have a table of empirically derived data relating station moves through different environments? Or does he use statistical models to guess?

Does he actually understand UHI at the micro level or does he use a model to guess?

Well, based on the website, he guesses. And it's good he does but it doesn't make the outcome any more correct. He cannot account for things that a responsible engineer or technician would have to measure, however boring and monotonous that is. Like the guys sitting at NPL measuring the 1 kg weight just for the purpose of maintaining national standards.

Victor's approach wouldn't pass quality control for most engineering. Ironically what would happen in an engineering setting is that people would know their limits and say don't bother trying to guess, just add +/- 1 degree as a rough uncertainty, for example. This uncertainty would mean that we use that data point with a lot more caution and that it would have to be budgeted for somewhere.
It may mean non-conformance to some standard but at least it's honest. You don't make stuff up with maths when you're accountable. You go test stuff and show traceable verification.

Why is this hard to understand?

May 23, 2015 at 9:07 PM | Registered CommenterMicky H Corbett

"And like all true losers, it always has to be someone else's fault. Cue the shadowy forces that are using us as their pawns."

Oh, now that is a cracking observation, TinyCO2!

May 23, 2015 at 9:17 PM | Registered CommenterRadical Rodent

I've actually thought of a good analogy for how the temperature data should be addressed, especially when I see homogenisation.

I used to work with Rolls Royce. It may surprise some to know that EVERY jet engine they produce is known to them in detail. They characterise every single one, partly because there are small deviations in the engines due to manufacture and they rate them differently, but also because of what they are used for.

They know the details by testing not modelling.

Can the same be said for each temperature sensor, since after all they are being used to change people's lives?

May 23, 2015 at 9:42 PM | Registered CommenterMicky H Corbett

Tamino did a post on this some time back, well before this year's increase in temps. If you are interested I'll find it - but my guess is that you don't consider him a credible source.

Raff, thanks but don't bother (for reasons other than my perceptions of Tamino's credibility).

I think such questions are essentially incapable of being answered. As I said, to do any sort of meaningful statistical testing you have to have a statistical model known to be valid. In general, there are two ways to get such a model:

1. You derive it from a valid physical model of the system (eg differential equations describing what goes on inside it). An example would be the calculation of the spectral characteristics of the noise at the output of an electrical filter driven by wideband noise at its input. In the case of climate (or global mean temperature) such physical models don't exist (although I suppose some people might dispute that).


2. You obtain it from measurements over a long enough period of time. If the system's statistical behaviour is stationary (ie does not itself change with time) this can give a useful model. An example would be the construction of a model of the statistics of undersea acoustic noise by analysis of recordings taken over long intervals. But observations of global temperature are orders of magnitude too short to do this, quite apart from the fact that the statistical characteristics change from time to time.

This means the normal methods used in say target detection* cannot be used to answer questions such as "has warming halted?".

* Observe the system continuously, while asking the question "is what we are observing what we should expect as the output of our model, or has something changed (eg a target has appeared) so that what we are now observing is no longer explained as the output of our model?".
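
To show what those "normal methods" look like, here is a toy sketch (illustrative only, with an invented noise model that we know exactly - which is precisely what we do not have for climate):

```python
# Illustrative only: sequential change detection (a one-sided CUSUM test) on
# data whose statistical model is KNOWN - white Gaussian noise, sd = 1, mean 0 -
# until a "target" appears and the mean shifts upward.
import numpy as np

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(0.0, 1.0, 200),    # model holds
                         rng.normal(1.0, 1.0, 100)])   # mean shifts: "target appears"

threshold, drift = 5.0, 0.5      # tuning constants for the CUSUM statistic
s = 0.0
for i, x in enumerate(signal):
    s = max(0.0, s + x - drift)          # accumulate evidence of an upward shift
    if s > threshold:
        print(f"change detected at sample {i}")   # typically soon after sample 200
        break
```

The test works here only because the pre-change model is known to be valid; with global temperature we have no such model to test against.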

May 23, 2015 at 10:56 PM | Registered CommenterMartin A

Micky, you don't get to "[characterise] the measurement process, the variations in that process and the equipment used" for decades-old data of uncertain provenance. The paper is testing whether the algorithms work when applied to unknown simulated data. That is doubtless not as good as it could be if we set about instrumenting the earth now with new sensors. So how would you go about validating the homogenization methods?

All the main indices give very similar results, as do others such as those from Nick Stokes and even naive ones by skeptic Roger Andrews (see Euan Mearns blog). Everything shows that temps are rising. The real rate of change is bound to be somewhat different because there is patchy coverage, esp in the Arctic where temps are changing fastest. When combined with other evidence of change and with theory, we have what we need to act.

May 23, 2015 at 11:13 PM | Unregistered CommenterRaff

Not back, just passing through.

Micky H Corbett

Knowing each Trent engine intimately, with detailed testing of all parts and the complete engine, plus detailed telemetry in operation is a marvellous thing for an engineer. If the oil temperature in the cruise shows an upward temperature trend you can monitor until you are sure it is real, then take out the engine and check all the possible causes, finally repairing or replacing whatever was faulty.

I am not clear how you intend to apply this approach to climate change. Could you expand on how you would use an engineering approach to measure and identify the rate of warming and other climate change symptoms?

If current measurement technology is insufficient, what would you suggest replaces it? How much more investment and how many extra people would it need? How would you persuade the politicians to fund the extra effort?

How would you test to identify and quantify the different effects such as albedo, solar insolation, CO2, methane and Milankovitch cycles?

I am not trying to be sarcastic, but I find it very difficult to envision how you can apply the testing, monitoring and diagnostic techniques you use on an engine to planet Earth.

The former is an object you designed, about which you can gather unlimited data and which is under your direct control. The latter is not designed, but the outcome of many processes. It can only be monitored to a limited extent and is not sufficiently under your control to permit detailed testing or controlled trials.

May 23, 2015 at 11:21 PM | Unregistered CommenterEntropic man

Raff "Euan Mearns is an example"

Given your ability to misunderstand what people write, you'll forgive me for not taking your word for it.

May 24, 2015 at 12:10 AM | Unregistered CommenterTinyCO2

EM

Re-read what you've written and ask yourself: how can I know what a system is doing if I can't measure the system?

How do you know the climate is changing in the way it's currently stated to be, if accurate measurements can't be made? Do you just KNOW this because a theory tells you so? Do you believe it to be so?

I'm not advocating spending money on trying to force blood out of a stone. I'd like a few to realise that a lot of people have made a career out of fooling themselves.

The simple answer is that the current data set is woefully inaccurate for real world applications, and that those who are advocating its use are trying to say that the data is as good as that for jet engines. That people can rely on climate data just like they rely on others knowing what engines do.

Do you now see why there's a problem? You've just stated the sceptical position without realising it.

May 24, 2015 at 12:25 AM | Registered CommenterMicky H Corbett

EM, one of the things said about climate modelling is that they can't hindcast because they don't have the starting conditions. In other words, creating software that can mimic the ebb and flow of ice ages is impossible. Ditto the current interglacial, the last three warm periods or even the last 1000 years. In fact they don't seem to model any further back than the 60s (find any model plot that shows further back). We did hear validation of Mann's work at one point by a phrase about his hockey stick being broadly in agreement with models. Well we know how good the hockey stick is, don't we.

Hang on, if we need highly detailed measurements and observations of ocean currents, and ice, and clouds, and the sun and all the other things that are being used to explain why no model predicted the pause then we’ve only been measuring any of those in detail since… well I’m not even sure we’re doing it all now. If just land temperatures, CO2 and aerosols aren’t enough to roughly model climate then we can’t model climate. Since we have to adjust raw temperature measurements we don’t even have highly accurate starting conditions going back to the 60s. If ‘near enough’ was good enough for broadly accurate modelling then they should be able to model from about 1850. But they can’t.

And don’t even start on ocean temperatures. When you look at a modern plot of ocean temperatures as sampled by ARGO, can you even begin to consider the old methods were useful? If the spare energy is going into the oceans, what have the oceans done in the past?

Think how badly the election voting models failed. They just didn't have enough information about what influences the voters. They failed right up to the last minute. Think how wrong they would have been 1 year in advance, 5 years, 20 years, 60 years. Sure, they would have had a fifty-fifty chance of calling it for the Conservatives but could they have given any useful detail about percentages, the SNP, the Lib Dems or UKIP? You might dismiss this by claiming that people are random, but are they much more random than the climate?

May 24, 2015 at 9:45 AM | Unregistered CommenterTinyCO2

It would be interesting to discover the reasoning of any representative of BIG OIL who might have in some way paid Bishop Hill to be 'sceptical of climate science'? What common goal of the Bishop Hill gang would Big Oil be supporting? As far as I can see there is no common belief or political stance that unites us. We are a wholly disparate group of individuals who are united only by our disbelief that politicians and certain scientists can actually believe in CAGW.
Over the years some of our best people were seriously left wing and they argued just as hard as the rest that CAGW was a scam.
We constantly argue with each other about the science; and are not usually able to agree ^.^ Not least of these arguments was the epic thread by Rhoda asking for evidence about CO2 and CAGW.
Actually I think we do have something important in common (OK so we might not agree on this hehe); we are all independent thinkers who are not subject to fashion, brainwashing, political loyalties or indeed anything other than evidence and proof when considering information touted as fact.

May 24, 2015 at 11:13 AM | Registered CommenterDung

Micky H Corbett

Your argument is known as the "impossible standards" straw man.

It simplifies down to a claim that because you cannot measure climate parameters to the standards used in precision engineering, you cannot usefully measure them at all.

Unfortunately policy has to be made on the hoof by those who turn up (ie the politicians and Civil Servants) using the data available.

In practice the temperature data is probably as good as you are likely to get.

Consider accuracy.

Any competent engineer can design an automatic weather station capable of measuring temperature to 0.1C twice a day. This gives a sample of 730 measurements per year per station. The data for global temperature comes from 1250 of these high quality stations, giving a sample size n=266450 for the annual global means.

The accuracy of the mean increases in proportion to √n. Thus if n=1 you can give the measurement to the nearest 0.1C. If n=100 you can give a mean accurate to 0.01C. If n>266450 you can meaningfully write a global annual average temperature accurate to 0.002C.
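
As a sketch of that arithmetic (purely illustrative, and note that it assumes every individual error is independent and random - that assumption does all the work):

```python
# Illustrative only: the sqrt(n) argument for the standard error of a mean,
# valid ONLY if each measurement error is independent and random.
import numpy as np

rng = np.random.default_rng(1)
true_value = 14.0                        # a pretend global mean, deg C
n = 266450                               # the sample size quoted above
readings = true_value + rng.normal(0, 0.1, n)    # 0.1 C random error per reading

print(f"standard error of the mean, 0.1/sqrt(n): {0.1 / np.sqrt(n):.4f} C")
print(f"error of this sample's mean:             {abs(readings.mean() - true_value):.4f} C")
```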

Consider uncertainty.

Despite Martin A's sky dragon slayer approach to statistics, the uncertainty in the climate data can be calculated reliably.

The data follow a normal distribution. If you plot a frequency distribution, 95% of the individual measurements are less than two standard deviations from the mean.
An alternative way of saying it: you can be 95% confident that the temperature of the system you are measuring is within 2SD of the mean of your sample.
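
A quick check of the two-standard-deviation figure for a normal distribution (illustrative only):

```python
# Fraction of a normal distribution lying within two standard deviations of the mean.
from scipy.stats import norm
print(norm.cdf(2) - norm.cdf(-2))   # ~0.954
```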

Since all the groups calculating global temperatures use the same raw data it is not surprising that they quote similar uncertainties, of the order of +/- 0.1C.

Consider variation and the number of stations.

Variation in the temperature data comes from four sources. One is the 0.1C uncertainty in individual measurements. Another is the day-to-day variation at each station due to weather. Thirdly there is the seasonal variation at each station. Finally there is the variation between the climate at different stations.

Taking measurements throughout the year minimises the effect of the first three. To determine how many stations you need to compensate for the fourth, plot a graph: standard deviation on the Y axis and sample size (the number of stations) on the X axis.

At small sample sizes the SD is large because random sampling variations play a large part. As sample size increases the SD decreases. Above a critical sample size the graph flattens as the SD of the actual temperatures becomes dominant. This is the optimum number of stations. Any larger number gives little improvement.

For analysis of global temperature that optimum is the 1250 stations they use.

In summary.

The climate monitoring system has been designed using what look to me like engineering principles to give the best output that physical and mathematical limitations allow. This is the maximum quality of input likely to be available to the policy makers.

Refusing to make policy because you cannot have an impossible level of information is irrational.

May 24, 2015 at 11:58 AM | Unregistered CommenterEntropic man

I find it difficult to accept that the accuracy of a device can be improved by using more devices. If the accuracy of a thermometer is 0.1°C, then the average temperature reading of an array of such thermometers must still have an accuracy of 0.1°C. How can you be so sure that the errors in the thermometers are averaged out over that array? With what confidence can you say that they CANNOT all be reading 0.1°C high (or low)?

Why does this supposedly increased accuracy not apply to historical records? Most of those “homogenised” historical data I have seen have been adjusted downwards, often to below the lowest of the selection. Surely, all those stations within the homogenised area would have had a collective accuracy far greater than has been given credence, using your logic? But, no, the error seems to have increased, and, for reasons unknown (despite quite insistent enquiries as to those reasons), seems to have been assumed to have been on the high side.

Sorry, EM, but unless someone is prepared to support you with a reasonable argument, then I will continue to view your premise that, using thermometers accurate to 0.1°C, an accuracy of 0.002°C is possible as pure and utter baloney.
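
To put the worry in concrete terms, here is a small simulation (illustrative only, with invented numbers): averaging shrinks independent random errors, but a bias shared by every thermometer passes straight through to the average.

```python
# Illustrative only: averaging many readings reduces independent random error,
# but a systematic bias common to every instrument survives the average intact.
import numpy as np

rng = np.random.default_rng(2)
true_value = 14.0
n = 1250                                   # number of stations mentioned above

random_only = true_value + rng.normal(0, 0.1, n)          # independent 0.1 C errors
shared_bias = true_value + 0.1 + rng.normal(0, 0.1, n)    # same errors plus a common +0.1 C offset

print(f"mean with random errors only: {random_only.mean():.3f} C")   # close to 14.00
print(f"mean with a shared bias:      {shared_bias.mean():.3f} C")   # close to 14.10 - bias untouched
```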

May 24, 2015 at 12:31 PM | Registered CommenterRadical Rodent

EM

Your last sentence demonstrates the gulf between us. For me the urgent need is to understand what is happening and why, but for you the urgent need is to make policy regardless of the quantity and quality of the information you possess.

There is no need to make a policy to deal with CAGW if CAGW does not exist. Adaptation has served the human race well thus far and I am so glad that you are not part of our decision making class ^.^

May 24, 2015 at 12:47 PM | Registered CommenterDung

EM

I did not say you cannot use the data. I'm saying you cannot justify the accuracy of the data, and hence the claims derived from models based on that data, by using assumptions and further models. I think you are the one creating the straw man.

As for your example of station data, you do not have any idea about the micro climate effects of each sensor. You don't have any characteristics that allow a user to adjust or correct the data for a single station based on physically measuring it. You assume that this is caught by using normal distributions and yet you have no idea if, say, a shift in a sensor reading is due to cutting back of vegetation (as Nullis in verba mentioned in a previous post). You don't have background calibrations, measurements of other parameters, or sets of data that show the evolution of the fine scale characteristics that you need to know if you are going to claim accuracies of 0.1 degrees. You simply don't know enough.

But yet you come here to tell me that it's okay, I can explain that away with theory. I say yes, you can scientifically, because you can tell me you've made the assumption that you can model uncertainties just like the MO does. But when it comes to using that data for trends that affect policy you need to stop using assumptions and add tolerance to demonstrate what you don't know, because you didn't actually measure it. Temperature sensors were designed to measure temperature to within a degree for weather purposes, not to be used for climatology. Otherwise they would have been designed to be more rigorous, wouldn't you think?

As for data, let the politicians decide what course of action to take with less accuracy. But don't pretend it's better than it is.

I'm not saying this because of my opinion. I'm basing this on my own real world experience and training. And you have ignored the main point: people are claiming that they know enough about the climate based on theory and maths, just like you have done above, and they think that because there are limitations to what can be measured, this isn't important.

Making policy on data that is actually much more uncertain than claimed is dangerous. What's more dangerous is that with some forethought this is obvious and yet we still have people trying to claim that it is enough just to claim this scientifically.

You argue like a believer, EM, not like a scientist or engineer. You have rarely considered that measurements are the heart of science and engineering, that they are the lifeblood of theory itself. It's an error I see from people who aren't professional scientists and engineers.

May 24, 2015 at 12:51 PM | Registered CommenterMicky H Corbett

Radical Rodent

Try here.

Read the section headed Means. If you want more follow the link to standard error.

Micky H Corbett

If I arrive at your garage complaining that my car is detonating and pinking, with the temperature gauge reading well above normal, you, as a mechanic, would diagnose that it was overheating.

If I arrive at Slartibartfast's with a planet showing retreating glaciers, shrinking land ice sheets on Greenland and Antarctica, rising sea levels, increasing ocean heat content, climate zones and biomes moving to higher latitudes and higher altitudes, rising tropopause, cooling stratosphere and rising surface temperatures, he would also diagnose overheating.

I don't believe in climate change. I look at the variety of changes in the Earth system, all consistent with increasing energy input, and regard it as probable. Since these changes have consequences for our civilisation, policies to respond are necessary.

What I do not do is take expertise developed in the controlled environment of engineering and try to apply it to an Earth system in which such specialised knowledge is not applicable. It is an error I see from people who are professionals and engineers, when operating outside their own area of expertise.

May 24, 2015 at 1:37 PM | Unregistered CommenterEntropic man

EM

Data is data. How you get that data and how much you know about that data will determine what relationships and patterns you can derive from it.

If you look at changes in climate and propose mechanisms that would require a certain accuracy to be verified but you don't have it then all you are doing is speculating. Speculating is dangerous when people's lives are at risk. You don't actually know why the climate changes.

The reason why a mechanic would think your car is overheating is that it is a well characterised process and has been observed many times. Yet I'd expect him or her to check the car thoroughly anyway just in case it was something else. If they couldn't do that then the fix might not work or might make the car even more dangerous. What do you do then? I've had the experience with my own car. They couldn't stop the thing from stalling in 1st gear. After many investigations it turned out it was a suction valve and not the fuel pump, which it normally is 90% of the time.

There's a question I was asked in an entrance test for a safety critical job which demonstrates the difference. I commented on it before on another discussion thread but I'll say it again, with a slight change that doesn't actually affect the answer:

A farm produces units of wheat each month. Here are the values for each year:

Year 1 - July - 20000 units
Year 2 - July - 25000 units
Year 3 - July - 30000 units
Year 4 - July - 35000 units
Year 5 - July - 40000 units

Estimate January's units for Year 6?

A) 42500 units
B) 41000 units
C) 43000 units
D) don't know

The answer is D.

Scientifically I could estimate A assuming the obvious linear relationship. Or it might be B as winter produces less yield.
But for engineering purposes the answer is don't know because the only reliable data you have is July. To make any other choice requires an unverifiable assumption.

If I really had to make a choice with any assumption it would be that January lies somewhere between 40000 and 45000 assuming that July's trend is accurate. But my first port of call would be don't know not because of the data per se but because of the use of the data.
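
For what it's worth, a quick sketch of why only July is pinned down (illustrative only):

```python
# Illustrative only: a straight-line fit through the July figures extrapolates
# July of year 6 well enough, but says nothing defensible about January,
# because the data contain no winter months and the seasonality is unknown.
import numpy as np

years = np.array([1, 2, 3, 4, 5])
july_units = np.array([20000, 25000, 30000, 35000, 40000])

slope, intercept = np.polyfit(years, july_units, 1)
print(f"extrapolated July, year 6: {slope * 6 + intercept:.0f} units")   # 45000 on the linear assumption
# January of year 6 could be anywhere; "don't know" is the only answer the data support.
```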

May 24, 2015 at 2:14 PM | Registered CommenterMicky H Corbett

Micky H Corbett

Given your question I would also have chosen D.

"The reason why a mechanic would think your car is overheating is that it is a well characterised process and had been observed many times."

This is where your professional experience is leading you astray. You work in a field where the physics is long established and you can reliably apply the equations. Your machinery then operates accordingly. Theory, practice and measurement agree to the limits of your measurement technology. Concepts such as standard deviation and probability are no longer necessary. You also have 80 years of prior jet engineering experience available.

The problem comes when you move into climate science.

Thanks to the USAF the physics of the CO2 greenhouse effect is well understood, even by your standards.

The problem is understanding how that interacts with a planet much more complex and less predictable than a Trent. Under those conditions your information can never be complete, only sufficient, and statistics is a necessary part of your analysis.

There is also no past experience to guide us, nobody having tried to double the CO2 concentration before.

May 24, 2015 at 3:36 PM | Unregistered CommenterEntropic man