## Discussion > Place Your Betts

... that should clearly read the SB equation, not S&B equations...

As far as I can tell that Essex paper doesn't even mention the Stefan-Boltzmann law or blackbody radiation. The average temperature of the Earth is - in this context - not the average of the temperature, but the temperature of a blackbody that would radiate, on average, as much energy per square metre per second as the Earth radiates on average. In other words, solve the following equation for T,

F = sigma T^4,

where F is the average outgoing flux radiated from the Earth, and sigma is the Stefan-Boltzmann constant.

Also, in this context, when we talk about temperature changes, we're talking about anomalies, which is really just the average of the change in temperature for each location, not the average of the temperature for each location. Again, you can relate this to the Stefan Boltzmann law because if there is a small change in temperature dT, then the change in flux is

dF = 4 sigma T^3 dT.
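As a quick illustrative check of both equations (the flux value of ~240 W/m² is an assumed typical figure for Earth's average outgoing flux, not a number taken from this thread):

```python
# Illustrative only: solve F = sigma*T^4 and dF = 4*sigma*T^3*dT.
# F ~ 240 W/m^2 is an assumed typical value for Earth's average
# outgoing flux, not a figure taken from this discussion.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(flux):
    """Blackbody temperature that radiates the given flux: T = (F/sigma)^(1/4)."""
    return (flux / SIGMA) ** 0.25

def flux_change(T, dT):
    """Linearised flux change for a small temperature change dT."""
    return 4 * SIGMA * T ** 3 * dT

T = effective_temperature(240.0)
print(T)                    # roughly 255 K, i.e. about -18 C
print(flux_change(T, 1.0))  # roughly 3.8 W/m^2 per kelvin of warming
```

Note the result is the familiar ~255 K effective temperature, well below any average of surface thermometer readings, which is exactly why "the temperature of a blackbody that radiates the same flux" and "the average of surface temperatures" are different quantities.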

To claim that "There is no global temperature" - as Essex, McKitrick and Andresen do - is utterly ridiculous.

Michael hart

As a counterweight to Essex you should read this, from Ars Technica

Ken,

Even when viewed from space at such a distance that the Earth appears as a point source, the radiation from it deviates from a black body distribution and so has no one temperature [6]. The earth is not at thermal equilibrium.

Is this just another of your cases of "Oh Lord, give me physics. But not yet."

ATTP, the thing about Stefan-Boltzmann and Celsius/Kelvin came from the Rabett's discussion in 2005 of Essex and McKitrick's "Taken by Storm", where the duo discuss their no-global-temperature ideas.

EM

You can't apply the standard error of the mean to measurements that don't themselves follow known distributions. If you haven't characterised the individual sets of measurements, you can only assume the measurement variation follows a normal distribution. That assumption is what lets the quoted uncertainty reduce.

This assumption has to be tested if data is to be used for policy, but it can be okay for science, as you normally publish your assumptions. Anyone who has had to have their metrology accountable for something knows this. The problem is that it hasn't been done.

For example, those who process satellite data are very clear about the limits of their uncertainty, as you can see if you read Mears et al. 2011 or the NOAA algorithm document.

"F = sigma T^4,

where F is the average outgoing flux radiated from the Earth, and sigma is the Stefan-Boltzmann constant."

Is it, or is it the average outgoing flux radiated per m^2 from the Earth?

I believe Essex et al. were trying to point out that a global mean temperature was meaningless because of what was being measured and how it was averaged. SoD had a post on it and put forward the following example:

"Now suppose that in 1999 the average annual temperatures are as follows:

Equatorial region: 30°C

Sub-tropics: 22°C, 22°C

Mid-latitude regions: 12°C, 12°C

Polar regions: 0°C, 0°C

So the “global mean surface temperature” = 14°C

Now in 2009 the new numbers are:

Equatorial region: 26°C

Sub-tropics: 20°C, 20°C

Mid-latitude regions: 12°C, 12°C

Polar regions: 5°C, 5°C

So the “global mean surface temperature” = 14.3°C – an increase of 0.3°C. The earth has heated up 0.3°C in 10 years!"
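SoD's arithmetic can be reproduced directly (a trivial sketch of the quoted example, treating each of the seven regions with equal weight as the quote does):

```python
# Reproducing SoD's toy example: a plain arithmetic mean of the seven
# regional annual temperatures quoted above (equal weights assumed).
t_1999 = [30, 22, 22, 12, 12, 0, 0]  # equator, sub-tropics x2, mid-lat x2, poles x2
t_2009 = [26, 20, 20, 12, 12, 5, 5]

mean_1999 = sum(t_1999) / len(t_1999)
mean_2009 = sum(t_2009) / len(t_2009)
print(round(mean_1999, 1), round(mean_2009, 1))  # 14.0 14.3
```

The mean rises by 0.3°C even though the equatorial region cooled by 4°C, which is the point of the example: a single averaged number hides the spatial pattern of the change.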

If you don't have enough measurements and you don't have a consistent measuring station and measuring technique and time of day, and you don't use the same period for calculating anomalies, in my view your output will be rubbish.

But I'm not a scientist, I'm an engineer, where probabilities really count and fuzzy logic doesn't help.

michael hart, unless you too are an astrophysicist, challenging ATTP on his own subject is a superb example of hubris. Something climate science "skeptics" seem to have in abundance.

Don't be a dipstick all your life, Raff. I learned that physics at high school before I went to University. It's nothing to do with "astro-physics" as you appear to believe.

michael, analyzing the spectra of stars and other bodies, black body spectra etc. must be bread-and-butter stuff to an astrophysicist, don't you think? Yet you and the economist you quote know better, it seems. You are like the folks here chanting "the climate system is a coupled non-linear chaotic system" as if they understand it. As I said, championship hubris. Who's winning amongst you?

And who cares whether the Earth has an average temperature anyway? Indices like GISS, HADCRUT or the "skeptic's" favourite, BEST, are of anomalies in the average surface temperature, not measures of "Earth temperature". Measuring Earth's spectrum from millions of miles away, as you, Essex and McKitrick want to, is of little consequence. And the Rabett in the link above (Feb 3, 2016 at 6:42 PM) makes it clear that using any mathematical average other than the arithmetic mean is nuts.

Raff, temperature, thermodynamics & blackbody radiation laws are the same on earth as they are in space.

Christopher Essex understands them, and much more besides, very well and was doing climate models many decades ago. You know even less about me, but are still attempting to appeal to higher authority (presumably because you won't or can't make an argument yourself).

What makes you think you are better than readers/commenters at BH when it comes to deciding who is competent and honest, and why should we accept your view?

michael, if Esses is so expert, why did he use Celsius instead of Kelvin in the calculations for the original of this graph http://photos1.blogger.com/blogger/4284/1095/1600/Graphic1.0.jpg

And why would he think it physically reasonable to argue that the average temperature of two drinks is better represented by the geometric mean than the arithmetic mean? See the Rabett's article if you don't understand (it is okay, I had to).

And please tell me why you or expert Essex care whether the Earth has an average temperature anyway, when that is not what the indices are measuring.

michael, if Esses [sic] is so expert, why did he use Celsius instead of Kelvin in the calculations for the original of this graph http://photos1.blogger.com/blogger/4284/1095/1600/Graphic1.0.jpg

Feb 4, 2016 at 12:47 PM | Unregistered CommenterRaff

Excuse me? If you don't like Celsius [in a graph], try adding 273.15 to it to get Kelvin.

If you had a point with that question, it is not obvious to me without further information. Sometimes it is necessary to use Kelvin in calculations, other times it may not be. I'm not yet convinced you know when such circumstances would apply.

I have to go out. Whether I go later to that blog you mention may depend on your reply. Along with aTTP, scroat, and the blog that cannot be named, I need to have a very good reason to visit. Some of them are educated enough to know better. Some of them just lie when they feel the occasion demands it.

I suspect the question you are really asking is "Why did the wabbett choose to disagree with Essex?". Only he would know that. I can't be held accountable for his dissemblance and evasion. As I said, I may look later, but the usual pattern is to pick on a wholly irrelevant point to argue over, probably in the hope that their audience won't be able to spot what they are doing. aTTP, your hero-du-jour, is particularly ~~adept~~ transparent at that.

Did you know he also teaches an Astrobiology course? That is the shortest straw in a science department, probably worthy of a discussion thread (not about him, but about how university science teachers have to pander to the lowest common denominator for financial gain. I actually feel sorry for him, but more for science.).

And please tell me why you or expert Essex care whether the Earth has an average temperature anyway, when that is not what the indices are measuring.

That one is easy. I can help you there. It is because the "average temperature of the earth", despite probably being a scientifically invalid concept, is the metric that the environmental activists and alarmists decided was something they might be able to use to change global politics and economics by frightening people with its allegedly high level and rapid changes.

I also note you have not answered my question about why you consider yourself scientifically superior to BH readers.

michael, if you can't bear to read the Rabett, look at figure 1 of Taken By Storm briefing from E&M (http://www.uoguelph.ca/~rmckitri/research/TBSbriefing.pdf) and do some sums yourself:

In the graph, the 'radiation' average of water at 2C and coffee at 33C starts declining at somewhere around 28C. Basically they have added the T^4 for 33C and 2C, divided by two and taken the 4th root, giving just under 28C. Do it yourself, it is easy enough. They should have added T^4 for 275K and 306K, divided by two and taken the 4th root, giving 291.7K or 18.7C. The Rabett goes further to show that the arithmetic mean is the only average that makes physical sense.
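The sums described above can be reproduced in a few lines (using 275 K and 306 K with a 273 offset, as the comment does):

```python
# The "radiative average" of two temperatures: the fourth root of the
# mean of T^4, computed in Celsius (as in the E&M graph) and in Kelvin.
def radiative_average(t1, t2):
    return ((t1 ** 4 + t2 ** 4) / 2) ** 0.25

celsius = radiative_average(2.0, 33.0)    # just under 28, as in the graph
kelvin = radiative_average(275.0, 306.0)  # ~291.7 K, i.e. ~18.7 C
print(celsius, kelvin - 273.0)
```

Doing the sum in Celsius lets the cold drink contribute almost nothing (2^4 is negligible next to 33^4), which is why the Celsius "radiative average" lands near 28 instead of the physically meaningful 18.7.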

The average temperature used by scientists (and activists) is not global temperature but global surface temperature. So E&M riffing on the theme of viewing Earth from millions of miles away and Planck spectra is irrelevant and will impress only people like you, who don't seem to understand what is being discussed.

I don't know whether I am "scientifically superior to BH readers" in general, but if you continue on this rather stupid line of reasoning, I think I know where I stand relative to you.

Micky H Corbett

The standard deviation of the mean scales as 1/√n, a power law relating the precision of the mean to the sample size. I do not think it matters whether the sample is Gaussian or not.

Is climate data Gaussian? It depends on your variable.

Humidity, air pressure and temperature are Gaussian: detrend to remove seasonal variation and long-term trends and you get an approximately normal distribution.

Windspeed shows a skewed normal distribution because it cannot go below zero.

Rainfall is not Gaussian.
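A Monte Carlo sketch of the 1/√n claim for a non-Gaussian case (the exponential distribution and the trial counts here are arbitrary illustrative choices, not anything from the thread):

```python
# Monte Carlo sketch: the standard error of the mean shrinks like
# 1/sqrt(n) even for a skewed, non-Gaussian parent distribution
# (exponential here; distribution and trial counts are arbitrary).
import random

random.seed(0)

def se_of_mean(n, trials=2000):
    """Empirical standard deviation of the mean of n exponential draws."""
    means = [sum(random.expovariate(1.0) for _ in range(n)) / n
             for _ in range(trials)]
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

# Quadrupling n should roughly halve the standard error.
ratio = se_of_mean(25) / se_of_mean(100)
print(ratio)  # close to 2
```

This only demonstrates the independent-draws case; as the later comments in this thread point out, the scaling breaks down when measurement errors are correlated.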

EM

I see you missed the point. You just described assumed distributions, which is why they can be detrended. What I'm talking about is that if the original uncertainty in the process of measurement cannot be determined, or hasn't been determined, the error cannot be reduced without some assumptions. Then it's a matter of deciding whether the assumptions are reasonable, and then taking responsibility to reduce and understand them if your data is to be used in reality.

So if a process produces an uncertainty of ±0.5°C and nothing can be done to reduce that, then that's the limit of the uncertainty. Taking 10 similar measurements doesn't reduce that uncertainty. This type of thing often happens with measurements, not because of the instrument but because of the process of using the instrument. I've said before, you get things like discontinuous drifts where, unbeknownst to you, the sensor decides to read high for 3 months or 3 years and you only find out later. So you err on the side of caution and use what you know, with higher tolerances applied.

This is a fundamental issue with experimental science and metrology. It's also common in engineering as you have to balance budget and responsibility.

For example, the famous bucket corrections can only contribute to the 0.1 degree uncertainty in the SST record if a normal distribution is assumed (it's all in the Met Office paper) and that their model of the process of taking a bucket measurement is dominated by the bucket cooling.

But there are other factors to include, such as how much characterisation of bucket cooling (or heating) has been done - not much, it turns out, in 20 years - so how can they say other effects don't matter?

How much of the process of taking a measurement - pre-conditioning of the thermometer, time before reading, etc. - has been characterised and shown not to produce comparable errors? Not much again, since that data is hard to get.

So quite rightly the Met Office assumed a certain amount and presented it but only because it's a **scientific** paper i.e. without accountability. However if you are to take this into the "real world" that means you must be careful when saying the temperature error can be reduced by using sampling methods since the sampling methods themselves rely on certain assumptions.

If you ask anyone who actually takes temperature measurements for a living they'll tell you not to bother quoting to that uncertainty. They'll say this probably because they have to be accountable.

Micky H Corbett

I am not sure you are clearly distinguishing between resolution and accuracy.

If I used a single mercury thermometer to measure the temperature of two adjacent liquids, the minimum difference I could detect is 1C. Using a single electronic thermometer I might detect a difference of 0.1C. That is resolution.

If the mercury thermometer has drifted out of calibration by -1C it would read 1C lower than the recently calibrated electronic thermometer for each measurement. That is accuracy, or lack of it.

Consider a large sample of temperature readings.

The precision of the mean is its resolution, which is improved by increasing the sample size. Every hundredfold increase in sample size yields a tenfold increase in resolution.

The accuracy of the mean is indicated by its confidence limits. These reflect both natural variability and the problems you describe. A carelessly taken sample will have larger confidence limits than a carefully taken sample.

Note that confidence limits are normally large in small samples. As the sample size increases, the confidence limits decrease until they reflect the variability of the actual data. Beyond that sample size confidence limits remain constant.

Thus the GISS 2015 global mean temperature is quoted as 14.87C+/-0.09C. The resolution of the mean is +/-0.01C and the accuracy of the mean is +/- 0.09C. The sample size is large enough that it accurately reflects the variability of the collection process.

You want to add an extra uncertainty factor on top of the confidence limits officially quoted. There is no need. Variations in the accuracy of individual measurements due to errors in calibration or technique are part of the variation which produces the confidence limits of the sample mean.

EM

You can only apply the sample of the mean if individual uncertainties follow known distributions. This is mathematical fact. Typically a normal distribution is chosen and very often is found.

If however you do not know the type of variation your measurement process produces you trade it off against what you call tolerance or resolution. But under no circumstances would you say that more measurements means less uncertainty.

You're arguing for theory against reality. If I had discrete units then yes, but often a process will be deemed repeatable to a certain uncertainty, and then that is the limit. Any more is conjecture.

Have you ever performed metrology experiments? And been accountable for them? If you had you would understand the nuance I'm getting at. The temperature uncertainty is an ideal case meant to reflect theory. Sadly people take this as fact.

*"You can only apply the sample of the mean if individual uncertainties follow known distributions. This is mathematical fact."*

Not quite. So long as the variance of the distribution is known to be finite, and errors are independent, you can take the sample mean of samples from *any* distribution and it will approach a normal distribution with standard deviation shrinking proportionally to 1/sqrt(n). That's the Central Limit Theorem.

The issue is *independence*. Measurements are never exactly independent, and when they're not, the variance of the average shrinks to the residual covariance between measurement errors, and no further. You can average a few imprecise measurements and expect to get an improvement in accuracy, but you can't generally do it millions of times and expect it to keep improving forever.

Similarly with the question of averaging a limited number of surface temperature measurements from a comparatively small number of locations. If the difference between the temperature changes and measurement errors at those locations and the global average are independent, then it works like a survey sample. You can ask a thousand people selected uniformly at random, and tell which way forty million people are going to vote. But if you only ask people in one part of the country (e.g. Scotland) you'll get the wrong answer, because what you're measuring is correlated with location. Similarly, if global temperature change, or mean measurement error, are correlated with location, then sampling a finite subset of locations will result in an ineradicable error, no matter how many sample points you average.
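The survey analogy can be sketched numerically (the population sizes and vote shares below are made-up illustrative figures, not real polling data):

```python
# Sketch of the survey analogy: a quantity correlated with location
# (made-up vote shares: 70% in "scotland", 45% elsewhere) cannot be
# estimated by sampling only one region, however large the sample.
import random

random.seed(1)

population = ([("scotland", 1)] * 700 + [("scotland", 0)] * 300
              + [("rest", 1)] * 4500 + [("rest", 0)] * 5500)
true_share = sum(v for _, v in population) / len(population)

scotland_only = [v for region, v in population if region == "scotland"]
biased = sum(random.choice(scotland_only) for _ in range(100_000)) / 100_000
uniform = sum(random.choice(population)[1] for _ in range(100_000)) / 100_000

print(true_share, biased, uniform)  # biased stays near 0.7, uniform near 0.47
```

Taking 100,000 samples from the one region does nothing to remove the bias, which is the claimed problem with a spatially uneven station network: more readings from the same places do not fix a sampling error that is correlated with location.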

The question of the Emperor of China's nose is also relevant here, if you know what that is.

On this particular question, statisticians trump astrophysicists. But regarding which average to use, it depends what you're using it for. Radiation from the top of the atmosphere isn't relevant to surface temperature measurements, and radiation from the surface isn't relevant to how the greenhouse effect works, anyway. The most appropriate measure is probably the total heat content of the Earth's oceans/atmosphere, since energy is a conserved quantity, and heat content is roughly linearly related to temperature - although you really ought to weight it by effective heat capacity at the relevant frequency if you're going to do that (which means the oceans dominate the land and air). However, even that isn't perfect.

Nor is it of any real use, without a validated statistical model of natural background variation. So the temperature rises - it doesn't mean anything if it was just random weather. And if you don't know how normal weather is supposed to behave, how can you possibly tell if any rise you see is abnormal?

On this particular question, statisticians trump astrophysicists.

Maybe, but if they use Celsius, instead of Kelvin, in the S-B equation then it's hard to take what they say seriously.

Nor is it of any real use, without a validated statistical model of natural background variation.

This sounds like the standard Keenan nonsense, which - based on what I've experienced - is mainly an illustration that even bright people can support utter garbage when it suits the narrative they'd like to promote.

*"This sounds like the standard Keenan nonsense, which - based on what I've experienced - is mainly an illustration that even bright people can support utter barbage (sic) when it suits the narrative they'd like to promote."*

Pot. Kettle.

If you don't know what the natural background variation is "supposed" to be, Ken, how the hell can you know whether what you are seeing is within that variation or not?

Common Sense 101?

Nullius

The CLT is time-independent. For the lifetime of a sensor you may never reach conditions sufficient for the CLT, so you can't assume it. You could get an idea by life testing.

Also, the issue isn't just independence. It's getting an idea of which conditions create the biggest variation. In the bucket correction example there are a few more moving parts than just dipping the thermometer after a certain time. So it may be that the uncertainty distribution of one part of the process swamps another.

Mike Jackson

The variability of a sample depends on sample size, accuracy of measurement and the natural variability.

The effect of sample size can be found by plotting total variability against sample size. At small sample sizes you get a slope; variability decreases as sample size increases. Then the slope flattens and variability becomes constant and independent of further increases in sample size. This indicates the optimum sample size, maximum accuracy for minimum effort.

The variability due to metrology is found by replication; taking repeat measurements under the same conditions. Once again, as you increase sample size variability trends towards the variability due to metrology.

Having accounted for sample size, the natural variability is the total variability minus the metrology variability.

*"Maybe, but if they use Celsius, instead of Kelvin, in the S-B equation then it's hard to take what they say seriously."*

That depends on whether they acknowledged and corrected the error, or doubled down and defended the error for a few decades as the climate science community has done. Short-centered PCA? R-squared validation failures? Mis-located tree ring series? "As far as I can see, this renders the station counts totally meaningless"? "In other words, what CRU usually do. It will allow bad databases to pass unnoticed, and good databases to become bad, but I really don't think people care enough to fix 'em"?

Anyone can make a silly mistake, but it takes a climate scientist to say: "Why should I make the data available to you, when your aim is to try and find something wrong with it?"

How can you take anyone seriously who says, or defends, things like that?

*"This sounds like the standard Keenan nonsense"*

It's not nonsense. It's standard mathematics - time series analysis as done in Box & Jenkins.

Although as your typical climate scientist says things like "It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it" about this sort of stuff, I don't suppose your attitude is much of a surprise, either. I mean, what sort of 'scientist' finds mathematics "ugly"?
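A minimal sketch of the background-variation point (the AR(1) parameters below are arbitrary; this is not Keenan's, Box & Jenkins's, or anyone's actual climate model):

```python
# Sketch: a driftless AR(1) series with strong autocorrelation produces
# excursions that can look like trends. Parameters are arbitrary; this
# is not a claim about any published climate analysis.
import random

random.seed(2)

def ar1(n, phi=0.95, sigma=0.1):
    """Simulate x[t] = phi*x[t-1] + Gaussian noise, starting at zero."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

series = ar1(150)
print(series[-1] - series[0])  # an apparent "trend" from pure noise
```

The series has no drift term at all, yet individual runs wander; without a model of this background behaviour there is no way to say whether an observed rise in a short record is anything more than persistence.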

---

*"For the lifetime of a sensor you may never reach conditions sufficient for CLT so you can't assume it."*

The conditions of the CLT are that the expectation and variance are finite, which is obviously satisfied on physical grounds, and independence.

The variance of a sum of random variables is just the sum of the elements of the covariance matrix. (Just multiply the random vector by the all-ones vector and use the linearity of expectations.) If the random variables are independent, then only the elements down the diagonal of the matrix are non-zero - they're just the variances of the individual variables. So the variance of a sum of n independent random variables is the sum of the variances, and the variance of the mean is the sum of the variances over n^2. That gives the 1/sqrt(n) dependence of the standard deviation.

But if the random variables are not independent and the off-diagonal elements of the covariance matrix are non-zero, then the variance of the mean tends towards the average of all these elements. It's the lack of exact independence that breaks any attempt to gain infinite accuracy by increasing sample size.
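A small sketch of that limit, assuming a common pairwise correlation rho between all measurement errors (an idealised covariance structure chosen for illustration, not a real instrument model):

```python
# Idealised sketch: n measurements with variance s2 and a common
# pairwise error covariance rho*s2 (an assumed structure, not a real
# instrument model). Summing the covariance matrix gives
#   Var(mean) = s2/n + (1 - 1/n) * rho * s2,
# which tends to rho*s2 as n grows, not to zero.
def var_of_mean(n, s2=1.0, rho=0.1):
    return s2 / n + (1 - 1 / n) * rho * s2

for n in (1, 10, 100, 10_000):
    print(n, var_of_mean(n))  # flattens out near rho*s2 = 0.1
```

With rho = 0.1 the uncertainty stops improving after roughly ten measurements: the 1/n term is quickly dominated by the covariance floor, which is the "no further" in the paragraph above.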

michael, thanks for the link. Now I know where Martin A got his story about global temperature fields and the idea that global average temperature is "bogus". And I thought it was original. But weren't E & M the ones who used Celsius in S&B equations, or was that someone else? And didn't the paper you reference and the related book get laughed at rather a lot?