Wednesday, Mar 23, 2011

Best keep quiet

Lots of fun is being had over the preliminary findings from the Berkeley Earth Surface Temperature project (BEST). The project team announced that they had run 2% of the data through their algorithm and found that the results showed warming.

A preliminary analysis of 2% of the Berkeley Earth dataset shows a global temperature trend that goes up and down with global cycles, and does so broadly in sync with the temperature records from other groups such as NOAA, NASA, and Hadley CRU. However, the preliminary analysis includes only a very small subset (2%) of randomly chosen data, and does not include any method for correcting for biases such as the urban heat island effect, the time of observation bias, etc. The Berkeley Earth team feels very strongly that no conclusions can yet be drawn from this preliminary analysis.

Scientist Ken Caldeira, who is not part of the team, then decided to completely ignore the caveats and declare that CRU, GISS et al were vindicated:

I have seen a copy of the Berkeley group’s draft paper, which of course would be expected to be revised before submission.

Their preliminary results sit right within the results of NOAA, NASA, and HadCRU, confirming that prior analyses were correct in every way that matters. Their results confirm the reality of global warming and support in all essential respects the historical temperature analyses of the NOAA, NASA, and HadCRU.

Their analysis supports the view that there is no fire behind the smokescreen put up by climate science deniers.

So despite the enormous caveats put out by the BEST team, Caldeira has gone right ahead and drawn conclusions that suit his political case, with his statement then repeated by the usual suspects like Romm.

Anthony Watts has the full story.

Steven Mosher, who has been to meet the BEST team, has posted some interesting comments at Keith Kloor's site:

Zeke and I made a visit to discuss a few things with [BEST]. So they shared some very preliminary charts. 2% stuff. And we discussed what stage they were at in the project. We met with a good number of the team. I volunteered to do some R coding. They work in Matlab. I also volunteered to pass a couple papers along that covered some issues. We exchanged some mails, primarily on what I needed to get working on the data formats.  Zeke wrote a nice piece on our visit over at Lucia’s. I was gunna write one, but Zeke did a complete job, so what’s the point.

Romm then writes a post reporting that Ken had read the draft paper. This made no sense to me given the briefing that Zeke and I had received. The full data set had not been run through the algorithm, especially one key part, a really cool part.. So the idea that there was a draft paper made no sense to me. maybe the methods part could be written, but hardly the conclusions. Anyways I did some checking and turns out that Ken was reading another paper the team was working on.. not the surface stations paper.  At least thats the best info I have. Now, Im told that Romm is foaming at the mouth.  sheesh. what a marroon.

More comment at Andy Russell's.


Reader Comments (54)

Why did they have to announce a statement that they had processed 2% of the raw data which showed warming? How utterly naive can you be that this would NOT go viral within nano-seconds after publication? More loads of white noise to the already overflowing blogs and comments (Climate etc!), where everybody has his own opinion... *puke*

Scientists can be so incredible naive....*sigh*

Mar 23, 2011 at 1:05 PM | Unregistered CommenterHoi Polloi

Looks like Caldeira hasn't learnt his lesson ;)

"I was drawn in by Romm and Al Gore’s assistant into critiquing other parts of the chapter. Rather than acting deliberately, I panicked and commented on things that I now wish I would have been silent on. It was obviously a mistake to let myself get drawn into this, and I learned a quick and hard lesson in public relations."

http://www.freakonomics.com/2009/10/18/global-warming-in-superfreakonomics-the-anatomy-of-a-smear/?apage=11

Mar 23, 2011 at 1:18 PM | Unregistered CommenterTS

The BEST initial findings statement contains this:

"A preliminary analysis of 2% of the Berkeley Earth dataset shows a global temperature trend that goes up and down with global cycles, and does so broadly in sync with the temperature records from other groups such as NOAA, NASA, and Hadley CRU. However, the preliminary analysis includes only a very small subset (2%) of randomly chosen data, and does not include any method for correcting for biases such as the urban heat island effect, the time of observation bias, etc."

Two things are puzzling about this: (1) Nowhere does this say that recent warming trends are egregious in size. We can infer only that they are "broadly in sync" with the other temperature reconstructions. In other words, they resemble these in the timing of the cycles, not in their amplitudes. (2) Watts claims that the 2% sample is from Japanese stations, but this is consistent neither with the statement above that the data were "randomly chosen", nor with the fact that they can talk of the analysis showing a "global temperature trend".

Mar 23, 2011 at 1:27 PM | Unregistered CommenterNicholas Hallam

@TS

Interesting.

Do you suppose Romm has stitched Caldeira up again?

Mar 23, 2011 at 1:45 PM | Unregistered CommenterGeckko

It is a pity BEST chose Japan. This country presents a huge can of worms for checking their methodologies.

Japan Meteorological Agency has charted the rise in the country's land temperatures over the last 100+ years using 17 weather stations. This rise in land temperature has been determined as 1.5C, double the global average. The stations are based at:

Abashiri - a village founded in 1872, now a city of 40,000 people.
Nemuro - a city founded in 1957, now with a population of 30,000
Suttsu - a town with a population of 3,000
Yamagata - a city with a population of 250,000
Ishinomaki - a city with a population of 160,000
Fushiki - once a village now part of city of Takaoka, 170,000.
Nagano - a city of 370,000 people.
Mito - founded as a city in 1889, now with a population of 260,000
Iida - formed as a city in 1937, now with a population of 100,000
Choshi - part of the Greater Tokyo area, population 70,000
Sakai - part of Osaka area, population 800,000
Hamada - a city of 70,000 people.
Hikone - a city of 110,000 people
Miyazaki - founded in 1924, population 400,000
Tadotsu - a town of 24,000 people
Naze - a city founded in 1946, population 40,000
Ishigakijima - founded in 1908, population 46,000

Japan over the past 100 years has seen a phenomenal growth in population, industrialisation and urbanisation.

What exactly is being measured by Japan Meteorological Agency and now by BEST????

In trying to put their BEST foot forward, BEST have set off on the wrong foot.

Japan is UHI hell.

Mar 23, 2011 at 2:21 PM | Unregistered CommenterMac

I suspect that the reaction to the BEST project is based on the fact that they are not part of "the community" and could easily be "off the reservation" or just possibly could provide some convincing arguments about the actual material used for the emperor's clothes.
As any good rugby player will tell you, it's always best to get your retaliation in first!
Mac, I agree with you about Japan. I suppose there could be reasons for choosing that because of the UHI situation and the larger than average temperature increase. It depends what it is in their systems that they're trying to check or standardise, maybe.

Mar 23, 2011 at 3:46 PM | Unregistered CommenterSam the Skeptic

Sam, Mac --
From the WUWT article: "They chose Japan because it made for a compact insular test case for the code, combining rural, urban, and airport stations under one organization’s output to keep it simple." If Japan has significant UHI effects, as seems probable, its data provide an excellent opportunity to see if the BEST algorithms can identify and compensate for those effects.

Mar 23, 2011 at 4:09 PM | Unregistered CommenterHaroldW

"[The data] does not include any method for correcting for biases such as the urban heat island effect, the time of observation bias, etc." and yet it broadly agrees with NOAA, NASA, and Hadley CRU.
Nuff said.

Mar 23, 2011 at 4:33 PM | Unregistered CommenterJames P

"Romm is foaming"

Plus ça change. :-)

Mar 23, 2011 at 4:34 PM | Unregistered CommenterJames P

We need to remember the initial BEST analysis for Japan does not include any method for correcting for biases such as the urban heat island effect (UHI).

Now that is a serious omission in the context of Japan, which is one of the most urbanised places on the planet. How would you define rural in Japan? Even on the relatively sparsely populated island of Hokkaido the weather stations all appear to be located near to or in towns and cities.

I would suggest that every bit of temperature data in Japan is contaminated by UHI. That introduces a great deal of uncertainty into any analysis. Untangling UHI in Japan would be a great deal of work. It is almost a schoolboy error.

Not a good start by BEST.

Mar 23, 2011 at 4:57 PM | Unregistered CommenterMac

Mac

Maybe they chose it as a worst case? There's something to be said for defining it.

Mar 23, 2011 at 5:35 PM | Unregistered CommenterJames P

Why would the satellite data on tropospheric warming be wrong?

No UHI at 14,000 feet:

http://woodfortrees.org/plot/uah/plot/uah/trend/plot/rss/plot/rss/trend

It's not an argument about whether or not it is warming. It is. The argument is (or should be) about whether the rate of change supports the consensus value for climate sensitivity to CO2.

Despite vociferous claims to the contrary by the convinced, the observations show less warming than projected.

Hence Kevin Trenberth's concern over the whereabouts of the 'missing energy'.

Mar 23, 2011 at 5:42 PM | Unregistered CommenterBBD

Climate science is going nowhere.

Just imagine if Kepler had tried to reduce the solar system to a single homogenised average number of planets.

Mar 23, 2011 at 6:26 PM | Unregistered CommenterJack Hughes

What exactly is a "climate science denier"? What do they deny? and Where can I meet one?

Mar 23, 2011 at 6:27 PM | Unregistered CommenterSteve

A BEST type project is needed if we are going to have an historical temperature record that is longer than 30 years. Willis Eschenbach's latest post at WUWT provides a good summary of what the BEST folks are trying to do. It is certainly not perfect but the key players on the BEST project (e.g., Richard A Muller, Judith Curry) seem committed to transparency and integrity.

Romm is simply a pot banger and deserves attention only insofar as he has political connections to the more extreme parts of the Obama Administration.

Mar 23, 2011 at 6:56 PM | Unregistered Commenterbernie

Steve asks

What exactly is a "climate science denier"? What do they deny? and Where can I meet one?

I hear that you can find them on the Internet.

Mar 23, 2011 at 7:00 PM | Unregistered CommenterBBD

I believe they intend to do something with step-changes in data sets. Early days yet.

Mar 23, 2011 at 8:52 PM | Unregistered Commentersimpleseekeraftertruth

I understand that the present BEST project is only looking at land based stations. It is proposed to follow on with a sea surface temps project. I can’t find the ref but I saw a statement, I think at JC’s Climate Etc that funding is not yet in place for the extension into SST.

If correct this could create an issue of interpretation. SST is the major part of Global Surface Temperature and many questions have been raised with regard to the handling of SST data.

It is quite a stretch to think that 2% of land based stations can be an indication of anything Global.

Time will tell. Maybe it would be “best” if all was kept quiet until BEST produce a full Global product including SST?

Mar 23, 2011 at 9:28 PM | Unregistered CommenterGreen Sand

I agree with Hoi Polloi. This is naive.

But I can see where James P and Harold W might be coming from --- if they are trying to "suck in" the AGW activists by letting them think that it is starting to look like everything is OK, but based on Japan, which should be very biased by UHI, and then later showing this up with data from other areas less affected. If this is their game then I think they are stupid. Game-playing should not be part of this effort.

So as I said, I agree with Hoi Polloi.

Mar 23, 2011 at 9:49 PM | Unregistered CommenterRoss

"Why did they have to announce a statement that they had processed 2% of the raw data which showed warming? ...How utterly naive can you be that this would NOT go viral within nano-seconds after publication? ...*puke* Scientists can be so incredible naive....*sigh* -- Hoi Polloi

Transparency can be easily mistaken for premature ejaculation.

Mar 23, 2011 at 9:52 PM | Unregistered Commenterjorgekafkazar

The way that Romm has reacted is extraordinary. First he disses the team in detail for their contacts with everyone including Genghis Khan... and now that they might give findings that are within the accepted bounds, he welcomes them... He is a moron of the worst kind. He makes Blair, Campbell, Muir Russell seem honest.

Mar 23, 2011 at 10:39 PM | Unregistered Commenterdiogenes

It's a good lesson for all of us, diogenes, whenever we are tempted to play the man rather than the ball (and I have done it often enough myself), that it is easy to make oneself look very stupid as Romm has just done. Oops I did it again ;)

Mar 23, 2011 at 10:52 PM | Unregistered CommenterDavid S

Scientists can be so incredible naive....*sigh*

Sorry, I don't think the scientists did one single thing wrong here. Did they issue a statement that contained 100% truthful information with important caveats about the incompleteness of the data? Yes.

So what they are at fault for is giving a status update on the project? Really? You fault them for providing factual information?

There are 2 types of open and transparent.
1. document everything and after you finish, allow everyone to have all the info you gathered
2. let everyone have the info you have, when you have it and far before you finish.

#1 has the release of info and claims made well before others can dissect the software and data. #2 allows you to show that those making big claims about what the project has done have the refutations to those big claims right in the original statement.

If you want a science that is truly open and transparent throughout the entire process, then get used to such things from Romm (and friends) and do not fault the scientists for the actions or reactions of fools.

Mar 23, 2011 at 10:53 PM | Unregistered CommenterStilgar

Stilgar


Agreed.

Mar 23, 2011 at 11:03 PM | Unregistered CommenterBBD

Ross (9:49 PM)-
Just to clarify, I was not meaning to suggest that using Japan was part of any "game-playing" by the BEST team. As you say, playing games is self-destructive. I think the composition of the BEST team is the best assurance that this will be an honest effort to distill the information from the available data, rather than torturing the data to coerce support of a preconceived conclusion.

The team need a relatively small dataset to experiment with various algorithms intended to segregate "real" and "artificial" (e.g. UHI) effects. It seems that Japan provides a challenging temperature dataset in that it almost certainly has a significant UHI effect. At the same time, it does not present some of the difficulties of the entire dataset, being of (presumably) fairly uniform measurement technique, frequency and quality. I suspect that, as they proceed, the BEST team will choose other subsets with specific attributes, in order to select algorithms to compensate for other variations in measurements, such as observation time. [However, I suspect they won't be disclosing progress to other "friends of Joe"! ]

Mar 24, 2011 at 12:15 AM | Unregistered CommenterHaroldW

None of the surface air temperature analyses to date take instrumental systematic error into account. It's present in the air temperature data produced at each and every surface climate station, due primarily to the effects of wind speed, solar loading, and local albedo, all of which are time-wise variable.

The data are corrupted with unknown error right from the start. If BEST doesn't address that, their results will be no more credible than those of CRU or GISS. If BEST does address systematic error, I'd expect them to conclude that the 20th century surface air temperature trend is unknowable.

For further analysis, see here (0.9 MB pdf download).

Mar 24, 2011 at 2:24 AM | Unregistered CommenterPat Frank

If you look at this graph comparing Sunshine Hours in Japan vs temperature in China, there are no surprises.

http://tallbloke.wordpress.com/2010/06/21/willie-soon-brings-sunshine-to-the-debate-on-solar-climate-link/

Sunshine, the #1 driver of climate, is up 20% since 1880.

They cleaned the air!

Japan's air used to be a filthy mess because of industrialization and heating fuels.

Mar 24, 2011 at 2:46 AM | Unregistered CommenterBruce

“Changes in Japanese sunshine duration during the 20th century correlate with the Northern Hemisphere temperature anomaly”

Stanhill,G., and Cohen, S. (2008). Solar radiation changes in Japan during the 20th century: evidence from sunshine duration measurements. Journal of the Meteorological Society of Japan 86(1):57-67

Mar 24, 2011 at 2:46 AM | Unregistered CommenterBruce

A copy of the paper on Japan's change in Sunshine Hours.

http://www.jstage.jst.go.jp/article/jmsj/86/1/57/_pdf

If BEST wanted a good control, they should have picked a country with no change in sunshine hours.

Instead it looks like they picked one with a large increase in Sunshine Hours.

The fix is in.

Mar 24, 2011 at 2:51 AM | Unregistered CommenterBruce

Point taken Harold W.

In response to Stilgar -- fair enough to have transparency etc., but to release results after looking at only 2% of the data will, I think, just lead to confusion at best. This is especially so if they are doing what Harold W says -- experimenting with algorithms on a "sample" of the data.

Mar 24, 2011 at 3:17 AM | Unregistered CommenterRoss

Not just 2% of the data, 2% of the data RANDOMLY CHOSEN.

So all those neat little octette areas that other models try to force their data into can't be compared with the BEST results so far. I doubt 2% at random would give enough samples in enough areas to form an octette.

Unless you use the GISS method of 1 station for every 750 miles, and assume that represents reality over the whole area, regardless of features like lakes, mountains and seashores.

Mar 24, 2011 at 3:49 AM | Unregistered CommenterTW in the USA

Thanks for the clear synopsis, Bish. Few blogs manage to be both accurate and concise/easily understood. I've been aware of this issue for a while, but now I understand the key points.

Mar 24, 2011 at 7:50 AM | Unregistered CommenterMichael larkin

Stilgar,

Spot on, open science how refreshing the wind of change is.

Mar 24, 2011 at 8:10 AM | Unregistered CommenterLord Beaverbrook

I think the problem here comes back to Muller: this episode seems to have started when he spoke about the preliminary BEST results in a public talk at Berkeley on 19th March. This appears to have been the trigger for Caldeira to send the email to Climate Progress and for the BEST Findings page to be updated.

BEST is a project that doesn't really need someone hyping the results before they've even been completed. I hope that this is the last we hear from them before the full results have been published.

On the random data/Japan thing, I've not heard it mentioned anywhere but WUWT and Watts doesn't seem that open about why he brought Japan up. Maybe there were two samples - one where the homogeneity routine was tested (Japan might be a good place for this) and a more general, random one (that Caldeira has commented on).

Mar 24, 2011 at 8:52 AM | Unregistered CommenterAndy Russell

I agree with BBD yesterday at 5:42pm. The evidence from satellites that there's been some heating in the last 30 years or so is pretty much copper-bottomed. And the evidence from surface temperature records that there's been some warming for the last 100 or so years is also pretty solid. UHI will most likely have contaminated the surface records a bit, but it does not look as though that is a major factor. I expect the BEST project to come up with a trend that is not all that different from CRU or GISS.

With respect to people like Pat Frank at 2:24 AM, I would suggest to them that averaging over many measurements tends to remove errors. If the thermometer in one weather station tends to 'read high' it will read high next month, and next year also. So that does not affect the anomaly - and if its error changes with time, then most likely another one somewhere else will have a changing error in the opposite direction. Some people also claim that you can't define an average global temperature. Again, I think that it would be best not to make such claims: you can always take an average. And if you take an average of an anomaly, you tend to get a sensible result. Measuring global temperature anomalies is not orders of magnitude harder than e.g. measuring the UK's GDP. We know you can't do that exactly, but equally you can do it about right.

So this story most likely does not matter all that much - it's most interesting with respect to Romm's frothing at the mouth about it.

Again following BBD, a more fruitful area for sceptics is to point out that there isn't really as much warming as models predict - especially recently - and to point out that proxy-based temperature reconstructions going back 1000 years are in tatters, with error bars way larger than the supposed changes over that time, so that we really can't claim with any confidence at all that it is warmer now than it was in the Middle Ages. It might be warmer now, it might be cooler now - we don't really know with enough accuracy to say.

Mar 24, 2011 at 9:16 AM | Unregistered Commenterj
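[Editor's note: j's point above, that a constant per-station bias drops out when each station's anomaly is taken against its own baseline, can be checked with a short simulation. All numbers below are hypothetical and purely illustrative, not real station data:]

```python
import random

random.seed(0)

n_stations, n_years = 200, 50
# Hypothetical "true" climate: a steady 0.02 C/yr warming trend.
true_temps = [10.0 + 0.02 * year for year in range(n_years)]
# Each station reads consistently high or low by a fixed offset.
biases = [random.uniform(-2.0, 2.0) for _ in range(n_stations)]

# Raw station records: truth plus that station's constant bias.
records = [[true_temps[y] + b for y in range(n_years)] for b in biases]

# Anomalies: subtract each station's own baseline (mean of its first
# 10 years), so a constant bias cancels exactly.
anomalies = []
for rec in records:
    base = sum(rec[:10]) / 10
    anomalies.append([t - base for t in rec])

mean_anomaly = [sum(a[y] for a in anomalies) / n_stations
                for y in range(n_years)]
recovered_trend = (mean_anomaly[-1] - mean_anomaly[0]) / (n_years - 1)
print(round(recovered_trend, 4))  # -> 0.02, the true trend, despite 2 C biases
```

The biases vanish identically because subtraction removes any time-constant offset; the next question, raised by Pat Frank below, is what happens when the error is not constant in time.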

BEST should have done two preliminary tests of their methodologies:

1. Purely random data, using continuous and discontinuous data sets.

2. A country like New Zealand (the same size as Japan) with a smaller number of weather stations and which we all know now hasn't been affected by global warming - the regional temperature has changed little over the past 100+ years.

Doing the above would have shown defects in the methodologies using random data and real data.

Instead they chose a country like Japan which has seen phenomenal changes in population, industrialisation and urbanisation over the past 100+ years. The impact of humanity on the temperature record would have been both profound and highly complex. The Japanese themselves have determined that changes in the regional temperature have been TWICE that of the global change. It is nigh impossible to disentangle the human element from the data.

To find as BEST have done that their initial efforts in Japan were in accord with other data-sets comes under the category "No Shit Sherlock!".

It also seems that BEST's best intentions are now suspect. Releasing the preliminary figures in this way and feeding them to known CAGW alarmists seems stupid at first, but now appears deliberate.

We are being had by BEST. These people are no different from the Team, they are behaving as their 2nd XI.

It is all rather disappointing, but so true to form.

Mar 24, 2011 at 9:30 AM | Unregistered CommenterMac

j

Thank you. I only wish that some others here would take a moment to consider the GATA change and its implications more carefully.

Reflexive scepticism isn't as useful or as effective as the considered variety.

Mar 24, 2011 at 11:58 AM | Unregistered CommenterBBD

j
Pat Frank has been systematically looking at (and publishing) on measurement error and uncertainty for some time. I think his point was that the errors are a function of time and therefore are less likely to be of the form you suggest.
As I said before, the value of pursuing the BEST project largely depends upon the importance of a long time series of the earth's heat content. If the instrumental errors are too large for a meaningful analysis of the trend, then they are of minimal value. QED.

Mar 24, 2011 at 12:14 PM | Unregistered Commenterbernie

What is important for BEST is not that its analysis produces a result that compares favourably with what has gone before, but that it produces a result they can have confidence in. That means at this stage the data as a number is not important, but how the data is handled is.

So for BEST to state, "A preliminary analysis of 2% of the Berkeley Earth dataset shows a global temperature trend that goes up and down with global cycles, and does so broadly in sync with the temperature records from other groups such as NOAA, NASA, and Hadley CRU." is entirely wrong.

We do not want another situation where the data is tortured until it gives us the result we want. We want to know if BEST methodologies are sound and inspire confidence.

So far, not so good. Picking out Japan was wrong. Publishing the numbers was wrong. Allowing the alarmists to hijack these numbers seems deliberate.

Mar 24, 2011 at 12:47 PM | Unregistered CommenterMac

bernie

And all this invalidates the clear warming trend from the satellite data how exactly?

Mar 24, 2011 at 1:05 PM | Unregistered CommenterBBD

BBD:
I am not questioning the warming record from the satellite data. My point has to do with the value of the BEST effort and the length of a temperature record. Clearly if a 30 year record is all that is needed for whatever attribution of AGW needs to be made, then the satellite record should suffice and the BEST effort is a waste of time and resources.

Mar 24, 2011 at 1:16 PM | Unregistered Commenterbernie

bernie

You say

As I said before the value of pursuing the BEST project largely depends upon the importance of a long time series of the earth's heat content. If the instrumental errors are too large for a meaningful analysis of the trend, then they are of minimal value. QED.

You cannot get even a vague estimate of the energy stored in the climate system from surface temperature alone. You need OHC for that.

So in your own terms, yes, BEST is a waste of time and resources.

Mar 24, 2011 at 1:44 PM | Unregistered CommenterBBD

Does anyone know the systematic error in the satellite tropospheric temperature record? Satellite sensors are calibrated before launch. What happens during orbit? Is there no bias in the record, a constant bias (removable by differencing), or a variable bias (not removable)? Does anyone know? We never see satellite tropospheric temperatures plotted with uncertainty bars. Are they also granted the false canonical purity that still generally graces the surface temperature record?

Satellite sea surface temperatures are calibrated against buoy SSTs. Recent studies of buoy SSTs show they're really no better than about (+/-)1C. That uncertainty translates right back into satellite SSTs and on into OHC.

It looks to me like the entire debate is everywhere located inside the grey region of unstated unacknowledged uncertainty. The debate is about physically meaningless numbers.

Mar 24, 2011 at 5:27 PM | Unregistered CommenterPat Frank

Pat Frank

You ask

Does anyone know the systematic error in the satellite tropospheric temperature record? Satellite sensors are calibrated before launch. What happens during orbit? Is there no bias in the record, a constant bias (removable by differencing), or a variable bias (not removable)? Does anyone know? We never see satellite tropospheric temperatures plotted with uncertainty bars. Are they also granted the false canonical purity that still generally graces the surface temperature record?

All questions I would put to that well-known climate alarmist Roy Spencer. I find the more recent good agreement between RSS and UAH suggestive of sound results - don't you?

Satellite sea surface temperatures are calibrated against buoy SSTs. Recent studies of buoy SSTs show they're really no better than about (+/-)1C. That uncertainty translates right back into satellite SSTs and on into OHC.

First, I have to admit I don't know which study or studies you are referring to re the uncertainty range of satellite measurements of SST cross-calibrated with buoys.

However, you are absolutely incorrect to conflate SST and OHC. They are entirely different measures of entirely different things. Nor has there ever been any attempt to calculate OHC directly from SST. XBTs and now ARGO provide the measurements for calculating OHC of the 2000m layer.

It looks to me like the entire debate is everywhere located inside the grey region of unstated unacknowledged uncertainty. The debate is about physically meaningless numbers.

There is more truth to this than the consensus admits. It has become a 'merchant of certainty', which is hardly best scientific practice, although it does suit a political agenda well.

Mar 24, 2011 at 6:24 PM | Unregistered CommenterBBD

Pat:
I am sure that John Christy and Roy Spencer would point you to the relevant literature. They must have gone through a lot to correct some of the early discrepancies and differences in their two estimates. Lucia at Blackboard may also be able to help.

Mar 24, 2011 at 8:26 PM | Unregistered Commenterbernie

J doesn't seem to understand the concepts of i.i.d. and, relatedly, stationarity. Pat Frank is one of the handful that does. First couple of weeks of lectures in any respectable course on random variables. Most of those spouting the "errors cancel" rhetoric do not understand how or why they should; they simply regurgitated what another posted, or hopped over to Wikipedia, in regards to the law of large numbers. Silly...

Mark

Mar 25, 2011 at 11:36 PM | Unregistered Commentermark t

Another thought about this

"We met with a good number of the team. I volunteered to do some R coding. They work in Matlab."

Matlab is a simulation tool, originally designed for mechanical control systems. The folks who made it are kind of Microsoft-ish; they like to buy competitors who have better features and integrate them into their product.

While not having worked in Matlab personally, I have been associated with two projects where the designers started their work in Matlab. The first used Matlab's auto code generation feature. The code worked (kind of), and was traceable, but was junk when it came to run time, which is what control systems are all about. We eventually scrapped 80% of what Matlab generated, and used human coders.

The second project, several years later with a different contractor, jumped in with Matlab. I warned them it would only give them a 70% solution at best. They didn't believe me. After 3 years of work, we are now almost 12 months behind, going through multiple test, analyse, and fix cycles. Matlab was dropped after 18 months because it is just too "clunky", and the contractor backtracked to having human coders do the work.

I have questions, from my experience, about any results BEST would get using Matlab. As there is no real-world "check" for climate data, it is just a better organized way to get GIGO if you aren't very, very careful. If they are using the CO2 records, which are very coarse with huge gaps and highly questionable, to show temp forcing, then it is a waste of time. Even with precise data in electro-mechanical systems, Matlab can't give a usable end product without humans doing a lot of the work.

It will be interesting to see the claims at the end, and the validation criteria.

Mar 26, 2011 at 3:30 AM | Unregistered CommenterTW in the USA

mark t

Which invalidates the satellite record how, exactly?

Mar 26, 2011 at 11:04 AM | Unregistered CommenterBBD

I posted this up early in the thread, but since nobody ever seems to bother to read the thread, here is a link to the RSS/UAH takes on the satellite data:

http://woodfortrees.org/plot/uah/plot/uah/trend/plot/rss/plot/rss/trend

Pat Frank attempts to raise doubts about the accuracy of the satellite data above, but whether it's bang-on or not, the general upward trend is clear.

As I said earlier, surface temperatures are a very poor gauge of what's really going on in terms of energy accumulation in the climate system.

As Pat Frank correctly pointed out, there are no really good measures. TOA energetic imbalance cannot be measured directly and accurately by satellites and OHC measurements are, frankly, a bit of a muddle and a mess at present.

Claims of substantial warming of the upper ocean layer depend on a reconstruction which shows an increase of 8*10^22 J between 2003 and 2005 (NODC) or the same between 2002 and 2004 (Lyman et al. 2010).

There is no physical mechanism that could deliver this much energy to the upper ocean layer in two years. Something's wrong somewhere.

Mar 26, 2011 at 11:15 AM | Unregistered CommenterBBD

BBD, you're right about OHC. I seemed to remember inferential OHC using SST and thermal diffusion models, but I was wrong about that.

For an evaluation of buoy SSTs see W. J. Emery et al. (2001), "Accuracy of in situ sea surface temperatures used to calibrate infrared satellite measurements", JGR 106, 2387-2405. Here's the journal abstract. See also, for another example.

I wasn't attempting "to raise doubts about the accuracy of the satellite data." I was and am merely asking the most basic question in science, namely, 'How do they know?' If satellite SSTs are calibrated against in situ buoy SSTs, then clearly satellite SSTs are no more accurate than buoy SSTs, even if they're more precise. I may look further into this, but emphasize that I don't have an agenda about raising doubts. I just want to know how people know what they declare, when it comes to surface temperature.

Bernie, I'm sure you're right. I remember when John Christy was put through all kinds of grief correcting his satellite air T data for orbital decay. All that effort imposed on him to apply a correction to bring them into better correspondence with the surface air T record, which somehow and at the same time remained unexamined and retained its canonical purity in the eyes of the greater climatology community. I have nothing but respect for John Christy's scientific integrity and Roy's.

Thanks Mark, but you're being too complimentary. :-) The problem with the "errors cancel" argument, of course, is that with systematic error it's not necessarily so. Somehow lots of people have been mesmerized by the Central Limit Theorem and the Law of Large Numbers, and just assume they both apply simultaneously to surface temperature measurements.

One of the curious things about systematic errors is that, since they derive from uncontrolled variables, they can trend in a way that looks just like a systematic physical process. I suspect incautious people might be easily fooled by that. Those who don't have a lot of experience trying to measure something, for example.

Mar 26, 2011 at 9:54 PM | Unregistered CommenterPat Frank
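[Editor's note: Pat Frank's closing point, that a systematic error which drifts with time can masquerade as a real physical trend and does not average away across stations, can likewise be illustrated with a toy simulation. The drift magnitudes here are invented for illustration only:]

```python
import random

random.seed(1)

n_stations, n_years = 100, 50
# No real warming at all: the true temperature is flat.
true_temp = 15.0

# Each station's systematic error drifts upward over time (imagine
# shelter degradation or encroaching urbanisation). The drifts share
# a common sign, so they do NOT cancel across stations.
drifts = [random.uniform(0.005, 0.03) for _ in range(n_stations)]  # C per year

mean_anom = []
for year in range(n_years):
    readings = [true_temp + d * year + random.gauss(0, 0.5) for d in drifts]
    mean_anom.append(sum(readings) / n_stations - true_temp)

apparent_trend = (mean_anom[-1] - mean_anom[0]) / (n_years - 1)
# The random noise averages away, but the shared drift survives,
# producing a spurious "warming" trend from a flat climate.
print(apparent_trend > 0.005)  # -> True
```

The Law of Large Numbers only removes the independent, zero-mean part of the error; a correlated, time-varying component passes straight through the averaging, which is the distinction at issue in the comments above.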
