Where there is harmony, let us create discord
My recent posts touching on statistical significance in the surface temperature records have prompted some interesting responses from upholders of the climate consensus, with the general theme being that Doug Keenan and I don't know what we are talking about.
This is odd, because as far as I can tell, everyone is in complete agreement.
To recap, Doug has put forward the position that claims that surface temperatures are doing something out of the ordinary are not supportable because the temperature records are too short to define what "the ordinary" is. In more technical language, he suggests that a statistically significant rise in temperatures cannot be demonstrated because we can't define a suitable statistical model at the present time. He points out that the statistical model that is sometimes used to make such claims (let's call it the standard model) is not supportable, showing that an alternative model can provide a much, much better approximation of the real world data. This is not to say that he thinks that his alternative model is the right one - merely that because it is so much better than the standard one, it is safe to conclude that the latter is failing to capture a great deal of the variation in the data. He thinks that defining a suitable model is tough, if not impossible, and the only alternative is therefore to use a physical model.
As I have also pointed out, the Met Office does not dispute any of this.
So, what has the reaction been? Well, avid twitterer "There's Physics", who I believe is called Anders and is associated with Skeptical Science, tweeted this:
Can @MetOffice clarify their position wrt statistical models - in a way that @aDissentient might understand?
A response from John Kennedy appeared shortly afterwards, which pointed to this statement, which addresses Doug Keenan's claims, noting that there are other models that give better results and suggesting that the analysis is therefore inconclusive. Kennedy drew particular attention to the following paragraph:
These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.
I think I'm right in saying that Doug Keenan would agree with all of this.
Anders has followed this up with a blog post, in which he says I don't understand the Met Office's position. It's a somewhat snide piece, but I think it does illuminate some of the issues. Take this for example:
Essentially – as I understand it – the Met Office’s statistical models are indeed, in some sense, inadequate.
Right. So we agree on that.
This, however, does not mean that there is a statistical model that is adequate.
We seem to agree on that too.
It means that there are no statistical models that are adequate.
Possibly. Certainly I think it's true to say that we haven't got one at the moment, which amounts to the same thing.
Then there's this:
[Statistical models] cannot – by themselves – tell you why a dataset has [certain] properties. For that you need to use the appropriate physics or chemistry. So, for the surface temperature dataset, we can ask the question are the temperatures higher today than they were in 1880? The answer, using a statistical model, is yes. However, if we want an answer to the question why are the temperatures higher today than they were in 1880, then there is no statistical model that – alone – can answer this question. You need to consider the physical processes that could drive this warming. The answer is that a dominant factor is anthropogenic forcings that are due to increased atmospheric greenhouse gas concentrations; a direct consequence of our own emissions.
Again, there is much to agree with here. If you want to understand why temperature has changed, you will indeed need a physical model, although whether current GCMs are up to the job is a moot point to say the least. (I'm not sure about Anders' idea of needing a statistical model to tell whether temperatures are higher today than in 1880 - as Matt Briggs is fond of pointing out, the way forward here is to subtract the measurement for 1880 from that for today - but that's beside the point).
All this harmony aside, I hope you will be able to see what is at the root of Anders's seeming need to disagree: he is asking different questions to the one posed at the top of this post. He wants to know why temperatures are changing, while I want to know if they are doing something out of the ordinary. I would posit that defining "the ordinary" for temperature records is not something that can be done using a GCM.
I think Anders' mistake is to assume that Doug is going down a "global warming isn't happening" path. In fact the thrust of his work has been to determine what the empirical evidence for global warming is - when people like Mark Walport say that it is clear that climate change is happening and that its impacts are evident, what scientific evidence is backing those statements up? I would suggest that anyone hearing Walport's words would assume that we had detected something out of "the ordinary" going on. But as we have seen, this is a question that we cannot answer at the present time. And if such statements are supported only by comparisons of observations to GCMs then I think words like "clear" and "evident" should not be used.
In my post above I said:
If you want to understand why temperature has changed, you will indeed need a physical model.
As I put it in a tweet to Anders, he and I are in glorious harmony.
He has just replied:
No, I really don't think we are. If you want to understand GW you need a physical model.
I laughed so much I got cramp in an intercostal.
Reader Comments (307)
ATTP
There's a lot of waffle on this thread so let's cut to the chase:
1) You can fit a statistical model to any data, whether one where the signal-to-noise ratio is high or low. The Scientific Method simply requires that you state your assumptions in doing so and that you reiterate these when drawing any conclusions. Or in concise terms: you always provide context.
2) From your blog post and what you are saying here you think that the radiative model for CO2 heating the Earth is correct and that this is the physical model you would use. However, as a long discussion post on this blog alludes to and as Clive Best's website often touches on, all we know for sure is that CO2 reduces outgoing radiation in line of sight.
What then happens to this redistribution of energy is first and foremost determined by the dynamics of the complete system, not just a component. This is blatantly obvious from first principles of physics. Also, to determine the effect that CO2 has, we must make sure we aren't extrapolating experiments in a box to the atmosphere, or for that matter assuming gases act like black bodies. We have to consider all components, how they couple and how they interact. This is very hard to do.
So what do we do? We try to test our theories, even bits of them, and we always make sure we aren't fooling ourselves. We do not, as was posted before, proceed from assertion.
Now I suspect you are a theorist as you tend to jump to conclusions and state stuff without characterisation. That's fine but it doesn't mean you are correct in your assertions.
Maybe you should consider the empirical "evidence" for your theories and also consider how that evidence was gathered (surprisingly, another part of the Scientific Method). You'll probably find it's not sufficient for your claims.
Which is basically what everyone on here is saying.
I think it might even be 97% but that's just a guess.
Micky,
1) Agreed. I'm not objecting to the statistical models that are being used; I'm criticising the conclusions that are being drawn from some of the statistical analysis.
2) Not really. All I'm really saying is that statistical models can only really be used to determine properties of your data (as you seem to agree). However, they cannot - alone - be used to understand your data. For that you need some kind of physical model. My only point is that using a statistical model to claim that we don't know what's causing global warming or whether it's happening or not (which is what this post and Doug Keenan appear to be suggesting) is logically inconsistent, since such models are unable - by themselves - to address such a question.
I don't really disagree with the rest of what you're saying (although I might dispute that I'm a theorist who simply jumps to conclusions, or that I don't consider the empirical evidence - but that's rather beside the point). This isn't about what's causing the warming specifically; it's about whether or not we can say anything about what's causing the warming if we only use statistical models. I would suggest that we can't, and I don't really see this as a particularly contentious issue.
ATTP - on the planet where I live the clouds are white, and as such have a high albedo, so they reflect a significant amount of the incoming solar radiation back to space. Hence surface temperatures are much cooler on cloudy days than sunny days, and also noticeably drop whenever a lonely cloud passes over the Sun. I believe that there is extensive meteorological evidence which supports this observation. On clear days in summer, I have noticed that when the Sun is high in the sky, it can even be so warm that I can often go outside without a jacket on. Maybe even just a t-shirt on some days. It's true that at night, if it is cloudy, the temperatures are usually milder than they are when there are clear skies. But not always, as much depends on the wind speed and direction, and the role played by the mass transport of milder air from other regions. (This factor is often neglected in favour of the more fashionable role attributed to clouds in the interception and reflection of upwards LWR.) Nevertheless, I agree that clouds have a positive feedback at night, and probably also at the poles and northern latitudes, where the strength of the Sun is not so significant, and albedo therefore not so much of a loss. But it seems fairly clear to me that the incoming solar radiation which falls on the tropical and mid latitudes, and manages to evade reflection by clouds, is of much greater magnitude than the escaping LW radiation on clear nights in higher latitudes. (I don't recall anyone ever saying to me on a cold night "go and sit down next to that rock which is still radiating all the warmth from the Sun when it was shining on it 5 hours ago".) If, as you and the IPCC review suggest, clouds have a net positive feedback, would the world not ultimately mist over, such that the Sun would never shine again? A kind of perpetual steam pudding? What planet do you live on?
p.s. it is lapogus, with an L, as in grouse.
lapogus,
Apologies for the error in the spelling of your name.
It's not me who's suggesting this. It's the scientific literature. Also, you seem to be suggesting that it would produce some runaway process that would ultimately lead to clouds everywhere. Why would you think that? If the feedback is small relative to the original forcing then this isn't the likely outcome.
That leaves us with a bit of a problem, doesn't it? If cloud cover is net positive, then whatever is counteracting the combined effect of increased CO2 and increased cloud cover over the last 17+ years is powerful indeed.
"My only point is that using a statistical model to claim that we don't know what's causing global warming or if its happening or not (which is what this post and Doug Keenan appear to be suggesting) is logically inconsisent since such models are unable - by themselves - to address such a question."
Yes, and that's exactly what the Bishop and Doug Keenan are saying - that all the people, climate scientists included, who have claimed positive evidence of global warming on the basis of trend+AR(1) models are wrong, since such purely statistical models are unable to address such a question (and if they were, using textbook time series techniques would point to a trendless ARIMA model as preferred).
You appear to agree with that, but then you appear to keep on objecting that physical models can do the job and therefore the evidence for AGW is solid, as if you thought we were all neglecting or ignoring that. We're not.
The problem is that there are NO VALIDATED PHYSICAL MODELS of the global climate with which to do such a thing. You've just got GCMs, which are not sufficiently constrained, are known not to reproduce observed climate in detail, and have anyway been implicitly tuned to the recent observations. GCMs are marvellous things for trying to understand stuff, but they can't be used for this purpose. Not yet.
Which doesn't leave a lot. The claim is often made that there's 'lots' of evidence, but people have difficulty actually pointing to it. I think the trouble is that people have said it so often that other people take it on trust, without checking. They assume it wouldn't have been said if it wasn't true. OK, then. But where IS it? Why would the Met Office have relied on invalid 'linear-trend+AR(1)' arguments if there were better ones easily available?
ATTP
A physical model has to be validated. Until then it is just a theoretical model, meaning it is no more valid than saying it is nature, as Doug is pointing out.
From reading your blog you are putting forward that it's CO2 and radiative processes, but that, as many on here are trying to say, is just a theoretical model. It is still an extrapolation of measured effects, and also has other assumptions.
That's the point. Just because you like the theory or it sounds "more" plausible doesn't mean it has any more value than a purely statistical model that does not assume to know anymore than variation.
So even though the Met Office say it's a physical model, it's not. It's just a type of statistical model with the assumption that CO2 produces a heating effect in the atmosphere.
You have to show this by experiment.
Micky,
Maybe we could discuss whether or not a physical model has to be validated at some other time, as that seems secondary to the discussion we're having. The issue here is not whether or not the physical model is valid, but whether or not you can use a statistical model - alone - to claim that we don't know the physical processes associated with a dataset (is global warming happening or not). If a statistical model is incapable of telling us something, then using a statistical model to argue that we don't understand something seems logically inconsistent.
I disagree. If I have a plausible physical model (and can show that no other known physical model is equally plausible) then that is surely preferable to a statistical model that has no physical underpinnings. In fact, I would argue that a statistical model with no physical underpinnings is not really a model.
Just out of interest, do you accept that the greenhouse effect is a consequence of radiatively active gases in our atmosphere, or are you of the "mass of the atmosphere" school of greenhouse theory?
"The issue here is not whether or not the physical model is valid, but whether or not you can use a statistical model - alone - to claim that we don't know the physical processes associated with a dataset (is global warming happening or not)"
You can't. We are agreed on that. Nobody is claiming that.
You can only do so if the statistical model is all the evidence there is. That's a separate claim, that has to be backed by other arguments and evidence, and has nothing at all to do with Doug's ARIMA-vs-AR(1) argument. We tend to assume people already know it, but if that's what you're objecting to, we can explore that. But nobody is claiming that the ARIMA-vs-AR(1) issue on its own says that. This may be no more than a misunderstanding of ambiguous language.
Nullius,
Then you are going to have to explain this to me
because that appears to be saying precisely what I'm suggesting is being said.
"Then you are going to have to explain this to me"
Do you recognise the difference between a statistical model and a purely statistical model?
To test significance, you need to know the statistical distribution of temperatures under each hypothesis. A statistical model is anything that calculates a distribution for a given set of conditions. A physical model can do so, too. For example, you can run a physics-based GCM many times, and observe the statistics of the output with and without ACO2. If the GCMs were known to reliably reproduce the behaviour of surface temperatures accurately, this would be a perfectly acceptable case of "a statistical model that would describe the normal behaviour of surface temperatures". However, this is extremely difficult to do, and to date, no GCM can do it. Neither can simpler physical models. Neither can purely statistical methods.
It's a hard problem. I don't blame climate scientists for not being able to do it. I don't even blame them for using what they've got to try to make a guesstimate as to whether it appears significant, as the best they can do. But I do criticise them for going around telling the world they've got rock solid evidence when they don't. You need a validated physical model of the natural background variation to do this, and we don't have one.
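To make the ensemble idea above concrete, here is a minimal sketch in Python of that kind of significance test, with a toy red-noise process standing in for a real GCM; the model, its coefficients, and the forcing value are all illustrative assumptions, not anyone's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_climate_run(years=134, forcing=0.0):
    """Toy stand-in for a model run: AR(1) 'red' noise plus an optional
    linear forcing. Purely illustrative - not a real GCM."""
    noise = np.zeros(years)
    for t in range(1, years):
        noise[t] = 0.6 * noise[t - 1] + rng.normal(0, 0.1)
    return noise + forcing * np.arange(years) / years

def trend(series):
    # Least-squares slope of the series against time
    return np.polyfit(np.arange(len(series)), series, 1)[0]

# Distribution of trends under the "no anthropogenic forcing" hypothesis
null_trends = np.array([trend(toy_climate_run()) for _ in range(1000)])

observed = trend(toy_climate_run(forcing=0.8))  # pretend this is the observed record
p_value = np.mean(null_trends >= observed)
print(f"observed trend: {observed:.4g}, p-value against unforced runs: {p_value:.3f}")
```

The mechanics of the test are the easy bit; the hard part, as the comment says, is having a model whose unforced runs can be trusted to span the real natural variability.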
Nullius,
I don't think that is what is being suggested in what I quoted. Have you actually read what Doug Keenan is suggesting? I'm not convinced that you have.
ATTP
How can you show that no other model is plausible if you haven't done the experiment? That's the point about assertion.
You can't. All you can show in physics and science is what your last experiment tells you, and, furthermore, that the method on which you based that experiment is consistent. So, for example, using temperature anomalies as the means to compare climate models: the resolution a priori does not justify any statement that your comparison is real. But it doesn't stop climate scientists.
As for what and how I think the greenhouse effect works, no, I don't believe in the power of back radiation for one. Simply because my experience with building space hardware and testing it shows that convection and conduction are the heavy lifters wrt heat transfer. Water vapour is definitely a large factor, as we can see moving from the tropics to the poles. There's also the intrinsic lapse rate due to oxygen and nitrogen, which suggests that water vapour actually smooths out temperature extremes on the planet. CO2 only has radiation as a means to transfer energy to the atmosphere, and it is quite useless in the desert. In fact at the poles it appears that CO2 emits more energy to space than the surface black body does. So there's a lot of work that needs to be done in understanding coupling effects, natural oscillations, broadband emission in the stratosphere due to ionised particles and more before we pin the tail on the CO2 donkey. If ever.
And certainly not before we use this to alter the economies of countries and all that other political nonsense.
Micky,
I didn't say that no other model is plausible. I said no other "known" physically motivated model. I'm not asserting that the current models are correct. I'm suggesting that we currently do not know of another physically plausible model.
If you really are suggesting that the greenhouse effect is not a consequence of radiatively active gases in our atmosphere (as you seem to be doing) then I truly am wasting my time.
"I don't think that is what is being suggested in what I quoted. Have you actually read what Doug Keenan is suggesting? I'm not convinced that you have."
I have. And I've had extensive discussions with Doug on here about what he means. I'm pretty sure we're in agreement. Although if Doug notices this exchange and is moved to confirm or deny, that would be handy.
"I didn't say that no other model is plausible. I said no other "known" physically motivated model."
Is it "plausible" that there could be such a model, not yet known?
"If you really are suggesting that the greenhouse effect is not a consequence of radiatively active gases in our atmosphere (as you seem to be doing) then I truly am wasting my time."
Not liking the back-radiation argument is not the same thing as not believing IR-opacity of the atmosphere plays a role. It's more complicated than it is usually portrayed as being, and it's not unreasonable for someone not trained in physics to be sceptical when there are so many bogus, conflicting, and confusing arguments about coming from the mainstream. How is the layman supposed to tell? Take somebody's word for it? But diversions into the whole greenhouse-backradiation-lapse-rate argument are discouraged on here because they tend to derail the entire thread.
Nullius,
Well, as I understand it, Doug's work has no physics/physical mechanisms at all. If he would like to put the effort into convincing me otherwise, I'd be happy for him to do so.
NiV
Agreed about derailing the thread. But as a point of order, I'm a PhD in Physics and also a trained rocket scientist - ion thrusters. So I am trained in physics, but more importantly in validation and verification techniques.
ATTP
When I say I don't believe in back radiation I mean the idea that this heats the surface, not that a radiation field isn't present. Basic thermodynamics would say that at the typical temperature of the atmosphere convection is the heavy lifter when it comes to heat transfer and, according to the least-energy principle, will dominate. It's a theoretical argument though, just like AGW. But that's for another time.
"Well, as I understand it, Doug's work has no physics/physical mechanisms at all."
That's right. They don't. But it's only *part* of the argument for that particular statement, and one that arose in a particular context.
The government said the rise in temperatures was "statistically significant". When asked how this was shown, they replied with the trend + AR(1) model. Doug pointed out that AR(1) was incorrect, and that if you followed the textbook methods you would be led to trendless ARIMA instead. It was a case of offering a choice between two implausible models and claiming that because the one with the trend was slightly less implausible, the significance of the trend had been proved. Trendless ARIMA is not being offered as a realistic model of the physics - it is only being used as a counterexample in a specific technical argument about whether trend+AR(1) is an appropriate model to use to demonstrate significance: to say, here's a possible alternative model that if we use the Met Office's own methods on it instead, seems to show that the trend is *not* significant. This is all about showing that the trend+AR(1) argument is bogus, nothing else.
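For anyone who wants to see the shape of that comparison, here is a sketch in Python using statsmodels; `annual_anomalies.txt` is a hypothetical input file of annual global anomalies you would supply yourself, and the code follows the textbook pattern rather than reproducing Doug's or the Met Office's exact calculation (note too that likelihoods of models with different orders of differencing need careful handling before being compared).

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical input: one annual global temperature anomaly per line
temps = np.loadtxt("annual_anomalies.txt")
t = np.arange(len(temps))

# Model A: linear trend with AR(1) errors (the "standard model")
trend_ar1 = sm.tsa.statespace.SARIMAX(
    temps, exog=sm.add_constant(t), order=(1, 0, 0)).fit(disp=False)

# Model B: trendless ARIMA(3,1,0) (the counterexample)
arima310 = sm.tsa.statespace.SARIMAX(temps, order=(3, 1, 0)).fit(disp=False)

# Higher log-likelihood / lower AIC indicates the better-fitting model
print(f"trend + AR(1): loglik = {trend_ar1.llf:.1f}, AIC = {trend_ar1.aic:.1f}")
print(f"ARIMA(3,1,0):  loglik = {arima310.llf:.1f}, AIC = {arima310.aic:.1f}")
```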
"I'm not sure there is a difference, but I'm not a huge fan of the term "back-radiation" myself"
There is a difference, and that's where we get the arguments. But it's best not to go there.
"Essentially, put more IR gases in the atmosphere and the amount of energy we radiate into space goes down. Therefore the amount of energy we receive will exceed the amount we lose and we will warm until we're back in radiative balance. That's basically it."
That's true so far as it goes, but it only leads to people asking why the amount of energy we radiate into space goes down. The reason for it is not obvious.
If you don't see why, try applying the same argument to the oceans. Water is transparent to visible and opaque to thermal IR, so it acts like a greenhouse gas. Sunlight enters, is absorbed, and re-radiates at longer wavelengths, but this is absorbed and re-radiated in all directions by the water above it, and generally blocked from escaping. It's a simple calculation to show that using the 'backradiation' mechanism this would heat water to thousands of degrees within a metre of the surface. So if you add water to the oceans, (think of it as a thin layer laid on top of the existing ocean,) do they radiate less to space and therefore warm up? Why not? (I'm not asking you to answer the question - just to think about it.)
I agree they don't and the atmosphere does, but it's not an 'obvious' bit of physics - and given that most people don't know how electricity works or why the sky is blue, it doesn't seem to me like not knowing this particular bit of rather obscure physics is any worse than any of the others, or renders someone incompetent to comment or hold an opinion. If we were required to know what we were talking about before speaking, we'd *all* have to shut up! :-)
There are people on *both* sides of the debate who don't know how it works. Is belief without understanding "scientific"? Is it not the case, as Feynman said, that "Science is the belief in the ignorance of experts"? Is not the proper position, if you don't know the physics yourself, to say "I don't know" rather than "I believe"? Opinions differ. I have my view, but there are many people who think belief without understanding is superior to disbelief.
> A statistical model is anything that calculates a distribution for a given set of conditions.
MattStat seems to go a bit further than that:
http://wmbriggs.com/blog/?p=8061
The harmony in the comment thread is an ode to love and joy, except perhaps for a dissenting voice.
> Nobody is claiming that [you can use a statistical model - alone - to claim that we don't know the physical processes associated with a dataset (is global warming happening or not)].
Besides that somebody, somewhere is offering a normative claim regarding the absence of a validated physical model of the natural background variation, here's Douglas:
http://bishophill.squarespace.com/blog/2013/5/27/met-office-admits-claims-of-significant-temperature-rise-unt.html
Since this looks like the main claim in Douglas' editorial, seeking harmony over what can or can't do.
Harmony has a strange genealogy.
Let's complete this sentence:
> Since this looks like the main claim in Douglas' editorial, seeking harmony over what can or can't do [...]
statistical models might be nice.
If Douglas could declare that he published his correspondence with Richard Muller without his permission, that would be nice too.
Willard,
"Might be".
> "Might be".
Indeed, dear Nullius, for here's a snippet from Douglas on that same thread at MattStat's:
http://wmbriggs.com/blog/?p=8061#comment-94300
Douglas' argument goes a bit beyond showing "that the trend+AR(1) argument is bogus" with an alternative, purely statistical model, a feat that does seem to underwhelm MattStat in another context.
"Douglas' argument goes a bit beyond showing "that the trend+AR(1) argument is bogus" with an alternative, purely statistical model, a feat that does seem to underwhelm MattStat in another context."
If he does either, you don't show it in what you quote.
How about a statistical analysis of land surface temperatures where each site is treated as a distinct microclimate? I have always been uncomfortable with the adjusting, anomalizing and homogenizing of land surface temperature readings in order to get global mean temperatures and trends. Years ago I came upon Richard Wakefield’s work on Canadian stations, in which he analyzed the trend longitudinally in each station, and then compared the trends. This approach respects the reality of distinct microclimates and reveals any more global patterns based upon similarities in the individual trends. It is actually the differences between microclimates that inform, so IMO averaging and homogenizing is the wrong way to go.
In Richard’s study he found that in most locations over the last 100 years, extreme Tmaxs (>+30C) were less frequent and extreme Tmins (<-20C) were less frequent. Monthly Tmax was in a mild lower trend, while Tmin was strongly trending higher, resulting in a warming monthly average in most locations. Also, winters were milder, springs earlier and autumns later. His conclusion: What's not to like?
Now I have found that in July 2011, Lubos Motl did a similar analysis of HADCRUT3. He worked with the raw data from 5000+ stations with an average history of 77 years. He calculated for each station the trend for each month of the year over the station lifetime. The results are revealing. The average station had a warming trend of +0.75C/century +/- 2.35C/century. That value is similar to other GMT calculations, but the variability shows how much homogenization there has been. In fact 30% of the 5000+ locations experienced cooling trends.
Conclusions:
"If the rate of the warming in the coming 77 years or so were analogous to the previous 77 years, a given place XY would still have a 30% probability that it will cool down – judging by the linear regression – in those future 77 years! However, it's also conceivable that the noise is so substantial and the sensitivity is so low that once the weather stations add 100 years to their record, 70% of them will actually show a cooling trend.
Isn't it remarkable? There is nothing "global" about the warming we have seen in the recent century or so. The warming vs cooling depends on the place (as well as the month, as I mentioned) and the warming places only have a 2-to-1 majority while the cooling places are a sizable minority.
Of course, if you calculate the change of the global mean temperature, you get a positive sign – you had to get one of the signs because the exact zero result is infinitely unlikely. But the actual change of the global mean temperature in the last 77 years (in average) is so tiny that the place-dependent noise still safely beats the "global warming trend", yielding an ambiguous sign of the temperature trend that depends on the place."
http://motls.blogspot.ca/2011/07/hadcrut3-30-of-stations-recorded.html
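A sketch of the per-station calculation being described, assuming a hypothetical `stations` dict mapping station IDs to (years, temps) arrays; this shows the shape of the analysis, not Wakefield's or Motl's actual code.

```python
import numpy as np

def station_trend(years, temps):
    """Least-squares linear trend for one station, in degC per century."""
    return np.polyfit(years, temps, 1)[0] * 100.0

def summarize(stations):
    # stations: hypothetical dict of station_id -> (years, temps) arrays
    trends = np.array([station_trend(y, t) for (y, t) in stations.values()])
    print(f"mean trend: {trends.mean():+.2f} C/century (sd {trends.std():.2f}); "
          f"{(trends < 0).mean():.0%} of stations show a cooling trend")
```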
> If he does either, you don't show it in what you quote.
Perhaps I should also have emphasized "People reasonably want to know the shade of grey for the observed data, i.e. whether the observed data lies within what would be expected due to natural variation" too. The only way for Douglas to circumvent MattStat's argument is to claim that the random walk is a plausible model of the empirical data. It seems to me that the random walk is thus a model of natural variability.
To claim that Douglas does pure statistical tests may have been a way to minimize Douglas' results for the sake of seeking harmony.
***
We can observe Douglas' gerrymandering in the original formulation in the WSJ:
http://www.informath.org/media/a41.htm
MattStat's point is that one can always find statistical models (perhaps even an infinite set of them) that would be better fits for the data, all statistically significant. Unless Douglas wishes to argue that the assumptions behind a random walk provide a plausible model of natural variability, the argument that a statistical model is unfounded if you can find another one with a better fit has no merit.
***
Doug McNeall may have been more diplomatic when he told Douglas in a private correspondence:
> To say that observational evidence is not “statistically valid” is probably more a comment on our statistical framework, than our knowledge of the climate. I think it is unfair to make the argument that there is no “statistically valid” evidence, without stating what statistical framework we are working under, and then going on and showing that the evidence for warming is not statistically valid. To further extend this and say that there is “no observational evidence” for global warming is stretching the point even further.
http://www.informath.org/apprise/a5700/b11.pdf
For now, I'm not even talking about the last point, which Douglas seems to have reiterated above. But I could also use that claim to show that Douglas implies a bit more than purely statistical models, whatever that expression could really mean.
Has Douglas released his response to this email?
***
Sometimes, I have the feeling that Douglas expects people won't read what he puts on his website and will simply gobble up his PR endeavors.
Anders @ Jul 4, 2014 at 11:47 AM
Another example of Anders agreeing but adding flannel and asserting he doesn't:
//
not banned yet,
both of which assert that after a long duration a random walk's deviation from its starting point is proportional to the square root of the duration.
That's not quite right, I think. What I think you've quoted is the rms (root mean square) distance. If you were to repeat a random walk process a large number of times, and plotted the position after n steps for each random walk, the distribution would be symmetrical about the starting position. The rms position would, however, be Sqrt(n).
//
Odd.
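For what it's worth, the sqrt(n) claim under discussion is easy to check numerically; here is a minimal sketch (pure illustration, with arbitrary sizes): the mean final position of many unit-step walks sits near zero, while the rms final position comes out close to sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(1)
n_steps, n_walks = 10_000, 1_000

# Each row is one random walk of +1/-1 steps; the final position is the row sum
finals = rng.choice([-1, 1], size=(n_walks, n_steps)).sum(axis=1)

print(f"mean final position: {finals.mean():+.2f} (distribution symmetric about 0)")
print(f"rms final position:  {np.sqrt((finals ** 2).mean()):.1f} "
      f"vs sqrt(n) = {np.sqrt(n_steps):.1f}")
```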
"The only way for Douglas to circumvent MattStat's argument is to claim that the random walk is a plausible model of the empirical data."
Doug isn't circumventing Matt's argument, he's using it. Doug is saying that the use of trend+AR(1) to demonstrate "significance" is bogus because trend+AR(1) is an arbitrary statistical model with no justification for it, and as Matt says, it's dead easy to pick models to give any conclusion you like. He demonstrates this by picking one, that happens to give the opposite conclusion.
I'll try an analogy. The UK government, on the advice of the Met Office, who are following the IPCC, says "6 is the biggest number evah!" When asked to justify this statement, they reply: "Let's compare the numbers 5 and 6. By using lots of maths that government ministers are unlikely to be able to follow, we show that 6 is bigger than 5. So 6 is the biggest. QED." Now Doug comes along and says "That's rubbish!" because, as Matt says, "You can always pick a bigger number." To prove this, Doug says "Instead of 5 let's take the number 7. When we do the same calculation the Met Office just did, we find that 6 is *not* bigger than 7. So 6 is not the biggest." This does *not* mean that Doug is saying 7 is the biggest number! He's just saying that 6 clearly isn't.
Matt said that if you want to know if temperature is rising, you just look at the temperatures, 'statistical significance' is not an issue for this question. Doug's reply was to the effect that it's not actually the question people are really interested in. People are legitimately interested in the separate question of whether the rise in temperatures is abnormal, for which you do need to know what "normal" means. Here, this has to be physics-based. As Matt says: "If we seek to understand this physics, it’s not likely that statistics will play much of role." and Doug replies "Yes, we agree." Nobody is asserting that the trendless ARIMA model is physics-based, only that it is no less physics-based than trend+AR(1).
As Matt says: "The exclusive, or lone, or only, or single, solitary, sole way to check whether any model is good is if it can skillfully predict new data, where “new” means as yet unknown to the model in any way - as in in any way. The reason skeptics exist is because no known model has been able to do this with temperatures past a couple of months ahead." This is exactly what I've been saying - models have to be validated to be able to answer this question, and none of them are.
And in particular, the trend+AR(1) model is not validated, which means the UK government's claims based on it that the rise has been shown to be "statistically significant" (i.e. abnormal) are not true. This is something I think we all agree on - me, the Bishop, Doug, Matt, the Met Office and even ATTP. The point ATTP seemed to be disagreeing with was that just because trend+AR(1) was unvalidated, that didn't mean all other physics-based models were, and Doug's ARIMA argument didn't show that. I agree. But I don't think it was intended to. The claim that none of the climate models are validated (or valid) is a *separate* claim relying on *separate* evidence, but which the Bishop (and Matt, and Doug) assumed everybody knew about.
But that aside, it's remarkable how much we *do* all agree. Do you think we could call that a "consensus"...? :-)
Nullius wrote: "And in particular, the trend+AR(1) model is not validated, which means the UK government's claims based on it that the rise has been shown to be "statistically significant" (i.e. abnormal) are not true."
Doug scored some scientific points against the Met Office, but we shouldn't exaggerate their importance. With some embarrassment, the Met Office has publicly withdrawn its assertion that a trend+AR(1) statistical model can be used to show that significant warming has occurred. They admit that GCMs are needed to show that observed warming is far greater than warming produced by unforced variability. The logical response should be to attack the validity of those GCMs; not keep yapping about statistical models. From a physics perspective, the only thing worse than a trend+AR(1) statistical model is a statistical model that behaves like a random walk. No matter how well such a model fits the 20th century, that model will be wrong for longer periods of time.
The real problem is that the GCMs the Met Office relies upon contain dozens of adjustable parameters that have been non-systematically tuned to match current climate and possibly tuned to produce a good match (intentionally or unintentionally) between historic and modeled temperature for the 20th century. Nic Lewis claims that these models rely too much on aerosol forcing to reduce the warming produced by anthropogenic GHGs. Any model tuned to match the historical record certainly can't be used to detect and attribute warming.
In the long run, it doesn't really matter whether 20th-century warming (which was poorly monitored prior to the satellite era) has or has not emerged with 90% or 95% certainty from the background of unforced climate variability. Even Lindzen's estimate for climate sensitivity (with slightly negative feedback) implies that anthropogenic GHGs minus aerosols contributed about 0.4 degC of warming over the past century. The important question is whether anthropogenic warming over the next century will be this small or 5-10 fold bigger (as GCMs project) if we don't dramatically limit emissions.
> Doug isn't circumventing Matt's argument, he's using it.
Sure, here's Matt's conclusion, again:
And here's Douglas' immediate response:
http://wmbriggs.com/blog/?p=8061#comment-94300
***
Before starting to pussyfoot on the conflicting conceptions of significance, validity, and plausibility, it might be wise to clarify the status of the documents on which this hurly burly rests.
Take for instance Frank's remark:
> The logical response should be to attack the validity of those GCMs; not keep yapping about statistical models.
This seems to relate to how Doug McNeall follows up the email we quoted earlier:
http://www.informath.org/apprise/a5700/b11.pdf
Just like with his correspondence with Richard Muller, Douglas has still to declare if he asked for permission before publishing it, and we still don't know what response Douglas made to McNeall's email.
"And here's Douglas' immediate response:"
Yup. I've already discussed this. But you need to read it in context. As I said: "Matt said that if you want to know if temperature is rising, you just look at the temperatures, 'statistical significance' is not an issue for this question. Doug's reply was to the effect that it's not actually the question people are really interested in." In other words, they disagree about what the question is, or should be, not the answer.
"The logical response should be to attack the validity of those GCMs; not keep yapping about statistical models."
Yes, exactly. The argument using the pure statistical models has been won, and the point agreed by everyone.
"The appropriate test is surely against our physical models, worked out from first principles, or against a physical-statistical model?"
If you can work out a complete physical model that is constrained by well-validated laws of physics to an exact solution, yes. Or alternatively, a validated but approximate physical model might do. I agree with this. But as the Bishop said, as Matt said, and as Frank now says, they can't and we don't have one. That's the problem.
Unless of course you'd like to point to one...?
"Douglas has still to declare if he asked for permission before publishing it"
Why should he need to? Did you ask Doug's permission before quoting from his comments?
At least Doug didn't quote bits of them out of context... ;-)
NiV
excellent exposition at Jul 5, 2014 at 6:19 PM - put in terms that virtually anyone could understand. As you say, the Met Office were wrong. I think one of the problems that we constantly come across is the failure of the establishment to be honest and open in admitting their mistakes. It's never a mea culpa, fair cop, got me bang to rights but always mealy-mouthed obfuscation - which is then regurgitated by the faithful. Your post is an object lesson in how to present the position clearly - as a scientist should. The fact that the establishment is incapable of such clarity is yet another indication that their aim is not to enlighten but to endarken. Thank you for your continued fight for science to prevail.
> The argument using the pure statistical models has been won.
Not at all, because Douglas gerrymanders on this issue.
If Douglas’ intuition makes him reject AR(1) as a plausible model, there’s no reason why it would make him accept a random walk as a plausible model for natural variability. Natural variability is not a statistical phenomenon. Plausibility implies some physics.
If Douglas only wanted to present a statistical case, it falters on MattStat’s argument that one can always find another model that is a better fit. This argument follows directly from what Ye Old Statistician observed at MattStat’s:
http://wmbriggs.com/blog/?p=8061#comment-94259
So watch the pea. Against MattStat’s argument that significance is insignificant, Douglas hand waves to “people” who “reasonably” tell him that “we need to model natural variability.” Against AT, Vaughan, Frank, and the Met Office, who hold that to model temperatures as a random walk is a bit farfetched (to say the very least), the pea switches back under Nullius’, Douglas’ (and before them VS and others) “pure” statistics argument based on random walks.
In fact, Douglas himself agrees, if Doug McNeall correctly quotes him:
If we can agree that the central question is to decide which statistical models to choose, that AR(1) has limitations, and that our models should exhibit physical realism, to argue for random walks is simply inconsistent. No theory is supposed to stand tall against absurd or degenerate testing.
***
> If you can work out a complete physical model that is constrained by well-validated laws of physics to an exact solution, yes. Or alternatively, a validated but approximate physical model might do. [W]e don't have one. That's the problem.
This is where the agreement turns into a dispute. To repeat Richard Muller’s executive summary of the issue:
http://neverendingaudit.tumblr.com/post/11763136868
If Douglas’ desiderata make him reject all the models known so far, it might be saner and more economical to ignore these desiderata, and simply nod approvingly when Douglas rants against science. Promoting inactivism by using “we agree!” while hiding statistical pedantry shows little interest for scientific questions.
***
> Did you ask Doug's permission before quoting from his comments?
For now, I quoted stuff that he published on his website, and at MattStat’s. Douglas may not be able to claim the same. One does not simply publish private correspondence without permission.
The reason why I ask if Douglas responded to Doug should be obvious to anyone who read Doug’s email. Where’s Douglas’ fGn model, with a demonstration that it’s the only model with Keenanian physical realism?
The insightful comments from Nullius in Verba are extremely helpful. There are two paragraphs from those comments that I think are especially worth repeating.
(To emphasize—I have never advocated adopting any particular statistical model for drawing inferences from climatic data.)
(That scientists can go around doing that with impunity is a huge problem: there needs to be accountability.)
I tried to post this:
http://andthentheresphysics.wordpress.com/2014/07/05/adventures-on-the-hill/#comment-25970
Seems that I can't.
***
> I have never advocated adopting any particular statistical model for drawing inferences from climatic data
Seems that Douglas forgot about his fGn. Where’s Douglas’ fGn model, with a demonstration that it’s the only model with Keenanian physical realism?
Nullius:
Doug:
In which case we need to start asking different questions. I came at this in a semi-humorous way on Climate Audit in May. Brandon Shollenberger had written:
I replied that to get a lower percentage one might ask the question at the end of Michael Kelly’s submission in December to the Commons Select Committee on Energy and Climate Change:
I expect those voluntarily putting their hands up for that would be less than 97%. But establishing a proper chain of accountability starts with that question and getting policy makers, and voters, to ask it.
Doug,
Thanks. And you're welcome.
In science, I'd say, the only accountability needed is that if you're wrong, other people will be able to prove you're wrong. I think they genuinely believe what they say, and I'd much rather scientists felt free to say what they think. We need to argue these things out in public, with everyone setting out their own position as strongly as they can, and we can't do that if everyone's got one eye on the personal and financial consequences of what they say. There's nothing wrong with being wrong. It's fear of 'accountability' that stops people admitting it.
Free speech means letting people say even the things you think they shouldn't say. Bearing in mind that if "accountability" for making scientific assertions was ever to be instituted in practice, it would most likely be the authorities that decided what was to be outlawed, this is a dangerous line for us in particular to take.
Yes, I agree it's frustrating and annoying that they will likely get away with it. But that's the price of freedom.
Willard,
A most amusing post from ATTP! I got the impression from his last exchange here he wasn't interested in pursuing the exchange, so I'll not bother to correct his misapprehensions over there. But it's nice to see he didn't think it was a complete disaster.
I assume you're referring to where Doug commented on a Tamino thread:
As you can see, that's not exclusive advocacy for fGn - just pointing out that a physics-based justification has been given for it. So if you're going to make the argument that only physics-based models are acceptable, well, we've got one. That's still only "one alternative", though.
So far as I am aware, it hasn't been validated in a formal sense. I haven't read through Koutsoyiannis' arguments in detail, so I can't say how solid its physical foundations are. It sounds interesting, but so far I'm sceptical.
Thanks for pointing it out, though.
NiV: No accountability? Interesting.
I realise you're saying something more subtle than that, but I believe we need more accountability, up and down the chain between scientists and policy makers. There are many ways of trying to implement that, some of them harmful, so it should be debated. This may not be the thread for that, though Andrew's discord-from-harmony framing might suggest it's not far off topic.
Richard,
It depends whether we're talking about science or policy, and it depends whether one is talking in a professional or personal capacity. There are circumstances where accountability is appropriate.
In industry, one is often doing science where a lot of Other People's Money rides on you getting it right. If you screw up, somebody somewhere is going to lose a few million. (Or more.) Accountability is part of the deal. There are serious consequences for messing up, which does tend to concentrate the mind and stop a lot of the poor practice that academic science seems to get away with - but at the same time you do get paid for it, and you do agree to it beforehand.
The problems with climate science seem to be down to it starting out as an academic backwater with a rather lax attitude to quality. When the politics got involved, the scientists doing it were promoted to rock star status, and have basked in the glory. Unfortunately, they left the same people in charge and as a result the quality standards weren't updated, resulting in bad results passing undetected. And now, the scientists in the spotlight are stuck - they can't back out, and they can't ask for help, and they're not competent to fix it. All they've got left is bluff and bluster, and the fact the politicians still need them. However, they didn't sign up for that, probably didn't realise what they were getting into, and I do feel a little sympathy for them. Nevertheless, the consequences...
But as you say, that's a whole other discussion.
Richard Drake
Science is accountable initially to peer review and then to ongoing comparison with reality.
In the short term, politicians are accountable to Parliament and "trial by media". In the longer term they face elections.
What would you suggest to police the interface? And "Quis custodiet ipsos custodes?".
NiV:
Part of a convincing explanation of the mess we're in, but I'd add one thing: the increase in funding that George HW Bush presided over (from memory, from $200 million per year to $2 billion per year in four short years) meant that many came into climate science at this juncture. So those who weren't being lax in their quality going into 1989 were, arguably, swamped, and rock star status was soon being accorded to some very young guns, the most obvious in retrospect being one M Mann. Thus we shouldn't be too hard on all those in the original backwater. But otherwise yes. Critical system failure.
> I assume you're referring to where Doug commented on a Tamino thread:
No, I am referring to Doug McNeall's email, which I cited earlier. Here was the relevant bit:
http://www.informath.org/apprise/a5700/b11.pdf
This bit was reproduced in the comment I could not publish here earlier.
***
In other news, Richard Telford also seems in violent agreement with the need to have a physical basis more than six months ago, at least according to the title of the post:
http://quantpalaeo.wordpress.com/2013/10/31/statistics-are-not-a-substitute-for-physics/
I'm willing to call that a violent agreement in the spirit of harmony.
***
You think you're very clever indeed, said the old lady, but now I know you don't know anything, you see, everyone knows the answer to that it's accountability all the way down.
"No, I am referring to Doug McNeal's email, which I cited earlier."
To which I can give the same answer.
"In other news, Richard Telford also seems in violent agreement with the need to have a physical basis more than six months ago,"
Indeed. We're all in agreement on this point. Always have been.
"Replacing an oversimplified but informative model with a physically meaningless model is not progress."
That's incorrect. As I explained at length earlier in this thread, the ARIMA "random walk" model is no less physically valid or meaningful than a linear trend (or linear trend plus AR(1)). There's even a simple physical interpretation of it, although its relationship to reality is necessarily approximate/oversimplified, even if true.
The output being ARIMA(3,1,0) says that the net heat flow in or out of the system in a given year follows a 2nd order differential equation subjected to random shocks (due to cloudiness, for example), the output of which is integrated with any damping/nonlinearity too small to be resolved on the timescale of the data we have. All of physics is about differential equations. That's a large part of why ARIMA models are used to model stuff.
And again, the point is not to replace one with the other, as if we were replacing a 5 with a 7, it's to realise that both are unjustified and to throw both of them away!
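To illustrate the "integrated AR(3)" reading in the comment above (a sketch of the interpretation only, with made-up coefficients, not a claim about the real climate): drive an AR(3) process with random shocks and accumulate it, and the running sum is by construction an ARIMA(3,1,0) series.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
phi = np.array([0.3, -0.2, 0.1])  # illustrative AR(3) coefficients, not fitted values

# AR(3): this year's net heat flow depends on the previous three years plus a shock
flow = np.zeros(n)
for t in range(3, n):
    flow[t] = phi @ flow[t - 3:t][::-1] + rng.normal(0, 0.1)

# Integrating (cumulatively summing) the flows gives an ARIMA(3,1,0) path
temperature = np.cumsum(flow)
```

Running the textbook diagnostics on such a series recovers one unit root and an AR(3) signature in the differences, which is the fingerprint the thread later attributes to the temperature data.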
> As I explained at length earlier in this thread, the ARIMA "random walk" model is no less physically valid or meaningful than a linear trend (or linear trend plus AR(1)).
An explanation may have been nice instead of begging the main question under dispute by arguing by assertion. Douglas simply compared two statistical models. It has nothing to do with any cloud shock mystique and plausible denials like "no less physically valid." This arm waving only moves the pea from the "pure statistics" shell to the "random physics" shell. More arm waving may be needed to provide a plausible argument that random walks make physical sense. A new random physics of climate with a revised concept of causality might be needed.
However hard he might campaign (oftentimes officially!) for a model that would meet his personal desiderata, until Douglas shows that they could be met, the onus is on him to prove he's not exhibiting the same pedantry that Richard Muller observed in Douglas years before, about the same statistical stuff.
Doug,
I had rather given up on continuing this, but I'm a bit of a glutton for punishment. I also appreciate that I've been rather critical of you on my blog, so you'd certainly be within your rights to simply ignore me. Of course, one might hope that someone who seems quite happy to throw around accusations of fraud and research misconduct might be open to criticism (one might also be wrong).
I read your "is a line trending upwards" document and I think I see where your confusion is coming from. Let me see if I can explain (although Richard Telford has already done a pretty good job of this, so I'm not confident). Imagine you gave me your data before telling me where it came from. I could then determine the trend of the best fit line and the uncertainty in the trend. I can clearly do that (ignoring correlation for the moment) without needing to be aware of the underlying process. Then you tell me where the data comes from (coin toss, dice rolling). Now I can produce a model that represents that process. In the case of coin tosses, or rolling a dice, I can do so almost entirely statistically, but that's rather beside the point. It's still a model. Now I can determine if my data is consistent with the model, or not.
Now let's consider the instrumental temperature record. You seem to be criticising the Met Office's analysis of this data set. However, it's simply an analysis. Determining the linear trend and the uncertainty on the trend in no way implies that they're suggesting that the underlying processes would produce precisely a linear trend (as pointed out by Richard Telford). All they're doing is determining some properties of the dataset. If you now want to understand the processes associated with that dataset, you need to consider a physical model, not simply some statistical model such as a random walk. The only way a random walk would be valid is if it was a reasonable representation of the underlying physical process that determine the evolution of the surface temperature. Since it isn't, it's not.
So, why do the Met Office regard the warming as significant? One simple answer is that if you simply mean "is the trend statistically different from zero" then the answer is clearly "yes". If you mean "is it significantly different from what we'd expect from non-anthropogenic sources only", the answer is also "yes". Why? Because if you develop a physical model that ignores anthropogenic processes, the result is not statistically consistent with the instrumental temperature record. However, you can develop models that include natural and anthropogenic processes that are statistically consistent with the instrumental temperature record. You may not like these models, but they do exist.
Nullius,
What's your point with regards to ARIMA(3,1,0)? Are you really suggesting that it reasonably represents the physical processes that control our surface temperatures, or just suggesting that it could? Given that it probably doesn't, this would seem rather irrelevant.
Willard,
"An explanation may have been nice instead of begging the main of question under dispute by arguing by assertion. Douglas simply compared two statistical models. It has nothing to do with any cloud shock mystique and plausible denials like "no less physically valid.""
What would you like explained?
A non-zero linear trend is physically impossible as a model of the physics of temperature, because if you extend it far enough into the past or future, you get negative absolute temperatures. It's an impossibility, a nonsense, it contradicts the laws of physics, and observation. It cannot be.
What's so hard to understand about that?
It's not an issue because nobody would be so stupid (surely!) as to extend a linear trend beyond the interval for which it is valid, understanding intuitively that it is only intended as a short-term approximation to a short segment of the data. Because ARIMA models are less familiar to the statistical layman than linear trends, it is perhaps not so obvious that exactly the same principle is at work here. A 'random walk' model is a short-term approximation to a bounded process. Such approximations are necessary because the usual statistical methods fail if you try to treat the series as bounded, but work reasonably accurately if you treat it as unbounded and take differences.
This is textbook stuff.
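If an illustration helps, here's a sketch in Python (the parameters are mine and purely illustrative) of a bounded, stationary process that a short window cannot tell apart from an unbounded random walk:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
n, phi = 150, 0.98                     # stationary AR(1), barely bounded
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Over a window this short, a unit-root test typically cannot reject a
# random walk, even though the true process is bounded.
print(adfuller(x)[1])                  # p-value, usually well above 0.05
```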
And I'm not sure what you mean by "mystique". This is the way ARIMA models are constructed in introductory texts, and the justification for using them. They describe physical processes in which a sequence of independent random events (noise, shocks, perturbations, random events) affect some dynamic physical quantity that if left to itself would behave in a way described by a differential equation. The effect is to 'filter' (in the electronic engineering sense) the random noise sequence. AR(1) means the change is affected by the current value. AR(2) means the change is affected by two consecutive values, which is a combination of the rate of change (approximated by the difference between them) and the last value. AR(3) also includes the second derivative.
And ARIMA(3,1,0) is the integral of an AR(3) process. So if the velocity is subject to a noisy 2nd order differential equation, the position will be an integral of it. Virtually all physical processes are governed by differential equations, so many physical processes subjected to noise tend to approximate various ARIMA-type processes. That's why we use them.
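To be concrete about the mechanics (the coefficients below are picked arbitrarily for illustration; they are not a claim about the actual climate parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
d = np.zeros(n)                        # the differences: an AR(3) process
a1, a2, a3 = 0.4, -0.3, 0.2            # arbitrary stationary coefficients
for t in range(3, n):
    d[t] = a1*d[t-1] + a2*d[t-2] + a3*d[t-3] + rng.standard_normal()

x = d.cumsum()                         # integrate once: an ARIMA(3,1,0) path
```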
There is a textbook method for fitting such a model to data. First you perform a unit root test to see if it is non-stationary (the integral of some solution) and take differences until it passes. Then you can look at the autocorrelation function and partial autocorrelation function (or other clever maths) to diagnose the orders of the AR and MA parts. Then you fit the appropriate model to estimate the parameters. If you apply this method to the temperature data, it says there's one unit root, and the first differences fit an AR(3) model. That's why Doug picked ARIMA(3,1,0).
It's no more than the application of the standard textbook method for choosing a model to fit the data. It's as standard as the algorithm for fitting a linear trend to data. Any undergrad course on time series analysis will teach it.
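In statsmodels it's a few lines (the series below is a random placeholder - you'd feed in the actual temperature record to reproduce the diagnosis):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, pacf
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
y = rng.standard_normal(150).cumsum()   # placeholder series

print(adfuller(y)[1])                   # 1. unit root test on the levels
dy = np.diff(y)
print(adfuller(dy)[1])                  #    the first differences should pass
print(pacf(dy, nlags=5))                # 2. PACF suggests the AR order

fit = ARIMA(y, order=(3, 1, 0)).fit()   # 3. fit the diagnosed model
print(fit.summary())
```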
However, just because that's what the data looks like, doesn't mean that's what it actually is. That's what you need physics for - to constrain the form of the solution. It's entirely possible to invent physics that *would* behave like ARIMA over short time intervals, so it's physically plausible in that sense, but this evidence is too weak to make definitive claims. It's not as weak as trying to look at that wobbly rise in the data and drawing a straight line through it (that would be madness!), but it's not far off.
I repeat - and emphasise - we do *NOT* make any claim that the ARIMA(3,1,0) model is based on any of the specific physics, or is how things actually work, or should be used for judging significance. That's not what it's for. That's not what we're trying to do.
It is simply a counterexample to the IPCC/MO claim that trend+AR(1) is the better fit to the data.
ATTP,
"Of course, one might hope that someone who seems quite happy to throw around accusations of fraud and research misconduct might be open to criticism (one might also be wrong)."
He was, the last time I argued with Doug. I thought he was very graceful about it.
"Imagine you gave me your data before telling me where it came from. I could then determine the trend of the best fit line and the uncertainty in the trend."
Actually, no you can't. The concept of "best" fit relies on a model of the probability of errors to determine for which line the implied errors are most likely. The method of Ordinary Least Squares, for example, assumes independent Gaussian errors. (Because they're independent, the joint probability is the product of the Gaussian error pdfs, you can take logarithms to turn it into a sum of quadratics, and you therefore find the maximum likelihood where the sum of squared errors is least. If the errors are not independent Gaussian, OLS is not necessarily "best fit" optimal.)
You can, of course, chuck the data into your favourite stats software and ask it to calculate the OLS fit and standard error. A lot of people who have only been on an introductory stats course do - I blame the lecturers. But the values would be wrong, because you're implicitly assuming something in the calculation that isn't true.
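A toy demonstration, if you want one (synthetic AR(1) noise around a zero true trend; the parameters are mine, chosen to make the point visible):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n, phi = 150, 0.9
e = np.zeros(n)
for t in range(1, n):                  # correlated errors, zero true trend
    e[t] = phi * e[t - 1] + rng.standard_normal()

X = sm.add_constant(np.arange(n))
naive = sm.OLS(e, X).fit()             # assumes independent errors
hac = sm.OLS(e, X).fit(cov_type="HAC", cov_kwds={"maxlags": 10})
print(naive.bse[1], hac.bse[1])        # the naive standard error is far too small
```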
"The only way a random walk would be valid is if it was a reasonable representation of the underlying physical process that determine the evolution of the surface temperature."
The surface temperature is roughly proportional to the heat content. The heat content of the system (by the first law of thermodynamics) is equal to the heat content of the system last year plus the net amount of heat gained (or lost) over the year. Suppose we ignore the anthropogenic effects and assume the heat gained/lost is simply down to the weather. If it's sunny, the Earth absorbs energy; if it's cloudy, it loses it. So the total amount of cloud, summed over the whole Earth and over one year, is closely related to the total heat gained or lost in that year. Now suppose that the cloudiness varies randomly about some set level, according to some dynamic model based on weather, but pretty much independently of the current heat content of the Earth (at least, for small variations). Then the yearly heat gain or loss is the primary variable subject to noise, and the heat content, and hence the temperature, is the integral of it. That would result in a 'random walk'.
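The whole mechanism is two lines of code (arbitrary units, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
net_heat = rng.standard_normal(150)   # yearly net gain/loss, e.g. from cloudiness
temperature = net_heat.cumsum()       # first law: this year = last year + net input
```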
Why would the cloudiness be independent of the current temperature? Because clouds are a consequence of temperature differences. Clouds occur when one spot is warmer than its surroundings, lowering the pressure, causing the moist surface air there to convect upwards, and thereby forming clouds. (Further, even if warmer air is moister, the air aloft is warmer too, and so it takes longer to reach an altitude where it is cold enough to condense - i.e. what matters is the temperature difference between the surface and the air aloft.) However, if you raise the temperature of the whole globe uniformly, the differences remain the same, and so the formation of clouds occurs in the same places and to the same extent. To first order, we wouldn't expect it to make a difference.
There must be some other processes that do keep the temperature stable in the longer term, but they might be either too weak to show up with only a century of data, or they might only kick in when the offset gets beyond a certain point, non-linearly. Over a short enough interval, ignoring them doesn't result in much error.
Hence, a random walk is a reasonable representation of at least one of the underlying processes.
And why should the cloudiness apparently be governed by a second-order equation? I've no idea. Perhaps something like that index-cycle behaviour Lorenz noted? Maybe it's something climate scientists should look into.
Again, I emphasise, I am *NOT* making any claims here that this is how it works. I'm just spinning hypotheses and speculation. However, my point is that it is actually pretty easy to do so - it's not so outrageously unphysical a model that we can't come up with some plausible-sounding mechanisms to fit in a "just so" sort of way.
Nullius,
He appears to have called Richard Telford a troll, so I'm not hopeful.
I must say that I was hoping you might see the broader point.
Of course you're right that you really need to understand the data you're analysing. However, you can of course still ask the question "what is the best fit linear trend?" and "assuming the errors are independent and Gaussian, what are the uncertainties on this trend?". The point I was trying to make is that there is a difference between extracting information about your data set (data analysis) and understanding what your dataset is telling you about the real world (interpreting your data using physical models).
If all this amounts to is an argument about which of two analysis methods is superior, fine. That, however, is not how it appears to be presented. Of course you can choose to analyse a dataset using a different method, but - as I think you agree - until you consider the underlying physics, you really can't say much about what that dataset is telling you about the real world. Also, just because you prefer a different method doesn't mean other approaches are invalid.
Here's a simple question for you. If you consider the Met Office/IPCC's method, it determines the best-fit linear trend and the uncertainties on this trend. This is fairly straightforward stuff that most should understand. It tells us - approximately - the rate at which the surface temperature is increasing. Of course, it doesn't tell us why, but that's why we need physical models. What does ARIMA(3,1,0) tell us about the dataset?
"However, you can of course still ask the question "what is the best fit linear trend?""
No. You can ask questions like "What is the best fit linear trend assuming errors are independent and Gaussian?" and "What do you get if you apply the Ordinary Least Squares algorithm to the data?" but these are not "the best fit", unless the errors happen to match the assumed model. Error models underlie a lot of statistical methods, and if you use a method that implicitly assumes an incorrect error model, the results can be highly misleading.
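For example (synthetic data; GLSAR is just standing in here for "a method that assumes AR(1) errors"):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 150
e = np.zeros(n)
for t in range(1, n):                  # AR(1) errors around a small trend
    e[t] = 0.8 * e[t - 1] + rng.standard_normal()
y = 0.01 * np.arange(n) + e

X = sm.add_constant(np.arange(n))
ols = sm.OLS(y, X).fit()                               # "best" under independent errors
gls = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=20)  # "best" under AR(1) errors
print(ols.params[1], ols.bse[1])
print(gls.params[1], gls.bse[1])       # same data, different "best" answers
```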
"If all that this is is an argument about which of two analysis methods is superior, fine."
No. This is all an argument to demonstrate that using trend+AR(1) to claim "significance" is unjustified. *Nothing else*.
No claim is made that ARIMA is superior. No claim is made that it is how the real physics works. No claim is made that it tells you anything about the real world. It is purely a demonstration that pure statistical methods *cannot* tell you anything about the real world, because we can come to diametrically opposite conclusions just by choosing different models. Using trend+AR(1) to claim "significance" is unjustified.
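The shape of the comparison is easy to reproduce (this sketch runs on a synthetic stand-in series, not the actual record, and uses statsmodels rather than whatever software Doug used):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)
y = rng.standard_normal(160).cumsum()                     # synthetic stand-in series

trend_ar1 = ARIMA(y, order=(1, 0, 0), trend="ct").fit()   # trend + AR(1)
arima310 = ARIMA(y, order=(3, 1, 0)).fit()                # driftless ARIMA(3,1,0)
# Lower AIC = better fit; comparing across different differencing orders
# needs some care, but this is the shape of the comparison.
print(trend_ar1.aic, arima310.aic)
```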
If somebody claims that 6 is the biggest number because 6 is bigger than 5, you can counter that argument by giving 7 as an alternative. That does *not* mean you're saying 7 is the biggest number.
"What does ARIMA(3,1,0) tell us about the dataset?"
It tells us (if you believe it) that the observed rise is spurious; that there is no underlying deterministic trend, only the chance accumulation of random noise. It tells us that even after a century it's still weather, not climate.
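And it's easy to see how readily such a model manufactures apparent trends (the coefficients below are arbitrary, not Doug's fitted values):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

rng = np.random.default_rng(8)
ar = np.array([1, -0.4, 0.3, -0.2])    # AR(3) polynomial for the differences
proc = ArmaProcess(ar, np.array([1.0]))

# 500 driftless ARIMA(3,1,0) paths, each 130 "years" long
diffs = proc.generate_sample((130, 500), distrvs=rng.standard_normal)
paths = diffs.cumsum(axis=0)
slopes = np.polyfit(np.arange(130), paths, 1)[0]
print(np.abs(slopes).mean())           # typical size of the spurious "trend"
```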
Nullius,
That is what I asked. Didn't you read what I wrote?
I saw this analogy of yours, but no one's trying to claim that 6 is the biggest number (that would be particularly stupid). They're simply pointing out that 6 is bigger than 5. You seem to be arguing that you can't ask if 6 is bigger than 5 because 7 is also bigger than 5. I would argue that I can ask if 6 is bigger than 5, even if 7 is also bigger than 5.
No, I don't believe that and I doubt anyone else with any understanding of climate science would believe it either. This is the fundamental point. You're using a purely statistical model to CLAIM that the rise is spurious. You really can't make that claim without some physical model of how this rise could occur. I thought we'd agreed on that.