Entries in Climate: Statistics (111)

Thursday
Feb 25, 2016

Quote of the day, predictability edition

Even a fully deterministic system is fully unpredictable at climatic timescales when there is persistence.

From a Demetris Koutsoyiannis presentation.
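
The point is easy to demonstrate numerically. Here is a minimal Python sketch (my own toy, not anything from the presentation, and using a stochastic stand-in rather than one of Koutsoyiannis's deterministic examples): the 30-year averages of a strongly persistent series wander far more than those of an uncorrelated one, so the "climate" of the system is much harder to pin down.

```python
# Toy illustration (not from the presentation): persistence inflates the
# variability of "climatic" (30-year) averages. All parameters arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_years, window, n_runs = 3000, 30, 200

def ar1(phi):
    """A persistent AR(1) series: x[t] = phi*x[t-1] + noise."""
    x = np.zeros(n_years)
    e = rng.standard_normal(n_years)
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + e[t]
    return x

def spread_of_window_means(gen):
    """Average std dev of non-overlapping 30-year means across runs."""
    return np.mean([gen().reshape(-1, window).mean(axis=1).std()
                    for _ in range(n_runs)])

print("uncorrelated:", spread_of_window_means(lambda: rng.standard_normal(n_years)))
print("persistent  :", spread_of_window_means(lambda: ar1(0.98)))
```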

Thursday
Feb 25, 2016

Stochastic Stern

You know all that money we have been spending on developing economic models of the effects of climate change? Well, apparently it has mostly been wasted. At least that's the case according to Lord Stern, whose article in the sociology journal Nature says that we should be moving on to something more reliable.

Because the IAMs omit so many of the big risks, SCC estimates are often way too low. As a first step, the consequences being assessed should include the damages to human well-being and loss of life beyond simply reduced economic output. And the very large uncertainty, usually involving downward bias, in SCC estimates should always be made explicit...

A comprehensive review of the problems of using IAMs in climate economics called for the research community to develop a “third wave” of models. The authors identify various types of model that might offer advances. Two are: dynamic stochastic computable general equilibrium (DSGE) models, and agent-based models (ABMs).

It's also interesting to see stochastic modelling being touted in a week when climatologists have been outraged by a suggestion that such an approach might be useful in their own field. 

Tuesday
Feb 23, 2016

Two worlds collide

GWPF have released a very interesting report about stochastic modelling by Terence Mills, professor of applied statistics and econometrics at Loughborough University. This is a bit of a new venture for Benny and the team because it's written with a technical audience in mind and there is lots of maths to wade through. But even from the introduction, you can see that Mills is making a very interesting point:

The analysis and interpretation of temperature data is clearly of central importance to debates about anthropogenic global warming (AGW). Climatologists currently rely on large-scale general circulation models to project temperature trends over the coming years and decades. Economists used to rely on large-scale macroeconomic models for forecasting, but in the 1970s an increasing divergence between models and reality led practitioners to move away from such macro modelling in favour of relatively simple statistical time-series forecasting tools, which were proving to be more accurate.
In a possible parallel, recent years have seen growing interest in the application of statistical and econometric methods to climatology. This report provides an explanation of the fundamental building blocks of so-called ‘ARIMA’ models, which are widely used for forecasting economic and financial time series. It then shows how they, and various extensions, can be applied to climatological data. An emphasis throughout is that many different forms of a model might be fitted to the same data set, with each one implying different forecasts or uncertainty levels, so readers should understand the intuition behind the modelling methods. Model selection by the researcher needs to be based on objective grounds.
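
For readers who want to see what this looks like in practice, here is a minimal sketch of the kind of ARIMA fitting the report describes, using Python's statsmodels library. The synthetic series and the (0,1,1) order are placeholders of my own, not choices taken from Mills's paper.

```python
# Minimal sketch of fitting an ARIMA model to an annual temperature-like
# series. The synthetic data and the (0,1,1) order are placeholders, not
# choices taken from the Mills report.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(0.0, 0.1, 135))  # stand-in for ~135 annual anomalies

res = ARIMA(y, order=(0, 1, 1)).fit()     # ARIMA(0,1,1): differenced MA(1)
print(res.summary())

# Forecasts for the next decade, with 95% intervals. Note how the intervals
# widen: different model choices imply very different uncertainty levels,
# which is one of the report's central points.
fc = res.get_forecast(steps=10)
print(fc.predicted_mean)
print(fc.conf_int(alpha=0.05))
```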

There is an article (£) in the Times about the paper.

I think it's fair to say that the climatological community is not going to take kindly to these ideas. Even the normally mild-mannered Richard Betts seems to have got a bit hot under the collar.

Tuesday
Jan 12, 2016

"Nothing in it is correct"

The eminent statistician (and occasional BH reader) Radford Neal has been writing a series of posts on global temperature data at his blog. There are three so far:

What can global temperature data tell us?

Has there been a pause in global warming?

and finally

Critique of "Debunking the climate hiatus", by Rajaratnam, Romano, Tsiang and Diffenbaugh.


Monday
Dec 7, 2015

On the floods in Cumbria

The floods in Cumbria are obviously attracting a lot of attention this morning. A couple of things are exercising my mind.

Firstly, as readers are pointing out, the claim that 340mm of rain fell in 24 hours seems suspect. Nobody seems quite sure where it came from. I have seen it suggested that this was the amount that fell over two days, and in Unthreaded Mike Post wonders if it is actually the November rainfall total.

Secondly, how do we know what the 100-year flood is in any given river basin? The process looks less than straightforward and involves a wealth of assumptions.

The first assumption is often but not always valid and should be tested on a case by case basis. The second assumption is often valid if the extreme events are observed under similar climate conditions. For example, if the extreme events on record all come from late summer thunder storms (as is the case in the southwest U.S.), or from snow pack melting (as is the case in north-central U.S.), then this assumption should be valid. If, however, there are some extreme events taken from thunder storms, others from snow pack melting, and others from hurricanes, then this assumption is most likely not valid. The third assumption is only a problem when trying to forecast a low, but maximum flow event (for example, an event smaller than a 2-year flood). Since this is not typically a goal in extreme analysis, or in civil engineering design, then the situation rarely presents itself. The final assumption about stationarity is difficult to test from data for a single site because of the large uncertainties in even the longest flood records[3] (see next section). More broadly, substantial evidence of climate change strongly suggests that the probability distribution is also changing[7] and that managing flood risks in the future will become even more difficult.[8] The simplest implication of this is that not all of the historical data are, or can be, considered valid as input into the extreme event analysis.
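
To make the machinery concrete, here is a minimal sketch of the textbook calculation: fit an extreme-value distribution (Gumbel, in this case) to annual maximum flows and read off the quantile with a 1% annual exceedance probability. The numbers are invented, and a real analysis would have to confront all of the assumptions quoted above.

```python
# Textbook 100-year flood estimate: fit a Gumbel distribution to annual
# maximum flows and read off the 1% annual exceedance quantile. The data
# here are synthetic; real analyses must face the assumptions quoted above.
import numpy as np
from scipy.stats import gumbel_r

annual_max_flow = gumbel_r.rvs(loc=300.0, scale=80.0, size=50,
                               random_state=42)     # m^3/s, invented record

loc, scale = gumbel_r.fit(annual_max_flow)          # maximum likelihood fit
q100 = gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)
print(f"estimated 100-year flood: {q100:.0f} m^3/s")
```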

Lots to dig into.

Monday
Nov 23, 2015

A change to the playing field

Doug Keenan has posted a note at the bottom of the notice about his £100,000 challenge, indicating that he has reissued the 1000 data series. This was apparently because it was pointed out to him that the challenge could be "gamed" by hacking the (pseudo)random number generator he had used.

Brandon Shollenberger emails to say that this is a terrible thing, but I can't get terribly excited about it. Presumably it doesn't make any difference to those who think they can detect the difference between trending and non-trending series.
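
For anyone wondering what "gaming" a random number generator might mean in practice, here is a toy Python sketch: if the generator algorithm is known and the seed space is small, brute force recovers the seed, and with it every series. This is purely illustrative; I have no idea what generator or seeding Keenan actually used.

```python
# Toy illustration of "gaming" a pseudorandom generator: with a known
# algorithm and a small seed space, brute force recovers the seed and
# hence the entire dataset. Keenan's actual setup is unknown to me.
import numpy as np

SECRET_SEED = 4321                        # pretend we don't know this
published = np.random.default_rng(SECRET_SEED).standard_normal(5)

for guess in range(10_000):               # deliberately tiny seed space
    if np.allclose(np.random.default_rng(guess).standard_normal(5), published):
        print("recovered seed:", guess)   # every series can now be regenerated
        break
```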

Wednesday
Nov 18, 2015

A $100,000 climate prize

Climatologists often claim that they are able to detect the global warming signal in the temperature records. If they are right then they are going to be having a very happy Christmas indeed, because Doug Keenan is offering them the chance to win a very large cash prize at his expense. Here are the details.

There have been many claims of observational evidence for global-warming alarmism. I have argued that all such claims rely on invalid statistical analyses. Some people, though, have asserted that the analyses are valid. Those people assert, in particular, that they can determine, via statistical analysis, whether global temperatures are increasing more than would be reasonably expected by random natural variation. Those people do not present any counter to my argument, but they make their assertions anyway.

In response to that, I am sponsoring a contest: the prize is $100 000. In essence, the prize will be awarded to anyone who can demonstrate, via statistical analysis, that the increase in global temperatures is probably not due to random natural variation.


The file Series1000.txt contains 1000 time series. Each series has length 135 (about the same as that of the most commonly studied series of global temperatures). The series were generated via trendless statistical models fit for global temperatures. Some series then had a trend added to them. Each trend averaged 1°C/century—which is greater than the trend claimed for global temperatures. Some trends were positive; others were negative.

A prize of $100 000 (one hundred thousand U.S. dollars) will be awarded to the first person, or group of people, who correctly identifies at least 900 series: i.e. which series were generated by a trendless process and which were generated by a trending process.

Each entry in the contest must be accompanied by a payment of $10; this is being done to inhibit non-serious entries. The contest closes at the end of 30 November 2016.

The file Answers1000.txt identifies which series were generated by a trendless process and which by a trending process. The file is encrypted. The encryption key and method will be made available when someone submits a prize-winning answer or, if no prize-winning answers are submitted, when the contest closes.
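
To get a rough feel for why 900 out of 1000 is such a high bar, here is a sketch in the same spirit: a persistent but trendless AR(1) process (my own stand-in, not Keenan's actual models), with a 1°C/century trend added to half the series, classified by a naive least-squares slope test. With persistence this strong the naive test scores far below 900.

```python
# Rough feel for the contest: persistent trendless series vs the same
# series plus a 1 deg C/century trend, classified by OLS slope. The AR(1)
# generating model is my own stand-in, not Keenan's actual models.
import numpy as np

rng = np.random.default_rng(3)
n_series, length = 1000, 135
phi, sigma = 0.98, 0.11        # arbitrary strong persistence
trend = 0.01                   # 1 deg C/century = 0.01 deg C/year

def ar1():
    """Trendless but persistent AR(1) series."""
    x = np.zeros(length)
    e = rng.normal(0.0, sigma, length)
    for t in range(1, length):
        x[t] = phi * x[t - 1] + e[t]
    return x

t = np.arange(length)
has_trend = rng.random(n_series) < 0.5
signs = rng.choice([-1.0, 1.0], n_series)
series = [ar1() + (signs[i] * trend * t if has_trend[i] else 0.0)
          for i in range(n_series)]

# Naive classifier: call a series "trending" when its OLS slope is large.
slopes = np.array([np.polyfit(t, s, 1)[0] for s in series])
guesses = np.abs(slopes) > trend / 2
print("correctly classified:", int((guesses == has_trend).sum()), "of", n_series)
```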

More here.

Wednesday
Aug 5, 2015

What DECC knew

Updated on Aug 6, 2015 by Registered Commenter Bishop Hill

Greenpeace have been doing some rather odd FOI work in recent months. It seems they have decided to investigate the series of parliamentary questions that Lord Donoughue put to DECC ministers about the Met Office's statistical reasoning. Readers will recall that these questions were formulated with Doug Keenan's advice, were aimed at determining how the Met Office justified its claim that recent temperature rises were statistically significant, and that the eventual result, after months of non-answers from the Met Office, was that they effectively withdrew the claim.

The documents Greenpeace have made public are very interesting but I'm not sure that our environmentalist friends have considered exactly what it is they have got.

It does rather come across as if the DECC team wanted to "move on". In Document 4, the briefing ahead of the meeting between Keenan, Donoughue and the DECC team of Baroness Verma, David Mackay and David Warrilow, officials list their objectives for the meeting as being:

  • to demonstrate a willingness to listen
  • to demonstrate to Lord Donoughue that DECC's scientists are reasonable and, erm, scientific
  • to steer Lord Donoughue away from Keenan.


Thursday
Mar 26, 2015

A question for David Spiegelhalter

Updated on Mar 26, 2015 by Registered Commenter Bishop Hill

I have a lot of time for David Spiegelhalter, the Cambridge University statistician who has become something of a go-to guy for the media on matters statistical. You certainly warm towards him when he sticks his head above the parapet to throttle a media health scare at source, as he did yesterday, responding to an article in the Telegraph that claimed that three alcoholic drinks a day could cause liver cancer.

There's no doubt that excessive drinking is bad for you and those around you. But does this justify exaggerated and misleading claims? They got their publicity, but perhaps the WCRF should value its scientific credibility a bit more.


Thursday
Feb 19, 2015

More numbers

Tamsin Edwards has posted some more details about the Climate Change by Numbers show at the start of next month. Of particular interest is the official blurb for the show:

In a special film for BBC Four, three mathematicians will explore three key statistics linked to climate change.

In Climate Change by Numbers, Dr Hannah Fry, Prof Norman Fenton and Prof David Spiegelhalter hone in on three numbers that lie at the heart of science’s current struggle to get a handle on the precise processes and impact of global climate change.

Prof Norman Fenton said: “My work on this programme has revealed the massive complexity of climate models and the novel challenges this poses for making statistical predictions from them.”


Monday
Feb 2, 2015

Oreskes savaged

Michael Lavine, a statistician from the University of Massachusetts Amherst, has performed a very polite savaging of Naomi Oreskes over at Stats.org. Here's an excerpt:

After urging scientists to adopt a threshold less stringent than 95 percent in the case of climate change, [Oreskes says]:

WHY don’t scientists pick the standard that is appropriate to the case at hand, instead of adhering to an absolutist one? The answer can be found in a surprising place: the history of science in relation to religion. The 95 percent confidence limit reflects a long tradition in the history of science that valorizes skepticism as an antidote to religious faith. Even as scientists consciously rejected religion as a basis of natural knowledge, they held on to certain cultural presumptions about what kind of person had access to reliable knowledge. One of these presumptions involved the value of ascetic practices. Nowadays scientists do not live monastic lives, but they do practice a form of self-denial, denying themselves the right to believe anything that has not passed very high intellectual hurdles.

Yes, most scientists are skeptics. We do not accept claims lightly, we expect proof, and we try to understand our subject before we speak publicly and admonish others.

Thank goodness.

Read the whole thing.

Saturday
Jan 17, 2015

The temperature and the spin

Updated on Jan 17, 2015 by Registered Commenter Bishop Hill

Most scientists seem to have been suitably cautious about the alleged record-breaking temperatures, taking care to place the new data in the context of the error bars. It's also fair to say that others have been a bit wild.

The Science Media Centre has a couple of moderately level-headed responses, from Tim Palmer and Rowan Sutton, but as always with the SMC it's seen as important to get some input on climate change from a paleopiezometrist, from whom we learn that:

The new global temperature record announced today completely exposes the myth that global warming has stopped.


Wednesday
Jan 7, 2015

The trust me crowd and the show me crowd

The Chemist in Langley has another post on type 1 and type 2 errors, which is just as good as his last one. I found this quote particularly perspicacious:

A colleague at work describes the difference as roughly the “trust me crowd” versus the “show me crowd”. The trust me crowd can show that some anthropogenic climate change has happened in the past and that models suggest that future conditions are going to get worse. They produce their documentation via the peer reviewed press and in doing so address all the touchstones of the scientific method. Having met the high bar of “good science” they anticipate that their word will be taken as good.

The show me crowd looks at the “good science” and points out that many historical predictions of doom and gloom (that previously met the test of good science) have been shown to be overheated or just plain wrong. They also point out that the best models have not done a very good job with respect to the “pause”. Given this they ask for a demonstration that the next prediction is going to be better than the last one. This does not mean that they deny the reality of anthropogenic global warming. Rather they are not comfortable with cataclysmic predictions and calls for immediate action prior to a demonstration that those predictions can be supported with something approaching real data.

Tuesday
Jan 6, 2015

Oreskes on statistics

Naomi Oreskes' article in the New York Times the other day, in which she called for use of 90% rather than 95% confidence intervals, seems to be generating quite a lot of interest.

The best review is here:

Oreskes wants her readers to believe that those who are resisting her conclusions about climate change are hiding behind an unreasonably high burden of proof, which follows from the conventional standard of significance in significance probability. In presenting her argument, Oreskes consistently misrepresents the meaning of statistical significance and confidence intervals to be about the overall burden of proof for a scientific claim.

Ouch.

(This is also relevant)
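
The nub of the statistical point is easy to check for yourself: moving from a 95% to a 90% standard simply doubles the rate at which a true null hypothesis gets "detected". A quick simulation (mine, not from either article):

```python
# With the null hypothesis true in every trial, the fraction of
# "significant" results is just the chosen alpha: relaxing 95% to 90%
# doubles the false-positive rate. Purely a textbook demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pvals = np.array([stats.ttest_1samp(rng.standard_normal(30), 0.0).pvalue
                  for _ in range(20_000)])

for alpha in (0.05, 0.10):
    print(f"alpha={alpha}: rejected {np.mean(pvals < alpha):.3f} of true nulls")
```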

Monday
Dec 15, 2014

Celebrating Hurst

Readers may be interested in this presentation by Cohn, Lins, Koutsoyiannis and Montanari about the life and work of Harold Hurst, the scientist who discovered the phenomenon of long-term persistence (LTP) while examining records of the flooding of the Nile. The presentation seems to date from the end of last year.

Many of you will know that LTP is pervasive in geoscience datasets, so you will no doubt be amused by this bit about the IPCC's consideration of the phenomenon:

...the SPM does not mention LTP, although it speaks about the internal climate variability, e.g.: “Internal variability will continue to be a major influence on climate, particularly in the near-term and at the regional scale.”
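
For readers who want to play along at home, here is a minimal sketch of one standard way to estimate the Hurst exponent, the aggregated-variance method: for an LTP process the variance of k-year block means decays like k^(2H-2), rather than the 1/k of independent data. The inputs here are synthetic.

```python
# Aggregated-variance estimate of the Hurst exponent H: for LTP data the
# variance of k-point block means scales like k**(2H - 2); independent
# data give H = 0.5. Inputs below are synthetic, just for illustration.
import numpy as np

def hurst_aggvar(x, sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Fit log(block-mean variance) against log(block size)."""
    v = [np.var(x[: len(x) - len(x) % k].reshape(-1, k).mean(axis=1))
         for k in sizes]
    slope = np.polyfit(np.log(sizes), np.log(v), 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(5)
print("white noise H ~", round(hurst_aggvar(rng.standard_normal(4096)), 2))

# A crude persistent stand-in (AR(1) is short-memory, so H drifts back
# toward 0.5 at large block sizes; true fGn would hold it steady).
x = np.zeros(4096)
for t in range(1, 4096):
    x[t] = 0.95 * x[t - 1] + rng.standard_normal()
print("persistent  H ~", round(hurst_aggvar(x), 2))
```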
