One small step for Science
Marcia McNutt, editor-in-chief of Science since June last year, has issued a new reproducibility policy for the journal.
Science advances on a foundation of trusted discoveries. Reproducing an experiment is one important approach that scientists use to gain confidence in their conclusions. Recently, the scientific community was shaken by reports that a troubling proportion of peer-reviewed preclinical studies are not reproducible. Because confidence in results is of paramount importance to the broad scientific community, we are announcing new initiatives to increase confidence in the studies published in Science. For preclinical studies (one of the targets of recent concern), we will be adopting recommendations of the U.S. National Institute of Neurological Disorders and Stroke (NINDS) for increasing transparency.* Authors will indicate whether there was a pre-experimental plan for data handling (such as how to deal with outliers), whether they conducted a sample size estimation to ensure a sufficient signal-to-noise ratio, whether samples were treated randomly, and whether the experimenter was blind to the conduct of the experiment. These criteria will be included in our author guidelines.
This is a start, I suppose. I can't see anything about availability of data and code, which is always going to be the starting point for reproducibility. Still, every little helps.
Reader Lance Wallace sends this further excerpt:
Because reviewers who are chosen for their expertise in subject matter may not be authorities in statistics as well, statistical errors in manuscripts may slip through. For that reason…we are adding new members to our Board of Reviewing Editors from the statistical community to ensure that manuscripts receive appropriate scrutiny in their methods of data analysis.
Which is definitely a win.
Reader Comments (20)
Does the new policy apply only to "peer-reviewed preclinical studies" or is there a wider scope? (The full article is behind a paywall.)
"For preclinical studies (one of the targets of recent concern), we will be adopting recommendations of the U.S. National Institute of Neurological Disorders and Stroke (NINDS) for increasing transparency.* "
Anybody else seen Geoffrey Lean's latest article in the DT?
So, has this Canadian data on dwindling bird populations in the Arctic been peer-reviewed?
Sounds good, but since we know of numerous cases of journals ignoring their own rules, the proof of the pudding will be in the eating.
Exactly, Neil: these guys can have all the rules they like, but they will be meaningless if they aren't enforced in a meaningful way.
Regards
Mailman
Except, of course, as we have seen before, it needs the will to enforce these ideas for them to work, and far too often that has gone missing when a juicy article appears in full-on support of 'the cause', to the benefit of 'the Team', who have made a speciality of the smoke-and-mirrors approach to showing their data.
Oliver Geden tweets about a new climate paradigm.
What about high energy physics? How can an experiment be reproducible if there is only one piece of equipment in the world that can perform it, especially when we know that the experimenters keep replacing parts until they get the results they want? Should we believe their claims about the Higgs boson, when we can't trust them on the speed of neutrinos?
More about that here: http://www.icouldbewrong.blogspot.ca/2012/06/on-new-discovery-about-speed-of.html
Reproducibility is closely bound up with 'open science' and the need for better peer review, including post-publication, Reddit/Wiki-style. Two or three selected reviewers for a paper - or the entire community?
There are two opposing forces at work. Towards the end of 2013, leading academic publisher Elsevier started systematically trying to get openly shared papers taken down from academic websites using the US Digital Millennium Copyright Act - see Economist and Washington Post.
On the other side is the 'open science' movement and academic social-networking sites, notably Academia.edu but also (curiously) another recently-acquired corner of the Elsevier empire (hedging its bets?). Complex copyright issues, no doubt - but also two fundamentally different models for how science best advances.
LTEC at 10.16pm
Read Miles Mathis' critique of the 'Higgs' discovery at milesmathis.com
I listened to the director of CERN, Rolf Heuer, at the Hay-on-Wye festival (2011) before the 'discovery' of the Higgs and he was already saying they needed oodles more money to confirm the yet-undiscovered discovery.
So it goes on.
Since I've been in academic publishing for some two decades, I must say that I'll believe it when I see it (the role of editors and reviewers would be rewritten to an extent unimaginable to most editors and reviewers). That said, it is ONE step in the right direction.
Unfortunately, even simple, but good, research takes time and money to reproduce. To my way of thinking, this puts a premium on attempting research that is more likely to be useful and have wider applications, not just be interesting to researchers in the field.
For example, funding research to discover whether recently increased atmospheric concentrations of carbon dioxide decrease the number of eggs laid by the duck-billed platypus would not count as useful research, if you catch my drift.
Reviewers are generally unpaid for the work they do, but have limited time. Most statisticians are probably not kicking their heels waiting for something to do. So, yes, competent readers of the journal are more likely to examine the data for themselves if it is presented in full at the time of submission and publication. But we knew that already.
This was not driven by the editors, but by scientists at NIH.
So much very pretty crap gets into science.
The published error bars have been shrinking, in controls, since the 80's.
So, Willis' sexism paid off?
From the Ecclesiastical Uncle, an old retired bureaucrat in a field only remotely related to climate with minimal qualifications and only half a mind.
I sympathize with those who would not expect these dicta to matter at all in any situation where they might be thought to be useful.
Many organisations make rules, regulations, guidance notes and the like to regulate the way staff go about organisational business. They are merely a managerial tool used in attempts to improve achievement of objectives and, being self-inflicted, can be as readily discarded.
So if these new rules (or policies, regulations or guidelines) get in Science's way, you can be sure they will, quite rightly, be ignored.
Of course, as when the IPCC played fast and loose with its rules, such practices routinely cause outsiders like the cognoscenti on this blog to complain. But, for those within the organisation, such complaints are merely tiresome grumbles about the choice the organisation has made, in a free world, about how to organise its work, and merit nothing more than to be ignored.
Complaints that the objective at issue is at fault cannot so justifiably be dismissed and must, by all logic, be the way to go about righting wrongs.
(But, of course, logic does not always prevail!)
Jan 27, 2014 at 3:21 AM | shub
Actually, what I said was that men tend to lie to strong, powerful, good looking, smart women … and I still hold that what I said was true, and that truth is a 100% ironclad defense against an accusation of sexism. The statement "Women can't run as fast as men" is not sexist, for example … check the Olympic records, it's simply true.
As is what I said.
I do give Dr. McNutt high marks for this change, however, although we'll have to see if it works. Since writing my non-sexist comment back whenever that was, I tried to get them to enforce their policy on archiving data … no joy at all on that one.
So … will they actually enforce their rules? That's the question.
w.
If it's not broken, don't fix it.
Now they are trying to fix it.
I didn't think you were too 'sexist'. In fact, I didn't even pay attention to it until people talked about it. But, I could see how it might come across as 'sexist', given the times.
Good that she's paying attention to this, of all the things on her plate.
As a professional software engineer with 35 years experience, I'm a little concerned about the requirement that code must be available.
For real reproducibility, it would be better if the algorithms used were fully and accurately described, but the code itself not made available.
For extremely high-availability systems, where repair is close to impossible (e.g. satellites, sea-bed oil heads), there used to be a scheme which used three independent computers sharing no common hardware, running different operating systems, with the applications coded by three independent teams and the results cross-checked by a voting system, so that any errors in the hardware or software could be detected.
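In outline, the cross-check is just a majority vote over independently computed results. A minimal sketch of the idea in Python (the function name and the tolerance are my own illustration, not taken from any real fault-tolerant system):

    def majority_vote(a, b, c, tolerance=1e-9):
        # Return a value agreed on by at least two of the three
        # independently computed results; raise if all three disagree.
        # Compare within a tolerance, since independent implementations
        # of the same algorithm rarely match bit-for-bit.
        if abs(a - b) <= tolerance or abs(a - c) <= tolerance:
            return a
        if abs(b - c) <= tolerance:
            return b
        raise RuntimeError("All three implementations disagree: fault detected")

    # Three independent implementations feed the voter; the odd one out loses.
    print(majority_vote(42.0, 42.0, 41.7))  # prints 42.0

The voter never needs to know which version is 'right'; it only needs two of the three to agree.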
The idea that if the CRU team released their code then all would be fine, as the results could be reproduced, is good in that errors in the code may be spotted, but also bad in that errors in the code may NOT be spotted, and the results trusted simply because they could be reproduced.
Jan 27, 2014 at 12:28 PM | steveta_uk
Steve, thanks for the view from the profession. In the climate world there are several problems with that approach.
The first is that we don't program computers in English. Why? Because English is totally inadequate to the task. That's why computer languages were invented—because natural languages are far too vague. As a result, in general it is NOT possible to "fully and accurately describe" a computer program in English as you suggest. If it were … we'd program in English.
The next problem is that your method wastes huge amounts of time finding any mistakes in the code, and may never find them. Ross McKitrick, for example, wrote a paper on climate science, and included the code. Within a week or so, a flaw was found in his code and corrected. Michael Mann wrote a paper and refused to include his code. It was not until a copy of his code was found on his computer that his error could be found … and except for our good fortune in his accidentally leaving his code exposed, he'd still be claiming that his code was perfect.
However, in the interim, thousands of man-hours, including some of my own, were wasted screwing around trying to GUESS what Mann had done …
The next problem is this: suppose I were to have (somehow) come close to replicating Mann's code. I say to Mann "Your code is wrong." He says "No, your code is wrong" … and the conversation ends. Stalemate. He can't see my code, I can't see his, and we are getting different answers with no way to settle the question.
As a result, while (as you rightly point out) your system has been used IN-HOUSE for some high-stakes programs, it works there because when you go to compare the contestants, you have the code for both sides to see where the errors lie …
Science lives and dies by transparency, including transparency of code and data. We need real-time checking of papers. If Mann had published the code for the Hockeystick, it wouldn't have lasted a week. Instead, BECAUSE HE HID THE CODE, he was able to pass it off as a real result for some years, during which time it did incredible damage. And your plan would allow anyone to do the same ...
So no, Steve, allowing people to hide their code in the vain hope that a) someone else will go to all the trouble to chase a will-o'-the-wisp, b) we might figure out what the hidden code does, c) the original author doesn't just say "Nope, you got it wrong, try again", and d) an error might get corrected someday … well, none of that appeals in the slightest.
Or as Mosher says … free the data, free the code.
w.
It comes down to who paid for the code and who owns it. It also comes down to how others can check the work and validate it, and who gets credit for doing the original work in the first place. It seems to me that patent law provides for all of that, but in the context of inventions. Patents are supposed to provide detailed instructions so that one skilled in the art can reproduce the work. They also recognise the people who originated the work and the people who paid for it.
I'm not suggesting for a moment that patent law is the answer here, but the principles could be applied. For example, the original code could be registered somewhere with some sort of copyright. It could be owned by the taxpayer but the originator would also be recorded.
Come to think of it, this works both ways. We could maybe use it to sue these folks if things don't work out as they hoped...