Peer review pickle
It is rapidly becoming a commonplace that peer review doesn't work. An article in Times Higher Education looks at the problems its use is causing at the research councils.
"Independent expert peer review" is a contradiction in terms. One submits a proposal and the councils ask experts to assess it. But these experts are likely to include the proposer's closest competitors, even if they are selected internationally, because science is global - and real pioneers have no peers, of course. How then can the councils ensure that reviews are independent? To make matters worse, these experts can pass judgement anonymously: applicants don't know who put the boot in.
I suggest that the misuse of peer review is at the heart of the research councils' problems. Before about 1970, they largely restricted its use to the assessment of applications for large grants or expensive equipment. Scientific leaders protected the seed corn, ensuring that young scientists could launch radical challenges if they were sufficiently inspired, dedicated and determined. Today, the experts whose ignorance they would challenge might also influence their chances of funding.
There is obviously a suspicion that research funding is directed towards projects that will help the green cause and away from those that might question it, although it has to be said that evidence is thin on the ground. With a process like peer review involved, though, we suspicious members of the public are hardly going to be reassured.
Reader Comments (3)
This is an interesting article. Note that it makes a distinction between peer review for papers (grudgingly accepted to be just about OK in most fields) and peer review for grant funding (where the author argues it is bad). You had a post a few months back about some rather similar suggestions that Susan Greenfield had made (also in the Times Higher Education). It led to a nice exchange of views. I certainly agree that peer review for grants consumes a lot of time and creates a system where people who write good proposals - as opposed to people who are good researchers (of course many fall in both categories) - tend to get a disproportionate amount of funding. It is also viewed by research councils as more reliable than it really is.
In the field of climate science, I would be pretty sure that sceptic vs. consensus grant proposals face the same disparity in peer review as sceptic vs. consensus papers: proposals that set out explicitly to question the consensus would get a very rough ride indeed.
Nearly a decade ago, in response to the Bogdanov affair, I suggested that journals become more like insurance companies. Papers would be "insured" to pay out a pre-defined sum for each error identified. Grad students would make their names by finding as many errors as possible. At some point, the insurance would be removed from terminally dodgy papers. To maintain insurance, papers would need to be updated if their citations had problems.
The sum of money involved could be trivial or non-trivial: Don Knuth was writing cheques for $2.56 (one "hexadecimal dollar") for errors in his books, and those cheques were framed by their recipients. You could imagine a professor with cheques from Nature, Science, GRL, PNAS, PTRS etc. on his office wall.
I should probably patent the idea, but clearly won't.
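The insurance idea above can be modelled minimally. Everything in this sketch - the class name, the payout amount, the "too many errors" cut-off - is my own illustrative assumption, not part of the commenter's proposal:

```python
# Minimal sketch of the "insured paper" idea: a paper pays a pre-defined
# bounty for each confirmed error, and insurance is withdrawn once the
# paper proves "terminally dodgy". All names and thresholds are illustrative.

class InsuredPaper:
    def __init__(self, title, bounty_per_error=2.56, max_errors=10):
        self.title = title
        self.bounty_per_error = bounty_per_error  # pre-defined payout per confirmed error
        self.max_errors = max_errors              # beyond this, cover is withdrawn
        self.errors_found = []
        self.insured = True

    def report_error(self, finder, description):
        """Record a confirmed error and return the bounty paid (0.0 if uninsured)."""
        if not self.insured:
            return 0.0
        self.errors_found.append((finder, description))
        if len(self.errors_found) > self.max_errors:
            self.insured = False  # insurance removed from terminally dodgy papers
            return 0.0
        return self.bounty_per_error

paper = InsuredPaper("Example preprint")
payout = paper.report_error("grad student", "sign error in eq. 3")
print(f"${payout:.2f}")  # $2.56, echoing Knuth's famous reward
```

Keeping a paper insured would, on this model, mean issuing corrected versions before the error count crosses the threshold - which is where the commenter's "papers would need to be updated" rule comes in.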
There is a place for peer review, as long as it is not treated as the be-all and end-all.
I am very impressed by this proposal for a Scientific Integrity Act, under which prizes are awarded to anybody, not just "peers", who finds clear errors in significant papers. It would also improve peer review, because it would become obvious if "peers" were missing errors by the "great and good" while treating outsiders differently.
http://noconsensus.wordpress.com/2010/11/23/how-a-scientific-integrity-act-could-shift-the-global-warming-debate/