I'm grateful to reader Steve for pointing me to this article by Carl Phillips, an epidemiologist, examining the efficacy of peer review. The whole article is worth a look, but here are some choice quotes:
Do the reviewers ever correct errors in the data or data collection? They cannot – they never even see the data or learn what the data collection methods were. Do they correct errors in calculation or choices of statistical analysis? They cannot. They never even know what calculations were done or what statistics were considered. Think about what you read when you see the final published paper. That is all the reviewers and editors ever see too. (Note I have always tried to go the extra mile when submitting papers, to make this system work by posting the data somewhere and offering to show someone the details of any analytic method that is not fully explained. This behavior is rare to the point that I cannot name anyone else, offhand, who does it.)
Does this mean that if you just make up the data, peer review will almost certainly fail to detect the subterfuge? Correct.
Does this mean that if you cherrypick your statistical analyses to exaggerate your results, peer review will not be able to detect it? Correct.
But it serves just fine as justification for uprooting the economy.