It's important to note that these problems occur mostly in psychology and biomedicine. In the latter, they have an excuse; experiments are hard and difficult to control, especially those involving human subjects. The issue there, in my mind, is one of theory and experimental design. When scientists think that testing compounds in plated cells and mice is efficacious research, or that tracking genes in a favored model organism is a good idea, or study one protein not because it's the most important but because it's probe-able, there is a lack of theoretical grounding and of a broad view of the field and what experiments are necessary. It should not surprise us, then, when people design and run poorly controlled experiments that, while flashy, are hard to draw firm conclusions from – no one is looking at the theory and the field as a whole.
Psychology is, of course, a field full of charlatans – and many good scientists. But when your professional organization splits into the American Psychological Association and the Association for Psychological Science (the latter, of course, runs the journal with the fewest overturned papers in the field), you know you have a problem.
I have not seen these problems mentioned in chemistry or physics. There, science seems to be much more self-correcting, despite the difficulty of the experiments. As for the statistics, biostatisticians aren't the answer. Neither are complicated statistical methods, except in rare cases. When the experimenters don't understand the stats, they cannot learn from them and cannot use them to design better studies. One's prior should always be low for studies that use a complicated statistical analysis.
As a last note, the author does get genomic studies horribly wrong. The number of potential hypotheses is on the order of 10^9, not 1000. And the field has been correcting for that since 2008 or so.
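To make the multiple-testing point concrete, here is a minimal sketch of the standard Bonferroni-style correction such fields apply. The function name and the test counts are illustrative assumptions, not from the text; the point is only that the per-test threshold shrinks in proportion to the number of hypotheses tested.

```python
def bonferroni_threshold(alpha: float, n_tests: int) -> float:
    """Per-test p-value threshold that controls the family-wise
    error rate at alpha across n_tests independent hypotheses."""
    return alpha / n_tests

# Illustrative: with a million effective tests, the threshold drops
# to the vicinity of the familiar genome-wide cutoff of 5e-8.
print(bonferroni_threshold(0.05, 1_000_000))
```

With 10^9 candidate hypotheses rather than 1000, the required threshold is six orders of magnitude stricter, which is exactly why a field that ignores the correction produces unreplicable hits.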