Andrew Gelman wrote:
If you find a cool data pattern and present it as such, you probably won’t get much attention. But if you wrap it in the garb of scientific near-certainty, there’s a chance you could hit the media jackpot. The incentives are all wrong.
This neatly summarizes the state of academic publishing, and why "published in a peer-reviewed journal" no longer carries the weight it used to.
I once saw a speaker (at a skeptics' conference) who laid out the clichéd popular-culture tropes about "Science" and showed how inaccurate they are - the scientific community doesn't speak with one voice, knowledge doesn't 'progress', falsificationism doesn't work, and so on. He then showed, at rather greater length, how all of these tropes were endorsed by Richard Dawkins, who effectively celebrated them as proof of just how solid and reliable scientific knowledge is - and I suspect the demonstration could be repeated for other prominent rationalists. "Science Tells Us..." has a firm grip on a lot of people's minds.
Posted by: Phil | 01/25/2018 at 09:16 AM
Phil: And there you are talking about proper science. We also have "fake" science... Most of "data science" is not "science", in the sense that the scientific method is absent; and yet the people doing those studies speak with the same or a higher level of confidence than proper scientists.
Posted by: Kaiser | 01/25/2018 at 03:07 PM
Most psychology departments require that students at some stage take part in a study. The end result is a large number of studies, of which only the ones that achieve statistical significance ever get published. That is a great way of producing spurious relationships. A requirement for study registration would help: it would then be possible to work out the proportion of studies at each institution that get published, and therefore which institutions are simply trying hypotheses until they get a result. Good science should mean that everything is publishable, irrespective of the result.
Posted by: Ken | 01/30/2018 at 03:13 PM
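A minimal sketch of the mechanism Ken describes, with hypothetical numbers: simulate many two-group studies in which the true effect is exactly zero, then "publish" only those that reach p < 0.05. Roughly 5% clear the bar, and every one of them is a spurious relationship. The study counts and sample sizes below are illustrative assumptions, not figures from any real department.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_studies = 1000   # studies run (hypothetical count)
n_per_arm = 30     # subjects per group (hypothetical size)

published = 0
for _ in range(n_studies):
    # True effect is zero: both groups come from the same distribution.
    a = rng.normal(0, 1, n_per_arm)
    b = rng.normal(0, 1, n_per_arm)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:   # the file drawer: only "significant" results are written up
        published += 1

print(f"{published} of {n_studies} null studies reached p < 0.05")
# Expect about 50 -- a literature built entirely of false positives.
```

With registration, the denominator (all 1000 studies) would be visible, so the 5% hit rate would be recognizable as chance rather than discovery.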