Michael Inzlicht on losing faith in science


Michael Inzlicht has posted an article on his blog about how he lost faith in psychological science after reading the now-infamous “false-positive psychology” paper [1]. He sums up his experience:

As a young psychology graduate student, I was a firm believer in our methods, our modes of inference, and in the substantial knowledge base that had been accumulated by the pioneers of the field. While I might have had a few moments of uncertainty and ambivalence about this or that finding or theory, October of that year stands out as the moment faith in our field was no longer tenable for me. It was during that month that I read Simmons, Nelson, and Simonsohn’s powerful False Positive Psychology paper, which demonstrates that common practices of our field—practices taught to me by my Ivy League professors—could lead us to statistically affirm patently false, sometimes absurd, hypotheses.

Interestingly, my own experience was somewhat different: even as undergraduate students, having started in 2008, we had a feeling that something was odd. When discussing papers and findings, we usually noted the small sample sizes and questionable theories. But the results were significant, so the discussion often ended with something along the lines of “The study might not be perfect and independent replications are needed, but that’s what we have right now, so let’s continue with it…”.

Students asked to criticise papers often come up with the same arguments – as we did – without a solid basis for them [2], and such post-hoc criticism might later turn into post-hoc explanations in our own studies. Still, we were not really convinced by many of the findings and theories we discussed, but we could not pinpoint or verbalise what exactly we were missing, most probably because we simply did not know enough about the statistical and methodological frameworks involved.

When I read the “false-positive psychology” paper for my Master’s thesis, I finally found some evidence for my feeling that not everything published can be true. My ongoing PhD work now focuses on the problems of our scientific methods and will, hopefully, contribute something so that future students can place more trust in published research findings than we did.
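The paper’s central demonstration can be reproduced in a few lines. The sketch below is my own illustration, not code from Simmons and colleagues, and the sample sizes are arbitrary choices: a simulated researcher keeps adding participants and re-testing until p < .05. Both groups are drawn from the same distribution, so there is no true effect, yet the long-run “significance” rate ends up well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n_start=20, n_max=60, step=10, alpha=0.05):
    """Two-group 'study' with optional stopping; True means a (false) positive."""
    a = list(rng.normal(size=n_start))    # both groups come from the same
    b = list(rng.normal(size=n_start))    # distribution: there is no real effect
    while True:
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True                   # peek, see p < .05, stop and report
        if len(a) >= n_max:
            return False                  # give up (and file-drawer the study)
        a += list(rng.normal(size=step))  # otherwise collect a few more
        b += list(rng.normal(size=step))  # participants and test again

rate = sum(one_study() for _ in range(5000)) / 5000
print(f"False-positive rate with optional stopping: {rate:.1%}")
```

Even this single undisclosed choice inflates the Type I error rate noticeably above the nominal 5%; the paper shows that combining several such choices can push it far higher.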

  1. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychological Science, 22(11), 1359–1366. http://doi.org/10.1177/0956797611417632
  2. For example, arguing that the reported effect size is probably overestimated because of the “file drawer” problem and publication bias, and that the sample size would not give adequate power for a smaller, more realistic effect.
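To make the power argument in footnote 2 concrete, here is a rough back-of-the-envelope sketch (my own illustration, using the standard normal approximation rather than an exact t-based calculation) of how many participants per group a two-sided, two-sample comparison needs for 80% power at α = .05:

```python
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_power = norm.ppf(power)           # quantile corresponding to the desired power
    return 2 * ((z_alpha + z_power) / d) ** 2

# A "medium" effect (d = 0.5) needs about 63 participants per group,
# but a small effect (d = 0.2) already needs nearly 400 per group.
for d in (0.5, 0.3, 0.2):
    print(f"d = {d}: about {n_per_group(d):.0f} participants per group")
```

A small effect of d = 0.2 already calls for roughly 400 participants per group, well beyond the sample sizes of many of the studies we were discussing.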
