Update (25.04.2018): The paper is now published in Royal Society Open Science and available here.
Most discussed and published findings from psychological research claim universality in some way. Cognitive psychology in particular rests on the assumption that all human brains work similarly, an assumption that is not at all unfounded. But findings from other fields, such as social psychology, also claim generality across time and place. It is only when replications fail to show an effect that the limits of generality are discussed, e.g., in which ways American participants differ from German participants.
Michael Inzlicht has posted an article on his blog about how he lost faith in psychological science after reading the now infamous paper on “false-positive psychology”.
My last blog post was about the difference between Sensitivity, Specificity, and the Positive Predictive Value. While it showed that a positive test result can go along with a low probability of actually having a trait or a disease, the example used the values of Sensitivity and Specificity as pre-known inputs. For established tests and measures, these are indeed often available in the literature together with recommended cut-off values.
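To make that point concrete, here is a minimal sketch of how the Positive Predictive Value follows from Sensitivity, Specificity, and prevalence via Bayes' theorem; the numbers are illustrative, not taken from the original post:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(trait | positive test) via Bayes' theorem."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A seemingly good test (90% sensitivity, 90% specificity) applied to a rare
# trait (1% prevalence) still leaves most positive results as false alarms:
print(round(ppv(0.90, 0.90, 0.01), 3))  # 0.083, i.e. only ~8% of positives are real
```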
In this post, I would like to show how the choice of a cut-off value influences quality criteria such as Sensitivity, Specificity and the like. If you just want a tool to play with, see my Shiny web application here.
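For readers who prefer code over clicking around, the underlying trade-off can be sketched in a few lines; the score distributions and cut-off values below are assumptions chosen for illustration, not the ones used in the Shiny application:

```python
import numpy as np

rng = np.random.default_rng(2016)

# Hypothetical test scores; higher scores indicate the condition.
healthy = rng.normal(loc=0.0, scale=1.0, size=10_000)
diseased = rng.normal(loc=1.5, scale=1.0, size=10_000)

for cutoff in (0.0, 0.75, 1.5):
    sensitivity = np.mean(diseased >= cutoff)  # hit rate among the diseased
    specificity = np.mean(healthy < cutoff)    # correct rejections among the healthy
    print(f"cutoff={cutoff:.2f}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Moving the cut-off upwards buys Specificity at the cost of Sensitivity, and vice versa; there is no single "best" value without weighing the two kinds of errors against each other.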
On the About page I wrote that I blog about things I come across while researching for my PhD. So you may very well ask what this PhD is supposed to be about. For the interested reader, researchers and the uninitiated alike, here is an overview of my current plans and research focus.
A lot of debate (and part of my thesis) revolves around replicability and the proper use of inferential methods. The American Statistical Association has now published a statement on the use and interpretation of p-values (freely available, yay). It lays out six principles for how to handle p-values. None of them is new in a theoretical sense; the statement is more a symbolic act reminding scientists to use and interpret p-values properly.
Over at the Non Significance blog, the author describes the case of a paper that has some strange descriptive statistics:
What surprised me were the tiny standard deviations for some of the Variable 1 and 2, especially in combination with the range given.
In the blog post, the author outlines his approach to making sense of the reported values. It seems likely that the reported Standard Deviations (SD) are actually Standard Errors of the Mean (SEM). I'd like to add to this blog post one argument based on calculus and one based on simple simulations to show that SEMs are indeed much more plausible than SDs.
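The simulation argument can be sketched in a few lines. The numbers below (a 1-7 scale, n = 100, a reported "SD" of 0.15) are hypothetical stand-ins, since the paper's actual values are not reproduced here; the point is that a sample spanning the full range of a scale cannot have such a tiny SD, while its SEM lands right in that ballpark:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scenario: responses on a 1-7 scale, n = 100 per sample,
# and a reported "SD" of 0.15 despite the full 1-7 range being observed.
n = 100
samples = rng.uniform(1, 7, size=(10_000, n))  # 10,000 simulated samples

sds = samples.std(axis=1, ddof=1)
sems = sds / np.sqrt(n)

print(f"smallest simulated SD: {sds.min():.2f}")   # well above 1, nowhere near 0.15
print(f"average simulated SEM: {sems.mean():.2f}") # ~0.17, close to the reported value
```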
The German Society of Psychology (DGPs) today announced that its court of honor has put an end to the investigation of Jens Förster after both sides mutually agreed to the retraction of two papers in the Journal of Experimental Psychology: General:
By this [agreement] the proceedings against Prof. Dr. Jens Förster at the court of honor of the German Society of Psychology will be concluded. Prof. Förster is obliged to urge the publishers of the Journal of Experimental Psychology to pursue a retraction of the following two publications:
Förster, J. (2009). Relations between perceptual and conceptual scope: How global versus local processing fits a focus on similarity versus dissimilarity. Journal of Experimental Psychology: General, 138(1), 88-111. http://dx.doi.org/10.1037/a0014484
Förster, J. (2011). Local and global cross-modal influences between vision and hearing, tasting, smelling, or touching. Journal of Experimental Psychology: General, 140(3), 364-389. http://dx.doi.org/10.1037/a0023175
This settlement is neither a confession of guilt by Prof. Förster nor an imputation of blame by the court of honor.
Past analyses and reports have hinted at possible scientific misconduct, but Förster has always denied those claims. Still, this settlement strikes me as rather strange: either there is sufficient evidence of fabricated data or very questionable practices, or there is none. In the first case, a formal investigation should be started to identify every publication that might be based on those data. In the latter case, a retraction seems unreasonable: what would justify retracting the papers then? Especially when both parties are so eager to underline that neither a confession of guilt nor an imputation of blame is being made.
Seems like I’m not the only one wondering about this course of events:
Er… Weird https://t.co/YN0XrZVb7v
— Andrew D Wilson (@PsychScientists) November 12, 2015
Odd deal. https://t.co/6ycssIO5tT
— J.P. de Ruiter (@JPdeRuiter) November 12, 2015
I’m pretty sure that this is not the end of the discussion and that there will be further investigations.