Replicability in Online Research

At the GOR conference in Cologne two weeks ago, I had the opportunity to give a talk on replicability in Online Research. As a PhD student researching this topic and working as a data scientist in market research, I was very happy to share my thoughts on how the debate in psychological science might transfer to online and market research.

The GOR conference is quite unusual in that the audience is about half academics and half commercial practitioners from market research. I noticed my own filter bubble when only about a third of the audience knew about the “replicability crisis in psychology” (Pashler & Wagenmakers, 2012; Pashler & Harris, 2012).

Continue reading “Replicability in Online Research”

Thoughts on the Universality of Psychological Effects

Most discussed and published findings from psychological research claim universality in some way. For cognitive psychology in particular, the underlying assumption is that all human brains work similarly, an assumption that is not unfounded at all. But findings from other fields of psychology, such as social psychology, also claim generality across time and place. It is only when replications fail to show an effect again that the limits of generality are discussed, e.g. in which ways American participants differ from German participants.  Continue reading “Thoughts on the Universality of Psychological Effects”

Research is messy: Two cases of pre-registrations

Pre-registrations are becoming increasingly important for studies in psychological research. This is a much-needed change, since part of the “replication crisis” has to do with too much flexibility in data analysis and interpretation (p-hacking, HARKing and the like). Pre-registering a study with its planned sample size and planned analyses allows other researchers to understand what the authors’ initial thinking was, how the data fit the initial hypotheses, and where the study results differ from the study protocol. In theory, it looks very simple: you think about a problem, conceive a study, lay out the plan, register it, collect data, analyse, and publish.  Continue reading “Research is messy: Two cases of pre-registrations”
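To make concrete why that flexibility matters, here is a minimal sketch of one common form of p-hacking: peeking at the data, adding more participants, and stopping as soon as p < .05. This is my own toy simulation, not taken from the linked post, and all function names and parameters are made up for the example.

```python
# Toy simulation (illustration only): optional stopping under a true null effect.
# Both groups are drawn from the SAME distribution, so any "significant" result
# is a false positive by construction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def p_hacked_study(n_start=20, n_max=100, step=10, alpha=0.05):
    """Return True if a 'significant' result is found despite a true null effect."""
    a = list(rng.normal(0, 1, n_start))
    b = list(rng.normal(0, 1, n_start))
    while len(a) <= n_max:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True  # stop and report as soon as the test "works"
        a.extend(rng.normal(0, 1, step))
        b.extend(rng.normal(0, 1, step))
    return False

simulations = 2000
false_positives = sum(p_hacked_study() for _ in range(simulations))
print(f"False-positive rate with optional stopping: {false_positives / simulations:.1%}")
```

The false-positive rate ends up substantially above the nominal 5%, which is exactly the kind of silent inflation that a pre-registered sampling and analysis plan rules out.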

ASA statement on p-Values: Improving valid statistical reasoning

A lot of debate (and part of my thesis) revolves around replicability and the proper use of inferential methods. The American Statistical Association has now published a statement on the use and interpretation of p-values (freely available, yay). It includes six principles on how to handle p-values. None of them are new in a theoretical sense; it is more a symbolic act to remind scientists to properly use and interpret p-values.

Continue reading “ASA statement on p-Values: Improving valid statistical reasoning”
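One of those principles, that a p-value does not measure the probability that the studied hypothesis is true, is easy to make concrete with a little arithmetic. The sketch below is my own back-of-the-envelope illustration, not part of the ASA statement, and the 10% prior and 80% power are made-up numbers.

```python
# Illustration only: even with alpha = .05, the share of "significant" findings
# that are false positives depends on statistical power and on how many of the
# tested hypotheses are actually true.
def false_discovery_share(prior_true=0.1, alpha=0.05, power=0.8):
    """Share of p < alpha results that come from true-null hypotheses."""
    true_positives = prior_true * power
    false_positives = (1 - prior_true) * alpha
    return false_positives / (true_positives + false_positives)

print(f"{false_discovery_share():.1%}")  # 36.0% with the assumed numbers
```

Under these assumptions, more than a third of all “significant” findings are false positives, even though every single test kept its nominal 5% error rate. The p-value alone simply does not tell you how likely the hypothesis is to be true.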