Using Topic Modelling to learn from Open Questions in Surveys

Another presentation I gave at the General Online Research (GOR) conference in March was on our first approach to using topic modelling at SKOPOS: How can we automatically extract valuable information from survey responses to open-ended questions? Unsupervised learning is a very interesting approach to this question, but it is very hard to do right.
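The excerpt does not describe our pipeline itself, so purely as an illustrative sketch (not the SKOPOS approach): a topic-modelling pass over open-ended responses in R could look roughly like this, assuming the tm and topicmodels packages and a handful of made-up toy responses.

```r
# Minimal sketch, not the SKOPOS pipeline: fit an LDA topic model to a few
# toy open-ended responses using the 'tm' and 'topicmodels' packages.
library(tm)
library(topicmodels)

responses <- c(
  "The delivery was fast and the packaging was great",
  "Customer service took far too long to respond",
  "Great product quality, delivery could be faster",
  "The app keeps crashing when I try to log in",
  "Support was friendly but the app is unstable",
  "Packaging was damaged but the product itself is fine"
)

# Standard preprocessing: lower-case, drop punctuation and stopwords
corpus <- VCorpus(VectorSource(responses))
dtm <- DocumentTermMatrix(
  corpus,
  control = list(tolower = TRUE, removePunctuation = TRUE, stopwords = TRUE)
)

# Fit a small LDA model; the number of topics k is a choice the analyst must make
lda_fit <- LDA(dtm, k = 2, control = list(seed = 42))
terms(lda_fit, 5)    # top terms per topic
topics(lda_fit)      # most likely topic per response
```

The choice of k, the preprocessing steps and the interpretation of the resulting term lists are exactly the parts that are hard to get right.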


Replicability in Online Research

At the GOR conference in Cologne two weeks ago, I had the opportunity to give a talk on replicability in online research. As a PhD student researching this topic and working as a data scientist in market research, I was very happy to share my thoughts on how the debate in psychological science might transfer to online and market research.

The GOR conference is quite special in that its audience is about half academics and half commercial practitioners from market research. I noticed my own filter bubble when only about a third of the audience had heard of the “replicability crisis in psychology” (Pashler & Wagenmakers, 2012; Pashler & Harris, 2012).


p-hacking destroys everything (not only p-values)

In the context of problems with replicability in psychology and other empirical fields, statistical significance testing and p-values have received a lot of criticism. And without question: much of the criticism has its merits. There certainly are problems with how significance tests are used and how p-values are interpreted.

However, when we talk about “p-hacking”, I feel that the blame falls unfairly on p-values and significance testing alone, without acknowledging the general consequences of such behaviour for the analysis. In short: selective reporting of measures and cases invalidates any statistical method of inference. If I only selectively report variables and studies, it does not matter whether I use p-values or Bayes factors; both results will be useless in practice.
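As a toy illustration of this point (not from the original post), the small R simulation below lets each simulated study measure ten unrelated outcomes under a true null effect and report only the “best” one. Both the minimum p-value and the maximum Bayes factor (here a simple BIC approximation, in the spirit of Wagenmakers, 2007) then overstate the evidence.

```r
# Toy simulation: selective reporting under a true null inflates evidence,
# regardless of whether we look at p-values or Bayes factors.
set.seed(1)

# BIC approximation to the Bayes factor BF10 for a one-sample test of mean = 0
bf10_bic <- function(x) {
  n <- length(x)
  bic0 <- n * log(sum(x^2) / n) + 1 * log(n)               # H0: mean = 0, only sigma free
  bic1 <- n * log(sum((x - mean(x))^2) / n) + 2 * log(n)   # H1: mean and sigma free
  exp((bic0 - bic1) / 2)                                    # BF10 ~ exp(delta-BIC / 2)
}

n_sims <- 2000; n <- 30; k_measures <- 10
min_p  <- numeric(n_sims)
max_bf <- numeric(n_sims)

for (i in seq_len(n_sims)) {
  # k unrelated measures, all with a true effect of exactly zero
  dat <- matrix(rnorm(n * k_measures), nrow = n)
  min_p[i]  <- min(apply(dat, 2, function(x) t.test(x)$p.value))
  max_bf[i] <- max(apply(dat, 2, bf10_bic))
}

mean(min_p < .05)   # "significant" far more often than the nominal 5%
mean(max_bf > 3)    # "substantial evidence" for an effect that does not exist
```

With ten independent measures and selective reporting, roughly 40% of null studies yield at least one p < .05, and the most favourable Bayes factor crosses conventional evidence thresholds far more often than a single honest test would.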

ReplicationBF: An R-Package to calculate Replication Bayes Factors

Some months ago I wrote a manuscript on how to calculate Replication Bayes factors for replication studies involving F-tests, as is usually the case for ANOVA-type studies.

After a first round of peer review, I have revised the manuscript and updated all the R scripts. I have also written a small R package that collects all functions in one place. You can find the package in my GitHub repository. Thanks to devtools and roxygen2, the documentation should contain the most important information on how to use the functions. Reading the original paper and my extension should help clarify the underlying considerations and how to apply the RBF in a given situation.
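The package and manuscript cover the F-test case; purely to illustrate the underlying idea (this is not the package's own code), a Replication Bayes factor for the simpler one-sample t-test can be sketched by using a normal approximation to the original study's posterior of the effect size as the prior for the replication. All names and numbers below are made up for illustration.

```r
# Rough sketch of a Replication Bayes factor (in the spirit of Verhagen &
# Wagenmakers, 2014) for a one-sample t-test. Not the ReplicationBF package's
# own code; the normal approximation of the original posterior is a
# simplifying assumption made here for brevity.
rbf_ttest <- function(t_orig, n_orig, t_rep, n_rep, n_draws = 1e5) {
  # Approximate posterior of the effect size delta after the original study
  # (vague prior, normal approximation): mean = observed d, sd = 1/sqrt(n).
  d_orig <- t_orig / sqrt(n_orig)
  delta  <- rnorm(n_draws, mean = d_orig, sd = 1 / sqrt(n_orig))

  # Likelihood of the replication t-statistic under H_r (delta drawn from the
  # original posterior) versus H_0 (delta = 0), via the non-central t density.
  df_rep <- n_rep - 1
  lik_hr <- mean(dt(t_rep, df = df_rep, ncp = delta * sqrt(n_rep)))
  lik_h0 <- dt(t_rep, df = df_rep)

  lik_hr / lik_h0   # BF_r0: evidence that the replication echoes the original
}

# Example with made-up numbers: a strong original result and a similar replication
rbf_ttest(t_orig = 3.2, n_orig = 40, t_rep = 2.8, n_rep = 60)
```

A BF_r0 above 1 indicates that the replication data are better predicted by the original study's posterior than by the null hypothesis of no effect.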

I will soon update the preprint on arXiv as well and add some more theoretical notes here on the blog about my perspective on the use of Bayes factors. In the meantime, you might also be interested in Ly et al.'s updated approach to the Replication Bayes factor, which is not yet covered in my manuscript or the R package.

Please post bugs and problems with the R package to the issue tracker at GitHub.

Thoughts on the Universality of Psychological Effects

Most discussed and published findings from psychological research claim universality in some way. In cognitive psychology especially, the underlying assumption is that all human brains work similarly, an assumption that is not unfounded at all. But findings from other fields of psychology, such as social psychology, also claim generality across time and place. Only when replications fail to show an effect again are the limits of generality discussed, e.g. in which ways American participants differ from German participants.

Critiquing Psychiatric Diagnosis

I came across this great post at the Mind Hacks blog by Vaughan Bell, which is about how we talk about psychiatric diseases and their diagnosis, and how we criticise their nature.

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnosis.

His final paragraph touches on something that I also discuss in my course on Psychological Assessment and Decisions:

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic.

Diagnoses are man-made concepts that can be helpful for making decisions and for studying the subject. Vaughan makes a great case for how this is true for both mental and somatic conditions.