ReplicationBF: An R Package to Calculate Replication Bayes Factors

Some months ago I wrote a manuscript on how to calculate Replication Bayes factors for replication studies involving F-tests, as is typically the case for ANOVA-type studies.

After a first round of peer review, I have revised the manuscript and updated all the R scripts. I have written a small R package that collects all functions in one place. You can find the package at my GitHub repository. Thanks to devtools and Roxygen2, the documentation should contain the most important information on how to use the functions. Reading the original paper and my extension should help clarify the underlying considerations and how to apply the RBF in a given situation.
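As a quick orientation, here is a minimal sketch of how the package might be used. Note that the repository path, function name, and arguments below are assumptions for illustration, not the package’s documented API; the Roxygen2 help pages contain the actual signatures.

```r
# Install the package from GitHub (the account/repository path is a
# placeholder; substitute the actual repository).
# install.packages("devtools")
devtools::install_github("<github-user>/ReplicationBF")
library(ReplicationBF)

# Hypothetical call: compute a Replication Bayes factor from the
# F-statistics of an original study and its replication.
# (Function and argument names are illustrative only.)
rbf <- RBF_Ftest(F.orig = 5.32, df.orig = c(1, 48),
                 F.rep  = 3.85, df.rep  = c(1, 96))
print(rbf)
```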

I will update the preprint on arXiv soon and add some more theoretical notes here on the blog about my perspective on the use of Bayes factors. In the meantime, you might also be interested in Ly et al.’s updated approach to the Replication Bayes factor, which is not yet covered in either my manuscript or the R package.

Please report bugs and problems with the R package to the issue tracker on GitHub.

Predicting EVE Online item sales (BVM Data Science Cup 2017)

This year, the BVM (the German professional association for market and social researchers) hosted its first Data Science Cup. There were four tasks involving the prediction of sales data for the online sci-fi game “EVE Online”.

It was my first year working in market research and applying statistics and machine learning algorithms in a real-world context. So, naturally, there is much room for improvement in my solution, but I ranked 3rd out of five, so I’m right in the middle. I would do many things differently today, but that’s how it’s supposed to be, right? For example, I would follow through with a multilevel model, since the data has a natural hierarchy that should be incorporated into the analysis; a sketch of that idea follows below.
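To make the multilevel idea concrete, here is a minimal sketch in R using lme4. The data frame and all variable names are hypothetical stand-ins, not the actual competition data; the point is only the nested random-effects structure.

```r
library(lme4)

# Hypothetical data: one row per item and region, with columns
# 'units_sold', 'price', 'item_id', and 'region'. These names are
# made up for illustration and do not refer to the competition data.
# Random intercepts for regions and for items nested within regions
# capture the natural hierarchy in the sales data.
fit <- lmer(log(units_sold) ~ price + (1 | region/item_id),
            data = sales_data)
summary(fit)
```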

I have uploaded my solution to a GitHub repository, so you might learn from my mistakes. In the README I have also included some of my reasoning and some technical details. But beware, the code is messy and badly documented – proceed with caution.


Thoughts on the Universality of Psychological Effects

Most discussed and published findings from psychological research claim universality in some way. In cognitive psychology especially, the underlying assumption is that all human brains work similarly, an assumption that is not unfounded at all. But findings from other fields of psychology, such as social psychology, also claim generality across time and place. Only when replications fail to show an effect again are the limits of generality discussed, e.g. in which ways American participants differ from German participants.

Critiquing Psychiatric Diagnosis

I came across this great post by Vaughan Bell at the Mind Hacks blog, which is about how we talk about psychiatric diseases, how we diagnose them, and how we criticise their nature.

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnosis.

His final paragraph touches on something that I also discuss in my course on Psychological Assessment and Decisions:

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic.

Diagnoses are man-made concepts that can be helpful for making decisions and for studying the subject. Vaughan makes a great case for how this is true for both mental and somatic conditions.

Stop the “Flipping”

I came across this interesting article at The Thesis Whisperer blog. It starts with the hypothesis that being an academic is similar to “running a small, not very profitable business”. This mainly comes down to two problems:

Problem one: There are a lot of opportunities that could turn into nothing, so it’s best to say yes to everything and deal with the possible overwork problem later.

Problem two: Since (outside of a teaching schedule) no one is really telling you what to do with every minute of your time, it can be hard to choose what to do next – especially if all the tasks seem equally important.

My personal experience is similar, but also somewhat different.

New Preprint: A Bayes Factor for Replications of ANOVA Results

Some weeks ago I finished up some thoughts on a Replication Bayes factor for ANOVA contexts, which resulted in a manuscript that is available as a preprint on arXiv. The theoretical foundation was laid out by Verhagen & Wagenmakers (2014), and my manuscript is mainly an extension of their approach. We have another paper coming up in which we will use it to evaluate the success of an attempted replication of an interaction effect.
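For readers unfamiliar with the idea, this is the core of Verhagen & Wagenmakers’ (2014) formulation, sketched here in LaTeX notation: the replication data are evaluated under the skeptic’s null hypothesis and under a proponent’s hypothesis that uses the posterior from the original study as the prior for the effect size. My manuscript carries this scheme over to F-tests.

```latex
% Replication Bayes factor: the skeptic's H_0 (no effect) against the
% proponent's H_r, where the posterior of the effect size delta from
% the original study serves as the prior for the replication.
\[
  B_{r0}
    = \frac{p(Y_{\mathrm{rep}} \mid \mathcal{H}_r)}
           {p(Y_{\mathrm{rep}} \mid \mathcal{H}_0)},
  \qquad
  \mathcal{H}_r:\; \delta \sim p(\delta \mid Y_{\mathrm{orig}}).
\]
```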

New Paper: Impulsivity and Completion Time in Online Questionnaires

My first first-author paper has been published in Personality and Individual Differences. The paper is titled “Reliability and completion speed in online questionnaires under consideration of personality” (doi:10.1016/j.paid.2017.02.015) and was written together with Lina and Christian.

Research is messy: Two cases of pre-registrations

Pre-registrations are becoming increasingly important for studies in psychological research. This is a much-needed change, since part of the “replication crisis” has to do with too much flexibility in data analysis and interpretation (p-hacking, HARKing and the like). Pre-registering a study with a planned sample size and planned analyses allows other researchers to understand what the authors’ initial thinking was, how the data fit the initial hypotheses, and where the study protocol and study results differ. In theory, it looks very simple: you think about a problem, conceive a study, lay out the plan, register it, collect data, analyse, and publish.

How statistics lost their power – and why we should fear what comes next

This is an interesting article from The Guardian on “post-truth” politics, in which statistics and “experts” are frowned upon by some groups. William Davies shows how statistics in political debate have evolved from the 17th century until today, when statistics are no longer regarded as an objective approach to reality but as an arrogant and elitist tool for dismissing individual experiences. What comes next, however, is not the rule of emotions and subjective experience, but privatised data and data analytics that are available only to a few anonymous analysts in private corporations. This allows populist politicians to buy valuable insight without any accountability, which is exactly what Trump and Cambridge Analytica did. The article makes the point that this is troublesome for liberal, representative democracies.