In December I already blogged about the ReplicationBF package, which I made available on GitHub. It allows you to calculate Replication Bayes factors for t- and F-tests. The preprint detailing the formulas for the latter was outdated and the method implemented in the package was not optimal, so I recently updated both.
Update (25.04.2018): The paper is now published at Royal Society Open Science and available here.
Some months ago I wrote a manuscript on how to calculate Replication Bayes factors for replication studies involving F-tests, as is usually the case for ANOVA-type studies.
After a first round of peer review, I have revised the manuscript and updated all the R scripts. I have written a small R package to collect all functions in a single place. You can find the package in my GitHub repository. Thanks to devtools and Roxygen2, the documentation should contain the most important information on how to use the functions. Reading the original paper and my extension should help clarify the underlying considerations and how to apply the RBF in a given situation.
I will soon update the preprint at arXiv as well and add some more theoretical notes here on the blog about my perspective on the use of Bayes factors. In the meantime, you might also be interested in Ly et al.’s updated approach to the Replication Bayes factor, which is not yet covered in my manuscript or the R package.
Please post bugs and problems with the R package to the issue tracker at GitHub.
Some weeks ago I finished writing up some thoughts on a Replication Bayes factor for ANOVA contexts, which resulted in a manuscript that is available as a preprint at arXiv. The theoretical foundation was laid out by Verhagen & Wagenmakers (2014), and my manuscript is mainly an extension of their approach. We have another paper coming up in which we use it to evaluate the success of an attempted replication of an interaction effect. Continue reading “New Preprint: A Bayes Factor for Replications of ANOVA Results”
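To give a feel for the core idea from Verhagen & Wagenmakers (2014): the posterior of the effect size from the original study is used as the prior for the replication data, and the Bayes factor compares this "proponent's" hypothesis against the null. Below is a minimal Monte Carlo sketch for the one-sample t-test case in Python. The function name, the flat prior on the effect size, and the sampling-importance-resampling step are my own illustrative simplifications, not the API or the exact method of the ReplicationBF package.

```python
import numpy as np
from scipy import stats

def rbf_ttest(t_orig, n_orig, t_rep, n_rep, n_samples=50_000, seed=1):
    """Monte Carlo estimate of a Replication Bayes factor (BF_r0) for a
    one-sample t-test: the posterior of the effect size delta from the
    original study serves as the prior for the replication data."""
    rng = np.random.default_rng(seed)
    df_orig, df_rep = n_orig - 1, n_rep - 1
    # Approximate the posterior of delta given the original t-value by
    # sampling-importance-resampling under a flat prior on delta; the
    # likelihood of t_orig given delta is noncentral t with nc = delta*sqrt(n).
    proposal = rng.normal(t_orig / np.sqrt(n_orig), 2.0, n_samples)
    like = stats.nct.pdf(t_orig, df_orig, nc=proposal * np.sqrt(n_orig))
    w = like / stats.norm.pdf(proposal, t_orig / np.sqrt(n_orig), 2.0)
    delta_post = rng.choice(proposal, size=n_samples, p=w / w.sum())
    # BF_r0: marginal likelihood of the replication t-value under H_r
    # (delta drawn from the original posterior) vs. H_0 (delta = 0).
    m_rep = stats.nct.pdf(t_rep, df_rep, nc=delta_post * np.sqrt(n_rep)).mean()
    return m_rep / stats.t.pdf(t_rep, df_rep)
```

A replication with a t-value similar to the original's yields BF_r0 well above 1 (evidence for a successful replication), while a t-value near zero yields BF_r0 below 1. The F-test extension in my manuscript follows the same logic with noncentral F distributions.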
Pre-registrations are becoming increasingly important in psychological research. This is a much-needed change, since part of the “replication crisis” has to do with too much flexibility in data analysis and interpretation (p-hacking, HARKing and the like). Pre-registering a study with its planned sample size and planned analyses allows other researchers to understand what the authors’ initial thinking was, how the data fit the initial hypothesis, and where the study protocol and the study results differ. In theory, it looks very simple: you think about a problem, conceive a study, lay out the plan, register it, collect data, analyse, and publish. Continue reading “Research is messy: Two cases of pre-registrations”