Replicability, Data Quality and Bayesian Methods

On the About page I wrote that I blog about things I come across while researching for my PhD. So you may very well ask what this PhD is supposed to be about. For the interested reader, researchers and the uninitiated alike, here is an overview of my current plans and research focus.

To provide some background: during my Master's studies, Uri Simonsohn published a paper [1] describing two cases of data fabrication/manipulation that were identified using statistical methods alone. Put very simply, the reported method rests on statistical considerations about the variability of the reported data, backed by simulations. Since I found this approach, and the general area of "fraud detection", very interesting, it didn't take long until I decided to write my Master's thesis on this method. My thesis explains how it works, what can be improved, and how it can be applied to more general cases [2].

It so happens that I read a lot about current meta-scientific issues in psychological research, and in social psychology in particular: mainly cases of scientific misconduct and the more prevalent (and maybe more troubling) problem of p-hacking. Many of those things didn't fit into my Master's thesis. Shortly after I handed in my work, the Open Science Collaboration published the results of their Reproducibility Project [3], fueling a debate about replicability in psychological research.

This more general take on the focus of my Master's thesis sets the stage for my PhD research, which revolves around the replicability of psychological science and how methods, whether improved, new, or long established, can help improve the way science is conducted. This is where questions of data quality [4] and Bayesian methods come in.

The ongoing discussions around this topic are very exciting to follow, and I hope to add something valuable to them soon. However, some scholars are already arguing that the discussion should be over, that all is said and done. Personally, I find this quite disturbing: meta-science and the improvement of statistical and empirical methods are an ongoing process that should never come to an end. It is highly beneficial for our field (and for any other field that depends on thorough data analysis, really) to improve the way we generate and explore theories, collect and analyze data, and draw the right conclusions from statistical analyses. Discussing and questioning the status quo is the only way science can live up to its claim of being self-correcting.
And based on my admittedly limited insight into the world of science, I think there is still a lot of room for improvement, not only in the methods but also in the academic process in general (including the publication and peer-review systems).

Not every facet of this huge issue can or will be covered in my dissertation, but I still hope to contribute some perspectives that are new, interesting, and beneficial to the overall discussion. And I am quite sure that a debate within scientific psychology will also affect other disciplines, and even the world outside of science.

  1. Simonsohn, U. (2013). Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone. Psychological Science, 24(10), 1875–1888. doi:10.1177/0956797613480366
  2. Still working on a paper covering my results, actually.
  3. Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi:10.1126/science.aac4716
  4. Whatever that exactly means.
