New Paper: Reliability Estimates for Three Factor Score Estimators

Just a short post on a new paper from our department. If you have calculated factor score estimators after a factor analysis, e.g. using Thurstone’s regression estimator, you might be interested in the reliability of the resulting scores. Our paper explains how to estimate it, compares the reliability of three different factor score estimators, and provides R and SPSS scripts for easy estimation. While some reviewers have argued that this reliability cannot exist, I think we have good arguments that our perspective is in line with the existing psychometric literature.

The paper is available as Open Access in the International Journal of Statistics and Probability, and you can find the article here. I have uploaded the scripts to GitHub, so you can easily download them, open issues, or create forks. The repository is at https://github.com/neurotroph/reliability-factor-score-estimators.
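If you want a quick impression of what such an analysis looks like in R, here is a minimal sketch using the psych package on simulated data. It is not one of our published scripts, and the loadings and sample size are made up; the squared multiple correlations it reports are the determinacy coefficients that the reliability question revolves around.

```r
# Minimal sketch (not one of our published scripts): fit a factor model
# and compute Thurstone's regression factor score estimates.
# Loadings and sample size are made up for illustration.
library(psych)
library(MASS)

set.seed(42)
L <- matrix(c(0.8, 0.7, 0.6, 0.0, 0.0, 0.0,
              0.0, 0.0, 0.0, 0.8, 0.7, 0.6), ncol = 2)  # hypothetical loadings
R <- L %*% t(L)
diag(R) <- 1                                       # model-implied correlation matrix
X <- mvrnorm(n = 500, mu = rep(0, 6), Sigma = R)   # simulated item scores

fit <- fa(X, nfactors = 2, rotate = "varimax", scores = "regression")
head(fit$scores)  # the regression factor score estimates
fit$R2            # squared multiple correlations of scores with factors
```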

Fraud in Medical Research

Back in September of last year, Der Spiegel published an interview with Peter Wilmshurst, a British medical doctor and whistleblower who has made fraudulent practices in medical research public:

In the course of the 66-year-old’s career, he conducted studies for pharmaceutical and medical devices companies, and unlike many of his colleagues, never hesitated to publish negative results. He’s been the subject of multiple cases of legal action and risked bankruptcy and his reputation to expose misconduct in the pharmaceutical industry.

A very interesting article that is worth reading. The fact is that companies with a strong economic interest in the scientific process will have an impact on the quality of the research. It is horrifying to learn, again and again, how far companies will go, and how often they succeed. While the medical industry has always been an obvious target (and perpetrator), the problem runs deeper than the narrative of “Big Pharma”.

Euro Cup predictions through Stan

After calculating the probabilities of Germany dropping out of the World Cup two years ago, I had always wanted to do some Bayesian modeling for the Bundesliga or the Euro Cup, which started yesterday. Unfortunately, I never got around to it. But today, Andrew Gelman posted a model by Leonardo Egidi on his blog:

Leonardo Egidi writes:

Inspired by your world cup model I fitted in Stan a model for the Euro Cup, which starts today, with two Poisson distributions for the goals scored in every match by the two teams (perfect prediction for the first match!).

The available PDF contains the results and a description of the model. Really interesting, and the first match was already predicted perfectly! But the model had better be wrong about the semi-finals… Germany losing to Italy? Again? Can’t be!
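For readers who have never seen such a model, here is a rough sketch of what a two-Poisson match model can look like in Stan, called from R via rstan. This is my own simplified version, not Egidi’s actual model: the attack/defence parameterization and the priors are assumptions for illustration only.

```r
# Rough sketch of a two-Poisson match model (*not* Egidi's model):
# each team's goal count is Poisson with a log-rate that combines a
# baseline with assumed team-specific attack and defence effects.
library(rstan)

stan_code <- "
data {
  int<lower=1> N;                  // number of matches
  int<lower=1> T;                  // number of teams
  int<lower=1, upper=T> team1[N];  // first team in each match
  int<lower=1, upper=T> team2[N];  // second team in each match
  int<lower=0> y1[N];              // goals scored by the first team
  int<lower=0> y2[N];              // goals scored by the second team
}
parameters {
  real mu;        // baseline log scoring rate
  vector[T] att;  // attack strength per team
  vector[T] def;  // defence strength per team
}
model {
  mu  ~ normal(0, 1);  // weakly informative priors (my assumption)
  att ~ normal(0, 1);
  def ~ normal(0, 1);
  y1 ~ poisson_log(mu + att[team1] - def[team2]);
  y2 ~ poisson_log(mu + att[team2] - def[team1]);
}
"
# fit <- stan(model_code = stan_code, data = euro_data)
# euro_data would be a list with N, T, team1, team2, y1, y2 from the fixtures.
```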

Choosing Cut-Offs in Tests

My last blog post was on the difference between Sensitivity, Specificity and the Positive Predictive Value. While that example showed that a positive test result can still mean a low probability of actually having a trait or a disease, it used the values of Sensitivity and Specificity as known inputs. For established tests and measures, these values are indeed often available in the literature, together with recommended cut-off values.
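As a quick refresher, the Positive Predictive Value follows from Sensitivity, Specificity and the prevalence via Bayes’ theorem. A small R illustration with made-up values:

```r
# Positive Predictive Value via Bayes' theorem: the probability of
# actually having the disease, given a positive test result.
ppv <- function(sens, spec, prev) {
  (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
}

# Hypothetical screening test: even with 90% sensitivity and specificity,
# a rare disease (1% prevalence) yields a PPV of only about 8%.
ppv(sens = 0.90, spec = 0.90, prev = 0.01)
```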

In this post, I would like to show how the choice of a cut-off value influences quality criteria such as Sensitivity and Specificity. If you just want a tool to play with, see my Shiny web application here.
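To see the trade-off in miniature before opening the app, consider two overlapping score distributions (simulated here, with made-up group means): raising the cut-off buys Specificity at the price of Sensitivity.

```r
# Simulated test scores for healthy and sick groups (hypothetical means).
set.seed(1)
healthy <- rnorm(10000, mean = 0)
sick    <- rnorm(10000, mean = 1.5)

# Shifting the cut-off trades sensitivity against specificity.
for (cut in c(0.0, 0.75, 1.5)) {
  sens <- mean(sick > cut)      # true positive rate among the sick
  spec <- mean(healthy <= cut)  # true negative rate among the healthy
  cat(sprintf("cut-off %.2f: sensitivity = %.2f, specificity = %.2f\n",
              cut, sens, spec))
}
```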


Visualizing Sensitivity and Specificity of a Test

In my university course on Psychological Assessment, I recently explained the different quality criteria of a test used for dichotomous decisions (yes/no, positive/negative, healthy/sick, …). A popular textbook example is cancer screening, where an untrained reader might be surprised by how low the predictive value of a test can be. I created a small Shiny app to visualize different scenarios of this example. Read on for an explanation or go directly to the app here.
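The core of such a visualization fits into a few lines of Shiny. Here is a minimal sketch, not the app’s actual code; the fixed total of 10,000 screened people is my assumption:

```r
# Minimal Shiny sketch (not the actual app): sliders for sensitivity,
# specificity and prevalence, showing the resulting 2x2 table of test
# outcomes per 10,000 screened people (an assumed total).
library(shiny)

ui <- fluidPage(
  sliderInput("sens", "Sensitivity", min = 0, max = 1,   value = 0.90, step = 0.01),
  sliderInput("spec", "Specificity", min = 0, max = 1,   value = 0.90, step = 0.01),
  sliderInput("prev", "Prevalence",  min = 0, max = 0.5, value = 0.01, step = 0.005),
  tableOutput("confusion")
)

server <- function(input, output) {
  output$confusion <- renderTable({
    n    <- 10000
    sick <- n * input$prev
    data.frame(
      Result  = c("Test positive", "Test negative"),
      Sick    = c(sick * input$sens, sick * (1 - input$sens)),
      Healthy = c((n - sick) * (1 - input$spec), (n - sick) * input$spec)
    )
  })
}

# shinyApp(ui, server)  # uncomment to run locally
```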


The Valley of Shit

Having started my PhD studies only a few months ago, I am still eager and highly motivated to finish what I have just begun. However, the first doubts about my topic and the quality of my work have already come (and, luckily, gone again), so I could relate to this post on the Valley of Shit:

The Valley of Shit is that period of your PhD, however brief, when you lose perspective and therefore confidence and belief in yourself. There are a few signs you are entering into the Valley of Shit. You can start to think your whole project is misconceived or that you do not have the ability to do it justice. Or you might seriously question if what you have done is good enough and start feeling like everything you have discovered is obvious, boring and unimportant. As you walk deeper into the Valley of Shit it becomes more and more difficult to work and you start seriously entertaining thoughts of quitting.

A great post that I enjoyed reading. I will bookmark it and read it again whenever I find myself in such a Valley of Shit.