Research is messy: Two cases of pre-registrations

Pre-registrations are becoming increasingly important for studies in psychological research. This is a much-needed change, since part of the “replication crisis” has to do with too much flexibility in data analysis and interpretation (p-hacking, HARKing and the like). Pre-registering a study with a planned sample size and planned analyses allows other researchers to see what the authors’ initial thinking was, how the data fit the initial hypotheses and where the results deviate from the study protocol. In theory, it looks very simple: you think about a problem, conceive a study, lay out the plan, register it, collect data, analyse and publish. Continue reading “Research is messy: Two cases of pre-registrations”

How statistics lost their power – and why we should fear what comes next

This is an interesting article from The Guardian on “post-truth” politics, in which statistics and “experts” are frowned upon by some groups. William Davies shows how statistics in political debate have evolved from the 17th century until today, when statistics are no longer regarded as an objective approach to reality but as an arrogant and elitist tool to dismiss individual experiences. What comes next, however, is not the rule of emotions and subjective experience, but privatised data and data analytics that are available only to a few anonymous analysts in private corporations. This allows populist politicians to buy valuable insight without any accountability – exactly what Trump and Cambridge Analytica did. The article makes the point that this is troublesome for liberal, representative democracies.


Predictions for Presidential Elections Weren’t That Bad

Nate Silver’s FiveThirtyEight provided excellent coverage of the US presidential election, with some great analytical pieces and very interesting insights into their models. Virtually every poll-based forecast predicted Hillary Clinton to win the election, and FiveThirtyEight was no exception. Consequently, there has been a lot of discussion about pollsters, their methods and how they – again, after “Brexit” – failed to predict the outcome of the election. There are many parallels between the election in the US and the Brexit vote in the UK. At least for the US, however, the predictions weren’t that far off. And FiveThirtyEight in particular gave Trump better chances than anyone else:

For most of the presidential campaign, FiveThirtyEight’s forecast gave Trump much better odds than other polling-based models. Our final forecast, issued early Tuesday evening, had Trump with a 29 percent chance of winning the Electoral College. By comparison, other models tracked by The New York Times put Trump’s odds at: 15 percent, 8 percent, 2 percent and less than 1 percent. And betting markets put Trump’s chances at just 18 percent at midnight on Tuesday, when Dixville Notch, New Hampshire, cast its votes.

Continue reading “Predictions for Presidential Elections Weren’t That Bad”

New Paper: Reliability Estimates for Three Factor Score Estimators

Just a short post on a new paper from our department. If you have calculated factor score estimators after a factor analysis, e.g. using Thurstone’s regression estimator, you might be interested in the reliability of the resulting scores. Our paper explains how to estimate it, compares the reliability of three different factor score estimators and provides R and SPSS scripts for easy estimation of the reliability. While some reviewers have argued that this reliability cannot exist, I think we have good arguments for why our perspective is in line with the existing psychometric literature.

The paper is available as Open Access in the International Journal of Statistics and Probability and you can find the article here. I have uploaded the scripts to GitHub, so you can easily download them, file issues or create forks. The repository is at https://github.com/neurotroph/reliability-factor-score-estimators.
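To give a flavour of what such a computation looks like, here is a minimal R sketch of my own – not one of the published scripts – that fits a one-factor model and computes the factor score determinacy of Thurstone’s regression estimator, i.e. the squared multiple correlation between the factor and the observed variables. The simulated data and all parameter values are assumptions for illustration only:

```r
# Minimal illustration (not the published scripts): fit a one-factor
# model and compute the determinacy of Thurstone's regression
# factor score estimator, rho^2 = lambda' R^{-1} lambda.
set.seed(42)

# Simulate 500 observations on 6 standardized indicators of one factor
n <- 500
lambda_true <- c(0.8, 0.7, 0.7, 0.6, 0.6, 0.5)
f <- rnorm(n)
X <- outer(f, lambda_true) +
     sapply(sqrt(1 - lambda_true^2), function(s) rnorm(n, sd = s))

# One-factor model with regression (Thurstone) factor scores
fit <- factanal(X, factors = 1, scores = "regression")

lambda <- fit$loadings[, 1]   # estimated loadings
R <- cor(X)                   # observed correlation matrix

# Squared multiple correlation between factor and indicators
rho_sq <- as.numeric(t(lambda) %*% solve(R) %*% lambda)
cat("Determinacy (rho^2):", round(rho_sq, 3), "\n")
```

Note that this is the classic determinacy coefficient, not necessarily the exact reliability estimate derived in the paper – but it uses the same ingredients (the loadings and the observed correlation matrix).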

Fraud in Medical Research

Back in September last year, Der Spiegel published an interview with Peter Wilmshurst, a British medical doctor and whistleblower who has made fraudulent practices in medical research public:

In the course of the 66-year-old’s career, he conducted studies for pharmaceutical and medical devices companies, and unlike many of his colleagues, never hesitated to publish negative results. He’s been the subject of multiple cases of legal action and risked bankruptcy and his reputation to expose misconduct in the pharmaceutical industry.

A very interesting article that’s worth reading. The fact is that companies with a strong economic interest in the scientific process will have an impact on the quality of the research. It is horrible to learn, again and again, how far companies will try to go – and how often they succeed. While medical companies have always been an obvious target (and perpetrator), the problem runs deeper than the narrative of “Big Pharma”. Continue reading “Fraud in Medical Research”

The Vanual: Customizing a Van for a Mobile Lifestyle

Sitting in one of my three offices, I find it only natural to dream of beautiful, exotic and serene places. Zach Both does not just dream about these places, he goes there. But he is not your typical travel-all-my-life type of guy; he is a filmmaker and designer who happens to live mobile: he customized a van with a bed and a kitchen to live wherever he likes while still doing his day-to-day business (more or less):

Zach Both is a young filmmaker who in a past life worked as a designer and art director. His passion for telling unique and unusual stories through filmmaking has led him to travel the country in a van that doubles as his mobile production company.

Thankfully, he made a website explaining how he re-worked the van. He also posted a lot of pictures of the process and the result.

I really like his project and would love to build something similar for holiday travels. But after reading the whole “vanual”, I might need to learn how to do this stuff first. Being all thumbs does not really make the process much easier, I guess.

Euro Cup predictions through Stan

After calculating the probabilities of Germany dropping out of the World Cup two years ago, I always wanted to do some Bayesian modeling for the Bundesliga or the Euro Cup, which started yesterday. Unfortunately, I never got around to it. But today Andrew Gelman posted a model by Leonardo Egidi on his blog:

Leonardo Egidi writes:

Inspired by your world cup model I fitted in Stan a model for the Euro Cup which start today, with two Poisson distributions for the goals scored at every match by the two teams (perfect prediction for the first match!).

The available PDF contains the results and the description of the model. Really interesting, and the first match was indeed predicted perfectly! But the model does not seem to fit very well for the semi-finals… Germany losing to Italy? Again? Can’t be!
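To make the basic idea concrete, here is a minimal sketch of this kind of model in R with rstan – my own simplification, not Egidi’s actual model: each team’s goals in a match follow a Poisson distribution with a log-linear rate built from attack and defence strengths. The data layout, parameterization, priors and the toy match results are all my assumptions:

```r
# Minimal sketch, not Egidi's actual model: independent Poisson
# goal counts with log-linear attack/defence team effects.
library(rstan)

model_code <- "
data {
  int<lower=1> n_teams;
  int<lower=1> n_games;
  int<lower=1, upper=n_teams> team1[n_games];   // first team per match
  int<lower=1, upper=n_teams> team2[n_games];   // second team per match
  int<lower=0> goals1[n_games];                 // goals scored by team1
  int<lower=0> goals2[n_games];                 // goals scored by team2
}
parameters {
  real mu;                  // baseline log scoring rate
  vector[n_teams] attack;   // attacking strength per team
  vector[n_teams] defence;  // defensive strength per team
}
model {
  // Weakly informative priors; they also soft-identify the overall
  // level of attack/defence relative to the baseline mu.
  mu      ~ normal(0, 1);
  attack  ~ normal(0, 0.5);
  defence ~ normal(0, 0.5);

  // Each team's goals: Poisson with log-rate from its own attack
  // minus the opponent's defence.
  goals1 ~ poisson_log(mu + attack[team1] - defence[team2]);
  goals2 ~ poisson_log(mu + attack[team2] - defence[team1]);
}
"

# Toy data: three matches between four teams (purely made up)
match_data <- list(
  n_teams = 4, n_games = 3,
  team1 = c(1, 2, 1), team2 = c(2, 3, 4),
  goals1 = c(2, 0, 1), goals2 = c(1, 1, 1)
)

fit <- stan(model_code = model_code, data = match_data,
            iter = 2000, chains = 4)
print(fit, pars = c("mu", "attack", "defence"))
```

With the real group-stage results plugged in, simulating the remaining fixtures from the posterior would then yield win, draw and loss probabilities per match.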