Thoughts on the Universality of Psychological Effects

Most widely discussed and published findings from psychological research claim universality in some way. Cognitive psychology in particular rests on the underlying assumption that all human brains work similarly, an assumption that is not at all unfounded. But findings from other fields of psychology, such as social psychology, also claim generality across time and place. Only when replications fail to show an effect again are the limits of generality discussed, for example in which ways American participants differ from German participants. Continue reading “Thoughts on the Universality of Psychological Effects”

Critiquing Psychiatric Diagnosis

I came across a great post by Vaughan Bell at the Mind Hacks blog about how we talk about psychiatric diseases, how they are diagnosed, and how to criticise their nature.

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnosis.

His final paragraph touches on something that I also discuss in my course on Psychological Assessment and Decisions:

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic.

Diagnoses are man-made concepts that can be helpful for making decisions and for studying the subject. Vaughan makes a great case for how this is true for both mental and somatic conditions.

Stop the “Flipping”

I came across this interesting article at The Thesis Whisperer blog. It starts with the hypothesis that being an academic is similar to “running a small, not very profitable business”. This is mainly down to two problems:

Problem one: There are a lot of opportunities that could turn into nothing, so it’s best to say yes to everything and deal with the possible overwork problem later.

Problem two: Since (outside of a teaching schedule) no one is really telling you what to do with every minute of your time, it can be hard to choose what to do next – especially if all the tasks seem equally important.

My personal experience is similar, but also somewhat different.  Continue reading “Stop the “Flipping””

New Preprint: A Bayes Factor for Replications of ANOVA Results

A few weeks ago I finished writing up some thoughts on a Replication Bayes factor for ANOVA contexts, which resulted in a manuscript that is now available as a preprint on arXiv. The theoretical foundation was laid out by Verhagen & Wagenmakers (2014), and my manuscript is mainly an extension of their approach. We have another paper coming up in which we use it to evaluate the success of an attempted replication of an interaction effect. Continue reading “New Preprint: A Bayes Factor for Replications of ANOVA Results”
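
To give a rough idea of the underlying logic, here is a minimal R sketch for the simpler two-sample t-test case of Verhagen & Wagenmakers (2014), not the ANOVA extension in the manuscript. It approximates the posterior of the effect size from the original study with a normal distribution and uses it as the prior for the replication data; all numbers in the example are hypothetical.

```r
# Minimal sketch of a replication Bayes factor for a two-sample t-test.
# Illustrative only: the posterior of the effect size delta from the original
# study is approximated by a normal distribution and used as prior for the
# replication (cf. Verhagen & Wagenmakers, 2014).

rep_bf_t <- function(t_orig, n1_orig, n2_orig, t_rep, n1_rep, n2_rep) {
  neff_orig <- n1_orig * n2_orig / (n1_orig + n2_orig)
  neff_rep  <- n1_rep  * n2_rep  / (n1_rep  + n2_rep)
  df_rep    <- n1_rep + n2_rep - 2

  # Normal approximation to the posterior of delta after the original study
  delta_hat <- t_orig / sqrt(neff_orig)
  se_delta  <- 1 / sqrt(neff_orig)

  # Marginal likelihood of the replication t-value when delta follows
  # the (approximate) posterior from the original study
  m1 <- integrate(function(d) {
    dt(t_rep, df = df_rep, ncp = d * sqrt(neff_rep)) * dnorm(d, delta_hat, se_delta)
  }, lower = delta_hat - 10 * se_delta, upper = delta_hat + 10 * se_delta)$value

  # Likelihood of the replication t-value under H0: delta = 0
  m0 <- dt(t_rep, df = df_rep)

  m1 / m0   # values > 1 favour a successful replication
}

# Hypothetical example: original t(58) = 2.8, replication t(118) = 2.1
rep_bf_t(t_orig = 2.8, n1_orig = 30, n2_orig = 30,
         t_rep  = 2.1, n1_rep  = 60, n2_rep  = 60)
```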

New Paper: Impulsivity and Completion Time in Online Questionnaires

My first first-author paper has been published in Personality and Individual Differences. The paper is titled “Reliability and completion speed in online questionnaires under consideration of personality” (doi:10.1016/j.paid.2017.02.015) and was written together with Lina and Christian. Continue reading “New Paper: Impulsivity and Completion Time in Online Questionnaires”

Research is messy: Two cases of pre-registrations

Pre-registrations are becoming increasingly important for studies in psychological research. This is a much-needed change, since part of the “replication crisis” has to do with too much flexibility in data analysis and interpretation (p-hacking, HARKing and the like). Pre-registering a study with its planned sample size and planned analyses allows other researchers to understand what the authors’ initial thinking was, how the data fit the initial hypotheses, and where the study results differ from the study protocol. In theory, it looks very simple: you think about a problem, conceive a study, lay out the plan, register it, collect the data, analyse them and publish. Continue reading “Research is messy: Two cases of pre-registrations”

How statistics lost their power – and why we should fear what comes next

This is an interesting article from The Guardian on “post-truth” politics, in which statistics and “experts” are frowned upon by some groups. William Davies shows how statistics in political debate have evolved from the 17th century until today, when statistics are no longer regarded as an objective approach to reality but as an arrogant and elitist tool for dismissing individual experiences. What comes next, however, is not the rule of emotions and subjective experience, but privatised data and data analytics that are available only to a few anonymous analysts in private corporations. This allows populist politicians to buy valuable insight without any accountability, which is exactly what Trump and Cambridge Analytica did. The article makes the point that this is troublesome for liberal, representative democracies.

Predictions for Presidential Elections Weren’t That Bad

Nate Silver’s FiveThirtyEight has had excellent coverage of the US presidential election, with some great analytical pieces and very interesting insights into their models. Essentially every poll-based forecast predicted that Hillary Clinton would win the election, and FiveThirtyEight was no exception. Consequently, there was a lot of discussion about pollsters, their methods and how they – again, after “Brexit” – failed to predict the outcome of the election. There are many parallels between the election in the US and the Brexit vote in the UK. At least for the US, however, the predictions weren’t that far off. And FiveThirtyEight in particular gave Trump better chances than anyone else:

For most of the presidential campaign, FiveThirtyEight’s forecast gave Trump much better odds than other polling-based models. Our final forecast, issued early Tuesday evening, had Trump with a 29 percent chance of winning the Electoral College. By comparison, other models tracked by The New York Times put Trump’s odds at: 15 percent, 8 percent, 2 percent and less than 1 percent. And betting markets put Trump’s chances at just 18 percent at midnight on Tuesday, when Dixville Notch, New Hampshire, cast its votes.
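
As a quick back-of-the-envelope check on the “weren’t that far off” claim, one can score the probabilities quoted above against the actual outcome with the Brier score (the squared error between forecast probability and outcome). A small R sketch follows; the four New York Times-tracked models are given generic placeholder names here, and “less than 1 percent” is coded as 0.01.

```r
# Brier scores (lower is better) for the forecast probabilities quoted above,
# scored against the actual outcome (Trump won, coded as 1).
forecasts <- c(FiveThirtyEight = 0.29, NYT_model_1 = 0.15, NYT_model_2 = 0.08,
               NYT_model_3 = 0.02, NYT_model_4 = 0.01, betting_markets = 0.18)
outcome <- 1
brier <- (forecasts - outcome)^2
round(sort(brier), 3)
# FiveThirtyEight comes out least wrong among the quoted forecasts.
```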

Continue reading “Predictions for Presidential Elections Weren’t That Bad”

New Paper: Reliability Estimates for Three Factor Score Estimators

Just a short post on a new paper that is available from our department. If you have calculated factor score estimators after a factor analysis, e.g. using Thurstone’s regression estimator, you might be interested in the reliability of the resulting scores. Our paper explains how to estimate it, compares the reliability of three different factor score estimators and provides R and SPSS scripts for easy estimation of the reliability. While some reviewers have argued that this reliability cannot exist, I think we have some good arguments for how our perspective is in line with the existing literature on psychometrics.

The paper is available as Open Access in the International Journal of Statistics and Probability and you can find the article here. I have uploaded the scripts to GitHub, so you can easily download them, add issues or create forks. The repository is at https://github.com/neurotroph/reliability-factor-score-estimators.
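
To illustrate the kind of quantity involved, here is a hand-rolled R sketch (not the published scripts from the repository). It computes Thurstone’s regression score weights for a hypothetical one-factor model together with the classical determinacy index, i.e. the squared correlation between the score estimator and the factor; the reliability estimates proposed in the paper may differ from this classical index.

```r
# Minimal sketch (not the published scripts): Thurstone's regression factor
# score estimator and the classical determinacy index for a one-factor model.
# The loadings below are hypothetical; the factor is assumed standardized.
lambda <- c(0.7, 0.6, 0.8, 0.5)          # standardized loadings
R <- tcrossprod(lambda)                   # model-implied correlation matrix
diag(R) <- 1

# Regression (Thurstone) weights: w = R^{-1} %*% lambda
w <- solve(R, lambda)
# For standardized data X (rows = persons), the score estimates would be:
# scores <- scale(X) %*% w

# Squared correlation between the regression score estimator and the factor
# (often reported as the squared determinacy of the factor scores)
rho_sq <- drop(crossprod(lambda, solve(R, lambda)))
rho_sq
```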