Why “Prestige” is Better Than Your h-Index

Psychological science is one of the fields undergoing drastic changes in how we think about research, conduct studies, and evaluate previous findings. Most notably, many studies from well-known researchers are under increased scrutiny. Recently, journalists and researchers have re-examined the Stanford Prison Experiment, which is closely associated with the name of Philip Zimbardo. Many consider Zimbardo a “prestigious” psychologist. In the discussion about how we should do science in times of the “replicability crisis”, the issue of “prestige” comes up in different forms. Among the recurring questions: Should we trust what prestigious researchers say? Should we put more faith in articles published in prestigious journals?
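A quick aside on the metric in the title: the h-index is the largest number h such that an author has h papers with at least h citations each (Hirsch, 2005). A minimal sketch of the computation, with made-up citation counts purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited first
    # Count how many papers at rank r (1-based) still have >= r citations;
    # because the list is sorted descending, this prefix length is exactly h.
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

# Hypothetical record: five papers cited 10, 8, 5, 4, and 3 times.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```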

It seems as if some consider prestige to be something bad: prestige is earned not through hard work but by capitalizing on past successes and putting oneself in the spotlight. In my experience, these critics often fault journals or conferences for inviting only prestigious authors or speakers. Others (knowingly or unknowingly) put a lot of trust in prestige, for example by accepting theories from prestigious sources more readily. Continue reading “Why “Prestige” is Better Than Your h-Index”

New Preprint: Making “Null Effects” Informative

In February and March this year, I stayed at Eindhoven University of Technology in the amazing group of Daniël Lakens, Anne Scheel, and Peder Isager, who are actively researching questions of replicability in psychological science. Over those two months I learned a lot, exchanged some great ideas with the three of them, and was able to work with Daniël on a small overview article. Continue reading “New Preprint: Making “Null Effects” Informative”

Replicability in Online Research

At the GOR conference in Cologne two weeks ago, I had the opportunity to give a talk on replicability in online research. As a PhD student researching this topic and working as a data scientist in market research, I was very happy to share my thoughts on how the debate in psychological science might transfer to online and market research.

The GOR conference is quite unique in that its audience is about half academics and half commercial practitioners from market research. I noticed my filter bubble when it turned out that only about a third of the audience knew about the “replicability crisis in psychology” (Pashler & Wagenmakers, 2012; Pashler & Harris, 2012).

Continue reading “Replicability in Online Research”

p-hacking destroys everything (not only p-values)

In the context of problems with replicability in psychology and other empirical fields, statistical significance testing and p-values have received a lot of criticism. And without question: much of the criticism has its merits. There certainly are problems with how significance tests are used and p-values are interpreted.

However, when we talk about “p-hacking”, I feel that the blame falls unfairly on p-values and significance testing alone, without acknowledging the general consequences of such behaviour for the analysis. In short: selective reporting of measures and cases invalidates any statistical method of inference. If I selectively report variables and studies, it doesn’t matter whether I use p-values or Bayes factors; both results will be useless in practice, as the simulation sketch below illustrates.
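Here is a minimal simulation (mine, not from the post): two groups with no true difference on any of ten outcome variables, where only the “best” result gets reported. The BIC approximation to the Bayes factor (Wagenmakers, 2007) is an assumption of this sketch, chosen because it needs nothing beyond NumPy and SciPy; the inflation does not depend on that choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, k, runs = 30, 10, 2000  # per-group sample size, outcomes measured, simulation runs

def bic_bf10(x, y):
    """Approximate Bayes factor for a two-group mean difference via BIC
    (Wagenmakers, 2007) -- an assumption of this sketch, not the post's method."""
    pooled = np.concatenate([x, y])
    n_tot = pooled.size
    rss0 = np.sum((pooled - pooled.mean()) ** 2)                      # null: one common mean
    rss1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # alternative: two means
    bic0 = n_tot * np.log(rss0 / n_tot) + 1 * np.log(n_tot)
    bic1 = n_tot * np.log(rss1 / n_tot) + 2 * np.log(n_tot)
    return np.exp((bic0 - bic1) / 2)

p_hits = bf_hits = 0
for _ in range(runs):
    a = rng.normal(size=(k, n))  # k outcome variables, no true effect anywhere
    b = rng.normal(size=(k, n))
    pvals = [stats.ttest_ind(a[i], b[i]).pvalue for i in range(k)]
    bfs = [bic_bf10(a[i], b[i]) for i in range(k)]
    p_hits += min(pvals) < 0.05  # report only the smallest p-value
    bf_hits += max(bfs) > 3      # report only the largest Bayes factor

print(f"cherry-picked p < .05:   {p_hits / runs:.0%}")  # far above the nominal 5%
print(f"cherry-picked BF10 > 3:  {bf_hits / runs:.0%}")  # inflated just the same
```

With ten independent outcomes, the chance of at least one p < .05 under the null is roughly 1 − 0.95^10 ≈ 40% rather than 5%, and cherry-picking the largest Bayes factor misleads in the same way: the selection step, not the statistic, does the damage. Continue reading “p-hacking destroys everything (not only p-values)”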

Thoughts on the Universality of Psychological Effects

Most discussed and published findings from psychological research claim universality in some way. In cognitive psychology especially, the underlying assumption is that all human brains work similarly, an assumption that is not unfounded at all. But findings from other fields of psychology, such as social psychology, also claim generality across time and place. It is only when replications fail to show an effect again that the limits of generality are discussed, e.g. in which ways American participants differ from German participants. Continue reading “Thoughts on the Universality of Psychological Effects”

Critiquing Psychiatric Diagnosis

I came across this great post by Vaughan Bell at the Mind Hacks blog about how we talk about psychiatric diseases, how we diagnose them, and how we criticise their nature.

Debating the validity of diagnoses is a good thing. In fact, it’s essential we do it. Lots of DSM diagnoses, as I’ve argued before, poorly predict outcome, and sometimes barely hang together conceptually. But there is no general criticism that applies to all psychiatric diagnosis.

His final paragraph touches on something that I also discuss in my course on Psychological Assessment and Decisions:

Finally, I think we’d be better off if we treated diagnoses more like tools, and less like ideologies. They may be more or less helpful in different situations, and at different times, and for different people, and we should strive to ensure a range of options are available to people who need them, both diagnostic and non-diagnostic.

Diagnoses are man-made concepts that can be helpful for making decisions and for studying a subject. Vaughan makes a great case for how this holds for both mental and somatic conditions.

How statistics lost their power – and why we should fear what comes next

This is an interesting article from The Guardian on “post-truth” politics, in which statistics and “experts” are frowned upon by some groups. William Davies shows how the role of statistics in political debate has evolved from the 17th century until today, when statistics are no longer regarded as an objective approach to reality but as an arrogant, elitist tool for dismissing individual experiences. What comes next, however, is not the rule of emotions and subjective experience, but privatised data and data analytics that are available only to a few anonymous analysts in private corporations. This allows populist politicians to buy valuable insight without any accountability, which is exactly what Trump and Cambridge Analytica did. The article makes the point that this is troubling for liberal, representative democracies.

Scientific Hoaxes and Bad Academic Writing

A scientific hoax that happened six years ago is currently circulating:

Six years ago I submitted a paper for a panel, “On the Absence of Absences”, that was to be part of an academic conference later that year—in August 2010. Then, and now, I had no idea what the phrase “absence of absences” meant. The description provided by the panel organizers, printed below, did not help. The summary, or abstract of the proposed paper—was pure gibberish, as you can see below. I tried, as best I could within the limits of my own vocabulary, to write something that had many big words but which made no sense whatsoever. I not only wanted to see if I could fool the panel organizers and get my paper accepted, I also wanted to pull the curtain on the absurd pretensions of some segments of academic life. To my astonishment, the two panel organizers—both American sociologists—accepted my proposal and invited me to join them at the annual international conference of the Society for Social Studies of Science to be held that year in Tokyo.

Continue reading “Scientific Hoaxes and Bad Academic Writing”