Weapons of math destruction

The rules of science are changing, to the exhilaration of some and apprehension of others.

The problem is by now well defined: when we run an analysis on an unstable effect using a small sample, we can get a variety of different statistical outcomes. If only some of those outcomes are acceptable by journal standards, the literature will produce a skewed image of existing effects.
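A toy simulation makes this concrete (my own illustrative sketch, not from the post; the effect size d = 0.2, group size n = 20, and the rough t cutoff of 2.02 are all assumptions for illustration). The same weak effect, studied over and over with small samples, produces wildly varying test statistics, and keeping only the 'significant' ones inflates the apparent effect:

```python
import math
import random
import statistics

random.seed(1)

def one_study(n=20, d=0.2):
    """Simulate one two-group study of a weak effect and return a Welch-style t."""
    a = [random.gauss(0, 1) for _ in range(n)]  # control group
    b = [random.gauss(d, 1) for _ in range(n)]  # treatment group, true effect d
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(b) - statistics.mean(a)) / math.sqrt(va / n + vb / n)

# Run the same underpowered study 1000 times.
ts = [one_study() for _ in range(1000)]

# Crude two-sided cutoff for ~38 degrees of freedom is about |t| = 2.02;
# keep only the runs a significance-filtering journal would 'publish'.
published = [t for t in ts if t > 2.02]

print(f"'significant' in {len(published) / len(ts):.0%} of runs")
print(f"mean t across all runs:       {statistics.mean(ts):.2f}")
print(f"mean t among published runs:  {statistics.mean(published):.2f}")
```

Only a small fraction of runs clears the cutoff, and the published subset systematically overstates the effect, which is exactly the skewed image of the literature described above.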

Some published findings will meet with skepticism if there is solid theoretical grounding to render them implausible. However, if the theoretical framework hinges on the idea that the job of empirical psychology is to look for counterintuitive effects – effects that are validated exclusively through clever empirical means – the production of unlikely results can become a runaway process. And it has.

What can be verified is whether the published statistics are plausible. This has led to the unusual situation in which methods-lovers have begun picking apart the literature on human nature.

In the space of barely a few years, we have seen the development of novel methods to test the trustworthiness of the published record, from single studies to effects described over large bodies of literature.

Then a site appeared for those willing to admit to their ‘file drawer’ studies, studies that have a low probability of being published for lack of statistical significance.

Large-scale replication attempts ensued, and often failed spectacularly. Blogs and discussion groups cropped up, exposing problem after problem with the ways we use experimental designs and statistics to draw our theoretical conclusions. Reviewers began insisting that researchers share their data publicly in order to publish. Researchers are uploading their manuscripts to bioRxiv and PsyArXiv, inviting open peer review before they submit to journals.

Lately, psychology departments at a number of universities have begun asking for evidence of a commitment to open science as a criterion in candidate selection.

If you are reading this and humming do you hear the people sing, then you are as caught up in the excitement and the revolutionary feeling of bottom-up change as I am.

To others, however, the tune is more reminiscent of tomorrow belongs to me. Or rather, in these times of new scientific rules, a new metaphor has been coined: methodological terrorism.

The insider’s perspective

Susan Fiske, former president of the Association for Psychological Science and a giant in her field, paints a starkly different picture of the new scientific process in her op-ed column titled Mob Rule or Wisdom of Crowds?

In her view, humans are being trampled in the pursuit of greater goals that are relevant only to a minority. But the effect of this minority is anything but negligible: because of these self-appointed data police, graduate students are leaving academia, established researchers are not going up for tenure, senior researchers are retiring early. People are being personally attacked and publicly shamed for scientifically irrelevant reasons. Their tenure committees and public speaking sponsors get problematic anonymous letters. Their advisors and even family members are being implicated. Nobody is safe from the smear tactics of these destructo-critics, online vigilantes, these bullies.

How did it come to be that such a mob of terrorists within the scientific community could ignore ethical rules of conduct to so great an extent? It is, Fiske concludes, because they circumvent constructive peer-review. (…) Peer-reviewed critiques, she offers in contrast, serve science without destroying lives.

She says it’s not an attack on open science, but, not to put too fine a point on it: it’s an attack on open science.

Ironically, the column was leaked and is likely being amended pre-publication due to the reaction it caused.

The point of no return

Science is moving forward so quickly that I don’t even think it’s necessary to point out ways in which the article is wrong. I will instead list some elements of the scientific revolution that trouble me, even though I consider myself a proud (if quite junior) member of the data police.

  1. Belief in published results. I have so little of it left.
  2. Belief in the role of empirical research. Getting to otherwise hidden truths was our thing, the critical point of departure from philosophy.
  3. Belief in the scientific method. I was taught there is such a thing. Now it seems every subfield would have been better off developing its own methods, fitted to its own questions and data.
  4. Belief in statistics. I was taught this is the way to impartial truths. Now I’m a p-value skeptic.
  5. Belief in the academic system. It incentivizes competition in prolifically creating polished narratives out of messy data.

I am sure that for some people, belief in the security of their job, their hard-earned status, their perceived competence (by self and others), their life’s work, and as a consequence their general well-being, really is at stake. I am also sure that many of us often fail to be as diplomatic and kind as we could be when we get caught up in criticism. And it’s true that it’s easier to get away with being snarky on social media, especially when a tweetstorm acquires a life of its own.

But I believe that this battle has already been won. The tide has turned, and there is no going back to a closed, pre-internet-era model of science. We can afford to be compassionate, even when a person in a position of authority labels us as unnamed terrorists. Such is the power of the masses.

On a completely personal note, my lab nickname is Polizei. I’m sure it’s meant to be endearing.

9 Comments

  1. Rickard Carlsson

I wish this had been Fiske’s editorial instead. Great stuff.

Now you have two reviews recommending acceptance, so I suggest you consider this peer-reviewed 😉

  2. David E Meyer

This is a wonderful piece: clever, apt title; persuasive analysis. I would just add one further distinction as a caveat. The main fault of the academic system, with its increasingly corporate mentality, has been to incentivize grant getting and exponentially increasing publication rates and citation counts, embodied in quant metrics like H-indices and impact factors. Meanwhile, the leading publication ‘houses’ like Nature, Science, and PNAS — which aren’t just academic; they include industry and government too — have chosen to compete for recognition by focusing on sexy stuff. That’s not academia’s fault per se. Also there’s the pop culture of trade books, TED talks, and NY Times op-eds that have incentivized sexy stuff. Again that is not academia’s fault per se. But some professional academics are at fault for buying into the latter trend head over heels instead of focusing on doing good serious vetted science. They yielded to temptation. These distinctions are relevant for seeing the way forward. Your last paragraph is correct: there will be no going back, just as Bob Dylan’s classic song forecast 50 years ago…

3. I’ve participated in peer review 10 times now, with 8 publications. At best it is a fantastic resource that helped me get published and improved my articles. At worst it is a random, oppressive, irrational system for suppressing heterodoxy. That some authors now circumvent it is hardly surprising. My own love affair with academia is over. I still rely on the work of academics, but I have lost any urge to join them or become one. My independence from the mainstream hegemony is one of the most important things I offer as a scholar/writer these days.

  4. “Belief in the role of empirical research. Getting to otherwise hidden truths was our thing, the critical point of departure from philosophy.”

Well, that’s still true. Empirical research is still the best way, although it is difficult and there are many ways to go wrong.

    With empirical evidence it is difficult to uncover the truth but without it, it’s impossible.

• Oh, I agree… And to clarify, I am excited to be re-thinking this list of mine for the most part… With the exception of belief in the published record, that one really gets to me, since I just don’t have time to read so many papers in enough depth to make up my own mind about all of them.

But in re-thinking this particular one, I’ve wondered what we could conclude if, say, *all* our results pointed to the fact that small, hidden, unnoticed events influence something as important as our sense of morality (like that study where people ‘randomly’ found a coin and then gave money to charity or something). Would that mean that morality is largely driven by these small, unconscious factors? I would say no, it would merely mean that we are studying small, unconscious factors – we are not pitting them against the (rather obvious) fact that people think about their decisions. We are gearing our experiments towards “our thing”.

In that sense, it’s more a question of how empirical work defines theory and vice versa than a questioning of the role of empirical work itself.
