Replication crisis summary

Neuroscientific research suffers from small subject samples, a lack of theory, and a lack of consensus on data analysis protocols. This produces p-values with large measurement error, and publication is selectively limited to those that (sometimes by chance) fall below 0.05. As a result, any single study result is unreliable, which further hinders our ability to interpret our results or to plan further experiments sensibly. Here is a list of things I will try to do in the future, a few suggestions to alleviate some of these problems.

  1. Build replications into current study designs. Start by looking for an effect that is already published, then add further manipulations on top of it. That way nobody needs to dedicate their time exclusively to replications, while published results still get routinely checked.
  2. If you are unable to come up with strong, precise, fully worked-out hypotheses about all the possible and impossible things the brain might do in response to your experimental manipulations, then make this a two-step process. Start out with some broad hypotheses about large-scale neural activity, then stop the data analysis and reassess. Think about mutually exclusive interpretations of your data, then try to come up with additional, targeted hypotheses about the nature of the neural activity underlying your effects.
  3. Publish the results of all your studies, if you think your experiments were methodologically and technically sound. These won’t all get into journals with high impact factors, but still write them up with care. It is incredibly important for others to know what does and doesn’t work. Always give exact p-values for nonsignificant results. The difference between 0.05 and 0.12 might be negligible; the difference between 0.12 and 0.9 is not.
  4. Always plot the raw data in your manuscripts (a measure of central tendency and a measure of dispersion). Do not just show t-maps, model fits, correlations with indices, or difference plots. Show what neural activity looks like for each experimental condition; a minimal plotting sketch follows this list. This makes it easier to compare results across studies, and gives others a clearer insight into the data pattern underlying your effect. If at all possible, upload your data and scripts to a public repository.
  5. Appreciate people’s research for more than their p-values. Hiring committees will mostly care about p-values, but we can behave differently. Give people’s failed experiments the attention they deserve. Failed experiments make us think, and thinking is a good thing.
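To make point 4 concrete, here is a minimal sketch of one way to show raw per-condition data alongside a measure of central tendency and dispersion, using Python with numpy and matplotlib. The condition names, sample size, and simulated effect sizes are invented purely for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated placeholder data: 20 subjects per condition (purely illustrative).
rng = np.random.default_rng(0)
conditions = ["baseline", "manipulation A", "manipulation B"]
data = [rng.normal(loc=mu, scale=1.0, size=20) for mu in (0.0, 0.4, 0.8)]

fig, ax = plt.subplots()
for i, values in enumerate(data):
    # Raw observations, jittered horizontally so the points do not overlap.
    jitter = rng.uniform(-0.08, 0.08, size=values.size)
    ax.scatter(np.full(values.size, i) + jitter, values, alpha=0.5, s=15)
    # Central tendency (mean) and dispersion (standard deviation) per condition.
    ax.errorbar(i, values.mean(), yerr=values.std(ddof=1),
                fmt="o", color="black", capsize=4, zorder=3)

ax.set_xticks(range(len(conditions)))
ax.set_xticklabels(conditions)
ax.set_ylabel("response amplitude (a.u.)")
plt.show()
```

Any equivalent display (box plots, violin plots, per-subject lines) serves the same purpose; the point is that readers can see the distribution behind the summary statistic.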

Oh, and increase your sample sizes. Everybody knows this, let’s just do it already.
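As a rough illustration of what “bigger” means, here is a sketch of a standard a priori power calculation using statsmodels. The assumed effect size (Cohen’s d = 0.5), alpha of .05, and power target of .80 are conventional placeholders, not recommendations for any particular design.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Assumed inputs (placeholders): a medium effect, two-sided alpha of .05, 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                           power=0.80, alternative="two-sided")
print(f"Subjects needed per group: {math.ceil(n_per_group)}")  # roughly 64 per group
```

Under those assumptions the answer is roughly 64 subjects per group, far more than many typical neuroscience samples, which is part of why the noisy p-values described above are so common.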
