Kia Nobre’s Brain and Cognition Lab recently discussed best research practices. Ana’s list of suggestions was accepted as a starting point, and the discussion continued from there. Overall, the atmosphere was one of easy acceptance of new research practices that will hopefully enhance the reliability of our findings and conclusions.
We noted that the general lack of theoretical coherence in the field makes it difficult to interpret the meaning of new findings, and that the lack of methodological coherence makes it difficult to understand precisely how reported results were derived from the data. This makes the reliability of our own findings all the more important.
We agreed that our research is often exploratory, and that our approach to data analyses and research reports should reflect that. We feel it’s important to allow ourselves to be at least somewhat data-driven.
In order to tone down our inevitable confirmation bias and to ensure the overall reliability of our analysis pipelines, the following suggestions were made:
- Establish a buddy system, where one person re-does the other person’s data analyses (after preprocessing). The original analyst would give only general guidelines about what to test.
- Establish a frenemy system for whenever we have competing hypotheses about the outcome of an experiment. The other person would look at our data with their opposing confirmation bias, pushing us to explore alternative interpretations.
- Share our data and scripts publicly. We are still exploring server solutions for this, with Zenodo currently appearing to be the most promising place.
There was a small amount of uncertainty about data sharing, due to the fear that someone might scoop us while we still intend to conduct further analyses. We concluded that the chance of someone wanting to look at the exact same research question is very small, and that the chance of opening up new collaborations through data sharing is much higher. Data sharing also means that we will need to re-evaluate our internal guidelines about keeping clean scripts and clear analysis pipelines.
The next question was how to treat null results. These are not easy to publish, and there is a lack of guidelines on how to write them up. Null results are also easier to report when an effect is conclusively absent, but often this is not the case. This leads to a worry that the literature could become cluttered with findings that are difficult to make sense of, and that we would be adding to this clutter.
There were no final conclusions on this, but some suggestions were made:
- Embrace the idea of a ‘shittiest result’ section in our papers, where we would list one thing that didn’t align with our hypotheses.
- Place these types of results in the supplementary materials of our papers.
- Blog about our failures, without going into the publication process.
Here we noted that an important source of frustration is the statistical need to reach a true/false conclusion (about the presence of an experimental effect) after looking at a p-value. We feel we’re applying an unnatural dichotomy to a broad spectrum of results, when what we really care about is the degree of relationship between our variables of interest. This led to a discussion about the advantages of adopting Bayesian statistics. Bayesian estimation could allow us to make the kind of inference we are actually interested in: how strong is the evidence for our hypothesis? With this type of estimate, it is easier to make conclusive statements about weak or nonexistent experimental effects.
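As a toy illustration of this kind of graded inference (the numbers and the model are ours for the example, not anything from the meeting), here is a minimal sketch of Bayesian estimation of a single detection rate, using a conjugate Beta-Binomial model:

```python
# Minimal sketch with hypothetical numbers: Bayesian estimation of a single
# detection rate via a conjugate Beta-Binomial model, instead of a binary
# significant / not-significant call.
from scipy import stats

k, n = 40, 60            # hypothetical: hits out of trials in one condition
prior_a, prior_b = 1, 1  # flat Beta(1, 1) prior on the detection rate

# Conjugacy: the posterior is Beta(prior_a + hits, prior_b + misses).
posterior = stats.beta(prior_a + k, prior_b + n - k)

# Graded statements about the effect, rather than a dichotomy:
p_above_chance = 1 - posterior.cdf(0.5)  # P(rate > 0.5 | data)
lo, hi = posterior.ppf([0.025, 0.975])   # 95% credible interval

print(f"P(rate > chance | data) = {p_above_chance:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

The output is a full posterior distribution rather than an accept/reject decision, so we can read off graded statements such as the probability that performance exceeds chance.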
Bayesian statistics involve model comparisons. We identified two potential ethical pitfalls here (a minimal sketch of such a comparison follows the list below).
- Model comparisons can be tricky for exploratory analyses, and we should refrain from testing the entire space of possible models only to cherry-pick the best combination for our research report.
- We should refrain from pitting our favourite model against an intentionally bad one. The final conclusion in our research reports is always relative to the models being compared, and is therefore only as meaningful as the competing models themselves.
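To make this concrete, here is a minimal sketch (again with hypothetical numbers, continuing the estimation example above) comparing a chance model against an alternative with a flat prior, using analytic marginal likelihoods:

```python
# Minimal sketch with hypothetical numbers: Bayes-factor comparison between
# M0 (detection rate fixed at chance, 0.5) and M1 (rate given a flat
# Beta(1, 1) prior), via analytic marginal likelihoods.
import numpy as np
from scipy import stats
from scipy.special import betaln, gammaln

k, n = 40, 60  # same hypothetical hit counts as in the estimation sketch

# log of the binomial coefficient C(n, k), for numerical stability
log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

# Marginal likelihood under M0: rate fixed at chance.
log_m0 = stats.binom.logpmf(k, n, 0.5)

# Marginal likelihood under M1: the rate is integrated out against the
# Beta(a, b) prior, giving the Beta-Binomial marginal.
a, b = 1.0, 1.0
log_m1 = log_choose + betaln(k + a, n - k + b) - betaln(a, b)

bf_10 = np.exp(log_m1 - log_m0)
print(f"BF10 = {bf_10:.2f}  (evidence for M1 over M0; ~1 means no preference)")
```

Note that changing the prior under M1 changes BF10; that sensitivity is exactly why sweeping through the model space and reporting only the most flattering comparison would be misleading.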
We intend to come back to these topics in the future, to assess how well we managed to implement our decisions and to discuss other topics of concern. We wish you all a happy holiday season, and may all your experiments replicate.