Sample Size Planning – Even when required, not usually done well

In many research fields, sample sizes are too small for the research questions being asked. In neuroscience, the field I work in, this problem is now very well documented (see Button et al., 2013; Szucs & Ioannidis, 2016).

In response to this (and other problems), many journals have issued new publication standards, often including a requirement to justify the sample size selected. For example, Nature Neuroscience announced a new pre-publication checklist to great fanfare back in 2013 (Raising standards, 2013):

For studies using biological samples, we will require authors to state whether statistical methods were used (or not) to predetermine sample size, and what criteria they used to identify and deal with outliers while running the experiment.
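
For readers unfamiliar with what "statistical methods to predetermine sample size" looks like in practice, the standard approach is a prospective power analysis: commit to an expected effect size, a significance level, and a target power, then solve for n. Here is a minimal sketch in Python using statsmodels; the effect size of d = 0.5 is a placeholder assumption, not a value taken from any of the papers discussed here.

```python
# A minimal sketch of a prospective power analysis for a two-sample t-test,
# using statsmodels. The effect size (Cohen's d = 0.5) is an assumed
# placeholder; in practice it would come from pilot data or the smallest
# effect size of interest.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed Cohen's d
    alpha=0.05,               # two-sided significance level
    power=0.80,               # target probability of detecting the effect
    alternative="two-sided",
)
print(f"Required sample size per group: {n_per_group:.1f}")  # about 63.8
```

The exact numbers matter less than the fact that the calculation forces you to commit to an expected effect size up front, which is precisely the step most of the justifications discussed below skip.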

So…how’s that going?  Apparently, not well.  A new paper by Goodhill (2017) examines the sample-size justifications from one recent issue of Nature Neuroscience (August 2016).

Of the 15 papers in that issue, only one claimed a sample size chosen to achieve reasonable power for an expected effect (though it failed to say what that effect size was). Most of the rest simply said that the sample size was set based on previous studies…something that might make sense were it not for the very well-documented fact that previous studies were almost certainly too small (a quick calculation below shows why this is circular). Some papers even seemed confused about what it means to justify a sample size, with one explaining it this way:

Normality of the data distributions was assumed, but not formally tested.

Sad!
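
To see why "we used the same n as previous studies" is circular, consider what happens when those previous studies were themselves underpowered. A quick sketch, where the per-group n of 10 and the effect size of d = 0.5 are illustrative assumptions rather than figures from any particular paper:

```python
# What power do you actually get by copying a small n from a prior study?
# The per-group n of 10 and Cohen's d of 0.5 are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

achieved_power = TTestIndPower().solve_power(
    effect_size=0.5,          # assumed true effect size (Cohen's d)
    nobs1=10,                 # per-group n copied from a small prior study
    alpha=0.05,
    alternative="two-sided",
)
print(f"Achieved power: {achieved_power:.2f}")  # about 0.18
```

Under these assumptions you would detect a real medium-sized effect less than one time in five, and then pass that same inadequate n along to the next study that cites yours.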

In case that’s not depressing enough, Goodhill found that Nature Neuroscience is unwilling to publish his comment.  Apparently, it is not worthy of comment that the guidelines for enhancing rigor have, at least in terms of sample size, proven to be toothless and meaningless.

If you’re not first in line to consider and publish cogent criticisms of your methods, you’re not doing science. So maybe the journal should consider renaming itself Nature Neuro.

Better times await us all, I am sure.

References

Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., & Munafò, M. R. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/nrn3475

Szucs, D., & Ioannidis, J. P. A. (2016). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. bioRxiv. https://doi.org/10.1101/071530

Goodhill, G. J. (2017). Is neuroscience facing up to statistical power? arXiv. http://arxiv.org/abs/1701.01219

Raising standards. (2013). Nature Neuroscience, 16(5), 517. https://doi.org/10.1038/nn.3391
