Sample sizes are too dang small…

Here’s another incredible paper by John Ioannidis and associates.  This one uses text mining to examine the statistical results of thousands of cognitive neuroscience and psychology papers.  It finds that the sample sizes in use remain far too small: the typical paper has power of only 0.12, 0.44, and 0.73 to detect small, medium, and large effect sizes, respectively.   In addition, statistical errors appear to be common: 14% of papers report a result as statistically significant even though the underlying statistics do not reach significance!  Based on this analysis, the authors conclude that more than half of “significant” findings are likely false positives.  Depressing…but also a call to action to abandon NHST, embrace the New Statistics and Open Science, and do better…we really can do better!
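To get a feel for why those power numbers are so grim, here’s a minimal sketch of the standard large-sample power approximation for a two-sided, alpha = .05 two-sample t-test. The per-group sample size of 20 is a hypothetical illustration, not a figure from the paper, and this normal-approximation formula is a textbook sketch, not the authors’ exact estimation method.

```python
# Normal-approximation power for a two-sided, alpha = .05 two-sample t-test.
# This is the standard large-sample sketch, not the paper's estimation method.
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(d, n_per_group, z_crit=1.959964):
    """Approximate power to detect Cohen's d with n subjects per group."""
    return norm_cdf(d * sqrt(n_per_group / 2.0) - z_crit)

# Hypothetical n = 20 per group, using Cohen's conventional effect sizes.
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): power \u2248 {power(d, 20):.2f}")
```

With a hypothetical 20 subjects per group, this sketch gives power of roughly 0.09, 0.35, and 0.72 for small, medium, and large effects — the same dispiriting pattern the paper reports, and a reminder that only large effects are reliably detectable at the sample sizes in common use.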



I'm a teacher, researcher, and gadfly of neuroscience. My research interests are in the neural basis of learning and memory, the history of neuroscience, computational neuroscience, bibliometrics, and the philosophy of science. I teach courses in neuroscience, statistics, research methods, learning and memory, and happiness. In my spare time I'm usually tinkering with computers, writing programs, or playing ice hockey.
