Sample sizes are too dang small…
Here’s another incredible paper by John Ioannidis and associates. This one uses text mining to examine the statistical results of thousands of cognitive neuroscience and psychology papers. It finds that the sample sizes in use remain far too small: the typical paper has power of only 0.12, 0.44, and 0.73 to detect small, medium, and large effect sizes, respectively. There also appear to be plenty of outright statistical errors: 14% of papers report a result as statistically significant even though the underlying statistics do not reach significance! Based on this analysis, the authors conclude that more than half of “significant” findings are likely to be false positives. Depressing…but also a call to action to abandon NHST, embrace the New Statistics and Open Science, and do better…we really can do better!
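To get a feel for why typical sample sizes produce such dismal power, here’s a rough back-of-the-envelope sketch (my own illustration, not from the paper): a normal-approximation power calculation for a two-sided two-sample t-test at α = .05, using Cohen’s conventional effect sizes (d = 0.2, 0.5, 0.8) and an assumed group size of n = 20, which is in the ballpark of many older studies.

```python
from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(d, n, z_crit=1.959964):
    """Approximate power of a two-sided, two-sample t-test at alpha = .05,
    via the normal approximation: power ~= Phi(d * sqrt(n/2) - z_crit),
    where d is Cohen's d and n is the number of participants PER GROUP."""
    return phi(d * sqrt(n / 2) - z_crit)

# Illustrative group size of n = 20 per group (an assumption, not a
# number taken from the paper):
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} effect (d = {d}): power ≈ {power_two_sample(d, 20):.2f}")
# small  (d = 0.2): power ≈ 0.09
# medium (d = 0.5): power ≈ 0.35
# large  (d = 0.8): power ≈ 0.72
```

Even this crude sketch lands close to the paper’s estimates: with 20 participants per group you have less than a one-in-ten chance of detecting a small effect. Cranking n up to 200 per group pushes power for a medium effect above 0.99, which is exactly the “bigger samples” point the authors are making.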