To what extent do new statistical guidelines change statistical practice?
In 2012 the Psychonomic Society (PS) adopted a set of forward-thinking guidelines for the use of statistics in its journals. The guidelines stressed a priori sample-size planning, the reporting of effect sizes, and the use of confidence intervals for both raw scores and standardized effect-size measures. Nice!
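For readers who haven't run the first of those practices themselves, here is a minimal sketch of what a priori sample-size planning can look like in Python, using statsmodels' power module. The anticipated effect size, alpha, and power below are placeholder values for illustration, not numbers taken from the guidelines or the paper:

```python
# A minimal sketch of a priori sample-size planning for a two-group design.
# The planning values (d = 0.5, alpha = .05, power = .80) are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # anticipated Cohen's d
                                   alpha=0.05,       # two-sided alpha
                                   power=0.80)       # desired power
print(f"Planned sample size per group: {n_per_group:.0f}")  # roughly 64
```

The point of doing this before data collection, of course, is that the sample size is justified by the design rather than by when the effect happened to reach significance.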
To what extent did these guidelines alter statistical practice? Morris & Fritz (2017) report a natural experiment undertaken to help answer this question. They analyzed papers published before and after the guidelines were released (2013 and 2015; the 2013 data actually postdate the release, but all of the papers analyzed were accepted for publication before it). The papers came either from journals published by PS or from a journal of similar caliber and content that was not subject to the new guidelines. In total, about 1000 articles were assessed (wow!).
What were the findings? Slow, small, but detectable change, with tremendous room for further improvement:
- Use of a priori sample size planning increased from 5% to 11% in PS journals but did not increase in the control journal.
- Effect size reporting increased from 61% to 70%, though this increase was mirrored in the control journal.
- Use of raw score confidence intervals increased from 11% to 18%, with no change in the control journal.
- Confidence intervals for standardized effect sizes were reported in only 2 papers from 2015; an improvement from 0 papers in 2013, but hardly consistent with the PS guidelines, which call for including them (see the sketch after this list).
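On that last point, reporting a standardized effect size with an interval isn't hard. Here is a minimal sketch of one way to do it, a percentile bootstrap for Cohen's d, run on simulated placeholder data (nothing below comes from the study itself, and other interval methods exist, e.g. ones based on the noncentral t distribution):

```python
# A minimal sketch: Cohen's d with a 95% percentile-bootstrap confidence
# interval. The two groups are simulated stand-ins, not data from the paper.
import numpy as np

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.5, scale=1.0, size=40)  # hypothetical treatment scores
group_b = rng.normal(loc=0.0, scale=1.0, size=40)  # hypothetical control scores

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Resample each group with replacement and recompute d many times.
boot = [cohens_d(rng.choice(group_a, len(group_a), replace=True),
                 rng.choice(group_b, len(group_b), replace=True))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d = {cohens_d(group_a, group_b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```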
The authors conclude that more must be done, but don’t offer specifics. Suggestions?