The ASA and p Values: Here We Go Again
The above announcement is from the February ASA (American Statistical Association) newsletter. (See p. 7 for the announcement and the list of 15 members of the Task Force.)
Why won’t statistical significance simply wither and die, taking p<.05 and maybe even p values with it? The ASA needs a Task Force on Statistical Inference and Open Science, not one that has its eye firmly on the rear-view mirror, gazing back at .05 and significance and other such relics.
I shouldn’t be so negative: I definitely am glad that ‘reproducibility’ is a focus, even if ‘Open Science’ may suggest a wider view.
To welcome the new Task Force, Andrew Gelman posted an invitation to discussion. His post, which is here, is sensible. Stuart Hurlbert makes some useful early contributions, but most of the 152 (as of now) comments make me tired and depressed as I skim through.
You may recall the background, including:
- The 2016 ASA statement on p values, critical of p values and especially of using .05 or any other sharp cutoff. (Critical, but not sufficiently critical, imho.)
- The 2017 ASA Symposium on Statistical Inference, at which Bob gave a great talk, of course mainly about estimation (the new statistics).
- The 2019 special issue of The American Statistician with 43 articles about what should follow traditional p value practice. I recommend our (Bob’s) article on the new statistics, also the article Abandon Statistical Significance by McShane, Gelman, and others, and Coup de Grâce for a Tough Old Bull: “Statistically Significant” Expires by Hurlbert and others. It’s disappointing that so many of the 43 articles focus on finding some way for p values, in some form, to live on. It’s time to turn off such life support for p values!
I suggest that much of what’s needed can be summarised by this basic logic:
- We need Open Science (replication, preregistration, open data and materials, fully detailed publication whatever the results, …).
- Open Science requires replication.
- Replication requires meta-analysis.
- Meta-analysis requires estimation. (Frequentist, Bayesian, or…)
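To make the last step concrete, here is a minimal sketch (in Python, with made-up illustrative numbers, not data from any real study) of fixed-effect, inverse-variance meta-analysis. It shows why meta-analysis rests on estimation: each study contributes an effect estimate and its standard error, and the pooled result is itself an estimate with a confidence interval, with no p value or significance cutoff needed anywhere.

```python
import math

def fixed_effect_meta(effects, ses):
    """Pool study effect estimates by inverse-variance weighting.

    effects: per-study effect-size estimates
    ses: their standard errors
    Returns the pooled estimate, its SE, and a 95% CI.
    """
    # Each study is weighted by the inverse of its variance,
    # so more precise studies count for more.
    weights = [1.0 / se**2 for se in ses]
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical replications (illustrative numbers only).
effects = [0.40, 0.25, 0.32]
ses = [0.10, 0.15, 0.12]
pooled, se, (lo, hi) = fixed_effect_meta(effects, ses)
print(f"Pooled effect: {pooled:.3f} [95% CI {lo:.3f}, {hi:.3f}]")
```

The point of the sketch is that every input and every output is an estimate with a precision attached; a Bayesian pooling would serve the same argument equally well.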
Note that there is NO necessary role for NHST or p values in any of the above. Historically, p values have caused much damage, especially by prompting selective reporting, which biases meta-analysis, perhaps drastically. Simply let them wither and fade into the background.
Beyond all that, we’d love to be moving to more quantitative modelling, which makes estimation even more necessary.
Watch for a call for submissions to the Task Force.
Bring on the revolution…