‘The New Statistics’ (2013) Wins Sage 10-Year Impact Award

The New Statistics: Why and How explained the advantages of moving on from NHST to the new statistics (estimation and meta-analysis), and the need for better practices to improve research integrity. I’m delighted that this award from Sage suggests the article is helping researchers improve what they do. Next: Can ITNS2 help the next generation do even better?

The article appeared online in late 2013, so it was included when Sage examined the citation numbers of all articles published in 2013 across the 400+ journals it was then publishing. It was one of the three most cited, so it has been given a Sage 10-Year Impact Award. Sage’s announcement is here, and Sage has just published a blog post about the award.


The article was commissioned by Eric Eich, then editor-in-chief of Psychological Science, to appear immediately after his famous editorial Business Not As Usual. That editorial opened the journal’s first issue of 2014 and announced sweeping changes to the journal’s submission requirements, which, for many psychologists, marked the arrival of Open Science.

Interview

Sage’s blog post includes an email interview with me. Here’s a brief summary:

What was it in your own background that led to your article?

When I was a teenager my father gave me a simple explanation of significance testing. I said something like “That’s weird, sort of backwards. And why .05?” He replied “I agree, but that’s the way we do it.”

Over decades of teaching I became ever more dissatisfied with NHST, and focused ever more on confidence intervals (CIs).

Was there an article that had a particularly strong influence on you?

Frank Schmidt (1996) wrote: “It is now possible to use meta-analysis to show that reliance on significance testing retards the development of cumulative knowledge.” A revelation!

What did Schmidt’s article lead to?

About 2003 I started using an Excel forest plot to give a simple explanation of meta-analysis in my intro course. I was delighted: Students told me it just made sense. Of course, for meta-analysis you need a CI from each study, while p values are irrelevant, even misleading.
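For readers who’d like to see the arithmetic behind that simple explanation, here is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis in Python. It is not the Excel file or esci, just an illustration: the study values are made up, and it assumes each study reports a mean and a 95% CI based on a normal approximation, so the CI half-width is 1.96 standard errors.

```python
# Minimal fixed-effect meta-analysis sketch (illustrative values only).
import math

# (mean, lower 95% CI limit, upper 95% CI limit) for each hypothetical study
studies = [
    (0.42, 0.10, 0.74),
    (0.15, -0.20, 0.50),
    (0.30, 0.05, 0.55),
]

weights = []
means = []
for mean, lo, hi in studies:
    se = (hi - lo) / (2 * 1.96)   # back out the standard error from the CI width
    weights.append(1 / se ** 2)   # inverse-variance weight
    means.append(mean)

combined = sum(w * m for w, m in zip(weights, means)) / sum(weights)
combined_se = math.sqrt(1 / sum(weights))

print(f"Combined estimate: {combined:.3f}")
print(f"95% CI: [{combined - 1.96 * combined_se:.3f}, "
      f"{combined + 1.96 * combined_se:.3f}]")
```

The combined CI comes out narrower than any single study’s CI, which is the visual message of the summary diamond at the bottom of a forest plot.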

Figure: Dances of means, confidence intervals, and p values.

In 2009 I uploaded a video of the dance of the p values. I became passionate about advocating the new statistics (estimation and meta-analysis). I wrote Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis (UTNS, 2012).
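To give a flavour of what the video shows, here is a minimal simulation in the same spirit (not the original demonstration software): the same two-group experiment is replicated many times, sampling from a population with a fixed true effect, and the p value from each replication is printed. The effect size and sample size below are arbitrary illustrative choices.

```python
# A tiny "dance of the p values" sketch: the population effect never changes,
# yet p varies wildly from one replication to the next.
import random
from scipy import stats   # independent-groups t test

random.seed(1)
true_effect = 0.5   # population mean difference, in SD units (illustrative)
n = 32              # participants per group (illustrative)
replications = 20

for i in range(1, replications + 1):
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treatment = [random.gauss(true_effect, 1.0) for _ in range(n)]
    result = stats.ttest_ind(treatment, control)
    print(f"Replication {i:2d}: p = {result.pvalue:.3f}")
```

Even with a moderate true effect, p typically bounces between strikingly small and clearly nonsignificant values across replications, which is the point of the dance.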

What was happening in psychology at about that time?

Ioannidis (2005) explained how reliance on NHST was a major cause of the replication crisis. Largely in response to that crisis, Open Science arrived—perhaps the most important advance in how science is done for a very long time.

Eric Eich’s famous editorial Business Not As Usual in the January 2014 issue of Psychological Science marked the arrival of Open Science in psychology. Months earlier Eric had invited me to write a tutorial article to support the changes he wanted. This was The New Statistics: Why and How and was published immediately following his editorial.

What has been the reception of the article?

Mainly very positive. Some have felt I went too far in advising that in most cases it’s better not to use NHST at all. Some Bayesians have been unhappy with the focus on confidence intervals.

Revisiting that article, what would you have done differently?

I used the term ‘research integrity’, but ‘Open Science’ was coming into use and I soon realized that was way better. Reading the article today, for ‘research integrity’ read ‘Open Science’.

Otherwise, I think the article has held up well, including all 25 guidelines in Table 1.

What has happened since?

Psychological Science has continued to lead in the adoption of Open Science practices.

Meta-science, also known as meta-research, has emerged and now thrives as a highly multi-disciplinary field. It applies the scientific method to improve that method—wonderful!

What have you been doing since?

I teamed up with Robert Calin-Jageman to write the first intro statistics textbook based on the new statistics and with Open Science throughout. The second edition has just come out: Introduction to The New Statistics: Estimation, Open Science, and Beyond, 2nd edition (ITNS2, 2024). It comes with much-improved software, as we explain in Calin-Jageman & Cumming (2024), which is open access.

We believe this book can sweep the world—we’ll see! To read the Preface and Chapter 1, go to www.thenewstatistics.com. The second paragraph there has a link to the book’s Amazon page; click ‘Read sample’.

References

Calin-Jageman, R., & Cumming, G. (2024). From significance testing to estimation and Open Science: How esci can help. International Journal of Psychology. https://doi.org/10.1002/ijop.13132

Cumming, G. (2012). Understanding The New Statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge.

Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966

Cumming, G., & Calin-Jageman, R. (2024). Introduction to The New Statistics: Estimation, Open Science, and Beyond, 2nd edition. New York: Routledge.

Eich, E. (2014). Business not as usual. Psychological Science, 25(1), 3–6. https://doi.org/10.1177/0956797613512465

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2, e124. https://doi.org/10.1371/journal.pmed.0020124

Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1(2), 115–129. https://doi.org/10.1037/1082-989X.1.2.115
