A Year Ago eNeuro Encouraged Estimation: It’s Working
Bob tweeted about a new eNeuro editorial: a one-year progress report on eNeuro's encouragement of authors to use estimation. Fully 52 of 100 articles included estimation information!
The editorial is short, and a great read. Christophe Bernard, the editor-in-chief, includes links to his 2019 editorial that announced the initiative, our article explaining estimation that was published in eNeuro at the same time, and a recent blog post in which eNeuro authors reflect on their experiences of figuring out how to include estimation in their analyses.
Christophe also includes a brief intro to estimation, with links to the dance of the p values, Gordon’s esci on the web simulations and tools, Bob’s esci in jamovi, and other estimation resources. A terrific beginner’s guide.
Other editors take note
That’s all great to hear, and I salute Christophe for his initiative and persistence. Take note, other journal editors (Journal of Neuroscience?), it can be done! Judging by author comments in that eNeuro blog post, researchers who have taken the plunge can see the benefits and are generally keen to continue with estimation.
Where it all started
This may have all started back in November 2018 at the giant SfN conference in San Diego, where Bob moderated a PD Workshop he had organized: Improving Your Science: Better Inference, Reproducible Analyses, and the New Publication Landscape. Christophe was one of the speakers and may have become an enthusiast that day. Shortly after, he started working towards the 2019 announcement and editorial. Bob’s workshop was the acorn…
Editors have in the past tried to improve statistical practices
The highly encouraging eNeuro story prompts me to think back to past efforts by enterprising journal editors to move statistical practices beyond p values. Here’s a brief word about a few.
Ken Rothman in medicine: American Journal of Public Health, and Epidemiology
More than 40 years ago, Ken Rothman published articles advocating confidence intervals and explaining how to calculate them in various situations. He was influential in persuading the International Committee of Medical Journal Editors (ICMJE) to include the following in the 1988 revision of its Uniform Requirements for Manuscripts Submitted to Biomedical Journals:
“When possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals). Avoid sole reliance on statistical hypothesis testing, such as the use of p values, which fail to convey important quantitative information. . . .”
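To make concrete what that recommendation asks for, here's a minimal sketch of reporting an effect size with its 95% confidence interval rather than a bare p value. The data are made up for illustration, and Welch's version of the two-sample interval (no equal-variance assumption) is one reasonable choice among several:

```python
# Estimation in practice: report the mean difference (the effect size)
# together with its 95% confidence interval. Illustrative data only.
import numpy as np
from scipy import stats

group_a = np.array([4.1, 5.3, 6.2, 4.8, 5.9, 5.1])
group_b = np.array([3.2, 4.0, 4.6, 3.8, 4.4, 3.5])

diff = group_a.mean() - group_b.mean()  # effect size: difference of means

# Welch's standard error and degrees of freedom (unequal variances allowed)
va, vb = group_a.var(ddof=1) / len(group_a), group_b.var(ddof=1) / len(group_b)
se = np.sqrt(va + vb)
df = (va + vb) ** 2 / (va**2 / (len(group_a) - 1) + vb**2 / (len(group_b) - 1))

t_crit = stats.t.ppf(0.975, df)  # two-sided 95% critical value
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"Mean difference = {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The interval, not a significance verdict, carries the quantitative message: it shows both the size of the effect and the precision with which it has been estimated.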
Rothman, an assistant editor during 1984-87 at the American Journal of Public Health, insisted that authors of manuscripts he assessed remove all references to statistical significance, NHST, and p values. We, in Fidler et al. (2004), examined articles published in various years from 1982 to 2000 and found that CI reporting increased from 10% to 54% during the Rothman years, then remained at a similar level through to 2000—as was becoming standard in other medical journals, following the ICMJE policy of 1988.
In 1990 Rothman founded the journal Epidemiology and declared that it would not publish NHST or p values. For the 10 years of his editorship it basically didn't, while CI reporting rose to more than 90%.
BUT, even when CIs were reported—often merely as numbers in tables—they were rarely referred to, or used to inform interpretation. ☹ We suspected that researchers needed way more explanations, examples, and guidance to appreciate what estimation can offer.
Geoff Loftus at Memory & Cognition
Geoffrey Loftus, Editor of Memory & Cognition from 1994 to 1997, strongly encouraged presentation of figures with error bars and avoidance of NHST. He even calculated error bars for numerous authors who claimed it was too difficult for them. We, in Finch et al. (2004), reported that use of figures with bars increased to 47% under Loftus’s editorship and then declined. However, bars were rarely used for interpretation, and NHST remained almost universal. It seemed that even strong editorial encouragement, and assistance with analyses, was not sufficient to bring about substantial and lasting improvement in psychologists’ statistical practices.
Eric Eich at Psychological Science
Eric Eich, as editor-in-chief of Psychological Science, initiated perhaps the most important and successful journal transformation, at least in psychology. At the start of 2014 he published his famous editorial Business Not as Usual, which introduced Open Science badges, encouragement to use the new statistics, and other important advances. He published Cumming (2014), the tutorial article on the new statistics that he'd invited me to write.
When Steve Lindsay took over as editor-in-chief he introduced further advances, including Preregistered Direct Replications. His Swan Song Editorial recounts the Open Science advances from 2014 to 2019, with evidence of sweeping changes in authors’ practices and what the journal has published. (I posted about that editorial here.)
Now editor-in-chief Patricia Bauer is continuing Open Science policies. For example, the Submission Guidelines still state that “Psychological Science recommends the use of the “new statistics”—effect sizes, confidence intervals, and meta-analysis—to avoid problems associated with null-hypothesis significance testing…”. They include links to our site, my tutorial article, and my videos introducing the new statistics that were recorded at the 2014 APS Convention.
I’d like to think that Rothman, Loftus, and other editors who, decades ago, tried so hard to encourage better practices did help bring about the advent of Open Science, which shook things up sufficiently to give later enterprising editors a better chance of getting their wonderful initiatives to stick.
Christophe has continued and broadened the crusade to great effect.
I’m delighted to see the evidence that so many of these positive changes look like they will persist and spread further. Bring that on!
And Bob and I hope, of course, that ITNS2 can help students understand why Open Science and the new statistics are the natural, better, and more easily understood way to do things.
Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7-29. https://doi.org/10.1177/0956797613504966
Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. Psychological Science, 15, 119-126.
Finch, S., Cumming, G., Williams, J., Palmer, L., Griffith, E., Alders, C., Anderson, J., & Goodman, O. (2004). Reform of statistical inference in psychology: The case of Memory & Cognition. Behavior Research Methods, Instruments & Computers, 36, 312-324.