Open Science Practices: Patchy Progress in Two Psychology Journals

(Revised 18 March 2022 to add comments about replication and about the effectiveness of journal-specific guidelines and badges.)

What Progress With Open Science? In Brief:

Judging from Psychological Science (PS) and the Journal of Experimental Psychology: General (JEPG), during 2013-2020:

  • Reporting of Confidence Intervals (CIs) increased markedly from 2013 to 2015 🙂 but has plateaued at around 60% of articles since then 🙁
  • CIs are still rarely explicitly used to inform interpretation of results 🙁 🙁
  • Use of Effect Sizes (ESs) to inform interpretation has increased steadily to around 50% 🙂
  • Provision of Open Data and Analysis Code, and Open Materials, has increased steadily, from near zero to around 50% 🙂
  • Use of Preregistration has increased from near zero to around 30% 🙂
  • Use of NHST remains almost universal 🙁
  • Overall, only 1.9% of articles reported replications 🙁 and there were no registered reports.
  • Larger changes in PS than JEPG suggest that strong journal-specific policies and the offer of Open Science badges can be effective 🙂 🙂

Conclusion: Open Science has made enormous strides since 2013, but there is still a long way to go. Keep at it!

Our Two Studies

In 2017, David Giofrè and colleagues reported a study of the frequency of use of various statistical and Open Science (OS) techniques in PS and JEPG, from the start of 2013 to the end of 2015. We recently uploaded a preprint that updates the picture for articles published in those two journals from the start of 2016 to the end of 2020.

From 2013 to 2015

In January 2014, Psychological Science famously announced dramatic changes to its instructions to authors, which Eric Eich, then Editor-in-Chief, explained in an editorial. Among other changes, use of The New Statistics was strongly encouraged, reliance on NHST was discouraged, fully detailed reporting was required, and badges could be earned by providing open data or open materials, or by reporting preregistered research. The figure below shows the proportions of articles using various practices each year from 2013, before the changes, to 2015, when there had been time for the changes to influence what was published.

We included JEPG for comparison. Its instructions to authors were not as detailed, and relied largely on a general reference to the APA Publication Manual. There were no marked changes to its requirements during the period.

Proportions of articles in the two journals that used various statistical and Open Science practices, 2013-2015.

The good news includes: In PS, use of CIs (item 2 in the figure) and provision of open data (8) and open materials (9) increased dramatically from 2013 to 2015. Note that OS badges to acknowledge 8 and 9 (also 10) were introduced in 2014. Justification of the chosen sample size (6) and explanation of any data exclusions (7) also increased strongly. There were similar but generally less marked changes in JEPG, consistent with the effectiveness of PS's strong journal-specific guidelines and offering of badges. The JEPG changes perhaps reflected the rapid spread of OS consciousness at the time, even without specific changes to that journal's policies.

The bad news includes: NHST (1) remained close to universal; use of CIs for interpretation and discussion of results (4) remained very low; and preregistration (10) was very rarely used.
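To make the contrast concrete, here is a minimal sketch, with simulated data (not drawn from either study), of the estimation-style reporting the blog advocates: a mean difference with its 95% CI and a standardized effect size, which support interpretation in a way a bare NHST p value does not. The group names and numbers are invented for illustration.

```python
# Sketch only: simulated two-group comparison, reported with a 95% CI
# and Cohen's d instead of just a p value. All values are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.5, scale=1.0, size=40)  # simulated scores
group_b = rng.normal(loc=0.0, scale=1.0, size=40)

diff = group_a.mean() - group_b.mean()

# Pooled standard deviation and standard error of the difference
n_a, n_b = len(group_a), len(group_b)
sp = np.sqrt(((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
             / (n_a + n_b - 2))
se = sp * np.sqrt(1 / n_a + 1 / n_b)

# 95% CI for the mean difference, using the t distribution
t_crit = stats.t.ppf(0.975, df=n_a + n_b - 2)
ci = (diff - t_crit * se, diff + t_crit * se)

cohens_d = diff / sp  # standardized effect size

print(f"Difference = {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], d = {cohens_d:.2f}")
```

The point of the sketch is the output format: an interval and an effect size invite discussion of magnitude and precision, whereas "p < .05" alone does not.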

From 2016 to 2020

We used the same procedure to assess practices during 2016-2020. We added one practice: the provision of data analysis code. This table reports percentages of empirical articles that used the various practices, for each journal in each year:

Overall, CI use (row 2) held up but hardly increased further; NHST (1) remained near-universal (boo!); and use of CIs for interpretation (4) remained rare (double boo). It's great to see steady increases in several desirable practices: use of ESs for interpretation (5); provision of open data (8), open materials (9), and open code (11); and preregistration (10).

Explaining the choice of sample size (6) and data exclusions (7) generally increased. Row 3 refers to internal meta-analysis, meaning meta-analysis of two or more studies reported in the article itself. It was almost unknown in 2013 but more recently has occurred in as many as around 10% of articles, which is a great development.
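As a hedged sketch of what an internal meta-analysis involves, the snippet below combines two hypothetical effect sizes from studies reported in the same article, using simple fixed-effect, inverse-variance weighting. The effect sizes and standard errors are invented for illustration; a real analysis would choose the model (fixed- vs random-effects) to suit the studies.

```python
# Sketch of a fixed-effect "internal" meta-analysis of two studies from
# one article. The (effect size, standard error) pairs are hypothetical.
import math

studies = [(0.42, 0.15), (0.30, 0.12)]

weights = [1 / se**2 for _, se in studies]  # inverse-variance weights
combined = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
se_combined = math.sqrt(1 / sum(weights))

# 95% CI for the combined effect, using the normal approximation
ci = (combined - 1.96 * se_combined, combined + 1.96 * se_combined)
print(f"Combined ES = {combined:.3f}, 95% CI [{ci[0]:.3f}, {ci[1]:.3f}]")
```

Note that the combined standard error is smaller than either study's alone, which is why even a two-study internal meta-analysis can sharpen the article's overall estimate.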

Reporting a replication of a previously published study wasn’t one of the practices we investigated in detail, but we can report that, overall, only 1.9% of articles reported such a replication, and none was a registered report.

As in the earlier period, PS generally did better or much better than JEPG, consistent with journal-specific guidelines and badges being effective ways to bring about OS improvements.

Looking back to the picture in 2013, it's clear that, at least judging from these two leading journals, we've come a very long way towards more open and trustworthy published research. But there's still much progress to be made, especially by encouraging replication, and explicit use of ESs and, in particular, CIs to inform interpretation of results. Yep, the new statistics simply make more sense, as intro students keep telling us.

Geoff

Giofrè, D., Cumming, G., Fresc, L., Boedker, I., & Tressoldi, P. (2017). The influence of journal submission guidelines on authors' reporting of statistics and use of open research practices. PLOS ONE, 12(4), e0175583. https://doi.org/10.1371/journal.pone.0175583

Giofrè, D., Boedker, I., Cumming, G., Rivella, C., & Tressoldi, P. (2022, submitted for publication). The influence of journal submission guidelines on authors' reporting of statistics and use of open research practices: Five years later. https://osf.io/preprints/metaarxiv/8ya3m
