Adventures in Replication: Scientific journals are not scientific

The essence of science is seeking and weighing evidence on both sides of a proposition.  One might think, then, that when a scientific journal publishes a research paper it then acquires a special interest in publishing subsequent replications or commentary on that topic.   We might call this the “principle of eating your own dog food”.  Or maybe the “you published it, you own it” policy.  Or perhaps just the “scientific journals should publish science” doctrine.

If only, if only.  Looking at the scars I’ve incurred dragging 6 replication papers across the publication line, my conclusion is that most journals reject these notions in practice if not in policy.

The main tactic for avoiding the publication of embarrassing replication results is the old saw of interest and importance.  Apparently, interest and importance have a very strange life cycle for an editor.  They are at their peak when a new paper on a topic is submitted with p < 0.05.  This high level of interest and importance leads to publication, press releases, and breathy popular-press coverage.  A few months later, though, when a replication result comes in indicating that perhaps the original result was optimistic, the life cycle of interest and importance reaches a sudden and dramatic end.  Editors and reviewers now find the topic sooooo boring and trivial; they cannot fathom why their readers would want to continue reading about it.  After all, who could possibly find scientific value in learning that a previous result was unreliable?  Who would their readers be, a bunch of scientists?

The extreme double-standard of what counts as interesting from original to replication manuscript means that at least some journals are acting simply as printed monuments to confirmation bias.  They are no more scientific or self-correcting than Vogue or GQ (that may be unfair to Vogue and GQ–for all I know they may be more welcoming of contrary viewpoints and data).

I’m not enjoying being so cynical, but looking through the reviews I’ve collected from my replication work, the pattern is pretty clear.  For each replication I’ve conducted, I first submitted the manuscript to the journal that originally published the research (except for 2 studies completed for specific journals).

So far, only 1 replication paper I’ve submitted has made it past the ‘interest’ bar at the original journal; that was at Social Psychological and Personality Science (kudos to SPPS, though see below).

Below are some examples of editors’ comments I’ve received.  In each case, the comments come from the journal that originally published the research.  In each case, the rejection is of a replication manuscript that reports multiple, high-powered, pre-registered replications, almost always with the exact same materials as the original.  In most cases, the replications also included positive controls to demonstrate researcher competence, and varied conditions and/or participant pools to ensure the finding of little-to-no effect was robust across multiple conditions.  In other words, these replications are as close to air-tight as humanly possible.  Of course, there is no such thing as a perfect replication–but if your epistemic standards are so high as to find these efforts unacceptable, well then you are likely too skeptical to be sure you’re even reading this blog (hail to the Evil Genius!).

Here are some highlights in my adventures in having replications be rejected from the journals that published the original paper:

  • Personality and Social Psychology Bulletin.  I conducted replications of this study (Price, Ottati, Wilson, & Kim, 2015), which originally showed that manipulations of task difficulty produce large changes in open-mindedness.  The original study was covered extensively in the popular press.  The replications showed little-to-no effect of task difficulty manipulations (though other aspects of the original research did replicate quite well).  The editor listed importance as the first criterion for rejection:
    • “the case for why this is an important replication effort from a scientific perspective is much less clear”
    • What am I missing?  If the original study was scientifically important enough for PSPB, isn’t the fact that key experiments are unreliable equally scientifically important?
    • I just received this rejection (submitted in April, rejected in August on the basis of a review by the original author, who recommended publication, and by 1 additional reviewer, who recommended rejection).  I have appealed for a third review, and that appeal is now pending.
  • Science.  Working with a student and collaborators at two other institutions, I conducted replications of this study (Gervais & Norenzayan, 2012), which originally showed that manipulations of analytic thinking decrease religious belief.  Replications of one study across multiple sites showed little-to-no effect, and in the meantime additional studies showed that the manipulations used in the original research have no validity.  We submitted a 300-word note on the replication results to Science.  The submission was not reviewed.  The editor, Gilbert Chin, wrote back this form letter:
    • “Because your manuscript was not given a high priority rating during the initial screening process, we have decided not to proceed to in-depth review. The overall view is that the scope and focus of your paper make it more appropriate for a more specialized journal.”
    • I wrote back: “I guess science as a whole is self-correcting, but Science the journal is not.”
    • No response
    • The paper was eventually published in PLOS ONE (Sanchez, Sundermeier, Gray, & Calin-Jageman, 2017) after also being rejected from Psychological Science and Social Psychology without review.
  • Social Psychological and Personality Science.  Working with a team of students, I conducted replications of this paper (Burgmer & Englich, 2012), which reported studies showing that feelings of power produce large increases in motor skill.  We submitted a set of replications to SPPS, each showing little-to-no effect despite strong manipulation-check data.  The handling editor, Gerben van Kleef, cited importance/theoretical contribution as the primary reason the paper could not be accepted:
    • “What was actually most critical in my decision (and perhaps I did not make this sufficiently clear in my letter) is the requirement that papers published in SPPS should make a compelling theoretical and empirical contribution to the literature. Reporting evidence suggesting that a particular effect may be difficult to replicate or may be weaker than earlier studies suggested, even if demonstrated beyond doubt, is only half of the story. Something new must be added subsequently. (Note that I am referring just to SPPS policies now. Other journals may hold different standards.)”
    • This response is somewhat about interest/importance, but it also shades into another criterion that insulates journals from publishing replications of previous papers–the demand to go beyond and demonstrate some new and interesting theoretical development.  That’s a great criterion for a new paper.  But if you are replicating a study and find that the data presented are unreliable, there’s nothing there to go beyond.  Somehow, showing a theory to be erroneous is not a theoretical contribution.
    • This was only the second replication project I had worked on.  Some of the reviewer comments were really lazy (one complained that I hadn’t included CIs or effect sizes in the paper, which was the main thing we had reported!).  But there were also some good suggestions, and I ended up extending the series of replications further, though still finding little to no effect.
    • The paper was then rejected from JPSP with a set of reviews that were absolutely bananas.  More on that later.
    • It finally found a home in PLOS One (Cusack, Vezenkova, Gottschalk, & Calin-Jageman, 2015), the last refuge of the replicator.
    • I didn’t give up on SPPS and sent in another replication about a year later (Moery & Calin-Jageman, 2016).  This second replication also found little-to-no effect, but the editors were quite clear that they felt an obligation to consider publication of a paper challenging a previous SPPS manuscript.  Kudos!

References

Burgmer, P., & Englich, B. (2012). Bullseye! Social Psychological and Personality Science, 4(2), 224–232. https://doi.org/10.1177/1948550612452014
Cusack, M., Vezenkova, N., Gottschalk, C., & Calin-Jageman, R. J. (2015). Direct and Conceptual Replications of Burgmer & Englich (2012): Power May Have Little to No Effect on Motor Performance. PLOS ONE, 10(11), e0140806. https://doi.org/10.1371/journal.pone.0140806
Gervais, W. M., & Norenzayan, A. (2012). Analytic Thinking Promotes Religious Disbelief. Science, 336(6080), 493–496. https://doi.org/10.1126/science.1215647
Moery, E., & Calin-Jageman, R. J. (2016). Direct and Conceptual Replications of Eskine (2013). Social Psychological and Personality Science, 7(4), 312–319. https://doi.org/10.1177/1948550616639649
Price, E., Ottati, V., Wilson, C., & Kim, S. (2015). Open-Minded Cognition. Personality and Social Psychology Bulletin, 41(11), 1488–1504. https://doi.org/10.1177/0146167215600528
Sanchez, C., Sundermeier, B., Gray, K., & Calin-Jageman, R. J. (2017). Direct replication of Gervais & Norenzayan (2012): No evidence that analytic thinking decreases religious belief. PLOS ONE, 12(2), e0172636. https://doi.org/10.1371/journal.pone.0172636