Sackler Colloquium on Reproducibility – Field Report 1

This week I (Bob) am attending the Sackler Colloquium on Reproducibility in Research.  It’s an event put on by the National Academy of Sciences.

For the blog this week I’ll be posting some of my thoughts on the discussion.  Here’s my first field report.

First session — A welcome from NAS president Marcia McNutt, a typology of reproducibility from Victoria Stodden, and a report on clinical trials reporting from Kay Dickersin.  Lots of groundwork to cover… nothing particularly shocking or interesting presented this morning.

Science works, but it doesn’t?  At the end of her comments, Marcia McNutt offered a caution: science clearly works, so while we can acknowledge the replication crisis, we shouldn’t end up with the view that science is completely broken.  This is true and sensible.  It is a constant feature of science to self-reflect, self-criticize, and self-improve.  The current replication crisis is just part of a long tradition of improvement.  It’s a sign of the health and vitality of science, not a sign of the end times.  Cool.

At the same time, though, the replication crisis does offer an epistemic puzzle.  How can science produce such excellent outcomes (new drugs, new treatments, new technologies, etc.) if there are widespread problems with reliability of published results?  Here are two related possibilities:

  • Science is right in the long run, wrong in the short run.  The successful outcomes of science reflect ideas and tools that have survived a Darwinian contest for survival, tested and vetted in many contexts over time before finally being implemented in an applied setting.  The current mix of studies, however, has not yet survived this gauntlet.  In fact, as Umberto Eco points out, the cutting edge of science always operates at the edge of our experience and reason.  Science must always be churning through ideas and thoughts that will one day seem outlandish and wrong, as this is the only way to find the few ideas that will one day prove true.  To use an agricultural metaphor: the delicious fruits of science are harvested from large piles of fertilizer.
  • Science is growing cluttered?  Another possible explanation is that the growth of science is clogging up journals with dreck (I once heard a prominent researcher declare that 90% of what you find on PubMed is spam).  Science grows rapidly: since Newton, the number of published papers has doubled roughly every 20 years!  Science doesn’t grow evenly, though.  The number of very high-quality papers and very influential people grows much more slowly than the total output; this is known as Lotka’s law.  This means that over time the ratio of awesome science to dreck decreases.  To return to the farming analogy above, over time the fertile soil of scientific inquiry becomes less productive.  It takes more manure to produce the delicious fruits we enjoy.  It’s worth asking how long this can continue.  Many cultures have collapsed due to soil exhaustion.  What happens if the ineluctable growth of science eventually leads to discovery exhaustion, where there is so much junk it becomes impossible to identify and cultivate the fruit?  I’m not saying we’re there yet, but as de Solla Price pointed out long ago, growth is usually highest right up to the point of collapse.  Something to think about!
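The arithmetic behind this worry can be sketched quickly.  The snippet below is my own illustration, not anything presented at the colloquium — the starting count, the 20-year doubling, and the square-root scaling for "elite" output are assumptions, loosely in the spirit of Lotka's and Price's bibliometric observations.  The point is just that if total output grows exponentially while top-tier output grows as the square root of the total, the manure-to-fruit ratio keeps climbing:

```python
# Toy model (my assumptions, for illustration only): total output doubles
# every 20 years; "elite" output scales as the square root of the total,
# a rough Price-style reading of skewed productivity distributions.

def total_papers(years, start=1000, doubling_time=20):
    """Cumulative papers after `years`, doubling every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

def elite_papers(total):
    """Rough elite-output estimate: grows as the square root of the total."""
    return total ** 0.5

for years in (0, 40, 80):
    total = total_papers(years)
    ratio = total / elite_papers(total)
    print(f"year {years}: total papers ~{total:.0f}, ordinary-to-elite ratio ~{ratio:.0f}")
```

Under these (made-up) numbers, the ordinary-to-elite ratio itself doubles every 40 years — the pile of fertilizer per fruit keeps growing.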

The journal Science has been a leader in fostering reproducibility?!  McNutt said that when she interviewed for the editorship she highlighted reproducibility as a key issue she would focus on and on which Science would be a leader.   She implied that this had, indeed, occurred, citing for example Science‘s publication of the Reproducibility Project manuscript (OSC, 2015).  Personally, I don’t find this a credible claim, though I’d be happy to be wrong.  Science seems to me to be a big part of the problem.  Meta-analysis has shown that psychology papers published in Science report statistically implausible rates of success (Francis et al., 2014).  I’ve had personal experience of Science being completely uninterested in correcting the record on unreplicable research it had published.   Maybe I’m not sifting all the evidence?  But the journal Science does not spring to mind as a leader in improving our research.



Francis, G., Tanzman, J., & Matthews, W. J. (2014). Excess success for psychology articles in the journal Science. PLoS ONE, 9(12), e114255.

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.


I'm a teacher, researcher, and gadfly of neuroscience. My research interests are in the neural basis of learning and memory, the history of neuroscience, computational neuroscience, bibliometrics, and the philosophy of science. I teach courses in neuroscience, statistics, research methods, learning and memory, and happiness. In my spare time I'm usually tinkering with computers, writing programs, or playing ice hockey.
