Adventures in Replication: Introduction
Over the past 5 years or so, I (Bob) have been a bit replication crazy–I’ve conducted about 10 direct replication projects in collaboration with undergraduate students at Dominican. I became obsessed in part because I wanted to know for myself if the alarm bells being raised during the ‘replication crisis’ were really valid–could it really be that many of the important scientific findings celebrated in the press are unreliable? Unfortunately, my conclusion from personal experience is yes. The vast majority of the replication projects I’ve been involved in have been unable to reproduce the very strong patterns of results reported in the original manuscripts. My students and I have found that power doesn’t really improve motor skill (Cusack, Vezenkova, Gottschalk, & Calin-Jageman, 2015), superstition doesn’t really improve golf skills (Calin-Jageman & Caldwell, 2014), being exposed to organic food doesn’t really make you a jerk (Moery & Calin-Jageman, 2016), holding your face in the form of a smile doesn’t really make you think cartoons are funnier (Wagenmakers et al., 2016), engaging in analytic thinking doesn’t really decrease your religious belief (Sanchez, Sundermeier, Gray, & Calin-Jageman, 2017), and seeing red doesn’t make a romantic partner massively more attractive (Lehmann & Calin-Jageman, 2017).

In each case it’s not just that we found results that are ‘not significant’–it’s that we found effect sizes close to 0 with CIs that generally exclude all but the very weakest and most scientifically intractable of effects. It’s not just that we failed to find strong effects once–it’s that in each case we did a series of studies with varying conditions/populations but found consistently next-to-no effect. And it’s not a matter of competence, because early on we developed the approach of adding in positive controls, and we’ve had no problem obtaining reliable effects of well-known psychological phenomena with the same participants and conditions. So, the balance of the evidence I’ve published so far suggests that the replication crisis is very, very real. And that’s just what we’ve managed to publish at this point–there are still 3 more papers on the way, at least.
I can feel my replication fever now starting to break. I’ve learned a lot not just about replication but about science in general. I’ve picked up new methods and viewpoints that I’m finding useful in my own research. And I’ve convinced myself of the sad and depressing truth that much of what has recently passed as the very best of empirical research is almost complete nonsense. Blech. I’m not sure if I have much more to learn from additional replications–though I have found them wonderful vehicles for student growth and development, so I imagine I’ll continue to supervise replication projects for the foreseeable future.
As the fever fades, I want to collect my thoughts and reflect on the process of replication. So I’m starting a series of blog posts on what I’ve learned from being a replicator. This is the inaugural post–just a placeholder to say: Stay tuned!