Registered Reports: Conjuring Up a Dangerous Experiment

Last week I (Bob) had my first Registered Report proposal accepted at eNeuro. It’s another collaboration with my wife, Irina, in which we will test two popular models of forgetting. The proposal, pre-registration, analysis script, and preliminary data are all on the OSF. Contrary to popular practice, we developed our proposal for original research, not for a replication. We opted for a registered report because we wanted to set up a fair and transparent test between two models–it seemed to us both that this required setting the terms of the test publicly in advance and gaining initial peer review confirming that our approach is sensible and valid.

Although having the proposal accepted feels like a triumph, I am sooooo anxious about this. I’m anxious because our proposal represents what Irina calls a “Dangerous Experiment”. She came up with this phrase in grad school when she was running an experiment which had the potential to expose much of her previous work as wrong. It was stomach-churning to collect the data. In fact, someone on her faculty even suggested ways she could present her work that would let her avoid doing the experiment. Irina decided that avoidance was not the right strategy for a scientist (yes, she’s amazing), and that she had to white-knuckle through it. In that first experience with a dangerous experiment she was vindicated.

Since then, we often discuss Dangerous Experiments and we push each other to find them and confront them head on. Sometimes they’ve ended in tears (which is why we no longer study habituation [1] or use certain “well-established” protocols for Aplysia [2]). Other times we’ve been vindicated, to our great relief and satisfaction [3]. Lately the philosopher of science Deborah Mayo has popularized the idea of a severe test as important in moving science forward. I haven’t finished her book, but I suspect Irina and Mayo would get along.

Our experience has convinced me that Registered Reports will typically yield Dangerous Experiments–that this is their strength and also what makes them so terrifying. For registered reports, though, the danger is not in shattering the research hypothesis–the danger comes from the stress put upon the strength and mastery of your method. A registered report requires you to plan your study very carefully in advance–defining your sample, your exclusions, your research questions, your analyses, and your benchmarks for interpretation. You have to be pretty damn sure you know what you’re doing, because if you fail to anticipate an eventuality then the whole enterprise could collapse. So it’s like building a tightrope and then seeing if you can really walk it. Dangerous, indeed. But the payoff is walking across a chasm towards some epistemically firm ground–that mystical place where legend has it you can move the world.

Putting together our registered report required doing a lot of “pre data” work to assure ourselves that we had a design and protocol worth executing with fidelity. We simulated outcomes under different models to ensure the analyses we were planning would be sensitive enough to discriminate between them. We developed positive controls that could give us independent assessments of protocol validity. We also expanded our design to include an internal replication to provide an empirical benchmark for data reliability. By mentally stepping through the project and conferring with the reviewers, we built a tightrope we *think* we can actually cross safely. Time will tell.
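For readers curious what that simulation step looks like in practice, here is a minimal sketch in Python. To be clear: the models, effect sizes, sample sizes, and the crude standardized-difference criterion below are all invented for illustration–none of them come from our actual proposal or analysis script. The point is only the workflow: simulate many studies under each model’s assumed outcome, then check how often your planned analysis would tell them apart.

```python
import random
import statistics

random.seed(1)  # reproducible toy example

def simulate_study(effect, n=10, sd=1.0):
    """One simulated two-group study ('savings' vs. 'encoding' animals).
    `effect` is the assumed true difference in mean outcome (toy units)."""
    savings = [random.gauss(effect, sd) for _ in range(n)]
    encoding = [random.gauss(0.0, sd) for _ in range(n)]
    return savings, encoding

def flags_difference(a, b, crit=2.0):
    """Stand-in for a planned analysis: does the standardized mean
    difference exceed an arbitrary cutoff?"""
    diff = statistics.fmean(a) - statistics.fmean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return abs(diff / se) > crit

def sensitivity(effect, n_sims=2000):
    """Fraction of simulated studies in which the analysis flags a difference."""
    return sum(flags_difference(*simulate_study(effect)) for _ in range(n_sims)) / n_sims

# If the two models truly predict different outcomes, how often would we see it?
power_if_models_differ = sensitivity(effect=1.5)

# And how often would we be fooled when the models predict the same outcome?
false_alarm_rate = sensitivity(effect=0.0)
```

If `power_if_models_differ` comes back low, or `false_alarm_rate` comes back high, the tightrope is not ready to walk: you adjust the design (sample size, measures, analysis) before registering, not after the data arrive.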

The whole process reminds me of something I used to do as a kid when playing Hearts: I used to lay down my first 5 plays on the table (face down) and then turn them up one-by-one as the tricks played out. It drove my siblings crazy. Usually I guessed wrong about how play would go and would have to delay the game to pick up my cards and re-think. Every once in a while, though, I would get to smugly turn the cards over in series like the Thomas Crown of playing cards. Registered reports ask for something like this: Are your protocols and standards well-developed enough that you can sequence them and execute them according to plan and still end up exactly where you want to be?

Does the dangerous nature of a registered report support the frequent criticism that pre-registration is a glass prison? Perhaps. If this whole endeavor crashes and burns I’ll probably move closer to that point of view. But I can’t help but feel that this is how strong science must be done–that if you can’t point at the target and then hit it, you don’t really know what you’re doing. That’s ok, of course–we’re lost in lots of fields and need exploration, theory building, and the like. Not every study needs to be a registered report. But it does seem to me that Registered Reports are the ideal to aspire to–that we can’t really say an effect is “established” or “textbook” or “predicted by theory” until we can repeatedly call our shots and make them. Or so it seems to me at the moment…. check back in 2 months to see what happened.

Oh yeah… if you’re here on this stats blog but curious about the science of forgetting, here’s the study Irina and I are conducting. We have come up with what we think is a very clever test between two long-standing theories of forgetting. Neurobiologists have tended to think of forgetting as a decay process, where entropy (mostly) dissolves the physical traces of the memory. Psychologists, however, argue that forgetting is a retrieval problem, not a storage problem. They contend that memory traces endure (perhaps forever), but become inaccessible due to the addition of new memories.

Irina and I are going to test these theories by tracking what happens when a seemingly forgotten memory is re-kindled, a phenomenon called savings memory. If forgetting is a decay process, then savings should involve re-building the memory trace, and it should thus be mechanistically very similar to encoding. If forgetting is retrieval failure, then savings should just re-activate a dormant memory, and this should be a distinct process relative to encoding. Irina and I will track the transcriptional changes activated as Aplysia experience savings and compare this to Aplysia that have just encoded a new memory. We *should* be able to get some pretty clear insight into the neurobiology of both savings and forgetting.

I genuinely have no idea which model will be better supported by the data we collect… depending on the day I can convince myself either way. As I mentioned above, my anxiety is not over which model is right but over whether our protocol will actually yield a quality test…. fingers crossed.

1. Holmes G, Herdegen S, Schuon J, et al. Transcriptional analysis of a whole-body form of long-term habituation in Aplysia californica. Learn Mem. 2014;22(1):11-23.
2. Bonnick K, Bayas K, Belchenko D, et al. Transcriptional changes following long-term sensitization training and in vivo serotonin exposure in Aplysia californica. PLoS One. 2012;7(10):e47378.
3. Cyriac A, Holmes G, Lass J, Belchenko D, Calin-Jageman R, Calin-Jageman I. An Aplysia Egr homolog is rapidly and persistently regulated by long-term sensitization training. Neurobiol Learn Mem. 2013;102:43-51.

I'm a teacher, researcher, and gadfly of neuroscience. My research interests are in the neural basis of learning and memory, the history of neuroscience, computational neuroscience, bibliometrics, and the philosophy of science. I teach courses in neuroscience, statistics, research methods, learning and memory, and happiness. In my spare time I'm usually tinkering with computers, writing programs, or playing ice hockey.
