Gaining expertise doesn’t have to close your mind–another adventure in replication

You may have seen it on the news: being an expert makes you close-minded.  This was circa 2015, and the news reports were about this paper (Ottati, Price, Wilson, & Sumaktoyo, 2015) by Victor Ottati’s group, published in JESP.  The paper reported an ‘earned dogmatism effect’–the finding that “situations that engender self-perceptions of high self-expertise elicit a more close-minded cognitive style”.  I think the extensive news coverage was related to a zeitgeist that still pervades–the anxious sense that there is no rationality, that even those whom we hoped would know better do not, and so on.  Except for just one thing…the research that helped fuel our collective epistemic dread was not, itself, entirely trustworthy.

You could see the warning signs right away.  There were two types of experiments in Ottati et al. (2015).  In one type, participants were asked to imagine being experts in a social scenario and then to predict how open-minded they would be.  These studies were conducted within-subjects with lots of participants, yielding very precise effect-size estimates.  But they were based solely on participants guessing how they might behave in an imagined scenario.  The second type of experiment is the one that garnered the press attention–participants were given an easy or a difficult task, with the easy task used to generate a sense of expertise (because you were so good at the task).  Then participants reported how open-minded they actually felt.  In these studies, those given the easy task felt *much* less open-minded–but the studies were very small, the effect-size estimates were very broad, and there were serious procedural issues (such as differential dropout in the difficult condition).  Moreover, there were none of the newer best practices that might help instill some confidence in the findings–across the multiple studies, none of the between-subjects experiments were directly replicated or extended, there was no sample-size planning, no assurance of full reporting, no data sharing…it all felt so “pre-awakening”.  In fact, by my reading, the paper violated several tenets of the JESP guidelines for authors, which had been published earlier that year, prior to the reported submission date of the paper.  That really tore me up–weren’t we making any progress?

This question began a two-year odyssey of replicating Ottati et al. (2015).  I’m pleased that my replication paper is now published, and that it is, in fact, published in JESP, the same journal that published the original (Calin-Jageman, 2018).  What I found is pretty much what I might have predicted when I first read the paper.  The well-powered within-subjects experiments replicated beautifully.  The under-powered between-subjects experiments did not replicate well at all–across multiple attempts with different subject pools I obtained overall effect sizes very close to 0, with narrow confidence intervals.  Participants do predict they will be close-minded in a situation of expertise, but the current best evidence indicates this does not happen in practice (though, who knows–maybe some other way of operationalizing the variables will yield results).
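
To make that “effect sizes very close to 0 with narrow confidence intervals” claim concrete, here is a minimal sketch (in Python, using entirely made-up summary statistics rather than the actual replication data) of how a standardized mean difference and an approximate 95% CI can be computed for a two-group design:

```python
import math

def cohens_d_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Cohen's d for two independent groups, with an approximate 95% CI
    based on the usual normal approximation to the sampling variance of d."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, d - z * se, d + z * se

# Hypothetical summary stats (NOT the published data): open-mindedness ratings
# after an easy vs. difficult task, with replication-sized samples.
d, lo, hi = cohens_d_ci(m1=3.52, s1=1.10, n1=300, m2=3.49, s2=1.08, n2=300)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # a tiny d with a tight interval
```

With a few hundred participants per condition, even a trivially small difference yields an interval narrow enough to rule out anything but small effects, which is exactly why the large replications are informative where the small originals were not.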

Here are some things I learned during this replication adventure:

  • Ottati and his team are not close-minded.  They were incredibly gracious and cooperative.  I think they’ll be writing a commentary.
  • Absence of evidence is not the same as evidence of absence.  When I read the paper and saw none of the best practices I have come to expect in modern research (sample-size planning, pre-registration, etc.), I thought for sure there was some funny business going on.  But in emailing back and forth it became clear that the researchers had fully reported their design, had not used run-and-check, had not buried unsuccessful research, etc.  They could have helped themselves by making all this clear, but it was good to be reminded that just because researchers haven’t stated the “21-word solution” doesn’t mean they are gaming the system.
  • Having tried really hard to diagnose where things went wrong with the original research, I’m down to two culprits: inadequate sample size (duh!) and differential dropout.  I hadn’t even thought about differential dropout while working on the replications, but then I found this paper about how common and problematic it is with MTurk samples (Zhou & Fishbach, 2016).  Sure enough, it opened my eyes–the original Ottati paper always had more MTurk participants in the easy condition than in the difficult condition.  I don’t know for sure, but I think that’s what did the original research in (a minimal sketch of how one might check for this kind of imbalance appears after this list).  In my case I used much larger samples, and I drew not only on MTurk participants but also on other pools (e.g., Psych 101 students) that do not so readily quit when an experiment requires a bit of work.  I need to write another post on why MTurk should be dead to social psychologists.
  • I had to work really hard to convince my section editor at JESP that this paper warranted publication.  It was actually rejected initially, and part of the reasoning was that I hadn’t justified why the replications were done or why JESP should publish them.  I’m glad they reconsidered, but I still consider it axiomatic: if your journal published a paper, you should automatically consider replications of it interesting to your readers, perhaps especially when they revise the purported knowledge already on the record.
  • I had to cut from the paper a discussion of JESP’s publishing guidelines.  Part of my reason for doing this set of replications was to point out that the guidelines are either not being enforced or lack the teeth to prevent major Type M (magnitude) errors.  But the editor suggested I not discuss this.  The reasoning was strange–apparently, even though the updated guidelines had been published before Ottati et al. (2015) was submitted, the editor didn’t seem convinced that anything had actually changed at that point.  Interesting!  It’s anecdotal, but I keep hearing about journals rolling out impressively strict new guidelines…but not really lifting a finger to train section editors or reviewers to be sure they are implemented.  Boo.
  • Ottati and his team pointed out lots of ways the Earned Dogmatism Effect could still be true, and they are also not nearly as down on the imagined scenarios as I am.  Fair enough.  It will be interesting to see if something solid can be developed from this.  If so, I’d be thrilled.  As it stands, I am happy to finally have a paper with some good news–the within-subjects imagined-scenario designs replicate very, very well across multiple types of participants.
  • The interactions with Ottati were excellent, but the review process at JESP was not inspiring–it took a long time, one of the reviewers didn’t know anything about replication research or the new statistics, and a third had only very superficial comments.  I don’t think my responses went back to any reviewers.  Of all the social-psych work I’ve now done, this was the least useful and rigorous review process… oh wait, no… second worst, behind a PLOS ONE paper.
  • I keep running into the misconception that only Bayesians can support the null hypothesis–even about a manuscript that reports effect sizes with confidence intervals and interprets them very explicitly throughout, in ways that make clear there is good support for the null or something reasonably close to it.  That’s a stubborn misconception.  Fortunately, I was able to get some quick help from EJ Wagenmakers (thanks!) and reported a replication Bayes factor (Ly, Etz, Marsman, & Wagenmakers, 2017).  I still don’t believe it adds anything beyond the CIs I had reported, but there’s nothing wrong with another way of summarizing the results (a simplified Bayes-factor sketch appears after this list).
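
On the differential-dropout point above, the check itself is simple.  Here is a minimal sketch (Python, with hypothetical counts that are not from the original studies) of testing whether attrition differs by condition, the kind of imbalance Zhou & Fishbach (2016) warn about:

```python
from scipy.stats import chi2_contingency

# Hypothetical attrition counts (not the original study's numbers):
# rows = condition (easy, difficult), columns = (completed, dropped out).
table = [[95, 5],    # easy task: few participants quit
         [70, 30]]   # difficult task: many more quit before finishing

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
# A marked imbalance like this means the groups that remain are no longer
# comparable: selective attrition has partly undone the random assignment.
```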
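
And on the Bayes-factor point: the published analysis used the replication Bayes factor of Ly et al. (2017), which updates from the original study’s posterior, and reproducing that machinery here would be overkill.  As a much simpler stand-in, this sketch computes a default JZS Bayes factor from a t statistic using the pingouin library (assumed installed; the t value and sample sizes are invented for illustration, not taken from the paper):

```python
import pingouin as pg

# Hypothetical replication outcome (not the published values): a tiny t statistic
# from a large two-group comparison of felt open-mindedness.
t, n1, n2 = 0.31, 300, 300

bf10 = float(pg.bayesfactor_ttest(t, n1, n2))  # default Cauchy prior on effect size
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.1f}")
# By convention, BF01 above about 3 is read as positive evidence favoring the null.
```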

References

Calin-Jageman, R. J. (2018). Direct replications of Ottati et al. (2015): The earned dogmatism effect occurs only with some manipulations of expertise. Journal of Experimental Social Psychology. doi: 10.1016/j.jesp.2017.12.008
Ly, A., Etz, A., Marsman, M., & Wagenmakers, E.-J. (2017). Replication Bayes factors from evidence updating. PsyArXiv. doi: 10.17605/osf.io/u8m2s
Ottati, V., Price, E. D., Wilson, C., & Sumaktoyo, N. (2015). When self-perceptions of expertise increase closed-minded cognition: The earned dogmatism effect. Journal of Experimental Social Psychology, 61, 131–138. doi: 10.1016/j.jesp.2015.08.003
Zhou, H., & Fishbach, A. (2016). The pitfall of experimenting on the web: How unattended selective attrition leads to surprising (yet false) research conclusions. Journal of Personality and Social Psychology, 111(4), 493–504. doi: 10.1037/pspa0000056
