Open Science DownUnder: Simine Comes to Town
Simine on the credibility revolution
First up was Simine, speaking to the title THE CREDIBILITY REVOLUTION IN PSYCHOLOGICAL SCIENCE. Her slides are here. She reminded us of the basics, then explained the problems very well. Enjoy her pithy quotes and spot-on graphics.
My main issue with her talk, as I said at the time, was the p value and NHST framework that she used. I’d love to see the parallel presentation of the problems and OS solutions, all set out in terms of estimation. Of course it’s easy to cherry-pick and do other naughty things when using CIs, but, as we discuss in ITNS, there should be less pressure to p-hack, and the lengths of the CIs give additional insight into what’s going on. Switching to estimation doesn’t solve all problems, but should be a massive step forward.
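To make the estimation point concrete, here is a minimal sketch (mine, not from Simine's talk; the data and the normal-approximation shortcut are my own assumptions) of reporting a mean difference with a 95% CI rather than a bare p value:

```python
# Hypothetical data; a sketch of estimation-style reporting, not any
# particular study. Uses a normal approximation (z = 1.96); with small
# samples a t critical value would be more appropriate.
from statistics import mean, stdev
from math import sqrt

def ci_mean_diff(group_a, group_b, z=1.96):
    """Approximate 95% CI for the difference in group means."""
    diff = mean(group_a) - mean(group_b)
    se = sqrt(stdev(group_a) ** 2 / len(group_a)
              + stdev(group_b) ** 2 / len(group_b))
    return diff - z * se, diff + z * se

a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]  # made-up scores, condition A
b = [4.2, 4.6, 4.1, 4.8, 4.4, 4.3]  # made-up scores, condition B
lo, hi = ci_mean_diff(a, b)
print(f"difference = {mean(a) - mean(b):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The point is that the interval's length is informative in itself: a long CI flags an imprecise estimate, something a lone p value can never show.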
A vast breadth of disciplines
Kristian Camilleri described the last few decades of progress in history and philosophy of science. Happily, there’s now much HPS interest in the practices of human scientists. So there’s lots of overlap with the concerns of all of us interested in developing OS practices.
Then came speakers from psychology (naturally), but also evolutionary biology, law, statistics, ecology, oncology, and more. I mentioned the diversity of audiences I've been invited to address this year on statistics and OS issues, from Antarctic research scientists to cardiothoracic surgeons.
Mainly we noted the commonality of problems of research credibility across disciplines. To some extent core OS offers solutions; to some extent situation-specific variations are needed. A good understanding of the problems (selective publication, lack of replication, misleading statistics, lack of transparency…) is vital, in any discipline.
Fiona’s own research group at The University of Melbourne is IMeRG (Interdisciplinary MetaResearch Group). It is, as its title asserts, strongly interdisciplinary in focus. Researchers and students in the group outlined their current research progress. See the IMeRG site for topics and contact info.
Predicting the outcome of replications
Bob may be the world champion at selecting articles that won't replicate: I'm not sure of the latest count, but I believe only 1 or 2 of the dozen or so articles that he and his students have very carefully replicated have withstood the challenge, finding effects of anything like the original effect sizes. Most replications have found effect sizes close to zero.
Several projects have attempted to predict the outcome of replications, then assessed the accuracy of the predictions. Fiona is becoming increasingly interested in such research, and ran a Replication Prediction Workshop as part of the jamboree. I couldn’t stay for that, but she introduced it as practice for larger prediction projects she has planned.
You may know that cyberspace has been abuzz this last week or so with the findings of Many Labs 2, a giant replication project in psychology. Predictions of replication outcomes were collected in advance; many were quite accurate. A summary of the prediction results is here, along with links to earlier studies of replication prediction.
It would be great to know what characteristics of a study are the best predictors of successful replication. Short CIs and large effects no doubt help. What else? Let’s hope research on prediction helps guide development of OS practices that can increase the trustworthiness of research.
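One way to operationalize "short CIs and large effects" is a simple precision ratio: effect size divided by CI half-width. This is a hypothetical heuristic of my own for illustration, not a method from Many Labs 2 or any of the prediction studies mentioned above:

```python
# Hypothetical heuristic: rank studies for likely replication by the ratio
# of effect size to CI half-width. Large effects estimated with short CIs
# score highest; a CI spanning zero gives a ratio below 1.
def replication_score(effect, ci_low, ci_high):
    half_width = (ci_high - ci_low) / 2
    return abs(effect) / half_width

# Made-up example studies: (effect size, CI lower, CI upper)
studies = {
    "Study A": (0.80, 0.55, 1.05),   # large effect, short CI
    "Study B": (0.30, -0.10, 0.70),  # small effect, CI spans zero
}
ranked = sorted(studies, key=lambda s: -replication_score(*studies[s]))
for name in ranked:
    print(name, round(replication_score(*studies[name]), 2))
```

Whether such a simple precision measure actually predicts replication outcomes is, of course, exactly the kind of empirical question the prediction projects could answer.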
P.S. The Australasian Meta-Research and Open Science Meeting 2019 will be held at The University of Melbourne, Nov 7-8, 2019.