Eating Disorders Research: Open Science and The New Statistics

I’m in Sydney, the great Manly surf beach just over the road. It’s an easy ferry ride to the Opera House and city centre. Lindy and I started this trip up from Melbourne with a few days with a cousin, at her house high above Killcare beach an hour north of Sydney. We enjoyed watching the humpback whales migrating south.

To business. I’m at the 24th Annual Meeting of the global Eating Disorders Research Society. I gave my invited talk and workshop yesterday. It seemed to go well, and all the informal chat I’ve had with folks since has been positive. There was already very clear awareness of the need for change, even if much of the detail I discussed was new to many.

The slides for my talk are here, and for the workshop are here.

I thought that one of the most interesting discussions was about the challenges of conducting replications in eating disorders (ED) research. I’d anticipated that discussion by bringing up the great paper by Scott Lilienfeld and colleagues on replication in clinical psychology. When Scott took over as editor of Clinical Psychological Science, he introduced badges and policies to encourage Open Science practices.

That paper is Tackett et al. (2017). It discussed issues relevant to replication; we agreed yesterday that these largely apply to ED replication research as well. The issues included:

* Case studies, qualitative methods, correlational studies
* Exploratory, question-generating studies
* Large archival data sets
* Small specialised populations, small-N studies
* Difficulty of standardising measures, and treatments
* Messy and noisy data
* Need to focus on effect sizes, and to pool data where possible

Some of the main conclusions were that researchers should, where possible, aim for:
* Reduced QRPs
* Preregistration, of suitable kind; open materials and data, where possible
* Independent replications, using existing data where appropriate
* Improvements to current practices, to improve replicability
* Increased statistical power (larger N, better control, stronger IVs)
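To make the power point concrete: the classic trade-off between effect size and N can be sketched with a quick normal-approximation power calculation. This is a minimal illustration, not anything specific to ED research, and the function name and simplifications (two equal groups, known variance) are my own.

```python
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample comparison for a true
    standardised effect size d (Cohen's d), using the normal
    approximation with n_per_group participants in each group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# A medium effect (d = 0.5) needs roughly 64 per group for 80% power:
print(round(approx_power(0.5, 64), 2))
```

Doubling N, tightening experimental control, or strengthening the manipulation all push the noncentrality term up, which is exactly why the paper’s recommendation bundles those three together.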

Two overall conclusions of mine were:
* Think meta-analytically …and therefore use the new statistics
* Tailor OS solutions to the research field
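“Think meta-analytically” means reporting effect sizes with their uncertainty so results can later be pooled. As a rough sketch of what pooling looks like, here is a fixed-effect (inverse-variance) combination of study effect sizes; the function name and the example numbers are illustrative only.

```python
from statistics import NormalDist

def pool_effects(effects, ses):
    """Fixed-effect meta-analytic pooling: weight each study's
    effect size by the inverse of its squared standard error.
    Returns the pooled estimate and its 95% CI."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5
    z = NormalDist().inv_cdf(0.975)
    return pooled, pooled - z * se_pooled, pooled + z * se_pooled

# Three hypothetical small studies, each too noisy on its own:
est, lo, hi = pool_effects([0.40, 0.60, 0.50], [0.20, 0.25, 0.15])
```

Each small-N study contributes little alone, but pooling narrows the interval; that is the payoff of the new statistics for fields, like ED research, where large single studies are often infeasible.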

This year I have given presentations to Antarctic and marine research scientists, orthodontists, and now ED researchers. My main take-home message is that the issues and problems are largely similar across disciplines and that to some extent the solutions are similar, but that to an important extent the solutions need to be figured out in each different research context.

Happy replicating, happy surfing,
Geoff

Tackett, J. L., Lilienfeld, S. O., Patrick, C. J., Johnson, S. L., Krueger, R. F., Miller, J. D., … & Shrout, P. E. (2017). It’s time to broaden the replicability conversation: Thoughts for and from clinical psychological science. Perspectives on Psychological Science, 12, 742-756. tiny.cc/ClinPsyRep
