What Psychology Teachers Should Know About Open Science and the New Statistics
Bob recently tweeted about this great paper of his, with Beth Morling:
Morling, B., & Calin-Jageman, R. J. (2020). What Psychology Teachers Should Know About Open Science and the New Statistics. Teaching of Psychology, 47(2), 169–179. doi: 10.1177/0098628320901372
First, here’s the overview diagram, a great teaching explainer in itself:
I agree with just about all they say, and note in particular:
- The title refers to ‘psychology teachers’, not only statistics and methods teachers. In the journal it’s placed in The Generalist’s Corner. This is important: Every teacher of psychology (yep, and lots of other disciplines too) needs to take up Open Science issues when presenting and discussing any research findings. Beth and Bob give lots of advice on good ways to do this. Authors of *any* psychology textbook take note.
- “Psychological science is experiencing a period of rapid methodological change” (p. 169). That’s a restrained way to put it–arguably the advent of Open Science is the most exciting and important advance in how science is done for a long time. Bring it on.
- Three questions provide a framework and mnemonic for the new statistics–the three simple questions to the right in the diagram above. They are on point, though I’d consider “How precise?” as an alternative to the second, even if it’s not as straightforward and pithy as “How wrong?”. The three also appear in the title of Bob’s and my TAS article.
- There’s so much more gold: links to great teaching resources for Open Science, simulations, suggestions for classroom dialogue, and more. (Discuss preregistration by playing it out in the classroom: students make predictions, record these, then analyse data and discuss results in the light of their prior expectations.)
- The authors’ passion for teaching, and for the essential changes they are discussing, shines through. They could make more of what I’m sure is their conviction–that the new ways are way more satisfying to teach, and way more readily understood by students. Happier students *and* teachers: What’s not to like!
Points I’m pondering:
‘Registration’ or ‘preregistration’?
I posted about this question a couple of months ago. ‘Registration’ is long established in medicine. Why does psychology persist with ‘prereg…’, a longer term, with its internal redundancy? It’s not a big deal, and maybe we’re stuck with the messiness and possible ambiguity of using both terms. Beth and Bob stick with current psychology practice by using ‘prereg…’ throughout, but explain ‘registered reports’–which are simply reports based on preregistered (and refereed) plans.
Do we have the full story?
I do like the three questions (dot point 3 above), but I also like very much our beginning Open Science question, introduced on p. 9 in ITNS. ‘Do we have the full story?’ can easily be explained as prompting scrutiny of numerous aspects of research: from preregistration, through informative statistics (ESs and CIs, of course) and full reporting of the method, data, and analyses, to consideration of other relevant studies that may not have been made available.
Confirmatory or Planned?
My main disagreement with the authors is over their use of ‘confirmatory’ vs ‘exploratory’ to label the distinction between analyses that have been planned, and preferably preregistered, and those that have not. It’s a vital distinction, of course, but ‘confirmatory’, while a traditional and widely used term, does not capture well the intended meaning. Confirmatory vs exploratory probably originates with the two approaches to using factor analysis. It could make sense to follow an exploratory FA that identified a promising factor structure with a test of that now-prespecified structure on a new set of data. That second test might reasonably be labelled ‘confirmatory’ of that structure, although the data could of course cast doubt on, rather than confirm, the FA model under test.
By contrast, a typical preregistered investigation, in which the research questions and the corresponding data analysis are fully planned, asks questions about the sizes of effects. It estimates effect sizes rather than seeks to ‘confirm’ anything. Even an evaluation of a quantitative model against data typically focuses on estimating parameters and perhaps estimating goodness of fit, rather than confirming, in some yes/no way, the model. Therefore I regard ‘planned’, rather than ‘confirmatory’, as a more accurate and appropriate term to use in opposition to ‘exploratory’. I’d vote for planned/exploratory as the terms to describe the vital distinction in question.
It’s a great article, well worth reading and discussing with colleagues.