I’ve been a bit obsessed with the forest plot for, I’d guess, close to 20 years. Partly because I love pictures, partly because the forest plot can tell us so much. I regard it as the beautiful face of meta-analysis. These days it’s probably the beautiful face of Open Science. Consider this forest plot, which is part of Figure 9.7 from ITNS.
The data are from Calin-Jageman and Caldwell (2014) and we discuss the example on pp. 239–243 of ITNS. The first six studies are from one lab and are estimates of the extent to which various superstitious beliefs can enhance performance. Even a quick glance raises concerns: there seems to be insufficient variability from study to study and, what’s more, they all achieve statistical significance, mostly by a small amount.
The picture of those six studies alerts us to possible p-hacking and/or selection of results. The six results simply look too good, from the traditional NHST perspective in which p values matter.
The last two studies, by Bob’s group, are preregistered replications, carefully designed to be as similar as possible to the top study, but with larger samples. There is a strong suggestion that we have a failure to replicate, and that the effect is actually of negligible size, although we can’t, of course, rule out the possibility of some unknown moderator that accounts for the apparent difference between the first six and the last two studies.
So, with a little extra information (e.g., about whether studies were preregistered), the forest plot can be highly informative, summarising what a research literature has to say on a research question of interest.
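The essential computation behind a forest plot’s summary diamond can be sketched in a few lines. Below is a minimal illustration of a fixed-effect (inverse-variance) meta-analysis with a text-mode forest plot; the study labels, effect estimates, and standard errors are made-up numbers for illustration, not the Damisch or Calin-Jageman data.

```python
import math

# Hypothetical study results: (label, effect estimate, standard error).
# Illustrative numbers only -- not data from any real study.
studies = [
    ("Study 1", 0.80, 0.35),
    ("Study 2", 0.75, 0.33),
    ("Study 3", 0.10, 0.15),
    ("Study 4", 0.05, 0.14),
]

# Fixed-effect (inverse-variance) meta-analysis: each study is weighted
# by 1/SE^2, so larger, more precise studies count for more.
weights = [1 / se**2 for _, _, se in studies]
combined = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
se_combined = math.sqrt(1 / sum(weights))
ci = (combined - 1.96 * se_combined, combined + 1.96 * se_combined)

# A crude text-mode forest plot: one row per study, point estimate 'o'
# with its 95% CI drawn as dashes, plus the summary row at the bottom.
lo_axis, hi_axis, width = -0.5, 1.5, 40

def row(label, est, se):
    lo, hi = est - 1.96 * se, est + 1.96 * se
    pos = lambda x: int((x - lo_axis) / (hi_axis - lo_axis) * (width - 1))
    line = [" "] * width
    for i in range(max(pos(lo), 0), min(pos(hi), width - 1) + 1):
        line[i] = "-"
    line[pos(est)] = "o"
    print(f"{label:10s} [{lo:6.2f}, {hi:6.2f}] |{''.join(line)}|")

for label, est, se in studies:
    row(label, est, se)
row("Summary", combined, se_combined)
```

Note how the summary estimate sits closer to the two precise studies than a simple average of the four estimates would: that weighting by precision is the heart of meta-analysis, and the shorter summary CI is why combining studies gains power.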
Back around 2001 I built the first version of ESCI meta-analysis. I decided that meta-analysis was sufficiently important, and the forest plot sufficiently simple, to include in my intro statistics & design course. My beginning first-year undergraduates (i.e., freshmen) thus encountered the forest plot, and meta-analysis, about two months in.
It turned out to be a great teaching moment for me. Not very often (in my experience anyway) do students say ‘ok, that all makes sense, no big deal’. Being newbies, they didn’t worry that no textbook included meta-analysis, that back then many researchers regarded it as the province of technical experts, or that journals were only beginning to publish meta-analytic reviews.
After developing ESCI and my teaching approach for a couple of years, I presented a paper at ICOTS7, the International Conference on Teaching Statistics, in Brazil in 2006. My paper, titled ‘Meta-analysis: Pictures that explain how experimental findings can be integrated’, is here.
Reading that short and simple paper now is nostalgic for me. It foreshadows many of the main issues we discuss in Chapter 9 in ITNS, although of course not Open Science and preregistration.
I’m hoping that teaching meta-analysis, probably by way of simple forest plots, is now becoming widespread. The simplest forest plot in ITNS is Figure 1.4, which appears as early in the book as p. 11.
In our symposium at APS next May my contribution is titled Open Science Is Best Practice Science, with an emphasis on teaching. One thing I plan to talk about is the value of the forest plot for presenting an introduction to meta-analysis and Open Science.
I’m therefore keen to hear from anyone with thoughts about, or experience of, using forest plots in the intro statistics and/or design course. If you would care to, please leave a comment below on this post. Many thanks!
Calin-Jageman, R. J., & Caldwell, T. L. (2014). Replication of the Superstition and Performance Study by Damisch, Stoberock, and Mussweiler (2010). Social Psychology, 45(3), 239–245. https://doi.org/10.1027/1864-9335/a000190