On March 21, 2017, Uri Simonsohn published an interesting blog post on funnel plots (http://datacolada.org/58), arguing on the basis of simulations that they are not as useful for detecting publication bias as is commonly thought. It’s an interesting post, and worth reading. As far as I could tell, though, the problem only arises in meta-analyses that pool studies in which researchers expected very different effect sizes and sized their samples accordingly… which to me seems to cut against the point of doing a meta-analysis in the first place. So I’m not sure how common the circumstances that make funnel plots misleading really are… maybe common? I’m working on a meta-analysis right now, but it compiles a large set of very similar studies, and I don’t see any possibility that researchers customized their sample sizes based on effect-size expectations in the way that produces trouble in Simonsohn’s simulations. But that’s just my project.
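To make the mechanism concrete, here is a minimal sketch (my own toy version, not Simonsohn’s actual code) of how funnel-plot asymmetry can appear with no publication bias at all: each hypothetical study targets a different true effect, and researchers choose sample size from a rough power rule, so studies expecting small effects run large samples. All numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 studies, each with a different true effect
# (Cohen's d). Researchers size each study via the rough 80%-power rule
# n per group ~ 16 / d^2, so small expected effects get big samples.
true_effects = rng.uniform(0.1, 0.8, size=200)
n_per_group = np.ceil(16 / true_effects**2).astype(int)

# Observed effect per study: d_hat ~ Normal(d, se), with se ~ sqrt(2/n).
se = np.sqrt(2.0 / n_per_group)
d_hat = rng.normal(true_effects, se)

# A funnel plot shows d_hat against se; asymmetry (effect size rising
# with se) is often read as publication bias. Here every study is
# "published", yet the two are strongly correlated by design.
r = np.corrcoef(d_hat, se)[0, 1]
print(f"correlation between observed effect and SE: {r:.2f}")
```

Plotting `d_hat` against `se` from this sketch would yield the classic asymmetric funnel, even though nothing was suppressed; the correlation comes entirely from the power-analysis step.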
Anyway, there’s a meta-story here as well. Just 10 days after Simonsohn’s blog post hit the interwebs, Nature ran a write-up about it (Cressey, 2017). Wow. The Nature summary quoted several researchers for their reactions, but these seem to have been drawn primarily from people who had already responded on Twitter.
The process here is pretty amazing: a researcher posts some simulations and commentary on their blog, ‘reviews’ pour in from the Twitter-verse, and 10 days later a high-profile journal summarizes the whole discourse. What a neat model for a rapid-feedback form of science. The only problem is that very few people have blogs that could garner that much attention, regardless of the quality or importance of the post. Maybe there should be a blog collective or something like that where researchers post. Maybe this is the publishing model of the future?