Brain Stimulation – Can we trust the empirical record?
Brain stimulation research has been exploding in neuroscience. First came the rapid adoption of Transcranial Magnetic Stimulation (TMS), a technique in which powerful magnetic fields are used to create inductive currents within the skull. More recently, Direct Current Stimulation (DCS) has burst onto the scene, a technique where current is simply pushed through the skull (it’s not much more sophisticated than strapping a small battery to your head). These techniques have launched literally thousands of studies, as researchers have been drawn by the allure of treating mental disorders by cheaply tweaking brain function. Best of all, these techniques offer the promise of personalized medicine, as the stimulation locations, magnitudes, and frequencies can be adjusted for each patient to obtain the best results.
Two recently published papers throw some cold water on TMS (Héroux, Loo, Taylor, & Gandevia, 2017) and DCS research (Héroux, Taylor, & Gandevia, 2015). Both papers survey researchers in the field, asking about their success replicating published results and their perceptions that others in the field use questionable research practices.
- Researchers reported that they frequently used previous sample sizes to determine their own sample sizes, a practice that is problematic given that sample sizes in brain stimulation research are known to be too small.
- For TMS, only 20% reported using power analysis to determine sample sizes. For DCS, 60% reported that they sometimes do this, but a random sample of 100 papers found only 6% mentioned power analysis for sample-size determination.
- Depending on the protocol, about 30-50% of respondents reported being unable to replicate previously published findings. Most, though, had chosen to give up on the protocol rather than publish negative results.
- Although relatively few researchers admitted to engaging in questionable research practices themselves, many reported that they believed others in the field were under-reporting, p-hacking, and the like.
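For readers unfamiliar with the practice mentioned in the bullets above: an a priori power analysis works backwards from a target power and an expected effect size to the sample size a study needs. Here is a minimal sketch using the standard normal approximation for a two-sample t-test (the function name and defaults are my own illustration, not anything from the surveyed papers):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sample t-test.

    Uses the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    which slightly underestimates the exact t-based answer for small n.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF (stdlib, 3.8+)
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) needs roughly 63 participants per group,
# while a small effect (d = 0.3) needs roughly 175 per group -- far more
# than chronically underpowered studies typically recruit.
print(n_per_group(0.5), n_per_group(0.3))
```

The point of the arithmetic is simple: plausible effect sizes for these interventions demand samples in the dozens to hundreds per group, so copying the sample size of an earlier (underpowered) study just propagates the problem.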
I must admit that reading these papers is a bit difficult: the exact wording of the surveys is not made entirely clear, and the way results are reported is sometimes very confusing (it can be tough to tell whether the percentages given are per respondent, per paper, per respondent per technique, etc.). Still, the overall message seems quite clear: researchers within the field can have trouble reproducing findings and perceive others as using questionable research practices. Thus, the fairly sunny literature on these techniques may be highly misleading, as the doubts and failed replications don't seem to be making it into print. The team that conducted the surveys points out that their data help explain how so many papers on these techniques can show statistically significant results despite very low power: the published results are probably just the tip of the iceberg.
It’s a shame to type this up. I remember rTMS exploding onto the scene when I was in grad school, and the equal excitement when DCS posters started showing up at conferences. The public is hungry for remedies, and there are already tons of clinics offering these therapies, many of them operating outside the U.S. It seems likely that the literature on brain stimulation is hopelessly biased, and that bias is offering the scientific veneer needed for these clinics to fleece many desperate patients. We can and should do better.