Banning p values? The journal ‘Political Analysis’ does it

Back in the 1980s, epidemiologist Kenneth Rothman was a leader of those trying to persuade researchers across medicine and the biosciences to use CIs routinely. The campaign was successful to the extent that the International Committee of Medical Journal Editors stated that CIs, or equivalent, should always be reported and that researchers should not rely solely on p values. Since then, the great majority of empirical articles in medicine have reported CIs, although often the intervals are not discussed or used as the basis for interpretation, and p values remain close to universal.

Rothman went further, arguing that p values should not be used at all. He founded a new journal, Epidemiology, in 1990 and was chief editor for close to a decade. He announced at the start that the journal would not publish any p values. We reported an evaluation of his bold experiment in Fidler et al. (2004). We found that he succeeded: virtually no p values appeared in the journal during his tenure as editor. Epidemiology demonstrated that good science can flourish entirely without p values; CIs were usually the basis for inference. Wonderful!

By contrast, in other cases enterprising editors ‘strongly encouraged’ the use of CIs instead of p values, but did not ban p values outright. For an example in psychology, see Finch et al. (2004). Researchers made more use of CIs, which was an improvement, but p values were still usually reported and used.

Very recently, the incoming editor of the political science journal Political Analysis announced a ban on p values. The editorial announcing the new policy is here.

Here is the key paragraph from the editorial:
“In addition, Political Analysis will no longer be reporting p-values in regression tables or elsewhere. There are many principled reasons for this change—most notably that in isolation a p-value simply does not give adequate evidence in support of a given model or the associated hypotheses. There is an extremely large, and at times self-reflective, literature in support of that statement dating back to 1962. I could fill all of the pages of this issue with citations. Readers of Political Analysis have surely read the recent American Statistical Association report on the use and misuse of p-values, and are aware of the resulting public discussion. The key problem from a journal’s perspective is that p-values are often used as an acceptance threshold leading to publication bias. This in turn promotes the poisonous practice of model mining by researchers. Furthermore, there is evidence that a large number of social scientists misunderstand p-values in general and consider them a key form of scientific reasoning. I hope other respected journals in the field follow our lead.”

Imho that’s a fabulous development. Yes, progress is happening across many disciplines. I’ll be eager to watch how things go in Political Analysis. Of course I join the new editor in the hope expressed in the final sentence above.

Geoff
P.S. Thanks to Fiona Fidler for the breaking news.

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. Psychological Science, 15, 119-126.

Finch, S., Cumming, G., Williams, J., Palmer, L., Griffith, E., Alders, C., Anderson, J., & Goodman, O. (2004). Reform of statistical inference in psychology: The case of Memory & Cognition. Behavior Research Methods, Instruments & Computers, 36, 312-324.
