Banning p values? The journal ‘Political Analysis’ does it

Back in the 1980s, epidemiologist Kenneth Rothman was a leader of those trying to persuade researchers across medicine and the biosciences to use CIs routinely. The campaign was successful to the extent that the International Committee of Medical Journal Editors stated that CIs, or equivalent, should always be reported and that researchers should not rely solely on p values. Since then, the great majority of empirical articles in medicine have reported CIs, although often the intervals are not discussed or used as the basis for interpretation, and p values remain close to universal.
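To make the contrast concrete, here is a minimal sketch of what CI-style reporting looks like in practice. This is my own illustration with invented sample data, not anything from Rothman's campaign; the critical t value is the standard tabled value for 9 degrees of freedom:

```python
import math
import statistics

# Invented sample data, purely for illustration
scores = [4.1, 5.2, 3.9, 6.0, 5.5, 4.8, 5.1, 4.4, 5.9, 4.6]

n = len(scores)
mean = statistics.mean(scores)
se = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean

# Critical t value for a 95% CI with df = n - 1 = 9 (from a t table)
t_crit = 2.262

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"Mean = {mean:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

The point of reporting the interval rather than a lone p value is that the reader sees the estimate and its precision directly, which is exactly the kind of information a bare "p < .05" hides.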

Rothman went further, arguing that p values should not be used at all. He founded a new journal, Epidemiology, in 1990 and was chief editor for close to a decade. He announced at the start that the journal would not publish any p values. We reported an evaluation of his bold experiment in Fidler et al. (2004). We found that he succeeded: virtually no p values appeared in the journal during his tenure as editor. Epidemiology demonstrated that good science can flourish entirely without p values; CIs were usually the basis for inference. Wonderful!

By contrast, in other cases enterprising editors ‘strongly encouraged’ the use of CIs instead of p values, but did not ban p values outright. For an example in psychology, see Finch et al. (2004). Researchers made more use of CIs, which was an improvement, but p values were still usually reported and used.

Very recently, the incoming editor of the political science journal Political Analysis announced a ban on p values. The editorial announcing the new policy is here.

Here is the key paragraph from the editorial:
“In addition, Political Analysis will no longer be reporting p-values in regression tables or elsewhere. There are many principled reasons for this change—most notably that in isolation a p-value simply does not give adequate evidence in support of a given model or the associated hypotheses. There is an extremely large, and at times self-reflective, literature in support of that statement dating back to 1962. I could fill all of the pages of this issue with citations. Readers of Political Analysis have surely read the recent American Statistical Association report on the use and misuse of p-values, and are aware of the resulting public discussion. The key problem from a journal’s perspective is that p-values are often used as an acceptance threshold leading to publication bias. This in turn promotes the poisonous practice of model mining by researchers. Furthermore, there is evidence that a large number of social scientists misunderstand p-values in general and consider them a key form of scientific reasoning. I hope other respected journals in the field follow our lead.”

Imho that’s a fabulous development. Yes, progress is happening across many disciplines. I’ll be eager to watch how things go in Political Analysis. Of course I join the new editor in the hope expressed in the final sentence above.

P.S. Thanks to Fiona Fidler for the breaking news.

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. Psychological Science, 15, 119-126.

Finch, S., Cumming, G., Williams, J., Palmer, L., Griffith, E., Alders, C., Anderson, J., & Goodman, O. (2004). Reform of statistical inference in psychology: The case of Memory & Cognition. Behavior Research Methods, Instruments & Computers, 36, 312-324.

3 Comments on “Banning p values? The journal ‘Political Analysis’ does it”

  1. Thank you Deborah for your comment. I totally agree that it’s vital to consider the assumptions of any statistical model we are using; this requires knowledgeable judgment in the research context. I wouldn’t want to see simple significance tests used for that, or for any other purpose. More basically, I’m not especially concerned to engage with the detailed reasons why that editor elected to ban p values; it sounds like there was a range of reasons. I am very happy to stand and applaud, and wish the journal well.
    Geoff Cumming

    • I hadn’t noticed your reply. Blind applause for any and all bans of a method (that is at least one integral part of distinguishing signal from noise), regardless of how poorly substantiated the reasons, is poor science. I’m afraid that’s the type of attitude behind the unthinking use of methods that you and I are keen to expunge.

      And what are the methods for testing assumptions that do not (implicitly or explicitly) use statistical significance test reasoning? Eyeballing is not a method.

  2. The editor is saying that unless a statistical tool gives a measure of evidence in support of a hypothesis or model in utter isolation it should be banned. This is really quite incredible. Insofar as any statistical method is capable of testing its assumptions, it will be to simple significance tests that they will turn. Without this, the applications are illicit. Therefore, banning significance tests ensures your application of statistics will be illicit.
