Month: May 2017

From NHST to the New Statistics — How do we get there?

APS just wrapped up. Geoff and I were privileged to help host a symposium on making progress moving the field away from p values towards the New Statistics. Our co-conspirators were fellow textbook author Susan Nolan, Psychological Science editor Stephen

Posted in NHST, Open Science, Teaching, The New Statistics

Getting the whole story: journals could be more encouraging

Even though replication is a cornerstone of the scientific method, psychology journals rarely publish direct replications (though that situation may be changing).  Why not?  Is it self-censorship, with authors not bothering to conduct or submit such studies?  Or is it

Posted in Open Science, Replication

From the APS Convention in Boston

Bob and I are in Boston this weekend for the annual APS Convention. It’s great to catch up, and discuss a million things about ITNS and this blog, and our future plans. Our publisher told us yesterday that early signs

Posted in ITNS, Open Science, Teaching, The New Statistics

Confirmatory Research – A special issue of JESP

Catching up a bit, but in November of 2016 the Journal of Experimental Social Psychology published a special issue dedicated just to confirmatory research: http://www.sciencedirect.com/science/journal/00221031/67/supp/C The whole issue is well worth reading: there is an excellent guide to pre-registration (ostensibly for

Posted in Open Science, Replication

A cool new journal is Open

APS (The Association for Psychological Science) recently launched its sixth journal: Advances in Methods and Practices in Psychological Science. A dreadful mouthful of a title–why not drop ‘Advances in’ for a start–but it looks highly promising. Maybe it will become

Posted in Open Science, Replication

Publishing unexpected results as a moral obligation for scientists

Amen! A revised European Code of Conduct for Research Integrity now explicitly calls on researchers and publishers not to bury negative results. Specifically, the guidelines formulate this principle for publication and dissemination: Authors and publishers consider negative results to be

Posted in Open Science

What the datasaurus tells us: Data pictures are cool

In various places in ITNS, especially Chapter 11 (Correlation), we discuss how important it is to make good pictures of data, to reveal what’s really going on. Calculating a few summary statistics–or even CIs–often just doesn’t do the job. Many
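
As a rough illustration of that point (a sketch of ours, not code from the post), here is a short Python example using Anscombe's quartet, a classic set of four small datasets with near-identical means, SDs, and correlations that look completely different when plotted; the datasaurus makes the same point more dramatically. It assumes seaborn and matplotlib are installed.

import seaborn as sns
import matplotlib.pyplot as plt

# Anscombe's quartet ships with seaborn; columns are dataset (I-IV), x, y.
df = sns.load_dataset("anscombe")

# Near-identical summary statistics across the four datasets...
print(df.groupby("dataset")[["x", "y"]].agg(["mean", "std"]).round(2))
print(df.groupby("dataset").apply(lambda d: d["x"].corr(d["y"])).round(3))

# ...but strikingly different pictures once you actually plot the data.
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, height=3)
plt.show()

Running it prints four nearly interchangeable sets of summaries, then draws four scatterplots that clearly come from very different data: exactly the gap between numbers and pictures that the post is about.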

Posted in Statistical graphics

Now for some good news: SIPS

The Society for the Improvement of Psychological Science (SIPS) held its first meeting last year, with around 100 good folks attending. Working groups have been–would you believe–working hard since then. The second meeting is 30 July to 1 August, in

Posted in Open Science, Stats tools

Methodological awakening: backlash against the backlash

Science has had some rough times lately, no doubt. No need to rehearse the many findings indicating that we have some problems that need fixing. As Will Gervais put it, we’re in the midst of a “methodological awakening”.  The

Posted in Open Science, Replication

Replication problems are not competence problems

Why do some replication studies fail to produce the expected results?  There are lots of possible reasons: the expectation might have been poorly founded, the replication study could have been under-powered, there could be some unknown moderator, etc.  Sure, but

Posted in Replication