Category: Replication

NeuRA Ahead of the Open Science Curve

I had great fun yesterday visiting NeuRA (Neuroscience Research Australia), a large research institute in Sydney. I was hosted by Simon Gandevia, Deputy Director, who has been a long-time proponent of Open Science and The New Statistics. NeuRA’s Research Quality

Replications: How Should We Analyze the Results?

Does This Effect Replicate? It seems almost irresistible to think in terms of such a dichotomous question! We crave an ‘it-did’ or ‘it-didn’t’ answer! However, rarely if ever is a bald yes-no decision the most informative way to

AIMOS — The New Interdisciplinary Meta-Research and Open Science Association

Association for Interdisciplinary Meta-Research & Open Science (AIMOS) I had a fascinating two days down at the University of Melbourne last week for the first AIMOS conference. The program is here and you can click through to see details of

Meta-Science: It’s all Happening in Melbourne

Are you interested in meta-science? In Open Science? If so, check out the inaugural conference of AIMOS, the Association for Interdisciplinary Meta-Research & Open Science. It’s a two-day meeting, on 7 & 8 November, at the University of Melbourne. There’s an

‘Open Statistics’: It’s All Happening in Italy

I knew good things were happening at the University of Bologna this (northern) summer. Now I know the details. The brochure is here, and this is part of the title page: What do they mean by ‘Open Statistics’? As I

The TAS Articles: Geoff’s Take

Judging Replicability: Fiona’s repliCATS Project

Judging Replicability Whenever we read a research article we almost certainly form a judgment of its believability. To what extent is it plausible? To what extent could it be replicated? What are the chances that the findings are true? What

Moving to a World Beyond “p < 0.05”

The 43 articles in The American Statistician discussing what researchers should do in a “post p<.05” world are now online. See here for a list of them all, with links to each article. The collection starts with an editorial: Go

Joining the fractious debate over how to do science best

At the end of the month (March 2019) the American Statistical Association will publish a special issue on statistical inference “after p values”. The goal of the issue is to focus on the statistical “dos” rather than statistical “don’ts”. Across

Sizing up behavioral neuroscience – a meta-analysis of the fear-conditioning literature

Inadequate sample sizes are kryptonite to good science: they produce waste, spurious results, and inflated effect sizes. Doing science with an inadequate sample is worse than doing nothing. In the neurosciences, large-scale surveys of the literature show that inadequate sample sizes
