Internal Meta-Analysis: The Latest

I recently wrote in favour of internal meta-analysis, which refers to a meta-analysis that integrates evidence from two or more studies on more-or-less the same question, all coming from the same lab and perhaps reported in a single article. The post is here.

This month’s issue of Significance magazine carries an article that also argues in favour of internal meta-analysis, which it refers to as single paper meta-analysis.

McShane, B. B., & Böckenholt, U. (2018). Want to make behavioural research more replicable? Promote single paper meta-analysis. Significance, December, 38–40. (The article is behind a paywall, so I can’t give a link to the full paper.)

The article provides a link to software that, the authors claim, makes it easy to carry out meta-analysis, using their recommended hierarchical (or multilevel) model fit to the individual-level observations.
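I haven’t used their software, but to make the general idea concrete, here is a minimal sketch in Python using statsmodels (the study labels, column names, and simulated data are all my own inventions, not the authors’): fit one multilevel model to the pooled individual-level observations from a few studies, with study as the grouping factor, so that the overall condition effect is the single-paper meta-analytic estimate.

```python
# Minimal sketch of a single-paper meta-analysis as a multilevel model
# (not the authors' SPM software). All data and labels here are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for study, n, true_effect in [("Study1", 40, 0.3), ("Study2", 60, 0.2), ("Study3", 50, 0.4)]:
    condition = rng.integers(0, 2, size=n)                 # 0 = control, 1 = treatment
    y = 0.1 + true_effect * condition + rng.normal(0, 1, size=n)
    rows.append(pd.DataFrame({"study": study, "condition": condition, "y": y}))
data = pd.concat(rows, ignore_index=True)

# One hierarchical model over all observations, with a random intercept for
# study; the fixed 'condition' coefficient is the combined effect estimate.
result = smf.mixedlm("y ~ condition", data, groups=data["study"]).fit()
est, se = result.params["condition"], result.bse["condition"]
print(f"Combined effect = {est:.2f}, 95% CI [{est - 1.96*se:.2f}, {est + 1.96*se:.2f}]")
```

The familiar two-stage alternative (compute an effect size for each study, then combine) will often give a similar answer; the attraction of the one-stage model is that it works directly from the raw observations.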

Geoff

P.S. Note that Blakely McShane, the first author, is also first author of the article Abandon Statistical Significance that I recently wrote about–see here.

Abandon Statistical Significance!

That’s the title of a paper accepted for publication in The American Statistician. (I confess that I added the “!”) The paper is here. Scroll down to see the abstract. The paper boasts an interdisciplinary team of authors, including Andrew Gelman of blog fame.

I was, of course, eager to agree with all they wrote. However, while there is much excellent stuff and I do largely agree, they don’t go far enough.

Down With Dichotomous Thinking!

Pages 1-11 discuss numerous reasons why it’s crazy to dichotomise results, whether on the basis of a p value threshold (.05, .005, or some other value) or in some other way–perhaps noting whether or not a CI includes zero, or whether a Bayes factor exceeds some threshold. I totally agree. Dichotomising results, especially as statistically significant or not, throws away information, is likely to mislead, and is a root cause of selective publication, p-hacking, and other evils.

So, What to Do?

The authors state that they don’t want to ban p values. They recommend “that the p-value be demoted from its threshold screening role and instead, treated continuously, be considered along with currently subordinate factors (e.g., related prior evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain) as just one among many pieces of evidence.” (abstract)

That all seems reasonable. Yes, if p values are mentioned at all they should be considered as a graduated measure. However, the authors “argue that it seldom makes sense to calibrate evidence as a function of the p-value.” (p. 6) Yet in later examples, such as in Appendix B, they interpret p values as indicating the strength of evidence that some effect is non-zero. I sense an ambivalence: The authors present strong arguments against using p values, but cannot bring themselves to take the logical next step and stop using them altogether.

Why not? One reason, I think, is that they don’t discuss in detail any other way that inferential information from the data can be used to guide discussion and interpretation, alongside the ‘subordinate factors’ that they very reasonably emphasise all through the paper. For me, of course, that missing inferential information is, most often, estimation information. Once we have point and interval estimates from the data, p values add nothing and are only likely to mislead.

In the context of a neuroimaging example, the authors state that “Plotting images of estimates and uncertainties makes sense to us” (p. 15). Discussing pure research, they state “we would like to see researchers simply report results: estimates, standard errors, confidence intervals, etc., with statistically inconclusive results being relevant for motivating future research.” That’s fine, but it falls a long way short of recommending that point and interval estimates almost always be reported, and almost always be used as the primary data-derived inferential information to guide interpretation.
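To be concrete about the estimation information I have in mind, here is a minimal sketch in Python (with invented data) of the sort of summary I would like to see reported as the primary inferential result: the point estimate of the difference between two independent groups, together with its 95% CI, and no p value in sight.

```python
# Minimal sketch: report a point estimate and 95% CI for the difference
# between two independent group means. The data are invented.
import numpy as np
from scipy import stats

treatment = np.array([5.1, 6.3, 4.8, 7.0, 5.9, 6.4, 5.5, 6.8])
control = np.array([4.2, 5.0, 4.7, 5.6, 4.9, 5.3, 4.4, 5.8])

diff = treatment.mean() - control.mean()
n1, n2 = len(treatment), len(control)
# Pooled standard deviation, standard error of the difference, and t critical value
sp = np.sqrt(((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
se = sp * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)

print(f"Difference = {diff:.2f}, 95% CI [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")
```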

Meta-analytic Thinking

There is a recommendation for “increased consideration of models that use informative priors, that feature varying treatment effects, and that are multilevel or meta-analytic in nature” (p. 16). That hint is the only mention of meta-analysis.

For me, the big thing missing in the paper is any sense of meta-analytic thinking–the sense that our study should be considered as a contribution to future meta-analyses, as providing evidence to be integrated with other evidence, past and future. Replications are needed, and we should make it as easy as possible for others to replicate our work. From a meta-analytic perspective, of course we must report point and interval estimates, as well as very full information about all aspects of our study, because that’s what replication and meta-analysis will need. Better still, we should also make our full data and analysis open.

For a fuller discussion of estimation and meta-analysis, see our paper that’s also forthcoming in The American Statistician. It’s here.

Geoff

Here’s the abstract of McShane et al.:

Open Science DownUnder: Simine Comes to Town

A week or two ago Simine Vazire was in town. Fiona Fidler organised a great Open Science jamboree to celebrate. The program is here and a few of the sets of slides are here.

Simine on the credibility revolution

First up was Simine, speaking to the title THE CREDIBILITY REVOLUTION IN PSYCHOLOGICAL SCIENCE. Her slides are here. She reminded us of the basics then explained the problems very well. Enjoy her pithy quotes and spot-on graphics.

My main issue with her talk, as I said at the time, was the p value and NHST framework that she used. I’d love to see the parallel presentation of the problems and OS solutions, all set out in terms of estimation. Of course it’s easy to cherry-pick and do other naughty things when using CIs, but, as we discuss in ITNS, there should be less pressure to p-hack, and the lengths of the CIs give additional insight into what’s going on. Switching to estimation doesn’t solve all problems, but should be a massive step forward.

A vast breadth of disciplines

Kristian Camilleri described the last few decades of progress in history and philosophy of science. Happily, there’s now much HPS interest in the practices of human scientists. So there’s lots of overlap with the concerns of all of us interested in developing OS practices.

Then came speakers from psychology (naturally), but also evolutionary biology, law, statistics, ecology, oncology, and more. I mentioned the diversity of audiences I’ve been invited to address this year on statistics and OS issues–from Antarctic research scientists to cardiothoracic surgeons.

Mainly we noted the commonality of problems of research credibility across disciplines. To some extent core OS offers solutions; to some extent situation-specific variations are needed. A good understanding of the problems (selective publication, lack of replication, misleading statistics, lack of transparency…) is vital, in any discipline.

IMeRG

Fiona’s own research group at The University of Melbourne is IMeRG (Interdisciplinary MetaResearch Group). It is, as its title asserts, strongly interdisciplinary in focus. Researchers and students in the group outlined their current research progress. See the IMeRG site for topics and contact info.

Predicting the outcome of replications

Bob may be the world champion at selecting articles that won’t replicate: I’m not sure of the latest count, but I believe only 1 or 2 of the dozen or so articles that he and his students have very carefully replicated have withstood the challenge. Only 1 or 2 of their replications have found effects of anything like the original effect sizes. Most have found effect sizes close to zero. 

Several projects have attempted to predict the outcome of replications, then assessed the accuracy of the predictions. Fiona is becoming increasingly interested in such research, and ran a Replication Prediction Workshop as part of the jamboree. I couldn’t stay for that, but she introduced it as practice for larger prediction projects she has planned.

You may know that cyberspace has been abuzz this last week or so with the findings of Many Labs 2, a giant replication project in psychology. Predictions of replication outcomes were collected in advance: Many were quite accurate. A summary of the prediction results is here, along with links to earlier studies of replication prediction.

It would be great to know what characteristics of a study are the best predictors of successful replication. Short CIs and large effects no doubt help. What else? Let’s hope research on prediction helps guide development of OS practices that can increase the trustworthiness of research.

Geoff

P.S. The Australasian Meta-Research and Open Science Meeting 2019 will be held at The University of Melbourne, Nov 7-8 2019.

Cabbage? Open Science and cardiothoracic surgery

“The best thing about being a statistician is that you get to play in everyone’s backyard.” –a well-known quote from John Tukey.

Cabbage? That’s CABG–see below.

A week or so ago Lindy and I spent a very enjoyable 5 days of sun, surf, and sand at Noosa Heads in Queensland. I spoke at the Statistics Day of the Annual Scientific Meeting of ANZSCTS (Australian and New Zealand Society of Cardiothoracic Surgeons). The program is here (scroll down to p. 18).

My first talk, to open the day, was “Setting the scene–problems with current design, analysis and reporting of medical research”. The slides are here.

In the afternoon I spoke on “‘Open science’–the answer to the problem?”. The slides are here.

Once again, I learned that:

  • The problems of selective publication, lack of reproducibility, and lack of full access to data and materials are, largely, common across numerous disciplines. And many researchers have increasing awareness of such problems.
  • Familiar Open Science practices (preregistration, open materials and data, publishing whatever the results, …) have wide applicability. However, each discipline and research field needs to develop its own best strategies for achieving, as well as it can, Open Science goals.

Technology races on…

I referred to a 2018 meta-analysis (pic below) that combined the results of 7 RCTs that compared two ways to rejoin the two halves of the sternum (breast bone) after open-chest surgery. The conclusion was that there’s not much to choose between wires and traditional suturing.

That was a 2018 article, but two commercial exhibitors were touting the advantages of devices that they claimed were better than either procedure assessed in the Pinotti et al. review. One was a metal clamp that has, apparently, been used for thousands of patients in China and has just been approved for use in Australia, on the basis of one RCT. The second looked like up-market plastic cable ties.

Open Science may set out ideal practice for researchers, but meanwhile regulators and practitioners must constantly make judgments on the basis of less than ideal amounts of evidence and less than desirable precision of estimates.

PCI or CABG? Just run a replication!

PCI is percutaneous coronary intervention, usually the insertion of a stent in a diseased section of coronary artery. The stent is typically inserted via a major blood vessel, for example the femoral artery from the groin.

CABG (“Cabbage”) is the much more invasive coronary artery bypass grafting, which requires open-chest surgery.

How do they compare? Arie Pieter Kappetein told us the fascinating story of research on that question. He described the SYNTAX study, a massive comparison of PCI and CABG that involved 85 centres across the U.S. and Europe. At the 5-year follow-up stage, little overall difference was found between the two very different techniques. Some clinical advice could be given. There were many valuable subgroup analyses, some of which gave only tentative conclusions.

Replication was needed! More than 5 years and $80M later, he could describe results from the even larger EXCEL study. Again, there were many valuable insights and little overall difference, and the researchers are now seeking funding to follow the patients beyond 5 years. Recently his team has published a patient-level meta-analysis of results from 11 randomised trials involving 11,518 patients. Some valuable differences were identified and recommendations for clinical practice were made but, again, there was little overall difference in several of the most important outcomes–such as death.

So, in some fields, replication, if possible at all, is rather more challenging than simply running another hundred or so participants on your simple cognitive task!

Databases

Some of the most interesting papers I attended were retrospective studies of cases sourced from large patient databases. Such databases, as large and detailed as possible, are a highly valuable research resource. One seminar was devoted to the practicalities of setting up a major thoracic database, alongside the existing Australian cardiac database. The vast range of practicalities to be considered made clear how challenging it is to set up and keep running such databases.

Coincidentally, The New Yorker that week published a wonderful article by Atul Gawande–one of my favourite writers–with the title Why Doctors Hate Their Computers. It seemed to me so relevant to that day’s cardiothoracic database discussions.

I hope you never have to worry about whether to prefer PCI or cabbage!

Geoff

Internal Meta-Analysis: Useful or Disastrous?

A recent powerful blog post (see below) against internal meta-analysis prompts me to ask the question above. (Actually, Steve Lindsay prompted me to write this post; thanks Steve.)

In ITNS we say, on p. 243: “To carry out a meta-analysis you need a minimum of two studies, and it can often be very useful to combine just a few studies. Don’t hesitate to carry out a small-scale meta-analysis whenever you have studies it would be reasonable to combine.”

Internal meta-analysis
The small number to be combined could be published studies, or your own studies (perhaps to appear in a single journal article) or, of course, some of each. Combining your own studies is referred to as internal meta-analysis. It can be an insightful part of presenting, discussing, and interpreting a set of closely related studies. In Lai et al. (2012), for example, we used it to combine the results from three studies that used three different question wordings to investigate the intuitions of published researchers about the sampling variability of the p value. (Those studies are from the days before preregistration, but I’m confident that our analysis and reporting were straightforward and complete.)
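To illustrate the mechanics (the numbers below are made up, not the Lai et al. results), here is a minimal small-scale random-effects meta-analysis in Python, combining three study estimates with the familiar DerSimonian-Laird approach:

```python
# Minimal sketch of a small-scale random-effects meta-analysis
# (DerSimonian-Laird). Effect sizes and variances are invented.
import numpy as np

effects = np.array([0.42, 0.31, 0.55])       # point estimate from each study
variances = np.array([0.020, 0.015, 0.030])  # squared standard errors

w = 1 / variances                            # inverse-variance (fixed-effect) weights
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)       # heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / C)  # between-study variance estimate

w_star = 1 / (variances + tau2)              # random-effects weights
combined = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"Combined effect = {combined:.2f}, 95% CI [{combined - 1.96*se:.2f}, {combined + 1.96*se:.2f}]")
```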

The case against
The blog post is from the p-hacking gurus and is here. The main message is summarised in this pic:

The authors argue that even a tiny amount of p-hacking of each included study, and/or a tiny amount of selection of which studies to include, can have a dramatically large biasing effect on the result of the meta-analysis. They are absolutely correct. They frame their argument largely in terms of p values and whether or not a study, or the whole meta-analysis, gives a statistically significant result.
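Their point is easy to demonstrate with a toy simulation. Here is a sketch in Python (my own setup, not the blog authors' code): every study examines a truly zero effect, but in each study the analyst looks at two correlated outcome measures and reports whichever shows the larger effect, and the studies are then combined by fixed-effect (inverse-variance) meta-analysis.

```python
# Toy simulation (my setup, not the blog authors' code): mild p-hacking of each
# study -- reporting the better of two correlated outcomes -- biases an internal
# meta-analysis even though the true effect is zero everywhere.
import numpy as np

rng = np.random.default_rng(0)
n_per_group, k_studies, n_sims = 30, 4, 2000
cov = [[1, 0.5], [0.5, 1]]                   # two outcomes, correlated 0.5
sig_count, estimates = 0, []

for _ in range(n_sims):
    study_effects, study_vars = [], []
    for _ in range(k_studies):
        treat = rng.multivariate_normal([0, 0], cov, size=n_per_group)
        ctrl = rng.multivariate_normal([0, 0], cov, size=n_per_group)
        diffs = treat.mean(axis=0) - ctrl.mean(axis=0)
        best = int(np.argmax(diffs))         # report the 'better looking' outcome
        var = (treat[:, best].var(ddof=1) + ctrl[:, best].var(ddof=1)) / n_per_group
        study_effects.append(diffs[best])
        study_vars.append(var)
    w = 1 / np.array(study_vars)             # fixed-effect meta-analysis
    combined = np.sum(w * np.array(study_effects)) / np.sum(w)
    se = np.sqrt(1 / np.sum(w))
    estimates.append(combined)
    if abs(combined / se) > 1.96:
        sig_count += 1

print(f"Mean combined estimate (true effect is 0): {np.mean(estimates):.3f}")
print(f"Proportion of meta-analyses 'significant': {sig_count / n_sims:.2f}")
```

Even this mild selection biases the combined estimate upward and pushes the rate of spuriously ‘significant’ meta-analyses noticeably above the nominal 5%; heavier p-hacking, or selective inclusion of studies, makes things far worse, which is exactly the authors’ point.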

Of course, I’d prefer to see no p values at all, and the whole argument made in terms of point and interval estimates–effect sizes and CIs. Using estimation should decrease the temptation to p-hack, although estimation is of course still open to QRPs: results are distorted if choices are made in order to obtain shorter CIs. Do that for every study and the CI on the result of the meta-analysis is likely to be greatly and misleadingly shortened. Bad!

Using estimation throughout should not only reduce the temptation to p-hack, but also assist understanding of each study and of the whole meta-analysis, and so may reduce the chance that an internal meta-analysis will be as misleading as the authors illustrate.

Why internal?
I can’t see why the authors focus on internal meta-analysis. In any meta-analysis, a small amount of p-hacking in even a handful of the included studies can easily lead to substantial bias. At least with an internal meta-analysis, which brings together our own studies, we have full knowledge of the included studies. Of course we need to be scrupulous to avoid p-hacking any study, and any biased selection of studies, but if we do that we can proceed to carry out, interpret, and report our internal meta-analysis with confidence.

The big meta-analysis bias problem
It’s likely to be long into the future before many meta-analyses can include only carefully preregistered and non-selected studies. For the foreseeable future, many or most of the studies we need to include in a large meta-analysis carry risks of bias. This is a big problem, probably without any convincing solution short of abandoning just about all research published earlier than a few years ago. Cochrane attempts to tackle the problem by having authors of any systematic review estimate the extent of various types of bias in each included study, but such estimates are often little more than guesses.

Our ITNS statement
I stand by our statement in favour of internal meta-analysis. Note that it is made in the context of a book that introduces Open Science ideas in Chapter 1, and discusses and emphasises them in many places, including in the meta-analysis chapter. Yes, Open Science practices are vital, especially for meta-analysis! Yes, bias can compound alarmingly in meta-analysis! However, the problem may be less for a carefully conducted internal meta-analysis, rather than more.

Geoff

Lai, J., Fidler, F., & Cumming, G. (2012). Subjective p intervals: Researchers underestimate the variability of p values over replication. Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 8, 51-62. doi:10.1027/1614-2241/a000037

Eating Disorders Research: Open Science and The New Statistics

I’m in Sydney, the great Manly surf beach just over the road. It’s an easy ferry ride to the Opera House and city centre. Lindy and I started this trip up from Melbourne with a few days with a cousin, at her house high above Killcare beach an hour north of Sydney. We enjoyed watching the humpback whales migrating south.

To business. I’m at the 24th Annual Meeting of the global Eating Disorders Research Society. I gave my invited talk and workshop yesterday. It seemed to go well, and all the informal chat I’ve had with folks since has been positive. There was already very clear awareness of the need for change, even if much of the detail I discussed was new to many.

The slides for my talk are here, and for the workshop are here.

I thought that one of the most interesting discussions was about the challenges of conducting replications in eating disorders (ED) research. I’d anticipated that by bringing up the great paper by Scott Lilienfeld and colleagues on replication in clinical psychology. When Scott took over as editor of Clinical Psychological Science, he introduced badges and policies to encourage Open Science practices.

That paper is Tackett et al. (2017). It discussed issues relevant to replication; we agreed yesterday that these are largely relevant also to ED replication research. The issues included:

* Case studies, qualitative methods, correlational studies
* Exploratory, question-generating studies
* Large archival data sets
* Small specialised populations, small-N studies
* Difficulty of standardising measures, and treatments
* Messy and noisy data
* Need to focus on effect sizes, and to pool data where possible

Some of the main conclusions were that researchers should, where possible, aim for:
* Reduced QRPs
* Preregistration, of suitable kind; open materials and data, where possible
* Independent replications, use existing data where appropriate
* Improvements to current practices, to improve replicability
* Increased statistical power (larger N, better control, stronger IVs)
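On that last point, sample-size planning is easy to script. Here is a minimal sketch in Python using statsmodels (the target effect size, alpha, and power are placeholders I chose for illustration, not values from the Tackett et al. paper):

```python
# Minimal sketch: N per group needed for 80% power to detect a smallish
# standardised effect with an independent-groups t test. Numbers are
# illustrative placeholders only.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.4, alpha=0.05,
                                          power=0.80, alternative="two-sided")
print(f"Required N per group: {n_per_group:.0f}")  # roughly 100 per group
```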

Two overall conclusions of mine were:
* Think meta-analytically …and therefore use the new statistics
* Tailor OS solutions to the research field

This year I have given presentations to Antarctic and marine research scientists, orthodontists, and now ED researchers. My main take-home message is that the issues and problems are largely similar across disciplines and that to some extent the solutions are similar, but that to an important extent the solutions need to be figured out in each different research context.

Happy replicating, happy surfing,
Geoff

Tackett, J. L., Lilienfeld, S. O., Patrick, C. J., Johnson, S. L., Krueger, R. F., Miller, J. D., … & Shrout, P. E. (2017). It’s time to broaden the replicability conversation: Thoughts for and from clinical psychological science. Perspectives on Psychological Science, 12, 742-756. tiny.cc/ClinPsyRep

Cochrane: Matthew Page Wins the Prize!

Years ago, Matthew Page was a student in the School of Psychological Science at La Trobe University (in Melbourne), working with Fiona Fidler and me. He somehow (!) became interested in research methods and practices, especially as related to meta-analysis. He moved to Cochrane Australia, which is based at Monash University, also in Melbourne.

After completing a PhD there he had a post-doc with Julian Higgins, of Cochrane and meta-analysis fame, in Bristol, U.K. Then he returned to Cochrane Australia, where he is now a research fellow.

He has been building a wonderful research record, working with some leading scholars, including Julian Higgins (of course) and the late Doug Altman.

It was wonderful to hear, a day or two ago, this announcement:
“Cochrane Australia Research Fellow and Co-Convenor of the Cochrane Bias Methods Group Matthew Page recently took out this year’s Bill Silverman Prize, which recognises and celebrates the role of constructive criticism of Cochrane and its work.”

Read more about the prize and Matt’s achievements here.

Congratulations Matt!

Geoff

The pic below shows Prof David Henry (left) presenting Matt with this year’s Bill Silverman Prize at the Cochrane Colloquium in Edinburgh.

Draw Pictures to Improve Learning?

In ITNS we included a short section near the start describing good strategies for learning, based on empirical studies. Scattered through the book are reminders and encouragement to use the effective strategies. Now, just as we’re thinking about possible improvements in a second edition, comes this review article:

Fernandes, M. A., Wammes, J. D., & Meade, M. E. (2018). The surprisingly powerful influence of drawing on memory. Current Directions in Psychological Science, 27, 302–308. doi:10.1177/0963721418755385

It’s behind a paywall, but here is the abstract:

The surprisingly powerful influence of drawing on memory: Abstract
The colloquialism “a picture is worth a thousand words” has reverberated through the decades, yet there is very little basic cognitive research assessing the merit of drawing as a mnemonic strategy. In our recent research, we explored whether drawing to-be-learned information enhanced memory and found it to be a reliable, replicable means of boosting performance. Specifically, we have shown this technique can be applied to enhance learning of individual words and pictures as well as textbook definitions. In delineating the mechanism of action, we have shown that gains are greater from drawing than other known mnemonic techniques, such as semantic elaboration, visualization, writing, and even tracing to-be-remembered information. We propose that drawing improves memory by promoting the integration of elaborative, pictorial, and motor codes, facilitating creation of a context-rich representation. Importantly, the simplicity of this strategy means it can be used by people with cognitive impairments to enhance memory, with preliminary findings suggesting measurable gains in performance in both normally aging individuals and patients with dementia.

For the original articles that report the drawing-for-learning studies, see the reference list in the review article, or search for publications in 2016 and after by any of the three authors.

A few thoughts
I haven’t read the original articles, and the review doesn’t give values for effect sizes, but the research program–largely published in the last couple of years–takes an impressively broad empirical approach. There are many comparisons of different approaches to encoding, elaboration, and testing of learning. Drawing holds up very well in the great majority of comparisons. There are interesting suggestions, some already tested empirically, as to why drawing is so effective as a learning strategy.

As usual, lots of questions spring to mind. How effective could drawing be for learning statistical concepts? How could it be used along with ESCI simulations? Would it help for ITNS to suggest good ways to draw particular concepts, or should students be encouraged to generate their own representations?

These and similar questions seem to me to align very well with our basic approach in ITNS of emphasising vivid pictorial representations whenever we can. The dances, the cat’s eye picture, the forest plot…

Perhaps we should include drawing as a powerful extra recommended learning strategy, with examples and suggestions included in ITNS at strategic moments?

As usual, your comments and advice are extremely welcome. Happy sketching!

Geoff

ITNS–The Second Edition!

Routledge, our publisher, has started planning for a second edition. That’s very exciting news! The only problem is that Bob and I can’t think of anything that needs improving. Ha! But, seriously, we’d love to hear from you about things we should revise, update, or somehow improve. (Of course, we’d also love to hear about the good aspects.) We’d especially like to hear from:

Teachers who are using ITNS. What do you like? What’s missing? What are the irritations? What difficulties have you encountered?

Students who are using ITNS. Same questions! Also, how could the book be more appealing, accessible, effective, even fun?

Potential teachers. You have considered ITNS, perhaps examining an inspection copy, but you decided against adoption. Why? Was it mainly the book and ancillaries, or outside factors? How could we revise so that you would elect to adopt?

The Routledge marketing gurus tell us that one strong message back from the field is: “ITNS is really good, just what the world needs and should be using. But for me, right now, it’s too hard to change. I’ll wait until others are using it, maybe until I’m forced to change.” If that’s how you feel, please let us know.

Perhaps that position is understandable, but it seems to conflict with the enthusiasm with which some (many?) young researchers are embracing Open Science, and the major changes to research practices that Open Science requires. Consider, for example, the emergence of SIPS and, just recently, the Down-Under version.

That position (i.e., it’s too hard to change right now) also contrasts sharply with the strong and positive responses that Bob and I get whenever we give talks or workshops about the new statistics and Open Science.

So we’re puzzled why more teachers are not yet switching their teaching approach–we’ve tried hard to make ITNS and, especially, its ancillaries as helpful as we can for anyone wishing to make the switch.

Thinking about how we could improve ITNS, here are a few of the issues you may care to comment about:

Open Science: Lots has happened since we finalised the text of ITNS. We would certainly revise the examples and update our report of how Open Science is progressing. However, the basics of Open Science, as discussed in Chapter 1 and several later chapters, endure. ITNS is the first introductory text to integrate Open Science ideas all through, so we had to figure out for ourselves how best to do that. How could we do it better?

ESCI: ESCI is intended to make basic statistical ideas vivid, memorable, and easy to grasp. It also allows you to analyse your own data and picture the results, for a range of measures and simple designs. Many of the figures in the book are ESCI images. However, in ESCI you can’t, for example, easily load, save, and manage files. The online ancillaries include workbooks with guidance for using ITNS with SPSS, and with R. Should we consider replacing ESCI, noting that we want to retain the graphics and simulations to support learning? Should we retain ESCI, but include more support for Jamovi, JASP, or something else? Other strategies we should consider?

NHST and p values: We present these in Chapter 6, after the basics of descriptives, sampling, and estimation in earlier chapters. You can elect to skip this chapter, or give it as little or as much emphasis as you wish. Is this the best chapter organisation?

Ancillaries: We offer a wide range via the publisher’s companion website. What’s most useful? Least useful? How could we improve the ancillaries?

…and those are just a few thoughts. Tell us about anything you wish. You could even tell us it’s all wonderful, if you like!

In advance, many thanks,

Geoff
P.S. Make a public comment below, and/or email either of us, off list:
Geoff g.cumming@latrobe.edu.au
Bob rcalinjageman@dom.edu

Open Science DownUnder — Fiona Fidler reports

Last week, the 2018 Australasian Open Science Conference was held in Brisbane at the University of Queensland: The first conference in Oz on the themes of Open Science and how to improve how science is done. They expected 40 and 140 turned up! By all reports it was a rip-roaring success. So mark your diary for the second meeting, likely to be in Melbourne on 7-8 Nov 2019. That’s a great time of year to escape the misery of the Northern Hemisphere in winter and take in a bit of sun, sand, and surf–and good science.

Fiona Fidler kindly provided the following brief report of last week’s meeting:

A new Open Science and Meta-Research community in Australia

Our research group recently attended the Australasian Open Science (un)conference at the University of Queensland. The meeting was modelled on SIPS (Society for Improving Psychological Science), which means the focus was on doing things, not streams of long talks.

For the first meeting of its kind in Australia, it certainly pulled a crowd. Organisers Eric Vanman, Jason Tangen and Barbara Masser (Psychology, UQ) had initially expected attendance of around 40. In the end, 140 of us gathered in Brisbane. And more still engaged through twitter, during and after the conference. It has been wonderful to discover this Australian community, and great plans to stay connected are emerging, e.g., formalising a Melbourne Open Science community and working towards our own Australian and interdisciplinary SIPS-style society. If you’re reading this and you’d like to add your name to the list of people interested in these things, send me (Fiona) an email (fidlerfm@unimelb.edu.au) and I’ll make sure you receive the survey Mathew Ling (Deakin Uni) is currently setting up.

This first meeting included: hackathons to establish checklists for assessing the reliability of published research; brainstorming sessions about open science practices in applied research; discussions (unconferences) on the existence of QRPs outside of a hypothesis testing framework, and practical problems in computational reproducibility; R workshops and sessions on creating ‘open tools’. A Rapid Open Science Jam at the end of the first day resulted in new project ideas, including one to survey undergraduate intuitions about open science practice. View the full program for all the other good things I haven’t mentioned (there are many). And of course, there’s more on twitter: #uqopenscience.

We are all very grateful to the large and impressive group of student volunteers who contributed to the great success of #uqopenscience, including Raine Vickers-Jones who opened the conference with warmth and enthusiasm.

In 2019 the conference will move to Melbourne and take on a slightly more interdisciplinary flavour, as the Australasian Interdisciplinary Open Science Conference. We expect to see ecologists, biologists, medical researchers and others, in addition to the existing psychology base. Tentative dates 7-8 Nov 2019 at the University of Melbourne. We anticipate being able to offer a limited number of travel scholarships for students and ECRs.

For now, look for updates on imerg.info or contact the organising committee: Fiona Fidler (Uni Melb, @fidlerfm), Hannah Fraser (Uni Melb, @HannahSFraser), Emily Kothe (Deakin, @emilyandthelime) and Jennifer Beaudry (Swinburne, @drjbeaudry).
_________________
Thanks Fiona for that report. Shortly after sending it, she sent another message–saying ‘Here’s a much better blog post’ and then giving the link to:
Eight take-aways from the first Australasian Open Science conference

Well, whether or not it’s better, it certainly has tons of great stuff. I’d love to have been there!

Here are a couple of thoughts of mine:

So young! Like SIPS, it looks like the median age of participants is less than half my age! Which is fantastic. If ever we worried that the next generation of researchers would play it safe and just do what their professors told them to do, well we need not have worried. They are creating the new and better ways to do science, and are finding ways to get it out there and happening. All of which is great. (Hey, reach for ITNS when it can help, especially by helping beginners into OS ways.)

Not just psychology: Note #6, ‘it isn’t just psychology’, and also Fiona’s comments about the range of disciplines likely to be involved in the second meeting, next November. Psychology has the research skills to do the meta-research and collect evidence about scientific practices, and to develop many of the policies, tools, and materials needed for OS. That can all be valuable for numerous other disciplines as they make their versions of the OS journey.

Geoff
