**Booklisti. For Finding Interesting Books, Now Including ITNS2**

The post Booklisti. For Finding Interesting Books, Now Including ITNS2 first appeared on Introduction to the New Statistics.

**Booklisti**, would you believe, comprises lots of short lists of books that hang together. I have two lists, with **ITNS2** appearing in each. My **first list** is just **UTNS**, my first book, and **ITNS2**, our second edition of the intro book.

My **second list** (go to that site for links to the books listed below), titled **Open Science and how to do better research with better statistics**, comprises:

**Introduction to The New Statistics, Second Edition** By *Geoff Cumming and Robert Calin-Jageman*

**Understanding The New Statistics** By *Geoff Cumming*

**A Student’s Guide to Open Science** By *Charlotte Pennington*

**The Seven Deadly Sins of Psychology: A Manifesto for Reforming the Culture of Scientific Practice** By *Chris Chambers*

**Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth** By *Stuart Ritchie*

**Beyond Significance Testing** By *Rex B. Kline*

**Research Methods in Psychology: Evaluating a World of Information** By *Beth Morling*

**The Design of Experiments in Neuroscience** By *Mary E. Harrington*

Happy exploring of **Booklisti** and happy reading.

Geoff


**Meet Petra: Enthusiasm for Archaeology, Open Science, and Better Statistics**

The post Meet Petra: Enthusiasm for Archaeology, Open Science, and Better Statistics first appeared on Introduction to the New Statistics.

It was a pleasure to meet **Petra Vaiglova** a few days ago while she was in Melbourne for **an archaeological science conference**. **Fiona Fidler** joined us for lunch; thanks to her for hosting.

Originally from the Czech Republic (Czechia), **Petra** has lived, studied, and worked all over the world, as you can see at **her site**. Her doctorate is from Oxford. Her **TEDx talk** outlines some of her research interests.

She arrived in Australia a couple of years ago as a postdoc at **Griffith University**, Queensland. Soon after, she launched into organising what became a three-day online **Workshop on Good Statistical Practice in Archaeology**, open to anyone of any discipline.

I first learned of her enthusiasm for **statistical reform** and **Open Science** when she kindly invited me to speak at that workshop. I gave a talk on the new statistics, and another on using Bob’s new **esci** software in **jamovi**, in archaeological science as in any other discipline. I posted about the workshop **here**.

Earlier this year Petra took up a **lectureship at ANU in Canberra**, and enthusiastically took on teaching statistics and topics in archaeological science to both undergraduate and postgraduate students.

She expressed keen interest in our second edition, even volunteering to help. Over many months last year she worked through final drafts of most chapters, picking up many errors and infelicities. Later she worked painstakingly through the proofs of many chapters, picking up elusive tiny errors. She made an immense contribution to ITNS2, as Bob and I acknowledge on p. xxvii.

So last week Petra, Fiona and I had much to discuss. Petra will be at **AIMOS in November** (more details **here**), no doubt meeting many of the good folks who make the Meta-science scene in Australia so lively and multi-disciplinary.

As an associate editor of the **Journal of Archaeological Science** she is helping develop guidelines for that journal to encourage reproducibility and Open Science practices.

I discovered years ago that **archaeological science** has Open Science lessons for us all; see my post **here**. I wish Petra all strength for her continuing efforts towards statistical reform and Open Science!

Geoff


**Estimation, Open Science, and Bob’s Wonderful New esci**

The post Estimation, Open Science, and Bob’s Wonderful New esci first appeared on Introduction to the New Statistics.

- Three dramatisations of the enormous unreliability of the *p* value. Can these help weaken researchers’ addiction to NHST, which has withstood more than half a century of cogent rational critiques?
- Bob’s wonderful new open-source **esci** software with great estimation-based figures: see worked examples, and work along if you wish.

We argue that researchers should test less, estimate more, and adopt Open Science practices. We outline some of the flaws of null hypothesis significance testing and take three approaches to demonstrating the unreliability of the *p* value. We explain some advantages of estimation and meta-analysis (“the new statistics”), especially as contributions to Open Science practices, which aim to increase the openness, integrity, and replicability of research. We then describe **esci** (estimation statistics with confidence intervals): a set of online simulations, and an R package for estimation that integrates into **jamovi** and **JASP**. This software provides (a) online activities to sharpen understanding of statistical concepts (e.g., “The Dance of the Means”); (b) effect sizes and confidence intervals for a range of study designs, largely by using techniques recently developed by Bonett; (c) publication-ready visualisations that make uncertainty salient; and (d) the option to conduct strong, fair hypothesis evaluation through specification of an interval null. Although developed specifically to support undergraduate learning through the **2nd edition of our textbook**, **esci** should prove a valuable tool for graduate students and researchers interested in adopting the estimation approach. Further information is at **https://thenewstatistics.com**.

This is the first time we have brought together (1) the **dance of the p values** (search YouTube), and (2) the **esci** web simulations, described next.

This component of **esci** is a **set of simulations and tools** by our colleague **Gordon Moore** that run in any browser. Explore the dances, play with sampling distributions, find critical values, and more.

Bob’s **esci** is an open-source package in R, which can be run in R, within **jamovi**, or (by December 2024) in **JASP**. We describe the wide range of measures and designs **esci** can analyse, including meta-analysis, and work through several examples. We emphasise figures that highlight uncertainty, especially by picturing confidence intervals.

We argue that *p* values, if used at all, are most valuable in the context of **hypothesis evaluation** based on an interval null hypothesis, and best understood with the help of an **esci** figure; see Figure 3.

Interactions can be challenging to understand and interpret; again **esci** provides figures designed to help (see Figure 4).

The article advises downloading **jamovi**-format data files (*Gender math IAT.omv*, *Gender math IAT ma.omv*, *Campus Involvement.omv*, and *MeditationBrain.omv*) from **https://osf.io/uhwj2**. Since the final version of the article was submitted, Bob has integrated into **esci** all the data files used in **ITNS2**, including these four, so downloading from OSF is no longer needed.

Figures 3 and 4 are just two illustrations from the example **esci** analyses discussed in the article.

To open a data file within **esci**, click top left in **jamovi**, then click **Open**, **Data Library**, and scroll to see all the data files for ITNS2 arranged by chapter. Figure 2 shows selection of the first example file used in the article.

Figure 3 is an **esci** figure from a two independent groups analysis of the *Gender math IAT* file. The grey areas on the CIs are what we call **plausibility curves**. These illustrate variation in the plausibility, or relative likelihood, that values across and beyond the interval are the population value.

Figure 4 is one of the ways **esci** can display a 2 x 2 interaction, as part of an RCT analysis of the *MeditationBrain* file.

If you wish, work along with the examples. The rich UI (user interface) of **esci** gives lots of scope to make figures look just as you want them; there’s advice about how to tweak your figures to look like those in the article.

As ever, we’d love to hear your comments on the new book and new software. Enjoy.

Geoff


**To Find Interesting Books, Explore shepherd.com, Now Including ITNS2**

The post To Find Interesting Books, Explore shepherd.com, Now Including ITNS2 first appeared on Introduction to the New Statistics.

Our recommended list is now:

**A Student’s Guide to Open Science** By *Charlotte Pennington*

**Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth** By *Stuart Ritchie*

**Beyond Significance Testing** By *Rex B. Kline*

**Research Methods in Psychology: Evaluating a World of Information** By *Beth Morling*

**The Design of Experiments in Neuroscience** By *Mary E. Harrington*

**shepherd.com** offers lots of ways to explore. Happy reading.

Geoff


**Choosing a Textbook Cover Design**

The post Choosing a Textbook Cover Design first appeared on Introduction to the New Statistics.

It’s a delicious moment when the publisher sends a number of options their graphic designer has dreamed up for the cover. Below are the options for the three books. In each case, can you pick our choice? Our choices are below—don’t scroll down yet…


**UTNS (2012)** …at left.

**ITNS1 (2017)** …below.

**ITNS2 (2024)** …below.

The three sets, all framed expertly by Lindsay my wife, hang above my desk:

**UTNS (2012)** …at left

Middle option

** ITNS1 (2017)** …at right

Leftmost option

**ITNS2 (2024)** …below

Top right option, as below left. (We were offered just the other five but asked to see the bottom right design in the bottom left colours, so Routledge sent the top right option, which we chose.)

However, when we received our printed books, we discovered that Routledge had actually used a modification of our chosen design, as at right. Not exactly our choice, but not bad.


**‘Treasure’: Claire’s Gorgeous Resin Artwork**

The post ‘Treasure’: Claire’s Gorgeous Resin Artwork first appeared on Introduction to the New Statistics.

*Treasure*, at left, by Claire.

Walk into our living room and be struck by the vibrancy and depth of colour of *Treasure*, so much more alive than any small printed copy can be.

Claire generously agreed that *Treasure* could be used on the cover of our three statistics textbooks. See the note on the copyright page of each.

At right, top to bottom:

**UTNS (2012)**

**ITNS1 (2017)**

**ITNS2 (2024)**

People often make comments like: “Looks like slices through some sort of stones”, “It’s wriggling things under a microscope!” or “Go snorkelling and see things like those?”

Claire’s response is “It’s an artwork, see it as you wish!” She mentions also that there’s no official top or bottom: hang it horizontally or vertically, either way up.

Besides looking great, is there any justification for it appearing on statistics books? People often make comments like: “There’s a pattern of those blues—oh, no there’s not”, or “Look, those ones sort-of alternate, but not quite”. Claire says that she often “starts to make a pattern, then breaks it”. That all sounds to me like trying to find some sort of regularity in randomness, which is one way of describing the aim of statistical inference: Can we identify a difference, or other pattern, lurking in the sampling variability, how large or strong is that pattern, and how confident can we be in our conclusion? That’s the central concern of our books.

Do you agree with the graphic designer’s choice of part-images from *Treasure* for the covers?

Geoff

NEXT: Choosing a cover design.


**Vale Danny Kahneman, Giant of Statistical Cognition and Much Else**

The post Vale Danny Kahneman, Giant of Statistical Cognition and Much Else first appeared on Introduction to the New Statistics.

**Danny Kahneman** died on 27 March at age 90. The APS announcement is **here**. I’ve **posted about him** before. The best quick read may be **this 2016 New Yorker piece** by Cass R. Sunstein and Richard Thaler of **Nudge** fame.

He won the Nobel Prize for Economics in 2002 for foundational work on **behavioural economics**, joint with **Amos Tversky**, who died in 1996.

I have two particular reasons for thinking of him:

A misconception rather than a law, the “law of small numbers” was described in the famous article **Tversky and Kahneman (1971)**: even quantitatively literate psychology researchers were likely to grossly over-estimate the probability that a replication of a study that obtained *p* = .05 would itself be statistically significant. This was a very early example of **statistical cognition**, the field that has been my primary research interest these last 25 years or so.

Moreover, that demonstration of drastic under-estimation of the sampling variability of the *p* value helped prompt my development of the **dance of the p values** (Cumming, 2008).

**Anne Treisman**, a distinguished cognitive psychologist, supervised my DPhil research at Oxford during 1968–1971. In 1978 she married Daniel Kahneman. They moved to North America and were together at Princeton for many years before her death in 2018.

She appears at left, receiving the U.S. **National Medal of Science** from Obama in 2013.

I salute the memory of these two fine scientists from whom I’ve learned an enormous amount.

Geoff

Cumming, G. (2008). Replication and *p* Intervals: *p* values predict the future only vaguely, but confidence intervals do much better. *Perspectives on Psychological Science, 3*(4), 286–300. **https://doi.org/10.1111/j.1745-6924.2008.00079.x**

Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. *Psychological Bulletin, 76*(2), 105–110. **https://doi.org/10.1037/h0031322**


**Fun with esci in R: The simple two-group design**

The post Fun with esci in R: The simple two-group design first appeared on Introduction to the New Statistics.

We’ll start with a simple two-group design. Specifically, we’ll use data from Experiment 4 of Kardas and O’Brien (2018). In this study, participants watched a video explaining how to do a simple mirror-tracing task (Cusack, Vezenkova, Gottschalk, & Calin-Jageman, 2015). Participants were randomly assigned to watch the training video either 1 time or 20 times. They then predicted how they would perform on the task (0–100%) and then completed the task (0–100%). Kardas and O’Brien found that watching the training video repeatedly boosted confidence (predicted scores) but not performance.

**Opening the data – R**

If you haven’t installed esci yet, you can do so with:

`install.packages("esci")`

Once installed, we will load esci into memory and then store the Kardas & O’Brien data set bundled with esci, giving it the name **mydata**:

```
library(esci)
mydata <- esci::data_kardas_expt_4
```

**Analyze the data in R**

We are going to analyze the effect of video **Exposure** on **Prediction** scores. We can do this with the estimate_mdiff_two command. We’ll want to store the result, so tell R to store it in a new variable called **estimate**.

```
estimate <- esci::estimate_mdiff_two(
  data = mydata,
  outcome_variable = Prediction,
  grouping_variable = Exposure,
  conf_level = 0.95,
  assume_equal_variance = TRUE
)
```

Note that we’ve decided to assume equal variance, but it’s probably a better default **not** to do this, and it’s easy enough to change the command by setting assume_equal_variance to FALSE.
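For instance (a sketch mirroring the call above; the variable name estimate_unequal is just an illustration), that change looks like this:

```r
# Same analysis without assuming equal variance;
# only the final argument differs from the call above.
# (As noted later in this post, this also changes the flavor of
# Cohen's d reported, from d_s to d_avg.)
library(esci)

mydata <- esci::data_kardas_expt_4

estimate_unequal <- esci::estimate_mdiff_two(
  data = mydata,
  outcome_variable = Prediction,
  grouping_variable = Exposure,
  conf_level = 0.95,
  assume_equal_variance = FALSE
)
```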

**Inspect the result**

What we get back in R is a list, a complex object that contains other objects. You can inspect this object in lots of different ways, but let’s try listing the objects it contains:

```
names(estimate)
[1] "properties" "es_mean_difference_properties"
[3] "es_mean_difference" "es_median_difference"
[5] "es_median_difference_properties" "es_smd_properties"
[7] "es_smd" "es_mean_ratio"
[9] "es_mean_ratio_properties" "es_median_ratio"
[11] "es_median_ratio_properties" "overview"
[13] "raw_data"
```

We can see that our result has properties, and then a bunch of different objects that start with es — that’s short for effect size. We get a mean difference, a median difference, an SMD (standardized mean difference), a mean ratio, and a median ratio. Many of these have their own properties as well. Finally, we also get an overview and raw_data.

Let’s see the overview:

```
> estimate$overview
outcome_variable_name grouping_variable_name grouping_variable_level mean mean_LL mean_UL median
1 Prediction Exposure 1 56.37795 52.90820 59.84770 60
2 Prediction Exposure 20 67.76224 64.49236 71.03212 71
median_LL median_UL sd min max q1 q3 n missing df mean_SE median_SE
1 53.57284 66.42716 22.07273 0 100 40 71.5 127 0 268 1.762318 3.279225
2 66.12580 75.87420 17.66669 0 100 59 81.0 143 0 268 1.660803 2.486881
```

You can see that overview is a table — it lists each group found in the data (1x and 20x exposure) and provides basic descriptive statistics: mean with confidence interval, median with confidence interval, standard deviation, etc.

Let’s take a look at the es_mean_difference table:

```
> estimate$es_mean_difference
type outcome_variable_name grouping_variable_name effect effect_size LL
1 Comparison Prediction Exposure 20 67.76224 64.492358
2 Reference Prediction Exposure 1 56.37795 52.908204
3 Difference Prediction Exposure 20 ‒ 1 11.38429 6.616553
UL SE df ta_LL ta_UL
1 71.03212 1.660803 268 65.020985 70.50349
2 59.84770 1.762318 268 53.469143 59.28676
3 16.15202 2.421576 268 7.387331 15.38124
```

You can see that this table gives us the mean and confidence interval of the 20x group, of the 1x group, and **of the difference between them**, reporting (in row 3) the **contrast** between the 20x and 1x groups. The 1x group, in this case, is the **reference group**: we express the effect size *relative* to the 1x group. The mean difference in prediction scores is 11.38, 95% CI [6.62, 16.15]. We also get the standard error, degrees of freedom, and the confidence interval at **two alpha** (a 90% CI in this case). Clearly, watching the instructional video made a pretty big difference in predictions: it boosted confidence by over 10 points on a 0–100 scale in this sample! There is some uncertainty about the size of the effect, but overall, it seems clear that more video exposure leads to more confidence.

Notice that we have some other ways of expressing the effect size. For one, we can examine **median** differences, probably a better idea in most cases in psychology, but not widely done.

```
> estimate$es_median_difference
type outcome_variable_name grouping_variable_name effect effect_size LL UL SE
1 Comparison Prediction Exposure 20 71 66.125803 75.87420 2.486881
2 Reference Prediction Exposure 1 60 53.572837 66.42716 3.279225
3 Difference Prediction Exposure 20 ‒ 1 11 2.933636 19.06636 4.115567
ta_LL ta_UL
1 66.909445 75.09055
2 54.606155 65.39385
3 4.230494 17.76951
```

This table is set up similarly to es_mean_difference — we again get each group and the **contrast** between them. There is more uncertainty here (a difference of 11 points, with a 95% CI [2.93, 19.07]); the data are consistent with a large median difference but also with a fairly small one of just 2.9 points (and values near the CI boundary are not very different in their compatibility with the data). So we’d still want to be cautious about concluding there is a meaningful median difference.

Want more ways to express this? Of course! We can also think about the **ratios** between the group means or medians. Here’s the **ratio of medians**:

```
> estimate$es_median_ratio
outcome_variable_name grouping_variable_name effect effect_size LL UL comparison_median
1 Prediction Exposure 20 / 1 1.183333 1.037629 1.349498 71
reference_median
1 60
```

The 20x group had a median 1.18 times that of the 1x group, but the CI is broad [1.04, 1.35], so anything from a very small to a very large increase in median is compatible with this data.

And, of course, psychologists remain a bit obsessed with Cohen’s d. So let’s look at the es_smd table:

```
> estimate$es_smd
  outcome_variable_name grouping_variable_name effect effect_size       LL        UL numerator denominator
1            Prediction               Exposure 20 ‒ 1   0.5716119 0.327274 0.8149238  11.38429    19.86031
         SE  df  d_biased
1 0.1244027 268 0.5732178
```

This is a fairly large effect: d = 0.57, 95% CI [0.33, 0.81], and the confidence interval is fairly narrow — we could fairly easily plan a sensitive follow-up study to help confirm and better characterize this effect.

But wait, there are lots of approaches to Cohen’s d… what is the denominator that was used and what flavor of Cohen’s d have we produced? Take a look at es_smd_properties to find out.

```
> estimate$es_smd_properties
$message
This standardized mean difference is called d_s because the standardizer used was s_p. d_s has been corrected for bias. Correction for bias can be important when df < 50. See the rightmost column for the biased value.
```

Ah, so this is *d*_s, because it used the pooled standard deviation as the standardizer. If we had set assume_equal_variance to FALSE we’d have obtained d_avg, which uses the average of the group standard deviations instead.

**Visualizations**

We don’t just want a bunch of tables… let’s **see** this data.

This is easy in esci: we just pass our stored result (**estimate**) to an appropriate plot function. In this case, we’ll use plot_mdiff to visualize a mean or median difference:

```
esci::plot_mdiff(
  estimate,
  effect_size = "mean"
)
```

and we get this beautiful figure:

We can then customize it to our heart’s content (it’s a ggplot2 plot object).
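For example (a minimal sketch; the variable name myplot and the particular tweaks are just illustrations, not part of esci’s API), ordinary ggplot2 layers can be added to the returned object:

```r
# plot_mdiff returns a ggplot2 object, so standard ggplot2 layers
# can be added. Uses the 'estimate' object created earlier.
library(ggplot2)

myplot <- esci::plot_mdiff(estimate, effect_size = "mean")

myplot +
  labs(y = "Predicted performance (0-100%)") +  # hypothetical axis label
  theme_classic()                               # a plainer built-in theme
```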

Want to see the median difference instead? Here we go:

```
esci::plot_mdiff(
  estimate,
  effect_size = "median"
)
```

and we get:

**Evaluating a Hypothesis**

Although Kardas and O’Brien conducted several studies on video exposure, this was the first study they conducted using mirror tracing as the performance task. Therefore, they probably didn’t yet have a clear quantitative prediction to test — they weren’t really ready for hypothesis testing. Imagine, though, that you are going to conduct a replication study. Based on Kardas and O’Brien, you believe increased video exposure produces a **substantive** change in confidence, and you decide to define this as at least a 5-point difference in means. In other words, you’re specifying an **interval null**. The skeptic’s hypothesis (the null hypothesis) is that any difference in confidence will be negligible (< 5-point difference); your hypothesis is that it will be substantive (> 5-point difference).

We can visualize your prediction against the results by tweaking our call to plot_mdiff just a bit:

```
esci::plot_mdiff(
  estimate,
  effect_size = "mean",
  rope = c(-5, 5)
)
```

We’ve passed a two-element vector that defines the interval null. This is called a ROPE, or region of practical equivalence. We defined it using R’s c() function, which creates vectors: c(-5, 5) means create a vector with elements -5 and 5, and pass it to the function where it expects a ROPE to be defined.

Here’s what we get:

You can see the ROPE shaded in red and pink, and you can visually compare the results with the predictions of you and the skeptic. The rules for declaring victory are simple: if the whole CI of the result is inside the ROPE, the skeptic wins; if the whole CI is outside, you win; and if there is overlap, there is a draw. In this case, we can see that the CI on the difference is fully outside the ROPE (though not by a ton). If the ROPE had really been established *a priori* and a sensitive experiment designed to test the predictions, we’d now have a strong confirmatory indication that there is, indeed, a substantive effect of video exposure on confidence (well, strong statistical evidence — we’d still need to think about the internal and external validity of our study and the extent to which it supports our claim).

Want to conduct the hypothesis test a bit more formally? esci can help with the test_mdiff function, which takes arguments very similar to what we passed to plot_mdiff:

```
esci::test_mdiff(
  estimate,
  effect_size = "mean",
  rope = c(-5, 5)
)
```

We get back a complex object, but one of its components is a table called interval_null which has this content:

```
$interval_null
                    test_type outcome_variable_name effect          rope
1 Practical significance test            Prediction 20 ‒ 1 (-5.00, 5.00)
  confidence                                                        CI
1         95 95% CI [6.616553, 16.15202]\n90% CI [7.387331, 15.38124]
              rope_compare p_result
1 95% CI fully outside H_0 p < 0.05
                                   conclusion significant
1 At α = 0.05, conclude μ_diff is substantive        TRUE
```

Voilà!

In this example, we conducted a hypothesis test on mean differences, but we could just as easily work with median differences by changing the effect_size argument to “median”. How cool is it to be able to conduct interval-null tests of differences in medians!? Think how sophisticated you will feel!
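For instance, that one-argument change (a sketch, reusing the estimate object created earlier) looks like this:

```r
# Interval-null test on the *median* difference: identical call,
# with effect_size switched to "median".
# Uses the 'estimate' object created earlier in this post.
esci::test_mdiff(
  estimate,
  effect_size = "median",
  rope = c(-5, 5)
)
```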

**Conclusions**

We’ve taken a quick tour of analyzing a two group design in esci.

esci is still in development. I expect the visualization functions, like plot_mdiff, to change a bit. But the overall workflow should hopefully be stable and sensible: you generate an estimate with an estimate_ function, then you can visualize it (plot_ functions) and/or evaluate a hypothesis with it (test_ functions). The estimate_ function produces complex lists with all the results you need: an overview table, various es_ tables reporting different effect sizes, and _properties lists with all the nitty-gritty details. And that’s that!

- Cusack, M., Vezenkova, N., Gottschalk, C., & Calin-Jageman, R. J. (2015). Direct and conceptual replications of Burgmer & Englich (2012): Power may have little to no effect on motor performance. *PLOS ONE*. https://doi.org/10.1371/journal.pone.0140806
- Kardas, M., & O’Brien, E. (2018). Easier seen than done: Merely watching others perform can foster an illusion of skill acquisition. *Psychological Science*. https://doi.org/10.1177/0956797617740646


**Vale Bob Rosenthal, Statistical Reform Leader and Much Else**

The post Vale Bob Rosenthal, Statistical Reform Leader and Much Else first appeared on Introduction to the New Statistics.

I was much saddened to read of the death last month of **Bob Rosenthal**. See this **obituary**; and **another** in the *New York Times*.

I met him first in 1996 when I called on him at Harvard to discuss statistical reform. What a gentle, encouraging, and thoroughly nice person! What a giant intellect! He loved nothing better than to find innovative solutions to tricky problems.

Considering **statistical reform** and **Open Science**:

- He was an early proponent of a focus on **effect sizes**, especially his favourite, the Pearson correlation, *r*.
- He was a pioneer of **meta-analysis** and identified what he called the **file-drawer effect**.
- **Rosenthal and Gaito (1963)** reported evidence that researchers’ confidence in an effect drops sharply as the *p* value increases past .05; they labelled this the **cliff effect**. This was an early example of **statistical cognition**, the empirical study of how people understand statistical concepts and reports. We still need much more of that, imho.
- Around 2009 **Jerry Lai** wanted to investigate the cliff effect as part of **his PhD**. He sent Bob a very polite request for any further information about the original study. Promptly, back came an encouraging message to Jerry and a scan of several hand-written pages of the original data. From almost 50 years earlier! A wonderful example of **Open Data** (well, available data), with no excuses about hard disk crashes and superseded storage formats.
- He advocated analysis of well-chosen **contrasts** as better than the customary reliance on ANOVA and *p* values (*, **, ***, or *ns*) to interpret omnibus main and interaction effects. He stated that “the problem is that omnibus tests … do not usually tell us anything we really want to know”. *Contrast Analysis: Focused Comparisons in the Analysis of Variance* (1985) by **Rosenthal and Rosnow** remains an accessible and powerful explanation. UTNS, and both editions of ITNS, take this planned contrast approach (these days, with preregistration) to the analysis of complex designs.
- In 2008 Fiona Fidler and I were working on **Confidence Intervals: Better Answers to Better Questions**. We sent a draft to Bob, who was working on an accompanying article, **Effect Sizes: Why, When, and How to Use Them**. Bob responded with enthusiasm, saying he loved our article and also offering valuable suggestions.

Bob’s nickname among his students was “**Prof ARRRZZZental**”, recognising his love of correlation *r*.

I salute his memory and his enduring contribution to improving how we do things.

Geoff


**Brian Nosek Speaks: A BJKS Podcast**

The post Brian Nosek Speaks: A BJKS Podcast first appeared on Introduction to the New Statistics.

Here are the sections:

00:00: Brian’s early interest in improving science

15:24: How the Center for Open Science got funded (by John and Laura Arnold)

26:08: How long is COS financed into the future?

29:01: What if COS isn’t benefitting science anymore?

35:42: Is Brian a scientist or an entrepreneur?

40:58: The future of the Center for Open Science

51:13: A book or paper more people should read

54:42: Something Brian wishes he’d learnt sooner

58:53: Advice for PhD students/postdocs

I recently **posted **about other **BJKS podcasts**—**Benjamin James Kuper-Smith** talking with **Simine Vazire**, **Chris Chambers**, and **me**, among others.

Happy listening (or even reading the transcripts).

Geoff

