The Simple Paired Design: How Does the Correlation Relate to SD(diff)?

Can you help?

Surely someone has written about this question? Please let me know where!

In the paired design, the two measures, for example Pretest and Posttest, are usually positively correlated, so the SD of the paired differences (sdiff) is usually less than sPre and sPost. That small SD, or equivalently the positive Pearson correlation (rPrePost), gives us a sensitive and thus attractive design, provided carry-over effects are not a large problem.

We need either sdiff or rPrePost to calculate the CI on the mean of the differences, or to calculate t for the paired t test.

The relationship

There is an algebraic relationship between rPrePost and sdiff, but what is it? I can’t recall ever having come across it, and I haven’t been able to find it in any books to hand, or via searching online. But I’m sure it’s out there somewhere.

I figured it out to be:

sdiff = √(sPre² + sPost² − 2 × rPrePost × sPre × sPost)

I used a modified version of the Simulate paired page of ESCI chapters 5-6 to test this relationship for a large number of simulated paired data sets. It held exactly in every case.
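The same check is easy to reproduce outside ESCI. Here is a minimal Python sketch (my own illustration, not the ESCI page itself) that generates paired samples and confirms the identity holds exactly, to rounding error, for the sample statistics of every data set:

```python
import math
import numpy as np

def sd_diff(s_pre, s_post, r):
    """SD of the paired differences, from the two SDs and their correlation."""
    return math.sqrt(s_pre**2 + s_post**2 - 2 * r * s_pre * s_post)

rng = np.random.default_rng(1)
for _ in range(1000):
    n = int(rng.integers(5, 50))
    pre = rng.normal(50, 10, n)
    post = pre + rng.normal(5, 6, n)      # positively correlated Posttest

    s_pre = pre.std(ddof=1)
    s_post = post.std(ddof=1)
    r = np.corrcoef(pre, post)[0, 1]

    # The relationship is algebraic, so it holds for every sample, not just on average
    assert math.isclose((post - pre).std(ddof=1), sd_diff(s_pre, s_post, r))
```

The identity follows from var(Pre − Post) = var(Pre) + var(Post) − 2cov(Pre, Post), with cov = r × sPre × sPost, which is why the simulation check never fails.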

For the Paired analysis in esci, using the Analyze summary data option, the user needs to enter M and s for each of Pretest and Posttest, sample size n, and the correlation rPrePost. Then esci could use the formula above to report sdiff as part of the analysis.

We would also like to provide the option for the user to enter sdiff, then see the analysis, including the value of rPrePost calculated using that formula.

Can you help?

If you know any published discussion of that formula, I’d love to know about it!

Thanks,

Geoff

2 Comments on “The Simple Paired Design: How Does the Correlation Relate to SD(diff)?”

  1. Fun facts! I’ve been playing with Bob’s latest version of esci Paired, not yet released. For the analysis of Summary data, you enter mean and SD for each measure, and N. Then you can enter either r or s-diff. In either case, esci reports the calculated value of the other option. Enter r=0 and, of course, s-diff equals √(s1² + s2²), where s1 and s2 are the SDs of the two measures. That’s in effect the two independent groups case. Now try entering the extreme and unrealistic values of r=-1 and r=1. I found that, for r=-1, s-diff=s1+s2. And for r=1, s-diff=|s1-s2|. Amazing! Just add and subtract SDs, no sign of a square root anywhere. Apply the formula and it makes sense, but it was news to me.
    Geoff

  2. Bob has answered my question by referring me to:
    Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). Introduction to Meta-Analysis. Wiley.
    See p. 24, where Equation 4.15 agrees exactly with my formula.
    I used that book extensively while writing about meta-analysis in UTNS, and my copy of it has numerous annotations and yellow sticky labels. But nothing on p. 24, so I missed that essential relation, or have forgotten it was here. On my shelves all the while. Thanks Bob!
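The extreme cases in the first comment drop straight out of the formula: r = −1 makes the quantity under the root the perfect square (s1 + s2)², and r = 1 makes it (s1 − s2)², which is why the square root seems to vanish. A quick sketch confirming all three limiting cases:

```python
import math

def sd_diff(s1, s2, r):
    """SD of the paired differences from the two SDs and their correlation."""
    return math.sqrt(s1**2 + s2**2 - 2 * r * s1 * s2)

s1, s2 = 7.0, 4.0
assert math.isclose(sd_diff(s1, s2, -1), s1 + s2)          # 11.0: the SDs add
assert math.isclose(sd_diff(s1, s2, 1), abs(s1 - s2))      # 3.0: the SDs subtract
assert math.isclose(sd_diff(s1, s2, 0), math.sqrt(s1**2 + s2**2))  # independent case
```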
