–updated with a link from Ken Kelley to access the functions in the paper, 6/28/2018–

In a new-statistics world, the best way to choose *N* for a study is to use **precision for planning** (**PfP**), also known as accuracy in parameter estimation (AIPE). Both our new-statistics books explain PfP and why it is better than a power analysis, which is the way to choose *N* in an NHST world. ESCI allows you to use PfP, but only for the two-independent-groups and paired designs.

The idea of PfP, as you may know, is to choose a target MoE (margin of error); in other words, choose a CI half-length that you do not wish to exceed. Then PfP tells you the *N* that will deliver that MoE (or shorter), either on average, or with 99% assurance.
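For a taste of how that works in the simplest case, here is a minimal sketch (my own illustration, not ESCI's algorithm; the function name is invented): for a single mean with the population SD assumed known, MoE = *z* × σ/√*N*, so solving for *N* gives the required sample size.

```python
from math import ceil
from statistics import NormalDist

def pfp_n_for_mean(target_moe, sigma, confidence=0.95):
    """Smallest N whose expected CI half-width (MoE) for a single mean
    does not exceed target_moe, assuming a known population SD sigma
    and the normal approximation MoE = z * sigma / sqrt(N)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * sigma / target_moe) ** 2)

# Target MoE of 0.4 population-SD units at 95% confidence:
print(pfp_n_for_mean(target_moe=0.4, sigma=1.0))  # 25
```

Note that this already illustrates limitation 3 below: you must plug in a value for σ, which in practice you rarely know.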

PfP is a highly valuable approach, hampered to date by the lack of software that makes it easy to use for a full range of measures and designs. Indeed, the PfP techniques required to build such software have been developed only comparatively recently; many have been developed by Ken Kelley and colleagues. Further developments are needed, and now Ken and colleagues have published a new article with important advances:

Kelley, K., Darku, F. B., & Chattopadhyay, B. (2018). Accuracy in parameter estimation for a general class of effect sizes: A sequential approach. *Psychological Methods*, *23*(2), 226–243. http://dx.doi.org/10.1037/met0000127

The translational (simplified) abstract is below, at the bottom.

The article may be available here, or you may need to get it via your library.

Traditional PfP, as described in our two books and implemented in ESCI, has some severe limitations:

1. The population distribution is assumed known–usually a normal distribution.

2. A particular effect size measure is used, for example the mean or Cohen’s *d*.

3. A value needs to be assumed for one or more population parameters, even though these are usually unknown. For example, our books and ESCI support PfP when target MoE is expressed in units of population standard deviation, even though this is usually unknown.

4. Traditional PfP gives a single fixed value of *N* for the target MoE to be achieved (on average, or with 99% assurance).

Remarkably, the Kelley et al. article improves on **ALL 4** of these aspects of traditional PfP! Imho, this is a wonderful contribution to our understanding of PfP and to the range of situations in which PfP can be used. It will, I hope, contribute to much wider use of PfP for sample-size planning.

Much of the article is necessarily quite technical, but here is my understanding of the approach, in relation to the 4 points above.

1. A non-parametric approach is taken, meaning that no particular form of the population distribution is assumed. Using the central limit theorem makes the analysis tractable.

2. A very general form of effect size measure is assumed (in fact, the ratio of two functions of the population parameters). A large number of familiar effect size measures, including the mean, mean difference, and Cohen’s *d*, are special cases of this general measure, so the PfP technique that Kelley et al. develop can be applied to any of these familiar measures, as well as many others.

3. The sequential approach they take–see 4 below–allows them to estimate the relevant population parameters, and to update that estimate as the process proceeds. No dubious assumption of parameter values is required.

4. Conventional approaches to statistical inference rely on *N* being specified in advance. Open Science has emphasised that *data peeking* invalidates *p* values and other conventional approaches to inference. (In data peeking, you run a study, analyse, then decide whether to run some more participants, for example until statistical significance is achieved.) Avoiding data peeking is one reason for preregistration–which includes stating *N* in advance, or at least the stopping rule, which must not depend on the results obtained.
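To make point 2 concrete, here is a hypothetical sketch (the names and data are my own illustration, not from the article) of how several familiar effect sizes fit the general "ratio of two functions" form, using sample statistics in place of the population parameters:

```python
import math
from statistics import mean, stdev

def effect_size(num, den, *samples):
    """General form of point 2: theta = num(samples) / den(samples),
    where num and den are functions of the (sample) parameters."""
    return num(*samples) / den(*samples)

g1 = [4.1, 5.0, 6.2, 5.5, 4.8]
g2 = [3.2, 4.0, 3.8, 4.4, 3.1]

# The mean is a special case: numerator = mean, denominator = 1.
m = effect_size(lambda a, b: mean(a), lambda a, b: 1.0, g1, g2)

# Cohen's d is another: mean difference over pooled SD (equal n here).
d = effect_size(lambda a, b: mean(a) - mean(b),
                lambda a, b: math.sqrt((stdev(a)**2 + stdev(b)**2) / 2),
                g1, g2)
```

The mean difference, and many other measures, fit the same template, which is why one PfP technique can serve them all.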

However, **sequential analysis** was developed about 75 years ago in the NHST world. It is seldom used in the behavioral sciences, but allows you to analyse data collected to date and then decide whether to continue, or to stop and declare in favour of either the null hypothesis or the specified alternative hypothesis. The stopping rule is designed so the procedure gives the Type I and Type II error rates selected for the NHST. Yes, sequential analysis is more complex to use, and you don’t know in advance how many participants you will need, but it can on average lead to a smaller *N* being required than for conventional fixed-*N* approaches.

Kelley et al. have very cleverly used the sequential approach to PfP and, at the same time, have solved 3 above. The idea is that you take a pilot sample of size *N*1, then use the results from that to estimate relevant parameters and to calculate the MoE on your effect size estimate. If that MoE is not sufficiently short to provide the precision you are seeking, you test a further *N*0 participants (*N*0 is generally small, and may be 1). Then again estimate the parameters and calculate MoE. If that MoE is not sufficiently small, test a further *N*0 participants, and so on, until you achieve the desired precision.

Then interpret the final effect size estimate and its CI. Yes, the method may be complex, but it is very general and should on average give a smaller *N* than conventional PfP would require.
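The stopping procedure just described can be sketched in code. This is my own illustrative simulation for the simple case of a mean (not the authors' method or software; the defaults are arbitrary and the CI uses the CLT-based normal approximation): take a pilot sample, then add observations until the estimated MoE reaches the target.

```python
import random
from math import sqrt
from statistics import NormalDist, mean, stdev

def sequential_pfp_mean(draw, target_moe, n_pilot=20, n_step=1,
                        confidence=0.95, max_n=10_000):
    """Sequential PfP sketch for a population mean. `draw` returns one
    observation from any distribution; the running sample SD replaces
    the unknown population SD, so no parameter value need be assumed
    in advance. Stops once MoE = z * s / sqrt(N) <= target_moe."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    data = [draw() for _ in range(n_pilot)]
    while z * stdev(data) / sqrt(len(data)) > target_moe and len(data) < max_n:
        data.extend(draw() for _ in range(n_step))   # test N0 more
    moe = z * stdev(data) / sqrt(len(data))
    return len(data), mean(data), moe

# Skewed (exponential) population, so no normality assumed:
random.seed(1)
n, est, moe = sequential_pfp_mean(lambda: random.expovariate(1.0),
                                  target_moe=0.1)
print(n, round(est, 3), round(moe, 3))
```

The final *N* adapts to the data: a noisier sample keeps the loop running longer, which is exactly the sense in which no fixed *N* is specified in advance.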

I find the generality and potential of the method stunning, and I can’t wait to see it made available within full-function data analysis applications. That will give a great boost to the highly desirable shift from power analysis to PfP, and more generally from NHST to the new statistics. Hooray!

Geoff

— UPDATE — Ken Kelley writes:

`On my web site is a link with instructions and code for a few specific instances of the method. The link is here:`

`https://www.dropbox.com/s/g413qn6fv0c4gtq/Functions%20for%20download.txt?dl=0`

`For each of the effect sizes, there are several functions that need to be run first. But, after getting those into one's workspace, the actual function is easy to use. The functions available at the above link are for the coefficient of variation, for a regression coefficient in simple regression, and for the standardized mean difference.`

`My co-authors and I have plans to develop an R package for more general applications. In fact, we already have made progress on the package, which will focus on sequential methods.`

**Translational Abstract**

Accurately estimating effect sizes is an important goal in many studies. A wide confidence interval at the specified level of confidence (e.g., .95) illustrates that the population value of the effect size of interest (i.e., the parameter) has not been accurately estimated. An approach to planning sample size in which the objective is to obtain a narrow confidence interval has been termed accuracy in parameter estimation. In our article, we first define a general class of effect size in which special cases are several commonly used effect sizes in practice. Using the general effect size we develop, we use a sequential estimation approach so that the width of the confidence interval will be sufficiently narrow. Sequential estimation is a well-recognized approach to inference in which the sample size for a study is not specified at the start of the study, and instead study outcomes are used to evaluate a predefined stopping rule, which evaluates if sampling should continue or stop. We introduce this method for study design in the context of the general effect size and call it “sequential accuracy in parameter estimation.” Sequential accuracy in parameter estimation avoids the difficult task of using supposed values (e.g., unknown parameter values) to plan sample size before the start of a study. We make these developments in a distribution-free environment, which means that our methods are not restricted to the situations of assumed distribution forms (e.g., we do not assume data follow a normal distribution). Additionally, we provide freely available software so that readers can immediately implement the methods.

P.S. I haven’t yet located the software mentioned in the final sentence above. Ken’s great software for PfP (and other things) is MBESS, so that may be where to look.
