**55,000** – the approx. number of times **jamovi** was downloaded in March

**2,500** – the approx. number of times **esci** was added to **jamovi** in March

Each of these is about **double** the number for three months earlier! At this rate, everyone on Earth will have their own copy within a year or two—roughly speaking.

Of the 38 modules available in the **jamovi** library, **esci** is currently the **5th most popular**—demonstrating that it’s fully usable despite still being in development. Hats off to Bob!

In case it’s new to you, **jamovi** is the free, open-source stats software that crushes SPSS. It’s even better with added **esci**—which is designed to go with the second edition of ITNS, currently in preparation.

To get started, see **this post**.

Downloading jamovi is quick and easy, but there’s an even simpler option: just click on the **big green button** at **the jamovi home page** to open jamovi directly in any browser. Then play. (The online version is experimental, and modules can’t yet be added.)

Enjoy, and please let’s have your comments and suggestions,

Geoff

The **editorial** is short, and a great read. **Christophe Bernard**, the editor-in-chief, includes links to his **2019 editorial** that announced the initiative, **our article** explaining estimation that was published in eNeuro at the same time, and a recent **blog post** in which eNeuro authors reflect on their experiences of figuring out how to include estimation in their analyses.

Christophe also includes a brief intro to estimation, with links to the **dance of the p values**, Gordon’s

That’s all great to hear, and I salute Christophe for his initiative and persistence. Take note, other journal editors (*Journal of Neuroscience*?), it can be done! Judging by author comments in that eNeuro blog post, researchers who have taken the plunge can see the benefits and are generally keen to continue with estimation.

This may have all started back in November 2018 at the giant SfN conference in San Diego, where Bob moderated a **PD Workshop** he had organized: **Improving Your Science: Better Inference, Reproducible Analyses, and the New Publication Landscape**. Christophe was one of the speakers and may have become an enthusiast that day. Shortly after, he started working towards the 2019 announcement and editorial. Bob’s workshop was the acorn…

The highly encouraging eNeuro story prompts me to think back to past efforts by enterprising journal editors to move statistical practices beyond *p* values. Here’s a brief word about a few.

More than 40 years ago, Ken Rothman published articles advocating confidence intervals and explaining how to calculate them in various situations. He was influential in persuading the **International Council of Medical Journal Editors** (ICMJE) to include in their 1988 revision of their **Uniform Requirements for Manuscripts Submitted to Biomedical Journals** the following:

“When possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals). Avoid sole reliance on statistical hypothesis testing, such as the use of *p* values, which fail to convey important quantitative information. . . .”

Rothman, an assistant editor during 1984-87 at the *American Journal of Public Health*, insisted that authors of manuscripts he assessed remove all references to statistical significance, NHST, and *p* values. We, in **Fidler et al. (2004)**, examined articles published in various years from 1982 to 2000 and found that CI reporting increased from 10% to 54% during the Rothman years, then remained at a similar level through to 2000—as was becoming standard in other medical journals, following the ICMJE policy of 1988.

In 1990 Rothman founded the journal *Epidemiology* and declared that this journal did not publish NHST or *p* values. For the 10 years of his editorship it basically didn’t, while CI reporting reached more than 90%.

BUT, even when CIs were reported—often merely as numbers in tables—they were rarely referred to, or used to inform interpretation. We suspected that researchers needed way more explanations, examples, and guidance to appreciate what estimation can offer.

Geoffrey Loftus, Editor of *Memory & Cognition* from 1994 to 1997, strongly encouraged presentation of figures with error bars and avoidance of NHST. He even calculated error bars for numerous authors who claimed it was too difficult for them. We, in **Finch et al. (2004)**, reported that use of figures with bars increased to 47% under Loftus’s editorship and then declined. However, bars were rarely used for interpretation, and NHST remained almost universal. It seemed that even strong editorial encouragement, and assistance with analyses, was not sufficient to bring about substantial and lasting improvement in psychologists’ statistical practices.

Eric Eich, as editor-in-chief of *Psychological Science*, initiated perhaps the most important and successful journal transformation, at least in psychology. At the start of 2014 he published his famous editorial **Business Not as Usual**, which introduced Open Science badges, encouragement to use the new statistics, and other important advances. He published **Cumming (2014)**, the tutorial article on the new statistics that he’d invited me to write.

When Steve Lindsay took over as editor-in-chief he introduced further advances, including Preregistered Direct Replications. His **Swan Song Editorial** recounts the Open Science advances from 2014 to 2019, with evidence of sweeping changes in authors’ practices and what the journal has published. (I posted about that editorial **here**.)

Now editor-in-chief Patricia Bauer is continuing Open Science policies. For example, the **Submission Guidelines** still state that “*Psychological Science* recommends the use of the **“new statistics”**—effect sizes, confidence intervals, and meta-analysis—to avoid problems associated with null-hypothesis significance testing…”. They include links to **our site**, my **tutorial article**, and **my videos** introducing the new statistics that were recorded at the 2014 APS Convention.

I’d like to think that Rothman, Loftus, and other editors who, decades ago, tried so hard to encourage better practices did help bring about the advent of Open Science, which shook things up sufficiently to give later enterprising editors a better chance of getting their wonderful initiatives to stick.

Christophe has continued and broadened the crusade to great effect.

I’m delighted to see the evidence that so many of these positive changes look like they will persist and spread further. Bring that on!

And Bob and I hope, of course, that ITNS2 can help students understand why Open Science and the new statistics is the natural, better, and more easily understood way to do things.

Geoff

Cumming, G. (2014). The new statistics: Why and how. *Psychological Science, 25,* 7-29. https://doi.org/10.1177/0956797613504966

Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can’t make them think: Statistical reform lessons from medicine. *Psychological Science, 15,* 119-126. https://doi.org/10.1111/j.0963-7214.2004.01502008.x

Finch, S., Cumming, G., Williams, J., Palmer, L., Griffith, E., Alders, C., Anderson, J., & Goodman, O. (2004). Reform of statistical inference in psychology: The case of *Memory & Cognition*. *Behavior Research Methods, Instruments & Computers, 36,* 312-324. https://doi.org/10.3758/BF03195577

**Cohen’s d** is the ratio of an effect size (often a mean, or difference between means) to a standard deviation. Typically both are estimates from the data, so it’s hardly surprising that the distribution of

Back then we used a very early version of ESCI to illustrate how sliding two ever-changing noncentral *t* curves along the *d* axis (**the pivot method**) allowed us, for those two designs, to find the lower and upper bounds of the CI on the *d* calculated from sample data. The figure below uses the version of ESCI that goes with UTNS to illustrate the pivot method.

For the paired design we couldn’t, alas, find even a good approximate way to calculate a CI on *d*.

Happily, by the time I was writing UTNS, Algina & Keselman (2003) had proposed an approximate solution to the problem of the paired case. They reported simulations that showed their method was pretty good, for a limited range of situations. In UTNS, pp. 306-307, I described my efforts to use simulations to assess their method. I found I could broaden the range of cases for which the approximation did well. Even so, there were limits, as stated in ESCI. For example, *N* had to be at least 6, and *d*_{unbiased} could not be greater than 2. But at least ESCI could provide a quite good approximate CI on *d* for the paired design.

The usual *d* = [(effect size)/SD] overestimates δ. A simple correction factor, which depends on the *df* of the SD, gives us *d*_{unbiased}, which is what we should routinely use. In UTNS, for the paired case, I followed Borenstein et al. (2009) and used *df* = (*N* – 1), even though this seemed a little strange, given that the SD is estimated from the standard deviations of both measures (e.g., the pre-scores and the post-scores).
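As a concrete illustration of the debiasing step (my own Python sketch, not code from ESCI or esci), the correction factor can be computed exactly from the gamma function, or from the familiar approximation 1 − 3/(4·df − 1); the *df* you pass in is precisely the quantity in dispute for the paired design:

```python
import math

def correction_factor(df):
    """Exact small-sample correction J(df); multiplying the usual d
    by J(df) gives d_unbiased (often called Hedges' g)."""
    return math.gamma(df / 2) / (math.sqrt(df / 2) * math.gamma((df - 1) / 2))

def correction_approx(df):
    """Widely used approximation to J(df)."""
    return 1 - 3 / (4 * df - 1)

# Paired design with N = 10: the two candidate debiasing dfs
N = 10
for df in (N - 1, 2 * (N - 1)):   # (N - 1) as in UTNS, vs 2(N - 1)
    print(df, round(correction_factor(df), 4), round(correction_approx(df), 4))
```

Either choice shrinks *d* toward zero, but 2(*N* − 1) shrinks it less, which is why the choice of *df* matters for small samples.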

Goulet-Pelletier & Cousineau (2018 **here**, and erratum 2020 **here**) report a wide-ranging review of *d* and its CI. Their simulations suggest that in the paired case debiasing should use *df* = 2(*N* – 1), not (*N* – 1) as I used in UTNS and ESCI. They refer to *d*_{unbiased} as *g*.

Then Fitts (2020 **here**) investigated this issue and found by simulation that the debiasing *df* needs to reflect ρ, the population correlation between the two measures. When ρ = 0, as in the independent groups case, *df* = 2(*N* – 1), as for independent groups. If ρ = 1, then *df* = (*N* – 1). Intermediate values of ρ need intermediate values of *df*.

Cousineau (2020 **here**) took a major step forward by finding a good approximation to the distribution for *d* in the paired design, and a formula for the *df* that includes ρ.

Now, hot off the press, Cousineau & Goulet-Pelletier (2021 **here**) report a massive set of simulations that assess eight (!!) ways to calculate an approximate CI on *d*, five of them being their new proposals. The Algina-Keselman method that I used in UTNS turns out to be reasonable, but isn’t the best. The best is the ‘Adjusted Λ’ (“lambda-prime”) method, which is one of their new proposals. This gives CIs that have very close to 95% coverage, and some other desirable properties, for a wide range of values of *N*, *d*, and ρ.

See **their paper** for a description of the method, and on p. 58 the R code. It’s probably what we’ll use in **esci jamovi**.

This progress makes me very happy. Maybe you too?

Geoff

Algina, J., & Keselman, H. J. (2003). Approximate confidence intervals for effect sizes. *Educational and Psychological Measurement*, *63*, 537–553. https://doi.org/10.1177/0013164403256358

Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2009). *Introduction to meta-analysis*. New York, NY: John Wiley & Sons.

Cousineau, D. (2020). Approximating the distribution of Cohen’s *d _{p}* in within-subject designs.

Cumming, G., & Finch, S. (2001). A primer on the understanding, use and calculation of confidence intervals that are based on central and noncentral distributions. *Educational and Psychological Measurement, 61*, 532-574. https://doi.org/10.1177/0013164401614002

Fitts, D. (2020). Commentary on “a review of effect sizes and their confidence intervals, part I: The Cohen’s *d* family”: The degrees of freedom for paired samples designs. *The Quantitative Methods for Psychology*, *16*(4), 281–294. https://doi.org/10.20982/tqmp.16.4.p281

Goulet-Pelletier, J.-C., & Cousineau, D. (2018). A review of effect sizes and their confidence intervals, Part I: The Cohen’s *d* family. *The Quantitative Methods for Psychology*, *14*(4), 242–265. https://doi.org/10.20982/tqmp.14.4.p242

Goulet-Pelletier, J.-C., & Cousineau, D. (2020). Erratum to Appendix C of “A review of effect sizes and their confidence intervals, Part I: The Cohen’s *d* family”. *The Quantitative Methods for Psychology*, *16*(4), 422–423. https://doi.org/10.20982/tqmp.16.4.p422

Cousineau, D., & Goulet-Pelletier, J.-C. (2021). A study of confidence intervals for Cohen’s *d _{p}* in within-subject designs with new proposals.

**Precision for Planning** tells us what *N* we need to achieve the precision we’d like. It’s a much better way to plan than the traditional use of statistical power, which works only within an NHST framework. Far better to adopt an **estimation framework** (the new statistics) and use PfP.

For an intro to PfP, see Chapter 10 in **ITNS**. For more detail, see Chapter 13 in **UTNS**.

For a **two independent groups study**, with two groups of size *N*, below is the PfP picture. Recall that **MoE** is the **margin of error**, which is half the length of a CI. I’ve set the slider at the bottom to **target MoE** = 0.50, meaning that I want to estimate the difference between the group means with a 95% CI having MoE of 0.50. In other words, each arm of the CI should be 0.50 in length.

The lower axis is marked in units of population SD, which we can think of as units of **Cohen’s d**. The cursor marks a target MoE of 0.50 in those units.

The **black curve** shows how required *N* increases dramatically as we aim for smaller values of MoE–in other words, greater precision and a shorter CI. Use this curve to investigate how *N* trades with likely precision.

The small curve at the bottom shows how MoE varies for *N* = 32. It’s usually close to 0.50, but can be as short as 0.40 or as long as 0.60, and occasionally even a little outside that range. Use the large slider to move the cursor and see the **MoE distribution** for other values of target MoE and *N*.

The figure gives us a **handy benchmark**, worth remembering: Any study with two independent groups of size 32 will estimate the difference between the group means with a 95% CI that has MoE of 0.50, on average.
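That benchmark can be sanity-checked with a back-of-envelope calculation. Here is a minimal Python sketch, assuming sigma known, so the normal critical value stands in for the t-based calculation the software actually does (which is why it lands a whisker below *N* = 32):

```python
from math import ceil, sqrt
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # ≈ 1.96, the 95% critical value

def moe_two_groups(n):
    """Approximate average MoE (in population-SD units) of the 95% CI
    on the difference between two independent group means, n per group."""
    return z * sqrt(2 / n)

def n_for_target_moe(target):
    """Smallest per-group n whose approximate average MoE meets target."""
    return ceil(2 * (z / target) ** 2)

print(round(moe_two_groups(32), 2))   # close to the 0.50 benchmark
print(n_for_target_moe(0.50))
```

Note this is the ‘on average’ curve only; the assurance calculation needs the full MoE distribution.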

The black curve can only give us *N* for MoE that’s sufficiently small **on average**. But we can do better. The **red curve**, below, tells us the *N* we need to achieve target MoE with **assurance of 99%**. This is the *N* that gives MoE smaller than target MoE on at least 99% of occasions. The grey curve reminds us of the ‘on average’ curve–the black curve in the figure above.

**precision for planning** supports PfP for what are probably the two most common designs: **two independent groups** and the **paired design**. The paired design, with a single repeated measure (for example Pretest-Posttest), has the advantage, where it is possible and appropriate, of usually giving higher precision. The critical feature is the correlation in the population between the two measures, such as Pretest and Posttest. Higher correlation gives a shorter CI on the paired difference and therefore higher precision.

To use PfP we need to specify a value for **ρ** (Greek rho), the **population correlation**. Ideally, previous research gives us a reasonable estimate we can use; otherwise we might have to guess. For research with human participants, typical values are often around .6 to .9.

Here’s a PfP picture for the **Paired Design**, with **ρ set to .70**.

The red curve shows us that a single group of *N* = 21 suffices for target MoE = 0.50 with assurance, when ρ = .70. Compare with two groups of *N* = 44 for the independent groups design. Great news!

However, as you might guess, *N* is highly sensitive to ρ. For ρ = .60 we need *N* = 25, but for ρ = .80 we need only *N* = 16 (or *N* = 9, on average).
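The same back-of-envelope logic extends to the paired design (again sigma known and a normal approximation, my own sketch rather than the software’s t-based, assurance-level calculation, so the *N*s run somewhat lower than those quoted above):

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf(0.975)   # ≈ 1.96

def n_paired(target_moe, rho):
    """Approximate N for the paired design: in population-SD units the
    SD of the paired difference is sqrt(2 * (1 - rho)), sigma known."""
    return ceil(2 * (1 - rho) * (z / target_moe) ** 2)

for rho in (0.60, 0.70, 0.80):
    print(rho, n_paired(0.50, rho))   # required N falls steeply as rho rises
```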

It’s wonderful that **precision for planning** makes it easy to explore how *N*, target MoE, choice of design, and–for the Paired Design–ρ, all co-vary. Be fully informed before you choose a design and *N*!

Go to **esci web** and see all six components as here:

Search the blog for ‘**Gordon**’ to find three posts introducing the previous five components.

Please explore any and all of the **six components**. Send your bouquets to **Gordon Moore**, and your comments and suggestions to any of us.

Enjoy!

Geoff

As you may recall, **ITNS2** will be accompanied by Bob’s data analysis software, **esci**, in R, and Gordon’s web-based simulations and tools, all of which are based on, and go beyond, my Excel-based **ESCI**. Together the web-based goodies, now including **dance r**, comprise **esci web**.

**dance r** takes random samples from a bivariate normal population with the population correlation *ρ* you choose.

Playing yourself is *way* better than seeing the pic. A few things to try:

- Watch the **population cloud** change for different *ρ* values
- Explore the changing length and **asymmetry of CIs** for different *r* values
- Watch the sampling distribution of correlations (the *r* heap) build
- See how its **skew** changes with *ρ*
- Investigate the capture percentage of **95% CIs**
- Study what changes, and how fast, as you change **N**

A key challenge for students–and researchers–is to build good intuitions about the extent of **uncertainty**, including the extent of sampling variability. **dance r** is a great arena in which to build those intuitions.

As I say, we’d love to have your feedback.

Enjoy.

Geoff

Science is under attack around the world, and vital data are being ignored–or totally rejected. Time for a **good news** statistics story. My bedside reading is a recent issue of **Significance** (unfortunate title!) magazine, which goes to members of both the Royal Statistical Society (U.K.) and the American Statistical Association.

It’s mostly behind a paywall for 12 months, but, happily, this article is a free **download**: **Science after Covid-19: Faster, better, stronger?** Dare we hope?!

Simon Schwab and Leonhard Held, of the Centre for Reproducible Science, University of Zurich, describe how this year:

- 30 publishers agreed to make Covid-19 research papers and data **freely available**–no paywalls
- Uploading of Covid-related **preprints** exploded, as the figure above shows
- Quick action is encouraging rapid and open reviews of preprints, e.g. via **Outbreak Science**

Schwab and Held also discuss:

- The value of peer review *before* studies are conducted. Some journals offer **registered reports**, and aim to review study plans within 7 days.
- Ways that fast and high-quality **peer reviewing** can be supported.
- The need for rigour and **best-practice methods**, as well as speed, and prompt **systematic reviews**. Then presentation of evidence-based advice for public policy and practice.

They conclude “courses in good research practice should be widely adopted to address highly relevant topics such as study design, open science, statistics and reproducibility … and preparation must also include the training of teams for rapid synthesis of relevant evidence. We cannot be prepared enough for the next global health crisis.”

In other words, **Open Science**! Bring it on–on World Statistics Day, and every day.

Geoff

As I explained, **ITNS2** will be accompanied by Bob’s data analysis software, **esci**, in R, and Gordon’s web-based simulations and tools, all of which are based on, and go beyond, my Excel-based **ESCI**. Together the web-based goodies comprise **esci web**, which you can open in your browser **here**. (Or use the ESCI menu above and choose **esci web** from the dropdown.) From today, **esci web** has four components, with perhaps two yet to come.

**distributions**, **d picture**, and **correlation** are visual statistical tools, developed in JavaScript. We’d love to have your feedback.

See the curves, explore *z* scores, find areas, find critical values.

What does *d* = 0.2 look like? How much overlap of distributions? What about *d* = 0.5, 1.0, 1.5, …?

What do you think is the *r* value in each of these scatterplots?

——— Don’t read on just yet. Have an eyeball of the scatterplots. What is each *r*?

——— Last chance… look back up…

OK, the correlation is .3 in all cases. True, if possibly strange. (All the data sets come from a bivariate normal distribution, and in all cases the data set correlation is .3.)

Pro tip: Eyeball, or turn on, a cross through the means, as in the lower right. Then compare the approximate number of dots in the (top right + lower left) quadrants with the number in the (top left + lower right) quadrants. Correlation is a tussle between the first pair (the *matched* quadrants) and the second (the *unmatched*).
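Here is that tussle in a small Python sketch (illustrative only; the sample size, seed, and ρ = .3 are my own choices, not anything from **correlation**): for a positive correlation, the matched quadrants reliably outnumber the unmatched ones.

```python
import random
from statistics import mean

random.seed(1)
rho = 0.3
# Simulate a bivariate normal sample with population correlation rho
xs, ys = [], []
for _ in range(1000):
    x = random.gauss(0, 1)
    y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

mx, my = mean(xs), mean(ys)
matched = sum((x > mx) == (y > my) for x, y in zip(xs, ys))   # TR + LL
unmatched = len(xs) - matched                                 # TL + LR
print(matched, unmatched)   # positive r: matched quadrants win the tussle
```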

Investigate that and other cool things in **correlation**.

As I say, access **esci web** **here**, and please let us have your comments.

Enjoy,

Geoff

…as I was asked recently. A question every author loves to hear. The short answer is **ITNS**, preferably to be followed by **ITNS2**, coming in 2021 we hope. Here’s an overview:

Main changes from the first edition: **fabulous new software**:

**esci **(in R) for data analysis and great graphs with CIs, by Bob, and

**esci web** (in JavaScript) for dance simulations and tools, by Gordon Moore

There’s even more about **Open Science**, and some new examples–timely studies that have used Open Science practices.

The first introductory textbook to combine **the new statistics** (CIs, estimation, meta-analysis) with **Open Science** practices, from the start and all through. Starts at the very beginning, but goes far enough to include meta-analysis, regression, and simple two-way designs. Basic formulas only, many pictures and interactive demos. Lots of examples. Lots of online resources to support teachers and students.

A streamlined version of **ESCI**, software that runs under Excel, is used throughout the book. More information **here**. Read Contents and Chapter 1 **here**. Publisher’s website **here**. Support materials are **here**. **ESCI intro** is **here**. Amazon page **here**.

The original book, aimed at upper year undergraduates through to researchers. Explains in detail why the dichotomous thinking of NHST is damaging and should be replaced by **the new statistics**. **Estimation **and **meta-analysis** are introduced from the start. Some is just a little technical, for example three chapters on meta-analysis. Predates Open Science. No regression. Accompanied by original **ESCI**, running under Excel. More information **here**. Publisher’s website **here**. **ESCI **is **here**. Amazon page **here**.

Whichever you choose, I hope the book, software and all the materials serve you well. Together let’s change the world, towards better research and statistical practices.

Geoff

I’m delighted to report that they have now posted a **preprint** of their results **here**. We’d love to have **your comments and suggestions**.

Max explored six approaches to calculating a CI for the DR. He used simulation to investigate their properties, especially coverage, and identified two that give excellent CIs. He provides (**here**) R code to allow any researcher to calculate the CI on the DR for their own data, for a range of measures. All Max’s simulation materials are available on OSF **here**, so anyone can recreate or extend Max’s work.

Below is Figure 1 from the preprint, as an example of how the DR and its CI may be reported in a forest plot.

In the figure, DR = 1.40 is reported along with three conventional measures of heterogeneity, all with CIs. Both the RE (Random Effects) and FE (Fixed Effect) diamonds are shown in the forest plot, so it’s easy to eyeball DR, which is simply the length of the RE diamond divided by that of the FE diamond. DR = 1 suggests little or no heterogeneity, and increasing values of DR suggest increasing heterogeneity. One vital message is given by the CI on the DR, which is [0, 3.09], so this meta-analysis, which integrates only 10 studies, can give us only a very imprecise estimate of heterogeneity.
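In code the DR is a one-liner; the diamond endpoints below are hypothetical values chosen only so that the illustration reproduces a DR of 1.40 like the one in the figure.

```python
def diamond_ratio(re_ci, fe_ci):
    """DR = length of the RE diamond divided by length of the FE diamond."""
    re_lo, re_hi = re_ci
    fe_lo, fe_hi = fe_ci
    return (re_hi - re_lo) / (fe_hi - fe_lo)

# Hypothetical diamond endpoints (illustrative, not from the preprint's data)
print(round(diamond_ratio((0.10, 0.80), (0.20, 0.70)), 2))
```

The point estimate is this easy; it’s the CI on the DR, Max’s contribution, that takes the real work.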

Along with the DR, the figure reports the 95% prediction interval (PI) for true effect sizes as a further estimate of heterogeneity. Borenstein et al. (2017) advocated use of the PI, whose length is reported here to be 0.285. The red line segment just under the RE diamond pictures that length. Informally, that segment illustrates the likely extent of spread of true effect sizes. The PI length is 4 × *T*, where *T* is the estimated population SD of true effect sizes. The very long CI reported for *T* indicates once again a very imprecise estimate of heterogeneity.

In the preprint we conclude that the DR, and its CI, can be valuable for students as they learn about meta-analysis, and for researchers as they interpret and communicate their meta-analyses.

**It would be great to have any comments about Max’s work and the preprint. Thanks!**

Geoff

Max: mrcairns994@gmail.com Geoff: g.cumming@latrobe.edu.au

Borenstein, M., Higgins, J. P., Hedges, L. V., & Rothstein, H. R. (2017). Basics of meta-analysis: *I*^{2} is not an absolute measure of heterogeneity. *Research Synthesis Methods, 8,* 5-18. https://doi.org/10.1002/jrsm.1230

We are now releasing Gordon’s **dances** in beta, and seek your feedback. Developed in JavaScript, **dances** opens in your browser via **this link**. ITNS2 will be accompanied by Bob’s data analysis software, **esci**, in R, and Gordon’s web-based simulations, all of which are based on, and go beyond, my Excel-based **ESCI**. The first and most important of Gordon’s simulations is **dances**, which replaces and goes beyond **CIjumping** in ESCI.

Below are four examples of **dances** bringing key statistical ideas alive. These are frozen images: It’s ** way **more convincing watching the simulations dancing down the screen.

Getting started with **dances**:

- Open **dances** in a browser
- Click on the ‘**?**’ at top right in the control panel (left side of screen) to turn on popout tips, which give brief explanations when the mouse hovers over labels or controls
- Use the three big buttons. Play as you wish. Click ‘Clear’ to start again

Take repeated samples of size *N* = 20 from the pictured normally distributed population. Watch the pattern of values (blue open circles) jump around from sample to sample. Watch the means (green dots) from successive samples dance down the screen: So much variation, even with samples of size 20! This is the **dance of the means**.

Place 95% CIs on each of the dancing means, again with samples of *N* = 20. CIs that don’t capture the population mean, mu (blue line), are red. In the short term, red CIs seem to come very haphazardly, sometimes rarely, sometimes in clumps. In the long term, however, very very close to 95.0% of CIs will capture mu and 5.0% will be red.

This happens when CIs are all the same length, being based on the population SD, sigma, assumed known. Remarkably, it also happens when, as in the picture below, CIs vary in length because they are based on sample SDs, when sigma is assumed not known. Either way, we are seeing the **dance of the CIs**.
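The long-run 95% capture rate is easy to verify in a few lines of Python (a hedged sketch, not Gordon’s code; the t critical value for df = 19 is hard-coded rather than computed):

```python
import random
import statistics

random.seed(2)
mu, sigma, n = 50, 10, 20
t_crit = 2.093           # t critical value for 95% CI, df = n - 1 = 19
reps = 2000
captures = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    m = statistics.mean(sample)
    moe = t_crit * statistics.stdev(sample) / n ** 0.5   # CI based on sample SD
    if m - moe <= mu <= m + moe:
        captures += 1
print(captures / reps)   # in the long run, very close to 0.95
```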

The falling means pile up to form the **mean heap**; means in the heap keep their colour, red or green. In the long run, the mean heap shape will closely match the theoretically expected, normally distributed, sampling distribution curve.

The **central limit theorem** states that, almost whatever the shape of the population distribution, the sampling distribution of sample means will be approximately normal. Furthermore, the larger the samples, the closer the sampling distribution will be to normal.

In **dances** you can draw whatever weird shape of population distribution you choose, then take samples of some chosen size, *N*, and compare the mean heap with the normal curve.

The figure below shows that, even with my hand-drawn, highly skewed population, and samples as tiny as *N* = 3, the mean heap is much less skewed than the population, and surprisingly close in shape to the symmetric normal curve.
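The same demonstration runs in a few lines of Python (my own illustration, substituting an exponential population, population skewness 2, for a hand-drawn one): even with samples as tiny as *N* = 3, the mean heap is far less skewed than the population.

```python
import random
from statistics import mean, stdev

def skew(values):
    """Simple sample skewness: mean cubed z-score."""
    m, s = mean(values), stdev(values)
    return mean(((v - m) / s) ** 3 for v in values)

random.seed(3)
# Highly skewed population: exponential, with theoretical skewness = 2
pop_draws = [random.expovariate(1) for _ in range(5000)]
# Mean heap for samples of N = 3 from that population
means_n3 = [mean(random.expovariate(1) for _ in range(3)) for _ in range(5000)]

print(round(skew(pop_draws), 2), round(skew(means_n3), 2))
```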

Run a replication, exactly the same as the original experiment but with a new sample, and find that the *p* value is likely to be very different. The sampling variability of the *p* value is surprisingly large: Alas, we simply shouldn’t trust any *p* value.

The figure below shows the **dance of the CIs** and the corresponding *p* values—which vary from <.001 to more than .8! Deep blue patches mark *p*>.10, through to bright red patches for *p*<.001. This is the **dance of the p values**!

Population mean, mu, is 60, and SD, sigma, is 20. The null hypothesis is H0: mu0 = 50, so the effect size in the population is half of sigma, or Cohen’s delta = 0.50, conventionally considered to be a medium-sized effect. With *N* = 16, the power is about .50, which is typical for many research fields in psychology and some other disciplines.
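Those values can be plugged into a quick simulation (my own sketch, assuming sigma known so a z test stands in for whatever esci web computes internally): the proportion of *p* values below .05 comes out near the stated power of about .50, while individual *p* values range enormously.

```python
import random
import statistics
from statistics import NormalDist

random.seed(4)
mu, sigma, mu0, n = 60, 20, 50, 16   # delta = 0.50, power about .50
nd = NormalDist()
ps = []
for _ in range(1000):
    xbar = statistics.mean(random.gauss(mu, sigma) for _ in range(n))
    z = (xbar - mu0) / (sigma / n ** 0.5)    # z test, sigma assumed known
    ps.append(2 * (1 - nd.cdf(abs(z))))      # two-sided p value
print(round(sum(p < 0.05 for p in ps) / len(ps), 2))   # near the power
print(min(ps), max(ps))   # the dance: from far below .001 to far above .1
```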

The running simulation is way more vivid than any picture, especially when sounds are turned on, ranging from a bright trumpet for *p*<.001 down to a deep trombone for *p*>.10.

Change *N*, or population effect size, and see generally lower or higher *p* values but, most surprisingly, in every case the values of *p* still jump around dramatically.

For videos of such dances, search YouTube for ‘**dance of the p values**’ and ‘**significance roulette**’.

Figures and dances like those shown here will come in Chapters 4, 5, and 6 in ITNS2.

Meanwhile, please have a play with Gordon’s wonderful **dances** and let us have your thoughts and suggestions. Thanks.

Geoff
