
# Methods for Three or More Groups

This chapter develops nonparametric techniques for one-way analysis of variance.

Suppose $X_{ki}$ are samples from $K$ potentially different populations. That is, for fixed $k$, $X_{k1}, \ldots, X_{kM_k}$ are independent and identically distributed, each with cumulative distribution function $F_k$. Here $k \in \{1, \ldots, K\}$ indexes group, and $M_k$ represents the number of observations in group $k$. In order to determine whether all populations are the same, test the null hypothesis

$$H_0: F_1(x) = F_2(x) = \cdots = F_K(x) \text{ for all } x \tag{4.1}$$

vs. the alternative hypothesis $H_A$: there exist $j$, $k$, and $x$ such that $F_j(x) \neq F_k(x)$. Most tests considered in this chapter, however, are most powerful against alternatives of the form $H_A: F_k(x) \leq F_j(x)$ for all $x$, for some indices $k, j$, with strict inequality at some $k$, $j$, and $x$. Of particular interest, particularly for power calculations, are alternatives of the form

$$F_k(x) = F(x - \theta_k) \tag{4.2}$$

for some constants $\theta_k$.

## Gaussian-Theory Methods

Under the assumptions that the data $X_{ki}$ are Gaussian and homoscedastic (that is, having equal variances) under both the null and alternative hypotheses, the null hypothesis $H_0$ is equivalent to $\mu_j = \mu_k$ for all pairs $j, k$, for $\mu_j = \mathrm{E}[X_{ji}]$. One might test $H_0$ vs. $H_A$ via analysis of variance (ANOVA).

Let $\bar{X}_{k.} = \sum_{i=1}^{M_k} X_{ki}/M_k$, $\bar{X}_{..} = \sum_{k=1}^{K} \sum_{i=1}^{M_k} X_{ki} \big/ \sum_{k=1}^{K} M_k$, and

$$W = \frac{\sum_{k=1}^{K} M_k \left( \bar{X}_{k.} - \bar{X}_{..} \right)^2 / (K - 1)}{s^2} \tag{4.3}$$

for

$$s^2 = \sum_{k=1}^{K} \sum_{i=1}^{M_k} \left( X_{ki} - \bar{X}_{k.} \right)^2 \Big/ \left( \sum_{k=1}^{K} M_k - K \right). \tag{4.4}$$

When the data have a Gaussian distribution, and (4.1) holds, the numerator and denominator of (4.3) have distributions proportional to $\chi^2$ distributions, and are independent; hence the ratio $W$ has an $F$ distribution.

When the data are not Gaussian, the central limit theorem implies that the numerator is still approximately proportional to a $\chi^2$ variable, as long as the minimal $M_k$ is large, and as long as the distribution of the data is not too far from Gaussian. However, neither the $\chi^2$ distribution for the denominator of (4.3), nor the independence of numerator and denominator, is guaranteed in this case. Fortunately, again for large sample sizes and data not too far from Gaussian, the strong law of large numbers indicates that the denominator of (4.3) is close to the population variance of the observations, and the denominator degrees of freedom for the $F$ distribution are large enough to make the $F$ distribution close to the $\chi^2$ distribution. Hence in this large-sample, close-to-Gaussian case, the standard analysis of variance results will not mislead.
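The statistic (4.3) can be computed directly from its definition. The following sketch, with illustrative group sizes and simulated data, checks the hand computation against `scipy.stats.f_oneway`, which implements the same one-way ANOVA test.

```python
# Sketch: the one-way ANOVA statistic W of (4.3) computed from its definition,
# checked against scipy.stats.f_oneway. Data and group sizes are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(0.0, 1.0, size=m) for m in (8, 10, 12)]

K = len(groups)
N = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

# Numerator: between-group mean square; denominator: pooled variance s^2 of (4.4).
num = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (K - 1)
s2 = sum(((g - g.mean()) ** 2).sum() for g in groups) / (N - K)
W = num / s2

F, p = stats.f_oneway(*groups)
print(W, F)  # the two statistics agree
```

The agreement confirms that (4.3) is the usual ANOVA $F$ ratio, referred to the $F$ distribution with $K-1$ and $N-K$ degrees of freedom.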

### Contrasts

Let $\mu_k = \mathrm{E}[X_{ki}]$. Continue considering the null hypothesis that $\mu_j = \mu_k$ for all $j, k$ pairs, and consider alternative hypotheses in which all groups have the same finite variance $\sigma^2$, but the means differ, in a more structured way than for standard ANOVA. Consider alternatives such that $\mu_{k+1} - \mu_k$ are the same for all $k$, and denote the common value by $\Delta > 0$. One might construct a test particularly sensitive to this departure from the null hypothesis using an estimate $\hat{\Delta}$. If $\hat{\Delta}$ is approximately Gaussian, then the associated test of the null hypothesis (4.1) vs. the ordered and equidistant alternative is constructed as $T = \left( \hat{\Delta} - \mathrm{E}_0\left[\hat{\Delta}\right] \right) \Big/ \sqrt{\mathrm{Var}_0\left[\hat{\Delta}\right]}$; this statistic is compared to the standard Gaussian distribution in the usual way.

An intuitive estimator $\hat{\Delta}$ is the least squares estimator; for example, when $K = 3$ then $\hat{\Delta} = (\bar{X}_{3.} - \bar{X}_{1.})/2$, and when $K = 4$ then $\hat{\Delta} = (3\bar{X}_{4.} + \bar{X}_{3.} - \bar{X}_{2.} - 3\bar{X}_{1.})/10$. Generally, the least squares estimator is a linear combination of group means, of the form $\sum_{k=1}^{K} c_k \bar{X}_{k.}$ for a set of constants $c_k$ such that

$$\sum_{k=1}^{K} c_k = 0, \tag{4.5}$$

with $c_k$ evenly spaced. In this case, $\mathrm{E}_0\left[\hat{\Delta}\right] = 0$ and

$$\mathrm{Var}_0\left[\hat{\Delta}\right] = \sigma^2 \sum_{k=1}^{K} c_k^2 / M_k,$$

and one may use the test statistic

$$T = \frac{\sum_{k=1}^{K} c_k \bar{X}_{k.}}{\sigma \sqrt{\sum_{k=1}^{K} c_k^2 / M_k}}.$$

If $\sigma$ is known, then the null distribution of $T$ is the standard Gaussian distribution. If $\sigma$ is estimated by $s$ of (4.4), then the null distribution of $T$ is the $t$ distribution with $N - K$ degrees of freedom.

In this case, $W$ of (4.3) retains its level, but has less power against this ordered alternative.

A linear combination $\sum_{k=1}^{K} c_k \bar{X}_{k.}$ of group means, with constants summing to zero as in (4.5), is called a contrast.
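The contrast test above can be sketched in a few lines. The coefficients $c_k = 2k - K - 1$ used here are evenly spaced and sum to zero, matching the $K = 4$ coefficients $(-3, -1, 1, 3)$ in the text; the shift sizes, seed, and use of a two-sided $t$ reference are illustrative choices.

```python
# Sketch: a contrast test against the ordered, equally spaced alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
K, M = 4, 15
theta = 0.5 * np.arange(K)                      # equally spaced shifts
data = [rng.normal(t, 1.0, size=M) for t in theta]

c = 2 * np.arange(1, K + 1) - K - 1             # (-3, -1, 1, 3); sums to zero
means = np.array([g.mean() for g in data])
N = K * M
s2 = sum(((g - g.mean()) ** 2).sum() for g in data) / (N - K)

# T statistic with sigma estimated by the pooled s of (4.4).
T = (c @ means) / np.sqrt(s2 * (c ** 2).sum() / M)
p = 2 * stats.t.sf(abs(T), df=N - K)            # two-sided reference p-value
print(T, p)
```

Because the contrast is scale-invariant, any nonzero multiple of $c$ yields the same value of $T$.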

Standard parametric methods will be compared to nonparametric methods below. In order to make these comparisons using methods of efficiency, the pattern of numbers of observations in the various groups must be expressed in terms of a single sample size $N$; presume that $M_k = \lambda_k N$ for all $k \in \{1, \ldots, K\}$. Under alternative (4.2), and with $\sigma^2 = \mathrm{Var}[X_{ki}]$ known, then

$$\mathrm{E}[T] = \frac{\sum_{k=1}^{K} c_k \theta_k}{\sigma \sqrt{\sum_{k=1}^{K} c_k^2 / M_k}}.$$

In the case when the shift parameters and the contrast coefficients are equally spaced, and the groups are of equal size (that is, $\theta_k = (k-1)\Delta$ for some $\Delta > 0$, $c_k = 2k - K - 1$, and $\lambda_k = 1/K$), then $\mu'(0) = K(K^2 - 1)/6$, and $\sigma(0) = \sigma K \sqrt{(K^2 - 1)/3}$, and the efficacy, as defined in §2.4.1.2, is

$$\frac{\mu'(0)}{\sigma(0)} = \frac{1}{\sigma} \sqrt{\frac{K^2 - 1}{12}}.$$
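The two closed-form sums behind this calculation, $\sum_k c_k (k-1) = K(K^2-1)/6$ and $\sum_k c_k^2 = K(K^2-1)/3$ for $c_k = 2k - K - 1$, can be checked numerically:

```python
# Numerical check of the two combinatorial identities behind the efficacy
# calculation, for the evenly spaced contrast c_k = 2k - K - 1.
for K in range(2, 12):
    c = [2 * k - K - 1 for k in range(1, K + 1)]
    assert sum(ck * (k - 1) for k, ck in enumerate(c, start=1)) == K * (K**2 - 1) // 6
    assert sum(ck * ck for ck in c) == K * (K**2 - 1) // 3
print("identities hold for K = 2, ..., 11")
```

For instance, with $K = 4$ the coefficients are $(-3, -1, 1, 3)$, giving $\sum_k c_k (k-1) = 10$ and $\sum_k c_k^2 = 20$.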

### Multiple Comparisons

In the event that the null hypothesis of equal distributions is rejected, one naturally asks which distributions differ. Testing for group differences pairwise (perhaps using the two-sample $t$-test) allows for $K(K-1)/2$ chances to find a significant result. If each test is done at the nominal level, this will inflate the family-wise error rate, the proportion of experiments that provide any incorrect result. This family-wise error rate is bounded by the nominal level used for each separate test multiplied by the number of comparisons performed ($K(K-1)/2$), but such a procedure, called the Bonferroni procedure, will usually result in a very conservative bound.
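The inflation and its Bonferroni bound can be illustrated numerically. Under a hypothetical assumption of independent pairwise tests (the actual pairwise tests are correlated, so this is only an idealization), the chance of at least one false rejection has a closed form:

```python
# Sketch: family-wise error under K(K-1)/2 level-alpha tests, idealized as
# independent. The exact rate 1 - (1 - alpha)^m is bounded by m * alpha
# (the Bonferroni bound). K and alpha are illustrative.
K, alpha = 5, 0.05
m = K * (K - 1) // 2                       # number of pairwise comparisons
fwer_indep = 1 - (1 - alpha) ** m          # exact under independence
bonferroni_bound = m * alpha
print(m, round(fwer_indep, 4), bonferroni_bound)
```

With $K = 5$ the ten comparisons give a family-wise error near $0.40$ under independence, while the Bonferroni bound is $0.50$, illustrating its conservatism.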

Alternatively, consider Fisher’s Least Significant Difference (LSD) method. First, perform the standard analysis of variance test (4.3). If this test rejects $H_0$, then test each pairwise comparison between means, and report those pairs whose otherwise uncorrected $p$-values are smaller than the nominal size. Tests in this second stage may be performed using the two-sample pooled $t$-test (3.2), except that the standard deviation estimate of (4.4) may be substituted for $s_p$, with the corresponding increase in degrees of freedom in (3.5).

Fisher’s LSD method fails to control the family-wise error rate if $K > 3$. To see this, suppose $F_1(x) = F_2(x) = \cdots = F_{K-1}(x) = F_K(x - \Delta)$ for $\Delta \neq 0$. Then the null hypotheses $F_i(x) = F_j(x)$ are true for $i, j < K$. One can make $\Delta$ so large that the analysis of variance test rejects equality of all distributions with probability close to 1. Then multiple true null hypotheses are tested, without control for multiplicity. If $K = 3$, there is only one such test with a true null hypothesis, and so no problem with multiple comparisons.

Contrast this with Tukey’s Honest Significant Difference (HSD) method (Tukey, 1953, 1993). Suppose that $Y_j \sim \mathrm{N}(0, 1/M_j)$ for $j \in \{1, \ldots, K\}$, $U \sim \chi^2_m$, and that the $Y_j$ and $U$ are independent. Assume further that the $M_j$ are all equal. The distribution of $\max_{j,k} (Y_j - Y_k) \big/ \sqrt{U/(m M_j)}$ is called the Studentized range distribution with $K$ and $m$ degrees of freedom. If the $M_j$ are not all equal,

$$\max_{j,k} \frac{Y_j - Y_k}{\sqrt{\left( 1/M_j + 1/M_k \right) U / (2m)}}$$

has the Studentized range distribution with $K$ and $m$ degrees of freedom, approximately (Kramer, 1956); extensions also exist to correlated means (Kramer, 1957). Let $q_{K,m,\alpha}$ be the $1 - \alpha$ quantile of this distribution, and let $E_{K,m}$ be its cumulative distribution function.

One then applies this distribution with $Y_j = (\bar{X}_{j.} - \mu_j)/\sigma$ and $U/m$ the standard sample variance $S^2$ divided by $\sigma^2$. Here $\sigma$ is the common standard deviation. If one then sets

$$P_{jk} = 1 - E_{K,N-K}\left( \frac{\left| \bar{X}_{j.} - \bar{X}_{k.} \right|}{S \sqrt{\left( 1/M_j + 1/M_k \right)/2}} \right)$$

for $N = \sum_{k=1}^{K} M_k$, then for any $\alpha \in (0, 1)$,

$$\mathrm{P}\left[ \min_{j \neq k} P_{jk} \leq \alpha \,\middle|\, H_0 \right] \leq \alpha,$$

and the collection of tests that rejects the hypothesis $\mu_j = \mu_k$ if $P_{jk} \leq \alpha$ provides simultaneous test level less than or equal to $\alpha$. Furthermore, if

$$I_{jk} = \bar{X}_{j.} - \bar{X}_{k.} \pm q_{K,N-K,\alpha}\, S \sqrt{\left( 1/M_j + 1/M_k \right)/2},$$

then

$$\mathrm{P}\left[ \mu_j - \mu_k \in I_{jk} \text{ for all } j \neq k \right] \geq 1 - \alpha.$$

This method had been suggested before the Studentized range distribution had been derived (Tukey, 1949).
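Quantiles and tail probabilities of the Studentized range are available as `scipy.stats.studentized_range` (SciPy 1.7 or later), which makes a sketch of the HSD comparison straightforward; the data, seed, and group sizes below are illustrative.

```python
# Sketch: Tukey HSD via the Studentized range distribution, for equal-sized
# groups. Uses scipy.stats.studentized_range (SciPy >= 1.7).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
K, M = 3, 10
data = [rng.normal(0.0, 1.0, size=M) for _ in range(K)]

N = K * M
means = np.array([g.mean() for g in data])
s2 = sum(((g - g.mean()) ** 2).sum() for g in data) / (N - K)

# Compare the scaled range of group means to the quantile q_{K, N-K, alpha}.
q_obs = (means.max() - means.min()) / np.sqrt(s2 / M)
q_crit = stats.studentized_range.ppf(0.95, K, N - K)
p_value = stats.studentized_range.sf(q_obs, K, N - K)
print(q_obs, q_crit, p_value)
```

Rejecting when `q_obs` exceeds `q_crit` is equivalent to rejecting when the smallest $P_{jk}$ falls below $\alpha$.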

We now proceed to analogs of one-way analysis of variance that preserve nominal test size for small samples and highly non-Gaussian data.

## General Rank Tests

By analogy with test (4.3), and with the scoring ideas of §3.2, create a test by first ranking all of the observations in a data set, to obtain rank $R_{ki}$ for replicate $i$ in group $k$. One then creates non-decreasing scores $a_1, \ldots, a_N$, assigns scores $A_{ki} = a_{R_{ki}}$ to the ranked observations, and calculates the score sums $T_G^{(k)} = \sum_{i=1}^{M_k} A_{ki}$. One might express the score sums as in (3.7), as $T_G^{(k)} = \sum_{j=1}^{N} a_j I_{jk}$, for $I_{jk}$ equal to 1 if the item ranked $j$ in the combined sample comes from group $k$, and 0 otherwise. Analogously to the numerator in (4.3), let

$$W_G = \sum_{k=1}^{K} \left( T_G^{(k)} - \mathrm{E}_0\left[ T_G^{(k)} \right] \right)^2 \Big/ v_k,$$

for null expectations $\mathrm{E}_0$ as calculated in (3.10), and quantities $v_k$ to be determined later. The next subsection will calculate covariances of the $T_G^{(k)}$, and the following subsection will demonstrate that $W_G$ has an approximate $\chi^2_{K-1}$ null distribution, if

$$v_k = M_k \left( \frac{N}{N - 1} \right) \left( \overline{a^2} - \bar{a}^2 \right), \text{ for } \bar{a} = \sum_{j=1}^{N} a_j / N \text{ and } \overline{a^2} = \sum_{j=1}^{N} a_j^2 / N.$$

The remainder of this section considers the joint distribution of the $T_G^{(k)}$, calculates their moments, and confirms the asymptotic distribution for $W_G$.

### Moments of General Rank Sums

First and univariate second moments of $T_G^{(k)}$ are as in §3.2.2, and are given by (3.10) and (3.11), respectively. The covariance between $T_G^{(j)}$ and $T_G^{(k)}$, for $j \neq k$, can be calculated by forming a general rank statistic $T_G^{(j,k)}$ combining both groups $j$ and $k$, to obtain the sum of ranks for individuals in either group $j$ or group $k$. Note that $T_G^{(j,k)} = T_G^{(j)} + T_G^{(k)}$ and, furthermore, $\mathrm{Var}\left[ T_G^{(j,k)} \right]$ may be found by applying (3.11), with the number of observations whose ranks are summed being $M_k + M_j$. Then

$$\mathrm{Cov}\left[ T_G^{(j)}, T_G^{(k)} \right] = \frac{1}{2} \left( \mathrm{Var}\left[ T_G^{(j,k)} \right] - \mathrm{Var}\left[ T_G^{(j)} \right] - \mathrm{Var}\left[ T_G^{(k)} \right] \right)$$

and

$$\mathrm{Cov}\left[ T_G^{(j)}, T_G^{(k)} \right] = -\frac{M_j M_k}{N - 1} \left( \overline{a^2} - \bar{a}^2 \right).$$
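Because the null distribution of the ranks is a uniformly random permutation, the covariance of two group score sums can be checked exactly by enumerating all permutations of a tiny score vector; the scores and group sizes below are illustrative.

```python
# Exhaustive check of the rank-sum covariance: for scores assigned by a
# uniformly random ranking, Cov(T_j, T_k) = -M_j M_k (mean(a^2) - mean(a)^2)/(N-1).
from itertools import permutations
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # scores (here, the ranks themselves)
Mj, Mk = 2, 2                                  # group sizes; N = 5
N = len(a)

Tj, Tk = [], []
for perm in permutations(range(N)):
    scores = a[list(perm)]
    Tj.append(scores[:Mj].sum())               # group j takes the first Mj slots
    Tk.append(scores[Mj:Mj + Mk].sum())        # group k takes the next Mk slots
cov_emp = np.cov(Tj, Tk, bias=True)[0, 1]      # exact: uniform over permutations

s2a = ((a - a.mean()) ** 2).mean()             # mean(a^2) - mean(a)^2
cov_theory = -Mj * Mk * s2a / (N - 1)
print(cov_emp, cov_theory)
```

With these scores $\overline{a^2} - \bar{a}^2 = 2$, so both quantities equal $-2$.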

### Construction of a Chi-Square-Distributed Statistic

This subsection shows that the distribution of $W_G$ is well-approximated by a $\chi^2_{K-1}$ distribution.

The use of the Gaussian approximation for the distribution of two-sample general rank statistics was justified at the end of §3.4.1. That argument addressed the distributions of rank sums associated with each group separately; the argument below requires that the joint distribution of the sums of ranks over the various groups be multivariate Gaussian. Hájek (1960), while proving more general distributional finite population sampling results, notes that results similar to those of Erdős and Rényi (1959) can be applied to all linear combinations of rank sums from separate groups, and hence the collection of rank sums is approximately multivariate Gaussian. This result requires some condition forcing all group proportions to stay away from zero; $\liminf_{N \to \infty} M_k / N > 0$ for each $k$ should suffice.

Statistics formed by squaring observed deviations of data summaries from their expected values, and summing, arise in various contexts in statistics. Often these statistics have distributions that are well-approximated as $\chi^2$. For example, in standard parametric one-way analysis of variance, for Gaussian data, sums of squared centered group means have a $\chi^2$ distribution, after rescaling by the population variance. Tests involving the multinomial distribution also often have the $\chi^2$ distribution as an approximate referent. In both these cases, as with the rank test developed below, the $\chi^2$ distribution has as its degrees of freedom something less than the number of quantities added. In the analysis of variance case, the $\chi^2$ approximation to the test statistic may be demonstrated by considering the full joint distribution of all group means, and integrating out the distribution of the grand mean. In the multinomial case, the $\chi^2$ approximation to the test statistic may be demonstrated by treating the underlying cell counts as Poisson, and conditioning on the table total. In the present case, one might embed the fixed rank sum for all groups taken together into a larger multivariate distribution, and either condition or marginalize, but this approach is unnatural. Instead, below, a smaller set of summaries, formed by dropping the rank sum for one of the groups, is considered. This yields a distribution with a full-rank variance matrix, and calculations follow.

The total number of observations is $N = \sum_{k=1}^{K} M_k$. Let $Y$ be the $(K-1) \times 1$ matrix with components

$$Y_j = \omega \left( T_G^{(j)} - \mathrm{E}_0\left[ T_G^{(j)} \right] \right) \Big/ \sqrt{M_j}$$

(note excluding the final group rank sum), for $\omega = \sqrt{(N - 1) \Big/ \left[ N \left( \overline{a^2} - \bar{a}^2 \right) \right]}$. The covariances between components $j$ and $k$ of $Y$ are $-\sqrt{M_j M_k}/N$, and the variance of component $j$ is $1 - M_j/N$. Let $v$ be the $(K-1) \times 1$ matrix $\left( \sqrt{M_1/N}, \ldots, \sqrt{M_{K-1}/N} \right)^\top$. Then

$$\mathrm{Var}[Y] = I - v v^\top.$$

This proof will proceed by analytically inverting $\mathrm{Var}[Y]$. Note that

$$\left( I - v v^\top \right) \left( I + \frac{N}{M_K} v v^\top \right) = I + v v^\top \left( \frac{N}{M_K} - 1 - \frac{N}{M_K} v^\top v \right) = I, \tag{4.13}$$

since $v^\top v = \sum_{j=1}^{K-1} M_j / N = 1 - M_K/N$. Then

$$\mathrm{Var}[Y]^{-1} = I + \frac{N}{M_K} v v^\top. \tag{4.14}$$

Hence

$$Y^\top \mathrm{Var}[Y]^{-1} Y = \sum_{j=1}^{K-1} Y_j^2 + \frac{N}{M_K} \left( v^\top Y \right)^2 = W_G.$$

Also,

$$Y^\top \mathrm{Var}[Y]^{-1} Y \text{ has approximately the } \chi^2_{K-1} \text{ distribution}. \tag{4.15}$$

The above calculation required some notions from linear algebra. The calculation (4.13) requires an understanding of the definition of matrix multiplication, and of the associative and distributive properties of matrices, and (4.14) requires an understanding of the definition of a matrix inverse. Observation (4.15) is deeper; it requires knowing that a symmetric non-negative definite matrix may be decomposed as $V = D^\top D$ for a square matrix $D$, and an understanding that variance matrices in the multivariate case transform as do scalar variances in the one-dimensional case.
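The inverse identity (4.13)-(4.14) is easy to verify numerically; the group sizes below are illustrative.

```python
# Numerical check of the inverse identity: with v = (sqrt(M_1/N), ...,
# sqrt(M_{K-1}/N))^T, the matrix I - v v^T is inverted by I + (N/M_K) v v^T.
import numpy as np

M = np.array([4, 6, 8, 10])                    # M_1, ..., M_K
N = M.sum()
v = np.sqrt(M[:-1] / N)[:, None]               # (K-1) x 1 column

V = np.eye(len(v)) - v @ v.T                   # Var[Y]
V_inv = np.eye(len(v)) + (N / M[-1]) * (v @ v.T)
print(np.allclose(V @ V_inv, np.eye(len(v))))  # True
```

The same check works for any group sizes, since the algebra uses only $v^\top v = 1 - M_K/N$.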

One might compare this procedure to the standard analysis of variance procedure, which relies heavily on the distribution of the responses. Alternatively, one might perform the ANOVA analysis on the ranks; this procedure does not depend on the distribution of the responses.
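The comparison above can be sketched directly: run the usual ANOVA on the ranks of the combined sample, alongside `scipy.stats.kruskal` (the rank test with scores $a_j = j$). The skewed data and group sizes are illustrative.

```python
# Sketch: ANOVA applied to ranks, alongside the Kruskal-Wallis test,
# on illustrative non-Gaussian data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
groups = [rng.exponential(scale=s, size=12) for s in (1.0, 1.0, 2.0)]

# Replace observations by their ranks in the combined sample, then run ANOVA.
flat = np.concatenate(groups)
ranks = stats.rankdata(flat)
sizes = np.cumsum([0] + [len(g) for g in groups])
rank_groups = [ranks[sizes[i]:sizes[i + 1]] for i in range(len(groups))]

F_ranks, p_ranks = stats.f_oneway(*rank_groups)
H, p_kw = stats.kruskal(*groups)
print(p_ranks, p_kw)  # the two p-values are typically close
```

The two procedures are not identical, since the rank-ANOVA refers a monotone transform of the Kruskal-Wallis statistic to an $F$ rather than a $\chi^2$ reference, but their conclusions usually agree.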
