Kolmogorov–Smirnov test

In statistics, the Kolmogorov–Smirnov test (K–S test) is a nonparametric test for the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The null distribution of this statistic is calculated under the null hypothesis that the samples are drawn from the same distribution (in the two-sample case) or that the sample is drawn from the reference distribution (in the one-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.

The two-sample KS test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.

The Kolmogorov–Smirnov test can be modified to serve as a goodness-of-fit test. In the special case of testing for normality of the distribution, samples are standardized and compared with a standard normal distribution. This is equivalent to setting the mean and variance of the reference distribution equal to the sample estimates, and using these to define the specific reference distribution changes the null distribution of the test statistic: see below. Various studies have found that, even in this corrected form, the test is less powerful for testing normality than the Shapiro–Wilk test or the Anderson–Darling test.[1]

Kolmogorov–Smirnov statistic

The empirical distribution function \( F_n \) for n independent and identically distributed (i.i.d.) observations \( X_i \) is defined as

\( F_n(x)={1 \over n}\sum_{i=1}^n I_{X_i\leq x} \)

where \( I_{X_i\leq x} \) is the indicator function, equal to 1 if Xi ≤ x and equal to 0 otherwise.
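
As a concrete illustration, here is a minimal sketch of the empirical distribution function in Python (NumPy and the name `ecdf` are illustrative choices, not part of the original text):

    import numpy as np

    def ecdf(sample):
        # Return a function computing the empirical CDF F_n of `sample`.
        x_sorted = np.sort(np.asarray(sample, dtype=float))
        n = len(x_sorted)

        def F_n(x):
            # Number of observations X_i <= x, divided by n
            return np.searchsorted(x_sorted, x, side="right") / n

        return F_n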

The Kolmogorov–Smirnov statistic for a given cumulative distribution function F(x) is

\( D_n=\sup_x |F_n(x)-F(x)| \)

where \( \sup_x \) denotes the supremum over all x. By the Glivenko–Cantelli theorem, if the sample comes from distribution F(x), then \( D_n \) converges to 0 almost surely. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below). Donsker's theorem provides an even stronger result.
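
Because \( F_n \) is a step function that jumps only at the order statistics, the supremum is attained at a sample point and can be computed exactly. A minimal sketch, assuming NumPy and SciPy (the function name `ks_statistic` is illustrative):

    import numpy as np
    from scipy.stats import norm

    def ks_statistic(sample, cdf):
        # One-sample K-S statistic D_n against a reference CDF. The
        # supremum of |F_n(x) - F(x)| is attained at an order statistic,
        # approached either from the left or from the right.
        x = np.sort(np.asarray(sample, dtype=float))
        n = len(x)
        F = cdf(x)  # reference CDF evaluated at the order statistics
        d_plus = np.max(np.arange(1, n + 1) / n - F)   # F_n above F
        d_minus = np.max(F - np.arange(0, n) / n)      # F_n below F
        return max(d_plus, d_minus)

    # Example: 100 standard-normal draws against the standard normal CDF
    rng = np.random.default_rng(0)
    print(ks_statistic(rng.standard_normal(100), norm.cdf))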

In practice, the statistic requires a relatively large number of data points for the test to have reasonable power to reject the null hypothesis when it is false.

Kolmogorov distribution

The Kolmogorov distribution is the distribution of the random variable

\( K=\sup_{t\in[0,1]}|B(t)|, \)

where B(t) is the Brownian bridge. The cumulative distribution function of K is given by[2]

\( \operatorname{Pr}(K\leq x)=1-2\sum_{i=1}^\infty (-1)^{i-1} e^{-2i^2 x^2}=\frac{\sqrt{2\pi}}{x}\sum_{i=1}^\infty e^{-(2i-1)^2\pi^2/(8x^2)}. \)
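
For numerical work, the first series truncates well; a minimal sketch (the cutoff of 100 terms is an arbitrary assumption, and the series converges slowly only for very small x):

    import numpy as np

    def kolmogorov_cdf(x, terms=100):
        # Pr(K <= x) via the alternating series, truncated after `terms` terms
        if x <= 0:
            return 0.0
        i = np.arange(1, terms + 1)
        return 1.0 - 2.0 * np.sum((-1.0) ** (i - 1) * np.exp(-2.0 * i**2 * x**2))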

Both the form of the Kolmogorov–Smirnov test statistic and its asymptotic distribution under the null hypothesis were published by Andrey Kolmogorov,[3] while a table of the distribution was published by Nikolai Vasilyevich Smirnov.[4] Recurrence relations for the distribution of the test statistic in finite samples are available.[3]

Kolmogorov–Smirnov test

Under the null hypothesis that the sample comes from the hypothesized distribution F(x),

\( \sqrt{n}D_n\xrightarrow{n\to\infty}\sup_t |B(F(t))| \)

in distribution, where B(t) is the Brownian bridge.

If F is continuous then under the null hypothesis \( \sqrt{n}D_n \) converges to the Kolmogorov distribution, which does not depend on F. This result is sometimes known as Kolmogorov's theorem.

The goodness-of-fit test, or Kolmogorov–Smirnov test, is constructed by using the critical values of the Kolmogorov distribution. The null hypothesis is rejected at level \( \alpha \) if

\( \sqrt{n}\,D_n>K_\alpha, \)

where Kα is found from

\( \operatorname{Pr}(K\leq K_\alpha)=1-\alpha. \)

The asymptotic power of this test is 1.
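
Putting the pieces together, a hedged sketch of the asymptotic level-\( \alpha \) test, reusing the `ks_statistic` and `kolmogorov_cdf` sketches above (`scipy.optimize.brentq` is used only to invert the series numerically; SciPy's own `scipy.stats.kstest` provides a ready-made implementation):

    import numpy as np
    from scipy.optimize import brentq

    def kolmogorov_critical_value(alpha):
        # Solve Pr(K <= K_alpha) = 1 - alpha by root finding on [0.5, 3.0],
        # a bracket that covers the critical values for usual alpha levels.
        return brentq(lambda x: kolmogorov_cdf(x) - (1.0 - alpha), 0.5, 3.0)

    def ks_test(sample, cdf, alpha=0.05):
        n = len(sample)
        d_n = ks_statistic(sample, cdf)
        reject = np.sqrt(n) * d_n > kolmogorov_critical_value(alpha)
        return d_n, reject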

Test with estimated parameters

If either the form or the parameters of F(x) are determined from the data Xi, the critical values determined in this way are invalid. In such cases, Monte Carlo or other methods may be required, but tables have been prepared for some cases. Details of the required modifications to the test statistic and of the critical values for the normal distribution and the exponential distribution have been published by Pearson & Hartley (1972, Table 54). Details for these distributions, with the addition of the Gumbel distribution, are also given by Shorack & Wellner (1986, p. 239). The Lilliefors test represents a special case of this for the normal distribution.
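
As an illustration of the Monte Carlo route, a parametric-bootstrap sketch in the spirit of the Lilliefors test for normality; the essential point is that the mean and variance are re-estimated inside every simulated sample (function names are illustrative, and `ks_statistic` is the sketch from above):

    import numpy as np
    from scipy.stats import norm

    def lilliefors_pvalue(sample, n_sim=2000, seed=0):
        # Monte Carlo p-value for normality with estimated mean and sd.
        rng = np.random.default_rng(seed)
        x = np.asarray(sample, dtype=float)
        n = len(x)

        def stat(y):
            # K-S distance to the normal distribution fitted to y itself
            m, s = y.mean(), y.std(ddof=1)
            return ks_statistic(y, lambda t: norm.cdf(t, loc=m, scale=s))

        d_obs = stat(x)
        # Re-fitting in each replicate reproduces the correct null
        # distribution, which is not the Kolmogorov distribution.
        d_sim = np.array([stat(rng.normal(size=n)) for _ in range(n_sim)])
        return float(np.mean(d_sim >= d_obs))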

Two-sample Kolmogorov–Smirnov test

The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is

\( D_{n,n'}=\sup_x |F_{1,n}(x)-F_{2,n'}(x)|, \)

where \( F_{1,n} \) and \( F_{2,n'} \) are the empirical distribution functions of the first and the second sample respectively.

The null hypothesis is rejected at level \( \alpha \) if

\( \sqrt{\frac{n n'}{n + n'}}D_{n,n'}>K_\alpha. \)

Note that the two-sample test checks whether the two data samples come from the same distribution. This does not specify what that common distribution is (e.g. normal or not normal).
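
Since both empirical distribution functions are step functions, the supremum can again be computed exactly by evaluating at the pooled sample points. A minimal sketch (SciPy's `scipy.stats.ks_2samp` offers a complete implementation with p-values):

    import numpy as np

    def ks_2sample_statistic(x1, x2):
        # Two-sample K-S statistic: sup over pooled points of |F1 - F2|
        x1, x2 = np.sort(x1), np.sort(x2)
        pooled = np.concatenate([x1, x2])
        F1 = np.searchsorted(x1, pooled, side="right") / len(x1)
        F2 = np.searchsorted(x2, pooled, side="right") / len(x2)
        return np.max(np.abs(F1 - F2))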

Setting confidence limits for the shape of a distribution function

While the Kolmogorov–Smirnov test is usually used to test whether a given F(x) is the underlying probability distribution of \( F_n(x) \), the procedure may be inverted to give confidence limits on F(x) itself. If one chooses a critical value of the test statistic \( D_\alpha \) such that \( \Pr(D_n > D_\alpha) = \alpha \), then a band of width \( \pm D_\alpha \) around \( F_n(x) \) will entirely contain F(x) with probability \( 1-\alpha \).
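
A minimal sketch of the resulting band, using the asymptotic approximation \( D_\alpha \approx K_\alpha/\sqrt{n} \) (an assumption; exact small-sample values differ) together with the `kolmogorov_critical_value` sketch above:

    import numpy as np

    def ks_confidence_band(sample, alpha=0.05):
        # Asymptotic 1-alpha band for F: F_n(x) +/- K_alpha/sqrt(n),
        # clipped to [0, 1], evaluated at the order statistics.
        x = np.sort(np.asarray(sample, dtype=float))
        n = len(x)
        F_n = np.arange(1, n + 1) / n
        d_alpha = kolmogorov_critical_value(alpha) / np.sqrt(n)
        return x, np.clip(F_n - d_alpha, 0, 1), np.clip(F_n + d_alpha, 0, 1)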

The Kolmogorov–Smirnov statistic in more than one dimension

The Kolmogorov–Smirnov test statistic needs to be modified if a similar test is to be applied to multivariate data. This is not straightforward because the maximum difference between two joint cumulative distribution functions is not generally the same as the maximum difference of any of the complementary distribution functions. Thus the maximum difference will differ depending on which of \( \Pr(x < X \land y < Y) \) or \( \Pr(X < x \land Y > y) \) or either of the other two possible arrangements is used. One might require that the result of the test not depend on which choice is made.

One approach to generalizing the Kolmogorov–Smirnov statistic to higher dimensions which meets the above concern is to compare the cdfs of the two samples with all possible orderings, and take the largest of the set of resulting K–S statistics. In d dimensions, there are \( 2^d-1 \) such orderings. One such variation is due to Peacock (1983) and another to Fasano & Franceschini (1987): see Lopes et al. (2007) for a comparison and computational details. Critical values for the test statistic can be obtained by simulations, but depend on the dependence structure in the joint distribution.
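
As a rough two-dimensional illustration, a brute-force quadrant-counting sketch in the spirit of Fasano & Franceschini (1987); the published algorithm and its critical values differ in detail, so this is a sketch of the idea rather than their method:

    import numpy as np

    def ks_2d_statistic(a, b):
        # a, b: arrays of shape (n, 2) and (m, 2). For every point of the
        # pooled sample, compare the fraction of each sample falling in
        # each of the four open quadrants anchored at that point.
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        d = 0.0
        for x, y in np.vstack([a, b]):
            for cmp_x in (np.less, np.greater):
                for cmp_y in (np.less, np.greater):
                    fa = np.mean(cmp_x(a[:, 0], x) & cmp_y(a[:, 1], y))
                    fb = np.mean(cmp_x(b[:, 0], x) & cmp_y(b[:, 1], y))
                    d = max(d, abs(fa - fb))
        return d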

See also

Cramér–von Mises test
Jarque–Bera test
Kuiper's test
Siegel–Tukey test
Dvoretzky–Kiefer–Wolfowitz inequality
P-value

Footnotes

1. Stephens, M. A. (1974). "EDF Statistics for Goodness of Fit and Some Comparisons". Journal of the American Statistical Association, 69 (347): 730–737. doi:10.2307/2286009. JSTOR 2286009.
2. Marsaglia, G.; Tsang, W. W.; Wang, J. (2003). "Evaluating Kolmogorov's Distribution". Journal of Statistical Software, 8 (18), 1–4.
3. Kolmogorov, A. (1933). "Sulla determinazione empirica di una legge di distribuzione" [On the empirical determination of a distribution law]. Giornale dell'Istituto Italiano degli Attuari, 4, 83.
4. Smirnov, N. V. (1948). "Tables for estimating the goodness of fit of empirical distributions". Annals of Mathematical Statistics, 19, 279.

References

Eadie, W.T.; Drijard, D.; James, F.E.; Roos, M.; Sadoulet, B. (1971). Statistical Methods in Experimental Physics. Amsterdam: North-Holland. pp. 269–271. ISBN 0444101179.
Stuart, Alan; Ord, Keith; Arnold, Steven F. (1999). Classical Inference and the Linear Model. Kendall's Advanced Theory of Statistics, 2A (Sixth ed.). London: Arnold. pp. 25.37–25.43. ISBN 0-340-66230-1. MR1687411.
Corder, G.W.; Foreman, D.I. (2009). Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach. Wiley. ISBN 9780470454619.
Pearson, E.S.; Hartley, H.O. (eds.) (1972). Biometrika Tables for Statisticians, Volume II. Cambridge University Press. ISBN 0-521-06937-8.
Shorack, G.R.; Wellner, J.A. (1986). Empirical Processes with Applications to Statistics. Wiley. ISBN 0-471-86725-X.
Stephens, M.A. (1979). "Test of fit for the logistic distribution based on the empirical distribution function". Biometrika, 66 (3), 591–595.
Peacock, J. A. (1983). "Two-dimensional goodness-of-fit testing in astronomy". Monthly Notices of the Royal Astronomical Society, 202, 615–627.
Fasano, G.; Franceschini, A. (1987). "A multidimensional version of the Kolmogorov–Smirnov test". Monthly Notices of the Royal Astronomical Society, 225, 155–170.
Lopes, R.H.C.; Reid, I.; Hobson, P.R. (2007). "The two-dimensional Kolmogorov–Smirnov test". XI International Workshop on Advanced Computing and Analysis Techniques in Physics Research (April 23–27, 2007), Amsterdam, the Netherlands.

