Kuiper's test

Kuiper's test is used in statistics to test whether a data sample came from a given distribution (one-sample Kuiper test), or whether two data samples came from the same unknown distribution (two-sample Kuiper test). It is named after Dutch mathematician Nicolaas Kuiper.[1]

Kuiper's test is closely related to the better-known Kolmogorov–Smirnov test (or K-S test as it is often called). As with the K-S test, the discrepancy statistics D+ and D− represent the absolute sizes of the most positive and most negative differences between the two cumulative distribution functions that are being compared. The trick with Kuiper's test is to use the quantity D+ + D− as the test statistic. This small change makes Kuiper's test as sensitive in the tails as at the median and also makes it invariant under cyclic transformations of the independent variable. The Anderson–Darling test is another test that provides equal sensitivity at the tails and the median, but it does not provide the cyclic invariance.
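For concreteness, the one-sample statistic can be computed directly from the order statistics of the sample. The sketch below is illustrative only; the function name kuiper_statistic and the use of NumPy are assumptions of this example, not part of any standard library API.

```python
import numpy as np

def kuiper_statistic(data, cdf):
    # Illustrative sketch of the one-sample Kuiper statistic V = D+ + D-.
    # data: 1-D array of observations; cdf: hypothesised cumulative distribution function.
    x = np.sort(np.asarray(data))
    n = len(x)
    f = cdf(x)                           # hypothesised CDF evaluated at the sorted sample
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - f)           # most positive difference (empirical CDF above F)
    d_minus = np.max(f - (i - 1) / n)    # most negative difference (empirical CDF below F)
    return d_plus + d_minus
```

Here D+ and D− are evaluated at the jump points of the empirical distribution function, which is where the extreme deviations occur; the Kolmogorov–Smirnov statistic would instead be max(D+, D−).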

This invariance under cyclic transformations makes Kuiper's test invaluable when testing for cyclic variations by time of year, day of the week, or time of day, and more generally for testing the fit of, and differences between, circular probability distributions.
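To illustrate that invariance, the sketch below reuses the kuiper_statistic helper defined above and assumes a uniform null distribution on the unit circle [0, 1); rotating the origin of the circle leaves the statistic unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
phases = rng.uniform(0.0, 1.0, size=500)   # e.g. event times folded onto a [0, 1) cycle
shifted = (phases + 0.37) % 1.0            # same data after an arbitrary rotation of the origin

v_original = kuiper_statistic(phases, lambda t: t)   # uniform CDF on [0, 1)
v_shifted = kuiper_statistic(shifted, lambda t: t)

# The Kuiper statistic is unchanged (up to floating-point error) by the rotation,
# whereas the Kolmogorov–Smirnov statistic generally is not.
assert np.isclose(v_original, v_shifted)
```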

  1. ^ Kuiper, N. H. (1960). "Tests concerning random points on a circle". Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen, Series A. 63: 38–47.