Probability density function (plot omitted)

| Parameters | $q < 3$, shape (real); $\beta > 0$ (real) |
|---|---|
| Support | $x \in (-\infty, +\infty)$ for $1 \le q < 3$; $x \in \left[-\frac{1}{\sqrt{\beta(1-q)}},\, \frac{1}{\sqrt{\beta(1-q)}}\right]$ for $q < 1$ |
| PDF | $\frac{\sqrt{\beta}}{C_q}\, e_q(-\beta x^2)$ |
| CDF | see text |
| Mean | $0$ for $q < 2$, otherwise undefined |
| Median | $0$ |
| Mode | $0$ |
| Variance | $\frac{1}{\beta(5-3q)}$ for $q < \frac{5}{3}$; $\infty$ for $\frac{5}{3} \le q < 2$; undefined for $2 \le q < 3$ |
| Skewness | $0$ for $q < \frac{3}{2}$, otherwise undefined |
| Excess kurtosis | $6\,\frac{q-1}{7-5q}$ for $q < \frac{7}{5}$, otherwise undefined |
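The density in the table can be evaluated directly. Below is a minimal Python sketch (function names are ours) using the $q$-exponential $e_q(x) = [1 + (1-q)x]_+^{1/(1-q)}$ and the standard closed forms of the normalizing constant $C_q$ for $q < 1$, $q = 1$, and $1 < q < 3$:

```python
import math

def q_exponential(x: float, q: float) -> float:
    """q-deformed exponential e_q(x) = [1 + (1-q) x]_+^(1/(1-q)); e_1 = exp."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def c_q(q: float) -> float:
    """Normalizing constant C_q of the q-Gaussian, for q < 3."""
    if q < 1.0:
        return (2.0 * math.sqrt(math.pi) * math.gamma(1.0 / (1.0 - q))
                / ((3.0 - q) * math.sqrt(1.0 - q)
                   * math.gamma((3.0 - q) / (2.0 * (1.0 - q)))))
    if abs(q - 1.0) < 1e-12:
        return math.sqrt(math.pi)
    return (math.sqrt(math.pi)
            * math.gamma((3.0 - q) / (2.0 * (q - 1.0)))
            / (math.sqrt(q - 1.0) * math.gamma(1.0 / (q - 1.0))))

def q_gaussian_pdf(x: float, q: float, beta: float) -> float:
    """Density f(x) = sqrt(beta)/C_q * e_q(-beta x^2), for q < 3 and beta > 0."""
    return math.sqrt(beta) / c_q(q) * q_exponential(-beta * x * x, q)
```

As sanity checks: for $q = 1$, $\beta = \tfrac12$ this reduces to the standard normal density, and for $q = 2$, $\beta = 1$ it reduces to the standard Cauchy density $\frac{1}{\pi(1+x^2)}$.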
The q-Gaussian is a probability distribution arising from the maximization of the Tsallis entropy under appropriate constraints. It is one example of a Tsallis distribution. The q-Gaussian is a generalization of the Gaussian in the same way that Tsallis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy.[1] The normal distribution is recovered as q → 1.
The q-Gaussian has been applied to problems in the fields of statistical mechanics, geology, anatomy, astronomy, economics, finance, and machine learning.[citation needed] The distribution is often favored for its heavy tails in comparison to the Gaussian for 1 < q < 3. For q < 1 the q-Gaussian is the PDF of a bounded random variable, which makes it more suitable than the Gaussian for modeling the effect of external stochasticity in biology and other domains.[2] A generalized q-analog of the classical central limit theorem[3] was proposed in 2008, in which the independence constraint for the i.i.d. variables is relaxed to an extent defined by the q parameter, with independence recovered as q → 1. However, a proof of such a theorem is still lacking.[4]
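The two regimes described above can be illustrated numerically with the (unnormalized) $q$-exponential shape of the density; the specific values of $q$, $\beta$, and $x$ below are illustrative choices of ours:

```python
import math

def e_q(x: float, q: float) -> float:
    """q-exponential [1 + (1-q) x]_+^(1/(1-q)) for q != 1: the shape of the q-Gaussian."""
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

beta = 1.0

# q < 1: bounded support; the density vanishes for |x| >= 1/sqrt(beta (1 - q)).
q = 0.5
edge = 1.0 / math.sqrt(beta * (1.0 - q))
inside = e_q(-beta * (0.9 * edge) ** 2, q)   # positive just inside the support
outside = e_q(-beta * (1.1 * edge) ** 2, q)  # exactly 0 outside the support

# 1 < q < 3: heavy power-law tails ~ |x|^(-2/(q-1)) instead of exp(-beta x^2).
q = 2.0
q_tail = e_q(-beta * 10.0 ** 2, q)       # (1 + 100)^-1, roughly 1e-2
gauss_tail = math.exp(-beta * 10.0 ** 2)  # roughly 4e-44
```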
In the heavy-tail regions, the distribution is equivalent to the Student's t-distribution, with a direct mapping between q and the degrees of freedom. A practitioner using one of these distributions can therefore parameterize the same distribution in two different ways. The choice of the q-Gaussian form may arise if the system is non-extensive, or if there is a lack of a connection to small sample sizes.
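The mapping can be checked directly: matching exponents and bases of the two densities gives that a Student's t-distribution with ν degrees of freedom coincides with the q-Gaussian having q = (ν + 3)/(ν + 1) and β = 1/(3 − q). A minimal verification (function names are ours), comparing the two closed-form densities point by point:

```python
import math

def student_t_pdf(x: float, nu: float) -> float:
    """Standard Student's t density with nu degrees of freedom."""
    coef = (math.gamma((nu + 1.0) / 2.0)
            / (math.sqrt(nu * math.pi) * math.gamma(nu / 2.0)))
    return coef * (1.0 + x * x / nu) ** (-(nu + 1.0) / 2.0)

def q_gaussian_pdf(x: float, q: float, beta: float) -> float:
    """q-Gaussian density for 1 < q < 3: sqrt(beta)/C_q * [1 + beta (q-1) x^2]^(-1/(q-1))."""
    c_q = (math.sqrt(math.pi)
           * math.gamma((3.0 - q) / (2.0 * (q - 1.0)))
           / (math.sqrt(q - 1.0) * math.gamma(1.0 / (q - 1.0))))
    return math.sqrt(beta) / c_q * (1.0 + beta * (q - 1.0) * x * x) ** (-1.0 / (q - 1.0))

# Student's t with nu = 4 is the q-Gaussian with q = 7/5 and beta = 1/(3 - q).
nu = 4.0
q = (nu + 3.0) / (nu + 1.0)
beta = 1.0 / (3.0 - q)
for x in (0.0, 1.0, 2.5):
    assert abs(student_t_pdf(x, nu) - q_gaussian_pdf(x, q, beta)) < 1e-12
```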