Foundations of statistics

The foundations of statistics are the mathematical and philosophical bases for statistical methods. These bases are the theoretical frameworks that ground and justify methods of statistical inference, estimation, hypothesis testing, uncertainty quantification, and the interpretation of statistical conclusions. A foundation can also be used to explain statistical paradoxes, provide descriptions of statistical laws,[1] and guide the application of statistics to real-world problems.

Different statistical foundations may provide different, contrasting perspectives on the analysis and interpretation of data, and some of these contrasts have been subject to centuries of debate.[2] Examples include Bayesian inference versus frequentist inference; the distinction between Fisher's significance testing and Neyman-Pearson hypothesis testing; and whether the likelihood principle holds.
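The Bayesian-frequentist contrast can be made concrete with a minimal sketch (the data and the choice of a uniform prior are illustrative assumptions, not from the article): estimating a coin's heads probability, a frequentist maximum-likelihood estimate versus a Bayesian posterior mean under a uniform Beta(1, 1) prior.

```python
# Estimating a coin's heads probability p from observed flips
# (illustrative numbers, not from the article).
heads, flips = 7, 10

# Frequentist point estimate: the maximum-likelihood estimate
# for a binomial proportion is the sample proportion.
mle = heads / flips  # 0.7

# Bayesian point estimate: with a uniform Beta(1, 1) prior, the
# posterior is Beta(heads + 1, flips - heads + 1), whose mean is
# (heads + 1) / (flips + 2).
posterior_mean = (heads + 1) / (flips + 2)  # 8/12 ~ 0.667

print(f"MLE: {mle:.3f}, Bayesian posterior mean: {posterior_mean:.3f}")
```

With more data the two point estimates converge; the foundational disagreements concern the interpretation of probability and of the resulting intervals rather than the arithmetic itself.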

Certain frameworks may be preferred for specific applications, such as the use of Bayesian methods in fitting complex ecological models.[3]

Bandyopadhyay & Forster[4] identify four statistical paradigms: classical statistics (error statistics), Bayesian statistics, likelihood-based statistics, and information-based statistics using the Akaike information criterion. More recently, Judea Pearl developed a formal mathematical treatment of causality in statistical systems, addressing limitations of both the Bayesian and Neyman-Pearson approaches, as discussed in his book Causality.
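As a concrete illustration of the information-based paradigm (a sketch with made-up log-likelihood values, not taken from the source), the Akaike information criterion scores a fitted model as AIC = 2k - 2 ln L, where k is the number of estimated parameters and L is the maximized likelihood; the model with the lower AIC is preferred.

```python
def aic(k, log_likelihood):
    """Akaike information criterion: AIC = 2k - 2 * ln(L_hat)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical comparison of a 2-parameter model against a 5-parameter
# model; the log-likelihoods are illustrative values only.
aic_simple = aic(2, -120.0)    # 2*2 - 2*(-120.0) = 244.0
aic_complex = aic(5, -118.5)   # 2*5 - 2*(-118.5) = 247.0

# Despite the slightly better fit of the complex model, AIC's penalty
# on extra parameters favors the simpler one here.
best = "simple" if aic_simple < aic_complex else "complex"
print(best, aic_simple, aic_complex)
```

The penalty term 2k is what distinguishes this paradigm from pure likelihood-based comparison: goodness of fit is traded off against model complexity.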

  1. ^ Kitcher & Salmon 2009, p. 51.
  2. ^ Efron 1978.
  3. ^ van de Schoot, Rens; Depaoli, Sarah; King, Ruth; Kramer, Bianca; Märtens, Kaspar; Tadesse, Mahlet G.; Vannucci, Marina; Gelman, Andrew; Veen, Duco; Willemsen, Joukje; Yau, Christopher (2021-01-14). "Bayesian statistics and modelling". Nature Reviews Methods Primers. 1 (1). doi:10.1038/s43586-020-00001-2. hdl:20.500.11820/9fc72a0b-33e4-4a9c-bdb7-d88dab16f621. ISSN 2662-8449.
  4. ^ Bandyopadhyay & Forster 2011.