| Ray Solomonoff | |
| --- | --- |
| Born | July 25, 1926 |
| Died | December 7, 2009 (aged 83) |
| Alma mater | University of Chicago (M.S. in Physics, 1951) |
| Known for | Algorithmic probability, General Theory of Inductive Inference, Solomonoff induction, Kolmogorov complexity |
| Notable work | "A Formal Theory of Inductive Inference" (1964), concept of algorithmic probability, foundational work on machine learning |
| Awards | Kolmogorov Award (2003) |
| **Scientific career** | |
| Fields | Mathematics, artificial intelligence, algorithmic information theory |
| Institutions | Oxbridge Research, MIT, University of Saarland, Dalle Molle Institute for Artificial Intelligence |
Ray Solomonoff (July 25, 1926 – December 7, 2009)[1][2] was an American mathematician who invented algorithmic probability[3] and the General Theory of Inductive Inference (also known as Universal Inductive Inference),[4] and who was a founder of algorithmic information theory.[5] He was an originator of the branch of artificial intelligence based on machine learning, prediction, and probability, and he circulated the first report on non-semantic machine learning in 1956.[6]
Solomonoff first described algorithmic probability in 1960, publishing the theorem that launched Kolmogorov complexity and algorithmic information theory. He presented these results at a conference at Caltech in 1960[7] and in the February 1960 report "A Preliminary Report on a General Theory of Inductive Inference."[8] He developed these ideas more fully in his 1964 publications, "A Formal Theory of Inductive Inference," Part I[9] and Part II.[10]
Algorithmic probability is a mathematically formalized combination of Occam's razor[11][12][13][14] and the Principle of Multiple Explanations.[15] It is a machine-independent method of assigning a probability value to each hypothesis (algorithm/program) that explains a given observation, with the simplest hypothesis (the shortest program) receiving the highest probability and increasingly complex hypotheses receiving correspondingly smaller probabilities.
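In modern notation (the symbols below are introduced here for illustration and are not taken from Solomonoff's papers as cited above), algorithmic probability is standardly written as a weighted sum over programs:

```latex
% Universal (algorithmic) prior probability of a finite binary string x:
% a sum over all programs p whose output on a universal prefix machine U
% begins with x; \ell(p) denotes the length of program p in bits.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}
```

Each additional bit of program length halves a hypothesis's weight, which is the precise sense in which the shortest consistent program dominates.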
Solomonoff founded the theory of universal inductive inference, which rests on solid philosophical foundations[4] and has its roots in Kolmogorov complexity and algorithmic information theory. The theory uses algorithmic probability in a Bayesian framework: the universal prior is taken over the class of all computable measures, so no hypothesis has zero probability. This enables Bayes' rule (of causation) to be used to predict the most likely next event in a sequence of events, and how likely it will be.[10]
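As a rough illustration of this Bayesian mechanics, the Python sketch below swaps the class of all computable measures for a tiny enumerable hypothesis class ("repeat this bit pattern forever"), keeping the description-length prior 2^(-length). It is a minimal toy under those assumptions, not Solomonoff's construction, which is only semicomputable and cannot be evaluated exactly:

```python
from itertools import product

def hypotheses(max_len=8):
    """Enumerate 'repeat this pattern forever' hypotheses, each with
    prior weight 2^(-pattern length), echoing the universal prior's
    bias toward shorter descriptions."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), 2.0 ** (-n)

def consistent(pattern, observed):
    """True if repeating `pattern` reproduces the observed prefix."""
    return all(observed[i] == pattern[i % len(pattern)]
               for i in range(len(observed)))

def prob_next_one(observed, max_len=8):
    """Posterior probability that the next bit is '1': prior mass of
    the surviving hypotheses that predict '1', normalized (Bayes)."""
    mass = {"0": 0.0, "1": 0.0}
    for pattern, prior in hypotheses(max_len):
        if consistent(pattern, observed):
            next_bit = pattern[len(observed) % len(pattern)]
            mass[next_bit] += prior
    total = mass["0"] + mass["1"]
    return mass["1"] / total if total else 0.5

# The shortest consistent pattern ("01") dominates the posterior,
# so the next bit is predicted to be '1' with high probability.
print(prob_next_one("0101010"))  # ≈ 0.97
```

Run on the prefix "0101010", the posterior probability that the next bit is "1" comes out near 0.97: the two-bit pattern "01" carries far more prior weight than any longer consistent pattern, mirroring the simplicity bias described above.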
Although best known for algorithmic probability and his general theory of inductive inference, Solomonoff made many other important discoveries throughout his life, most of them directed toward his goal in artificial intelligence: to develop a machine that could solve hard problems using probabilistic methods.