Inferential theory of learning

[Figure: classification of inference types, with inference at the center of all categories.]

Inferential Theory of Learning (ITL) is an area of machine learning that describes the inferential processes performed by learning agents. ITL was developed by Ryszard S. Michalski, starting in the 1980s; its first known publication appeared in 1983.[1] In ITL, learning is viewed as a search (inference) through a hypothesis space guided by a specific goal. The results of learning are stored and later used by the learner for future inferences.[2] Inferences are divided into several categories, including conclusive inference, deduction, and induction; for an inference to be considered complete, all categories must be taken into account.[3] This is how ITL differs from other machine-learning theories, such as computational learning theory and statistical learning theory, each of which relies on a single form of inference.
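The view of learning as goal-guided search through a hypothesis space, with stored results reused for later inference, can be illustrated with a toy sketch. This is not Michalski's formal ITL framework; the hypothesis representation (conjunctions over binary attributes, with `?` as a wildcard) and all function names are illustrative assumptions.

```python
# Illustrative sketch only (not Michalski's formal ITL): learning as a
# search through a hypothesis space, guided by the goal of fitting the
# training examples, with the learned result stored for later inference.
from itertools import product

# Toy hypothesis space: conjunctions over two binary attributes.
# '?' means "any value", so ('1', '?') reads "first attribute is 1".
HYPOTHESIS_SPACE = list(product(['0', '1', '?'], repeat=2))

def covers(hypothesis, example):
    """Deductive step: does the hypothesis entail this example?"""
    return all(h in ('?', x) for h, x in zip(hypothesis, example))

def learn(examples):
    """Inductive step: search the space for hypotheses consistent
    with all labeled examples (the learning goal)."""
    return [h for h in HYPOTHESIS_SPACE
            if all(covers(h, x) == label for x, label in examples)]

# Positive/negative training examples over attributes (a, b).
examples = [(('1', '0'), True), (('1', '1'), True), (('0', '0'), False)]

knowledge = learn(examples)   # stored result of learning: [('1', '?')]
print(knowledge)
# Future deductive inference from the stored knowledge:
print(covers(knowledge[0], ('1', '1')))   # True
```

Here induction generalizes from examples to a hypothesis, while the stored hypothesis later supports deduction about new instances, mirroring the division of inference types described above.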

  1. ^ Michalski, Ryszard S. (1993). "Inferential theory of learning as a conceptual basis for multistrategy learning". Machine Learning. 11 (2–3): 111–151. doi:10.1007/bf00993074. ISSN 0885-6125.
  2. ^ "Inferential Theory of Learning – GMU Machine Learning and Inference Laboratory". www.mli.gmu.edu. Retrieved 2018-12-04.
  3. ^ Naidenova, Xenia (2010). Machine Learning Methods for Commonsense Reasoning Processes: Interactive Models. Hershey, PA: Information Science Reference. ISBN 9781605668109. OCLC 606360112.