In artificial intelligence, with implications for cognitive science, the frame problem describes an issue with using first-order logic to express facts about a robot in the world. Representing the state of a robot with traditional first-order logic requires many axioms that simply state that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a first-order logic system, additional axioms are required to make inferences about the environment (for example, that a block cannot change position unless it is physically moved). The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment.[1]
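To make this concrete, consider a minimal situation-calculus sketch of a block world; the fluents on and color and the single action move are illustrative assumptions, not part of the original formulation. The effect axiom describes what move changes, while the frame axioms must assert, fact by fact, everything it leaves alone:

```latex
% Effect axiom: moving block x to location l puts x on l.
\forall x, l, s \;\; on(x, l, do(move(x, l), s))

% Frame axioms: everything the action does NOT change
% must be asserted explicitly.
\forall x, c, l, s \;\; color(x, c, s) \rightarrow color(x, c, do(move(x, l), s))
\forall x, y, l, l', s \;\; on(x, l, s) \land x \neq y \rightarrow on(x, l, do(move(y, l'), s))
```

With m actions and n fluents, on the order of m·n such frame axioms are needed, and this combinatorial burden is exactly what the frame problem names.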
John McCarthy and Patrick J. Hayes defined this problem in their 1969 article, "Some Philosophical Problems from the Standpoint of Artificial Intelligence". In this paper, and in many that came after, the formal mathematical problem was a starting point for more general discussions of the difficulty of knowledge representation for artificial intelligence, including how to provide rational default assumptions and how to capture what humans consider common sense in a virtual environment.[2]
In philosophy, the frame problem became more broadly construed in connection with the problem of limiting the beliefs that have to be updated in response to actions. In the logical context, actions are typically specified by what they change, with the implicit assumption that everything else (the frame) remains unchanged.
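One common way to build this implicit assumption into the machinery itself is the STRIPS-style convention, in which an action lists only the facts it adds and deletes and every unmentioned fact persists by default. The sketch below is a minimal illustration of that convention, not any particular planner's API; the tuple-based state encoding and the move(A, B) action are assumptions chosen to match the block-world example above.

```python
# Illustrative STRIPS-style state update: an action lists only what it
# changes; every fact outside its add/delete lists (the "frame") persists.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    delete: frozenset = field(default_factory=frozenset)  # facts the action removes
    add: frozenset = field(default_factory=frozenset)     # facts the action asserts

def apply(state: frozenset, action: Action) -> frozenset:
    """Return the successor state: drop deleted facts, add new ones.

    No frame axioms appear anywhere; persistence of the rest of the
    state is built into the update rule itself.
    """
    return (state - action.delete) | action.add

# Hypothetical block-world example: move block A from the table onto B.
state = frozenset({("on", "A", "table"), ("on", "B", "table"), ("color", "A", "red")})
move_a_onto_b = Action(
    name="move(A, B)",
    delete=frozenset({("on", "A", "table")}),
    add=frozenset({("on", "A", "B")}),
)

new_state = apply(state, move_a_onto_b)
assert ("color", "A", "red") in new_state  # unmentioned facts carry over unchanged
```

The design choice is that persistence moves out of the axioms and into the semantics of the update rule, trading logical generality for a compact action description.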