The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science[1] that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will.[2][3] Furthermore, the field is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures; see artificial life), so the discipline is of considerable interest to philosophers.[4] These factors contributed to the emergence of the philosophy of artificial intelligence.
The philosophy of artificial intelligence attempts to answer questions such as the following:[5]
Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are (i.e., does it have qualia)?
Questions like these reflect the divergent interests of AI researchers, cognitive scientists, and philosophers, respectively. The scientific answers to these questions depend on how "intelligence" and "consciousness" are defined, and on exactly which "machines" are under discussion.
Important propositions in the philosophy of AI include the following:
Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.[6]
The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."[7]
John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[9]
Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."[10]
Müller, Vincent C. (2023). "Philosophy of AI: A structured overview". In Nathalie A. Smuha (ed.), Cambridge Handbook on the Law, Ethics and Policy of Artificial Intelligence.
Bringsjord, Selmer; Govindarajulu, Naveen Sundar (2018). "Artificial Intelligence". In Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2018 ed.). Metaphysics Research Lab, Stanford University. Archived from the original on 2019-11-09; retrieved 2018-09-18.
Russell & Norvig 2003, p. 947, define the philosophy of AI as consisting of the first two questions and the additional question of the ethics of artificial intelligence. Fearn 2007, p. 55, writes: "In the current literature, philosophy has two chief roles: to determine whether or not such machines would be conscious, and, second, to predict whether or not such machines are possible." The last question bears on the first two.
This version is from Searle (1999) and is also quoted in Dennett 1991, p. 435. Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states." (Searle 1980, p. 1). Strong AI is defined similarly by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis."
Hobbes 1651, chap. 5.