Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems.
Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] The model was later used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
Schank developed the model to represent knowledge for natural language input into computers. Partly influenced by the work of Sydney Lamb, his goal was to make the meaning independent of the words used in the input, i.e. two sentences identical in meaning would have a single representation. The system was also intended to draw logical inferences.[2]
The model builds its representations from a few basic token types: real-world objects (each with attributes), real-world actions (each with attributes), times, and locations.[3]
A small set of primitive acts then operates on these tokens. For example, ATRANS represents the transfer of an abstract relationship such as possession ("give", "take"), PTRANS represents a change of physical location ("move", "go"), and MTRANS represents the transfer of mental information ("tell").
A sentence such as "John gave a book to Mary" is then represented as an ATRANS act in which one real-world object, John, transfers possession of another, the book, to a third, Mary.
| Action | Description | Example |
|---|---|---|
| ATRANS | Transfer of an abstract relationship | give |
| PTRANS | Transfer of the physical location of an object | go |
| PROPEL | Application of physical force to an object | push |
| GRASP | Grasping of an object by an actor | grasp |
| MOVE | Movement of a body part by its owner | kick |
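The canonical-representation idea above can be sketched in code: if two sentences mean the same thing, a conceptual dependency system should reduce them to one and the same structure. The following Python sketch is illustrative only; the `Conceptualization` frame, its role names, and the hand-coded `parse` lookup are assumptions standing in for a real natural-language front end, not Schank's original notation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Conceptualization:
    """A primitive act plus its role fillers (hypothetical frame layout)."""
    act: str        # a primitive act, e.g. "ATRANS"
    actor: str      # who performs the act
    obj: str        # the object acted upon
    source: str     # donor / starting point of the transfer
    recipient: str  # receiver / end point of the transfer

def parse(sentence: str) -> Conceptualization:
    """Toy stand-in for a parser: a canned mapping from surface
    sentences to their single underlying conceptualization."""
    book_transfer = Conceptualization(
        act="ATRANS", actor="John", obj="book",
        source="John", recipient="Mary",
    )
    canned = {
        # Active and passive phrasings mean the same thing,
        # so both map to the identical structure.
        "John gave a book to Mary": book_transfer,
        "Mary was given a book by John": book_transfer,
    }
    return canned[sentence]

active = parse("John gave a book to Mary")
passive = parse("Mary was given a book by John")
print(active == passive)  # True: one meaning, one representation
```

The point of the design is that inference rules need only be written once, against the ATRANS structure, rather than separately for every verb ("give", "hand", "donate") or surface form that expresses the same transfer.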