Backward induction is the process of determining a sequence of optimal choices by reasoning backward, one decision at a time, from the endpoint of a problem or situation to its beginning.[1] It involves first identifying the optimal action at the final decision point and then working backward, determining the best action at each earlier point given the optimal choices that follow, until the best action for every possible point along the sequence is determined. Backward induction was first used in 1875 by Arthur Cayley, who discovered the method while attempting to solve the secretary problem.[2]
In dynamic programming, a method of mathematical optimization, backward induction is used for solving the Bellman equation.[3][4] In the related fields of automated planning and scheduling, and automated theorem proving, the method is called backward search or backward chaining. In chess, it is called retrograde analysis.
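As an illustration of backward induction in the dynamic-programming setting, the following Python sketch solves a small finite-horizon decision problem by evaluating the Bellman equation backward from the final stage. The states, actions, payoffs, transitions, and horizon are hypothetical values chosen only for this example.

```python
# Backward induction for a small finite-horizon decision problem.
# All problem data (states, actions, rewards, transitions, horizon) are
# hypothetical values chosen purely for illustration.

STATES = ["low", "high"]
ACTIONS = ["wait", "invest"]
HORIZON = 3  # number of decision stages

def reward(state, action):
    # Immediate payoff of taking `action` in `state` (illustrative numbers).
    table = {
        ("low", "wait"): 0, ("low", "invest"): -1,
        ("high", "wait"): 1, ("high", "invest"): 3,
    }
    return table[(state, action)]

def transition(state, action):
    # Deterministic next state (illustrative dynamics).
    return "high" if action == "invest" else state

# Value of each state after the final decision stage.
value = {s: 0 for s in STATES}
policy = []  # policy[t][s] = optimal action at stage t in state s

# Work backward from the last stage to the first, applying the Bellman equation:
# V_t(s) = max_a [ r(s, a) + V_{t+1}(next(s, a)) ]
for t in reversed(range(HORIZON)):
    new_value, stage_policy = {}, {}
    for s in STATES:
        best_action, best_value = max(
            ((a, reward(s, a) + value[transition(s, a)]) for a in ACTIONS),
            key=lambda pair: pair[1],
        )
        new_value[s], stage_policy[s] = best_value, best_action
    value = new_value
    policy.insert(0, stage_policy)

print("Optimal values at the first stage:", value)
print("Optimal policy by stage:", policy)
```

The loop mirrors the verbal description above: the value of the final stage is fixed first, and each earlier stage is evaluated only once the values of all later stages are known.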
In game theory, a variant of backward induction is used to compute subgame perfect equilibria in sequential games.[5] The difference is that an optimization problem involves a single decision maker who chooses what to do at each point in time, whereas a game-theoretic problem involves the interacting decisions of several players. In this situation, a generalization of backward induction may still apply: what the second-to-last player will do can often be determined by predicting what the last player will do in each situation, and so on. This variant of backward induction has been used to solve formal games from the beginning of game theory. John von Neumann and Oskar Morgenstern suggested solving zero-sum, two-person formal games through this method in their Theory of Games and Economic Behavior (1944), the book which established game theory as a field of study.[6][7]
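To illustrate the game-theoretic variant, the Python sketch below computes the subgame perfect equilibrium of a small two-player sequential game, a hypothetical entry-deterrence game with made-up payoffs, by solving each subgame from the leaves upward: the last mover's choice is determined first, and each earlier mover then best-responds to it.

```python
# Backward induction on a small two-player sequential game, computing a
# subgame perfect equilibrium. The game tree and payoffs are hypothetical
# values chosen only for illustration.

from dataclasses import dataclass

@dataclass
class Leaf:
    payoffs: tuple        # (player 0 payoff, player 1 payoff)

@dataclass
class Node:
    name: str             # label for the decision point
    player: int           # index (0 or 1) of the player who moves here
    children: dict        # maps action name -> Leaf or Node

def solve(node):
    """Return (payoffs, strategy) for the subgame rooted at `node`."""
    if isinstance(node, Leaf):
        return node.payoffs, {}
    best_action, best_payoffs, strategy = None, None, {}
    for action, child in node.children.items():
        payoffs, sub_strategy = solve(child)  # solve the later subgame first
        strategy.update(sub_strategy)
        # The moving player keeps the action that maximizes their own payoff.
        if best_payoffs is None or payoffs[node.player] > best_payoffs[node.player]:
            best_action, best_payoffs = action, payoffs
    strategy[node.name] = best_action
    return best_payoffs, strategy

# Hypothetical entry-deterrence game: an entrant decides whether to enter a
# market; if it enters, the incumbent decides whether to fight or accommodate.
game = Node(
    name="entrant", player=0, children={
        "stay out": Leaf((0, 3)),
        "enter": Node(
            name="incumbent", player=1, children={
                "fight": Leaf((-1, -1)),
                "accommodate": Leaf((2, 1)),
            },
        ),
    },
)

payoffs, strategy = solve(game)
print("Equilibrium payoffs:", payoffs)    # (2, 1)
print("Equilibrium strategy:", strategy)  # incumbent accommodates, entrant enters
```

The recursion follows the reasoning described above: `solve` is called on every subgame before its parent, so the incumbent's optimal reply is fixed first and the entrant then chooses its best action given that predicted reply.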