The horizon effect, also known as the horizon problem, is a problem in artificial intelligence whereby, in many games, the number of possible states or positions is immense and computers can feasibly search only a small portion of them, typically a few plies down the game tree. Thus, a computer searching only a fixed number of plies may make a detrimental move whose bad consequences are invisible to it, because the search never reaches the depth at which the evaluation function would reveal the true value of the line (i.e., the consequences lie beyond its "horizon").
When evaluating a large game tree with techniques such as minimax with alpha–beta pruning, search depth is limited for feasibility reasons. However, evaluating a partial tree may give a misleading result: when a significant change lies just beyond the horizon of the search depth, the program falls victim to the horizon effect. A classic example in chess is a program that, facing the inevitable loss of a piece, plays a series of delaying moves such as pawn sacrifices; the capture is pushed beyond the search horizon, so the loss is no longer "seen", even though it has only been postponed at extra cost.
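The mechanism is easy to see in a depth-limited search. The following is a minimal Python sketch of fixed-depth negamax with alpha–beta pruning; the game-state interface (is_terminal, evaluate, legal_moves, apply) is hypothetical and stands in for any two-player zero-sum game, with evaluate() assumed to score the position from the perspective of the side to move.

```python
# Minimal fixed-depth search: negamax with alpha-beta pruning.
# The game-state interface (is_terminal, evaluate, legal_moves,
# apply) is hypothetical; evaluate() is assumed to return a score
# from the perspective of the side to move.

def alphabeta(state, depth, alpha, beta):
    if depth == 0 or state.is_terminal():
        # The horizon: nothing below this node is examined, so the
        # static evaluation here can be badly misleading if a capture
        # or other upheaval sits just one ply deeper.
        return state.evaluate()
    value = float("-inf")
    for move in state.legal_moves():
        # Scores are negated because the opponent moves next.
        value = max(value, -alphabeta(state.apply(move),
                                      depth - 1, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cutoff: the opponent would never allow this line
    return value
```

Everything below the depth-0 cutoff is invisible to the program, which is precisely where the horizon effect arises.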
In 1973, Hans Berliner named this phenomenon, which he and other researchers had observed, the "Horizon Effect."[1] He split the effect into two forms: the Negative Horizon Effect "results in creating diversions which ineffectively delay an unavoidable consequence or make an unachievable one appear achievable." For the "largely overlooked" Positive Horizon Effect, "the program grabs much too soon at a consequence that can be imposed on an opponent at leisure, frequently in a more effective form."
Greedy algorithms tend to suffer from the horizon effect.
The horizon effect can be mitigated by extending the search algorithm with a quiescence search. This gives the search algorithm the ability to look beyond its horizon for a certain class of moves of major importance to the game state, such as captures in chess.
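As an illustration, here is a minimal quiescence-search sketch in the same negamax convention as the earlier fragment, where evaluate() is again assumed to score the position from the perspective of the side to move and capture_moves() is a hypothetical generator of tactically volatile moves (e.g., captures). Rather than trusting the static evaluation at the horizon, the search keeps following such moves until the position is quiet.

```python
# Quiescence search in negamax form. Instead of returning the
# static evaluation at the horizon, the search continues along
# tactically volatile moves until the position is quiet.
# capture_moves() is a hypothetical generator of such moves.

def quiescence(state, alpha, beta):
    stand_pat = state.evaluate()      # score if we stop searching here
    if stand_pat >= beta:
        return beta                   # position already refutes the line
    alpha = max(alpha, stand_pat)
    for move in state.capture_moves():
        score = -quiescence(state.apply(move), -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha
```

In this arrangement, the `return state.evaluate()` at the depth-0 cutoff of the earlier sketch would be replaced by `return quiescence(state, alpha, beta)`, so the horizon is extended only along the narrow class of moves most likely to overturn the evaluation.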
Rewriting the evaluation function for leaf nodes, or analyzing more nodes, can also mitigate many horizon-effect problems.