The Taylor rule is a monetary policy targeting rule proposed in 1992 by American economist John B. Taylor[1] as a guide for central banks to stabilize economic activity by setting short-term interest rates appropriately.[2] The rule relates the federal funds rate to the price level and to changes in real income.[3] Specifically, it prescribes a federal funds rate based on two gaps: the gap between the actual inflation rate and the desired (targeted) inflation rate, and the output gap between actual and potential (natural) output. According to Taylor, monetary policy is stabilizing when the nominal interest rate rises by more than any increase in inflation and falls by more than any decrease.[4] The rule therefore prescribes a relatively high interest rate when actual inflation is above the inflation target.
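A minimal sketch of the rule in its commonly cited 1993 form is given below; the notation is the standard textbook convention rather than a quotation from Taylor's paper. Here i_t is the prescribed nominal federal funds rate, π_t the inflation rate, π_t^* the inflation target, r_t^* the assumed equilibrium real interest rate, and y_t − ȳ_t the output gap; in Taylor's original specification both response coefficients a_π and a_y equal 0.5.

\[
i_t = \pi_t + r_t^{*} + a_{\pi}\left(\pi_t - \pi_t^{*}\right) + a_{y}\left(y_t - \bar{y}_t\right)
\]

For example, with an equilibrium real rate of 2%, an inflation target of 2%, actual inflation of 3%, and a closed output gap, the rule prescribes a funds rate of 3 + 2 + 0.5(3 − 2) + 0.5(0) = 5.5%.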
In the United States, the Federal Open Market Committee controls monetary policy. The committee attempts to achieve an average inflation rate of 2% (with an equal likelihood of higher or lower inflation). The main advantage of a general targeting rule is that a central bank gains the discretion to apply multiple means to achieve the set target.[5]
The monetary policy of the Federal Reserve changed throughout the 20th century. The period between the 1960s and the 1970s is evaluated by Taylor and others as one of poor monetary policy, with its later years typically characterized as stagflation: the inflation rate was high and rising, while interest rates were kept low.[6] Since the mid-1970s, monetary targets have been used in many countries as a means of targeting inflation.[7] However, in the 2000s the actual interest rate in advanced economies, notably in the US, was kept below the value suggested by the Taylor rule.[8]
The Taylor rule is typically contrasted with discretionary monetary policy, which relies on the judgment of the monetary authorities. The rule is often criticized for the limited number of factors it takes into account.