Evan Soltas
Apr 9, 2012

Fed Knows Best?

Taylor rule deviations have some predictive power over future policy.

One of the strongest potential arguments for discretionary monetary policy is that the central bank, in effect, knows better than what a Taylor rule would recommend. (For this exercise, I'm using the variant of the Taylor rule developed by Paul Krugman and Greg Mankiw, discussed here.)
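For readers who want to follow along, here is a minimal sketch of that kind of rule in Python. The coefficients are the ones from Mankiw's published formulation -- a recommended funds rate of 8.5 + 1.4 * (core inflation minus unemployment) -- and are an assumption on my part; the exact variant used in this post may differ.

```python
# A minimal sketch of a Mankiw-style Taylor rule. The coefficients are an
# assumption (Mankiw's published formulation); the post's variant may differ.
def mankiw_rule(core_inflation, unemployment):
    """Recommended federal funds rate, in percent."""
    return 8.5 + 1.4 * (core_inflation - unemployment)

# Example: 2.5 percent core inflation and 5.5 percent unemployment
print(mankiw_rule(2.5, 5.5))  # -> 4.3
```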

Usually, advocates of rule-based monetary policy treat any divergence from a Taylor rule as a policy error. Now, to test the case for discretion, I ask whether those divergences are predictive or anticipatory. In other words, when the Fed sets the federal funds rate below what a Taylor rule would recommend, how often does that indicate that future Taylor rule recommendations will be lower than the present one? Most essentially, can the Fed see things -- namely, inflation or recession -- coming?
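Concretely, the policy deviation is the actual federal funds rate minus the rule's recommendation, and the test is whether that gap foreshadows changes in the recommendation some months out. Here is a sketch of that construction, assuming a monthly data table with illustrative column names:

```python
import pandas as pd

def add_deviation_and_forward_changes(df: pd.DataFrame) -> pd.DataFrame:
    # df is assumed to hold monthly data with columns "fedfunds",
    # "core_inflation", and "unemployment" (names are illustrative).
    out = df.copy()
    out["taylor"] = 8.5 + 1.4 * (out["core_inflation"] - out["unemployment"])
    # The "policy error": actual funds rate minus the rule's recommendation
    out["deviation"] = out["fedfunds"] - out["taylor"]
    # Future change in the recommendation at 3-, 6-, and 12-month horizons
    for months in (3, 6, 12):
        out[f"d_taylor_{months}m"] = out["taylor"].shift(-months) - out["taylor"]
    return out
```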

Looking at the time series data, there is a decent case to be made that Fed discretion qualitatively pre-empted what Mankiw and Krugman's Taylor rule would recommend. Look at the graph below.

In the 1990s, Fed policy errors predicted the eventual need to tighten towards the end of the decade. In 2000-2002, when the Fed set rates below what the Taylor rule recommended, the Taylor rule's recommendation later fell. Then again in 2007-2008, what was then perceived as aggressive rate-cutting by the Bernanke Fed appeared predictive of what the Taylor rule would later recommend as the economy soured.

Of course, the data since the 1990s also offer obvious counterexamples -- most importantly, Fed policy lagged behind the rising Taylor rule rate from 2003 to 2006. Here the Fed's policy error did not predict an eventual worsening of conditions and a lower Taylor rule rate; rather, the higher Taylor rule rate predicted the Fed's subsequent tightening in 2004-2006.

We can ask this question more systematically: how well does the deviation of Fed policy from the Taylor rule predict changes in what the Taylor rule will recommend one year, six months, or three months ahead?
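One way to run that test, continuing the sketch above (and standing in for the Excel regressions I actually used), is to regress each forward change in the rule's recommendation on the current deviation:

```python
from scipy import stats

def deviation_regressions(df):
    results = {}
    for months in (3, 6, 12):
        sub = df[["deviation", f"d_taylor_{months}m"]].dropna()
        fit = stats.linregress(sub["deviation"], sub[f"d_taylor_{months}m"])
        results[months] = {
            "slope": fit.slope,            # change in the rule per point of deviation
            "r_squared": fit.rvalue ** 2,  # share of variance explained
            "p_value": fit.pvalue,
        }
    return results
```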

The answer, looking at the graphs below, is that the Fed's "policy error" has statistically significant predictive power over the future Taylor rule, but the size of this effect is modest. (Scroll down for continued discussion of the results.)

The three graphs tell us that the Fed's policy errors explain 15.4 percent of the variance in the 1-year change of the Taylor rule recommendation, 15.5 percent of the variance in the 6-month change, and 10.7 percent of the variance in the 3-month change. These three results are highly statistically significant, which means it is very unlikely they arose by chance: Fed discretion has been predictive, although the benefits appear to be small in magnitude.

The percentages, of course, come from the r-squared values in Excel, a.k.a. the coefficient of determination, from which I calculate p-values -- the probability that results at least this strong could have occurred randomly. Low p-values (p ranges between 0 and 1) mean a result is unlikely to be due to chance. The p-values are 1 * 10^-8 for the 1-year and 6-month tests and 2.45 * 10^-6 for the 3-month test.
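For anyone replicating this, the p-value of a one-variable regression can be recovered from the r-squared alone once you know the sample size, via the F distribution. A sketch -- the sample size below is a rough assumption for monthly data over 1992-2007, not the actual count used in this post:

```python
from scipy import stats

def p_value_from_r_squared(r_squared, n_obs):
    # For a simple regression: F = R^2 * (n - 2) / (1 - R^2), with (1, n - 2) df
    f_stat = r_squared * (n_obs - 2) / (1 - r_squared)
    return stats.f.sf(f_stat, 1, n_obs - 2)

# e.g. R^2 = 0.154 with roughly 190 monthly observations (an assumption)
print(p_value_from_r_squared(0.154, 190))
```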

A deviation of 1 percentage point -- the federal funds rate set 1 point above the Taylor rule's recommendation -- suggests that the Taylor rule recommendation one year in the future will be 1.33 points higher than today, that the recommendation 6 months in the future will be 1 point higher, and that the recommendation 3 months in the future will be 0.67 points higher.
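As a rough worked example of what those slopes imply (the deviation here is hypothetical):

```python
# Fitted slopes from the text: points of future change per point of deviation
slopes = {"1-year": 1.33, "6-month": 1.00, "3-month": 0.67}
deviation = -2.0  # hypothetical: funds rate set 2 points below the rule today

for horizon, slope in slopes.items():
    print(f"{horizon} horizon: expected change of {slope * deviation:+.2f} points")
```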

The implications for monetary policy are interesting. This post suggests there is a case that discretion may have modestly outperformed a Taylor rule from 1992 to 2007.