Evan Soltas
Sep 3, 2013

5 Things I Learned from FedSim

Warning: Get your nerd goggles on, because this is going to be a wild ride.

I just released FedSim, my attempt to build a realistic monetary-policy simulation game. I expected that coding would be a challenge. What I didn't expect was how difficult it would be to create and calibrate a model of the economy that would be believable.

I thought I would be up to the task -- that, in short, I had enough of an understanding to make it real. I didn't. I learned a huge amount about how to think about macroeconomics in building the model. I thought I should share some of those lessons.

We all think we can describe how macroeconomic indicators would respond to X, Y, and Z -- and how they would interact. That is, until we have to, well, write all of those relationships out as Excel formulas and then test the model over and over until it behaves plausibly no matter how the game is played. The problem, in short, is one of consistency: The simulation has to be correctly calibrated in so many different respects that you can have it just right in one game and totally wrong in another.

A few specific conclusions:

1. Business-cycle asymmetries matter.

Brad DeLong and Larry Summers wrote a paper in 1984 arguing that there was no significant evidence that the business cycle is asymmetric -- that is, that recessions are steep and deep while expansions are shallow and prolonged. James Hamilton, on the other hand, argued in 1989 that you need to think of the economy as entering "modes" -- a recession mode, a growth mode, and so on. Those are opposite approaches, and they lead to very different styles of statistical analysis.

I found that a truly symmetric business cycle looked implausible in simulation. What kept happening was that, after a normal-length expansion, the simulation would produce an extended but shallow recession that looked nothing like the U.S. experience: unemployment rising 0.5 percentage point, and real GDP growth stuck at zero, every year for five years. Fortunately, I knew about the academic literature that has tried to characterize the shape of business cycles.

Here's a graph of the year-over-year change in the unemployment rate since 1948, so you can see what I mean:

On a quick check, the evidence for skewness in the unemployment rate and in de-trended GDP data has grown somewhat stronger since 1984. I still don't quite have the model right in this respect -- its recessions are still too long and too shallow -- but that's mainly because I couldn't think of an easy way to write this asymmetry into the program.
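In case it's useful to anyone else, here's roughly how I'd now try to write the asymmetry in: a two-state setup in the spirit of Hamilton's "modes." This is just a sketch, and every parameter value in it is illustrative rather than anything from FedSim:

```python
import numpy as np

# A minimal sketch of business-cycle asymmetry via a two-state Markov
# chain, in the spirit of Hamilton (1989). All parameters are illustrative.
rng = np.random.default_rng(0)

# Expansions drift unemployment down slowly but last a long time;
# recessions push it up sharply but end quickly.
DRIFT = {"expansion": -0.03, "recession": +0.25}   # pp per month
STAY = {"expansion": 0.99, "recession": 0.90}      # prob. of staying in state

state, u, path = "expansion", 5.0, []
for month in range(600):                            # 50 simulated years
    if rng.random() > STAY[state]:
        state = "recession" if state == "expansion" else "expansion"
    u = max(3.0, u + DRIFT[state] + rng.normal(0, 0.05))
    path.append(u)
```

The asymmetry comes entirely from the two states having different drifts and different expected durations: recessions here average about ten months of fast increases in unemployment, expansions about a hundred months of slow declines.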

2. The Phillips Curve is dead.

Before building the model, I expected that a Phillips Curve would be an important part of it -- that is, that there should be a substantial rise in inflation following a period of low unemployment. Not so much because there is an exploitable trade-off, but because inflation should punish the monetary policymaker for letting unemployment run low.

I found, however, that this consistently produced implausible simulations in a 2013 environment. In particular, we just haven't seen an episode of wage-push inflation since 1980, and trying to put one in didn't make sense in light of the 1990s. Maybe a big Phillips Curve would have made more sense in the 1970s, as you can see in this graph:

The finished version of the model increases inflation only slightly when the two-year moving average of unemployment is low. And I made the increases nonlinear, so that you're only in any significant trouble when unemployment dips into the 4-percent range.
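For what it's worth, the flavor of that nonlinearity is easy to write down. A minimal sketch, with a functional form and coefficients I'm choosing here for illustration -- not the model's actual formula:

```python
# Extra inflation, in percentage points per year, as a function of the
# two-year moving average of the unemployment rate. The quadratic form
# and the coefficients are illustrative choices, not FedSim's.
def inflation_bump(u_ma2):
    slack = max(0.0, 5.0 - u_ma2)   # only binds once average unemployment < 5%
    return 0.3 * slack ** 2          # grows quickly as the average nears 4%

for u in (6.0, 5.0, 4.5, 4.0, 3.5):
    print(u, round(inflation_bump(u), 2))   # 0.0, 0.0, 0.08, 0.3, 0.68
```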

3. The monetary policy reaction function is steep.

Here's a quick quiz: If unemployment falls one percentage point, by how many percentage points should the real federal funds rate rise?

The answer: Way more than you would think. Or, at least, way more than I thought when I first wrote the model down. The value I settled on was that the stance of monetary policy is roughly maintained when the real federal funds rate rises 1.8 percentage points for every percentage-point fall in unemployment. That decision came from data on the level of the fed funds rate and the level of the unemployment rate (graph here), changes in the two levels (graph here), and Greg Mankiw's version of the Taylor rule. And, of course, repeated trial-and-error in simulation.
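Mankiw's rule, for reference, fits in one line -- the formula is his, from his paper on monetary policy in the 1990s, while the little check below it is mine:

```python
# Mankiw's version of the Taylor rule: the prescribed federal funds rate
# rises with core inflation and falls with unemployment.
def mankiw_rule(core_inflation, unemployment):
    return 8.5 + 1.4 * (core_inflation - unemployment)

# Holding core inflation fixed at 2 percent, a one-point fall in
# unemployment raises the prescribed rate by 1.4 points -- in the same
# neighborhood as the 1.8 I settled on.
print(mankiw_rule(2.0, 6.0) - mankiw_rule(2.0, 7.0))   # 1.4
```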

My model estimates the monthly change in the unemployment rate based on the current unemployment rate and the real federal funds rate. (Let's put aside lag and noise for a moment.) A zero real fed funds rate, for example, is consistent with no monthly change in unemployment at a 7.8-percent unemployment rate, but at 5-percent unemployment, you need a 1.6-percent real fed funds rate for unemployment to hold steady. To raise or lower unemployment by 0.1 percentage point per month, you have to raise or lower the real federal funds rate by some 2.5 percentage points.

Here's a graph to show you what I mean. The vertical axis is the lagged unemployment rate, and the horizontal axis is the lagged real federal funds rate. The middle line indicates the combinations of the two that generate no change in the unemployment rate. The line above it increases unemployment by 0.1 percentage point per month; the lowest line decreases it by the same amount.
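If you'd rather read those three lines as code, here's a sketch that reproduces them from the numbers above. The function names are mine, and the linear interpolation between the two stated points is an assumption:

```python
# The real fed funds rate consistent with no change in unemployment,
# fit linearly through the two points in the text: (7.8%, 0.0) and (5.0%, 1.6).
def neutral_real_rate(u):
    slope = (1.6 - 0.0) / (5.0 - 7.8)    # about -0.57 per point of unemployment
    return slope * (u - 7.8)

# Each 2.5-point gap between the actual and neutral real rate moves
# unemployment by 0.1 percentage point per month.
def monthly_unemployment_change(u, r):
    return 0.1 * (r - neutral_real_rate(u)) / 2.5

print(monthly_unemployment_change(7.8, 0.0))   # 0.0: on the middle line
print(monthly_unemployment_change(5.0, 4.1))   # 0.1: 2.5 points above neutral
```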

You should also think about the Fed's commitment to keep its policy rate low until unemployment falls below 6.5 percent or inflation passes 2.5 percent. Put the inflation threshold aside for a moment. If we wait until unemployment hits 6.5 percent to begin tightening, and if the natural rate of unemployment is near 5 percent and the federal funds rate should be near 4 percent over the long term, then the conclusion is that the pace of increase in the federal funds rate in 2014 and 2015 is going to be swift. There is no room to push that threshold down further, at least on a permanent basis.
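The back-of-the-envelope version, with an assumed pace of decline in unemployment doing a lot of the work:

```python
# Rough arithmetic on the lift-off claim. The 1.8 slope is the model's;
# the assumed pace of decline in unemployment is mine.
u_threshold, u_natural = 6.5, 5.0        # Evans-rule threshold, natural rate
slope = 1.8                               # pp of real rate per point of unemployment
rate_rise_needed = slope * (u_threshold - u_natural)   # 2.7 pp

fall_per_year = 0.7                       # assumed pace of decline, pp per year
years = (u_threshold - u_natural) / fall_per_year      # about 2.1 years
print(rate_rise_needed / years)           # about 1.3 pp per year -- and that's
                                          # before climbing back from today's
                                          # deeply negative real rate
```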

4. Lags matter.

Economics blogs rarely talk about policy lags. That is probably because lags are boring. There is a sense of "I want my monetary policy, and I want it now" in the online conversation. It's completely out of line with the way monetary policy actually works.

When I tested my simulation game, I realized that the lags really are the only thing that makes it difficult. (Well, lags in the context of forecast uncertainty.) It was very easy to make policy errors that looked dumb in hindsight: failing to respond to a drop in inflation now and coming to regret it in a year, when you couldn't cut the policy rate fast enough, or getting crushed by rising unemployment because a year ago you tightened out of fear of an acceleration in inflation that never materialized.

The unfortunately named statistician Eugen Slutsky showed in a 1927 paper that repeatedly taking moving averages of random data generates apparently cyclical behavior. If you are at all familiar with macroeconomic debates before the neoclassical synthesis -- when John Hicks and Paul Samuelson institutionalized Keynes's ideas with math -- you'll know that the big critique of Slutsky's position was that he never bothered to explain the why. Why would a model of accumulating random shocks represent the economy?

My model makes me sympathetic to Slutsky's argument. If policymakers respond to lagged data, or to moving averages of data, that alone could generate Slutsky's cyclical swings. And smoothed accumulations of shocks are exactly what these macroeconomic aggregate time series are.
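You can see Slutsky's point in a few lines. There is no economics in this sketch at all -- just noise and repeated smoothing:

```python
import numpy as np

# Slutsky (1927): moving averages of pure noise manufacture apparent cycles.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)        # white noise: no cycle by construction

for _ in range(10):              # apply a 12-period moving average, ten times
    x = np.convolve(x, np.ones(12) / 12, mode="same")

# The smoothed series now swings in long, slow waves that look like a
# business cycle, even though the raw input was serially uncorrelated.
```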

5. Downward nominal rigidity is not as simple as it looks.

A number of economists (Paul Krugman among others) have consistently argued that downward nominal rigidity in prices and wages -- that is, their resistance to falling in nominal terms -- implies that macroeconomic policy should accept a higher inflation target. The proposed target is usually in the range of 4 percent per year, as opposed to 1 to 2 percent.

The problem with this argument isn't that higher inflation is a slippery slope, as is too often argued in response. It's that the conversation often fails to link up with any thorough assessments of the costs and benefits of higher versus lower trend rates of inflation. Fortunately, there are many of these. Akerlof, Dickens and Perry (1996) is a classic. Billi and Kahn (2008) provide a quick summary of many others. Fagan and Messina (2009) work with excellent data on nominal-wage changes.

Two papers -- Coibion, Gorodnichenko and Wieland (2011) and Bernanke (2004) -- are worth a bit more discussion here. The former repeats its analysis many times under different specifications, and what it shows is that, given any reasonable set of assumptions -- the amount of uncertainty about model parameters, the frequency of zero-lower-bound events, and so on -- it is basically impossible to estimate an optimal rate of inflation above 3 percent. Even tripling the likelihood of a ZLB event won't get you there. Bernanke came to the same conclusion, saying that a 2-percent target is pretty robust to whatever assumptions and model one uses.

One other thing comes up with downward nominal rigidity: Its effect on the optimal rate of inflation isn't so clear, and the Coibion et al. paper finds the opposite of what Krugman and others have been arguing. The logic is that downward rigidity itself makes deflation and nominal wage cuts less likely, which stabilizes inflation at low rates -- so a low target (say, 1 percent per year) becomes more desirable, even as a near-zero target becomes less so.

I didn't believe that finding when I first read it -- my Bloomberg colleague Matt Klein and I have talked about this -- but my simulation showed me that it makes more sense than I previously thought. The model's downward nominal rigidity in inflation (especially core inflation) made it a good strategy to park inflation at 1.0-1.5 percent and then just worry about real stabilization for the rest of the game.