In his recent Economist debate with Brad DeLong on whether the inflation target should be raised, eminent monetary economist Bennett McCallum emphasizes the Friedman rule as an important determinant of the optimal long-term rate of inflation:
First, in the absence of the ZLB, the optimal steady-state inflation rate—according to standard new Keynesian reasoning—lies somewhere between the Friedman-rule value of deflation at the steady-state real rate of interest (therefore something like –2% to –4%) and the Calvo-model value of zero, with careful calibration indicating that the weight on the latter may be considerably larger. Second, a theoretically attractive modification of the Calvo model would imply that the weight on the second of these values should be zero, so that the Friedman-rule prescription itself would be optimal (in the absence of the ZLB).
Third, even when the effects of the ZLB are added to the analysis, the optimal inflation rate is (according to this line of analysis) probably negative—closer to –2% than to 4%…
The Friedman rule follows naturally from a basic model of monetary policy. “Money” is a good that is costless to provide, yet valuable to consumers and businesses; for the sake of efficiency, it should be priced at cost, which means that the risk-free nominal interest rate should be zero (so that you don’t lose anything from holding wealth in the form of cash rather than Treasury bills). Since real interest rates are usually positive, this means that we need long-term deflation.
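The arithmetic here is just the Fisher identity. A minimal sketch, assuming for illustration a 2% steady-state real rate (the specific number is my assumption, not the post's):

```python
# Fisher identity: nominal rate i = real rate r + inflation pi.
# The Friedman rule sets i = 0, so the implied inflation rate is pi = -r:
# with a positive real rate, steady-state deflation.

def friedman_rule_inflation(real_rate: float) -> float:
    """Inflation rate implied by a zero risk-free nominal interest rate."""
    return -real_rate

r = 0.02  # assumed steady-state real interest rate (illustrative)
pi = friedman_rule_inflation(r)
print(f"Implied inflation under the Friedman rule: {pi:.1%}")  # -2.0%
```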
This is all valid in theory. But does it have practically meaningful welfare consequences? Brad DeLong does a little arithmetic and comes out skeptical:
Say that if cash in my pocket earned the same real rate of return as bonds in my portfolio, I would carry more cash and find myself having to stop at the ATM only once a quarter rather than once a week. Say it takes me six minutes to go to the ATM. Say my time is worth $30 per hour at the margin. Say that other portfolio swaps I would no longer have to do are of equal value. Then I would gain $6 per week or $300 per year from a deflation rate of 3% per year. Say I am representative of 200m American adults.
That is a net welfare gain of $60 billion a year for America from this “reduced shoe leather wear” effect of having an inflation target of –3% per year.
The lost production from the recession that began in 2008 has so far amounted to $2.6 trillion. The meter is still running at a current rate of $1.04 trillion per year. It will be at least $4 trillion before we are through.
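DeLong's back-of-envelope numbers check out; here is a quick sketch using only the figures he states (treating "once a week rather than once a quarter" as roughly one trip saved per week):

```python
# Reproducing DeLong's shoe-leather arithmetic (all inputs are his figures).
minutes_per_trip = 6
wage = 30.0  # dollars per hour at the margin
cost_per_trip = wage * minutes_per_trip / 60   # $3 of time per ATM trip

# Weekly trips drop to quarterly: roughly one $3 trip saved per week,
# plus "other portfolio swaps ... of equal value".
atm_saving_per_week = cost_per_trip
saving_per_week = atm_saving_per_week * 2      # $6 per week

adults = 200e6
annual_saving = 300  # DeLong's rounding of $6/week to ~$300/year
national_gain = adults * annual_saving
print(f"${national_gain / 1e9:.0f} billion per year")  # $60 billion per year
```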
This is easy to see in other ways as well. Suppose that we’re considering whether to raise the long-term inflation target from 2% to 4%, and that this implies an increase in the typical yield on T-Bills from 4% to 6%. Currently, the amount of currency in circulation plus required reserves is a little above $1 trillion. (Keep in mind that due to zero interest rates, this is slightly higher than it otherwise would be.) What is a plausible estimate of the welfare loss from a change in inflation target from 2% to 4%?
Let’s be extreme and say that the resulting change in steady-state nominal interest rates causes real base money demand to fall in half, from $1 trillion to $500 billion. At worst, this implies a ($500 billion) × 6% = $30 billion welfare loss due to deviation from the Friedman rule.* (This would happen if, say, all $500 billion was held by individuals who valued real money balances at exactly 5.999% a year; an increase in the nominal interest rate to 6% would be just enough to inefficiently cause them to forgo this benefit.) That is 0.2% of US GDP—already a pretty small effect, but not completely trivial.
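This worst-case bound is easy to verify; a sketch, where the roughly $15 trillion GDP figure is my assumption, chosen only to be consistent with the 0.2% share:

```python
# Extreme case: base money demand halves from $1T to $500B, and every
# forgone dollar of balances was worth just under the new 6% nominal rate,
# so the lost surplus is bounded by balances times the rate.
lost_balances = 500e9
nominal_rate = 0.06
welfare_loss = lost_balances * nominal_rate  # $30 billion

gdp = 15e12  # assumed US GDP, consistent with the 0.2% figure in the text
print(f"${welfare_loss / 1e9:.0f}B, {welfare_loss / gdp:.1%} of GDP")  # $30B, 0.2% of GDP
```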
But of course, we wouldn’t expect the demand for base money to fall in half. Laurence Ball, for instance, finds that the semi-elasticity of money demand with respect to interest rates is –0.05, so that a 2-percentage-point increase in interest rates would lead to a 10% decline in M1. Let’s allow for the maximum possible welfare impact and suppose that the entire decline in M1 takes place in non-interest-paying currency, which accounts for about half of the aggregate. Then we’ll see roughly a 20% decline in currency demand, for an absolute decline of $200 billion—which gives a maximum welfare hit of ($200 billion) × 6% = $12 billion. Now we’re at less than one-thousandth of GDP!
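The elasticity-based version of the calculation runs the same way (again with my assumed ~$15 trillion GDP for the final comparison):

```python
# Ball's semi-elasticity of M1 demand: -0.05 per percentage point of the
# nominal rate, so a 2-point rise in rates cuts M1 demand by 10%.
semi_elasticity = -0.05
rate_increase_pp = 2
m1_decline = -semi_elasticity * rate_increase_pp  # 0.10

# If the entire decline falls on currency, which is about half of M1,
# currency demand falls twice as fast: 20% of the ~$1T outstanding.
currency = 1e12
currency_decline = 2 * m1_decline * currency      # $200 billion
welfare_loss = currency_decline * 0.06            # $12 billion

gdp = 15e12  # assumed GDP, consistent with "less than one-thousandth"
print(f"${welfare_loss / 1e9:.0f}B = {welfare_loss / gdp:.2%} of GDP")
```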
It doesn’t end there. First of all, seignorage is a way for the government to collect revenue—revenue that would otherwise be raised using some other distortionary tax. Some economists think that higher interest rates can’t be justified on revenue-raising grounds alone: if you were designing the optimal mix of taxes (and nothing else), seignorage wouldn’t be one of them. But if you’re interested in maintaining positive interest rates for some other reason, the fact that seignorage income allows you to bring down some other tax means that the true welfare cost is even lower than you’d initially estimate.
And then there’s the elephant in the room: who holds currency, anyway? There is roughly a trillion dollars of paper currency in circulation; that’s over $3000 for every man, woman, and child in America. Most of the value is held in the form of $100 bills. Clearly most of it isn’t being used for the purpose of ordinary transactions. Estimates suggest that half or more is held outside the US, and undoubtedly an enormous percentage is used by organized crime. As Schmitt-Grohe and Uribe calculate in their survey on the optimal rate of inflation, accounting for the value of currency held abroad can easily push the “optimal inflation rate” (looking only at the tradeoff between revenue and the distortion imposed by departure from the Friedman rule) much higher, up to 2-10%. And that’s not even accounting for the fact that it would be beneficial to extract money from the tax evaders and drug cartels who presumably hold most of the cash circulating domestically.
In short: even if we are generous and assume that currency is being held domestically for legitimate purposes—and wave aside the revenue benefits—the economic impact of departure from the Friedman rule is a very tiny fraction of GDP. In a more realistic world, the costs are smaller still, and quite possibly even negative. They are minor relative to some of the other costs of inflation, and they pale in comparison to the macroeconomic costs of hitting the zero lower bound.
The Friedman rule is the ultimate example of an idea that is qualitatively true yet quantitatively irrelevant.
*For the sake of completeness, I should note that in a formal sense this isn’t quite right. If holding money is complementary to other economic activities, then it’s possible that we would obtain a general equilibrium “tax interaction effect” where the wedge in incentives from a positive nominal interest rate adds to (larger) preexisting distortions from other taxes. But even though a basic theoretical model might say that money is used due to a “cash-in-advance” constraint on consumption, a little introspection suggests that this can’t possibly be responsible for most demand for American currency: only a very, very tiny fraction of consumption is paid for with $100 bills. Most of this cash is held for other reasons (quite possibly illegitimate ones) that aren’t related to some clearly identifiable economic activity like consumption or work already distorted by a tax. In fact, to the extent that cash usage is motivated by tax evasion or crime, we actually see an additional benefit from taxing cash, by the same logic. And regardless, since seignorage revenue allows us to bring down existing taxes, we’d see an offsetting benefit of roughly the same magnitude—the net cost from these more subtle fiscal considerations is very, very unlikely to be much greater than zero.