I had been thinking a lot, before running into that article, about this very idea with respect to central bank behaviour. I believe the US Federal Reserve telegraphs its policy intentions too much, and the market has become somewhat addicted to the Federal Reserve bailing it out in times of trouble. I don't think it is healthy for the market to perceive a "Bernanke put". Furthermore, as I mentioned in an earlier post, "We are all hedge fund managers now", the central bank is creating an incentive for market participants to play much the same game of selling short-term risk. This is apparent from the levels of the VIX in the aftermath of the global financial crisis.
To me, optimal central bank policy would have a level of randomness, where interest rates could rise even in a recession, even without inflation being sparked. This would increase volatility and cause everybody in the economy to delever, because VaR models set their limits on the basis of volatility. Taken far enough, it would encourage long-term contracting and thus further reduce the broader economy's interest rate exposure. Clearly, this is going to be a problem as we move to higher interest rates, because everyone is on the short end floating - getting to fixed is going to be tricky.
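The link between volatility and deleveraging can be made concrete with a toy calculation. This is a minimal sketch, assuming a stylized one-day parametric VaR limit at the 99% confidence level; the function name, the VaR budget, and the volatility figures are illustrative assumptions, not any bank's actual risk model.

```python
# Hypothetical sketch: a VaR-based position limit, illustrating why
# policy-induced volatility forces deleveraging. All numbers are
# illustrative assumptions, not a real institution's parameters.

Z_99 = 2.33  # one-sided 99% quantile of the standard normal distribution


def var_position_limit(var_budget: float, daily_vol: float) -> float:
    """Largest position whose 1-day 99% parametric VaR fits the budget.

    The 1-day VaR of a position of size P is roughly P * daily_vol * Z_99,
    so the permissible size is var_budget / (daily_vol * Z_99).
    """
    return var_budget / (daily_vol * Z_99)


# Same $1m VaR budget under two volatility regimes:
calm = var_position_limit(1_000_000, 0.01)      # 1% daily volatility
stressed = var_position_limit(1_000_000, 0.02)  # volatility doubles

# Doubling volatility roughly halves the permissible position,
# so a desk at its limit must sell half its book.
print(calm / stressed)
```

The mechanism in the post follows directly: if the central bank injects randomness into rates, `daily_vol` rises across the system, every VaR limit tightens mechanically, and leverage comes down without any explicit regulatory order to delever.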
And yet, as much as unpredictability may be optimal, it is very difficult to operationalize. Imagine if the Fed were to say, "our randomness model tells us we need to set short-term rates at 3% - right now". A recent blog post highlights this tension:
Much like John Boyd, Sun Tzu emphasised the role of deception in war: “All warfare is based on deception”. In the context of regulation, “deception” is best understood as the need for the regulator to be unpredictable. This is not uncommon in other war-like economic domains. Google, for example, must maintain the secrecy and ambiguity of its search algorithms in order to stay one step ahead of the SEO firms’ attempts to game them. An unpredictable regulator may seem like a crazy idea but in fact it is a well-researched option in the central banking policy arsenal. In a paper for the Federal Reserve Bank of Richmond in 1999, Jeffrey Lacker and Marvin Goodfriend analysed the merits of a regulator adopting a stance of ‘constructive ambiguity’. They concluded that a stance of constructive ambiguity was unworkable and could not prevent the moral hazard that arose from the central bank’s commitment to backstop banks in times of crisis. The reasoning was simple: constructive ambiguity is not time-consistent. As Lacker and Goodfriend note: “The problem with adding variability to central bank lending policy is that the central bank would have trouble sticking to it, for the same reason that central banks tend to overextend lending to begin with. An announced policy of constructive ambiguity does nothing to alter the ex post incentives that cause central banks to lend in the first place. In any particular instance the central bank would want to ignore the spin of the wheel.” Steve Waldman summed up the time-consistency problem in regulation well when he noted: “Given the discretion to do so, financial regulators will always do the wrong thing.” In fact, Lacker has argued that it was this stance of constructive ambiguity combined with the creditor bailouts since Continental Illinois that the market understood to be an implicit commitment to bailout TBTF banks.