
My question: since neoclassical economics atomizes the representative agent, wouldn’t a modern theory without hardcore math be like particle physics in English? There are a few posts going around about the (over)use of abstruse math in economics. I don’t want to comment too much: this conversation is well above my intellectual pay grade. I have a hard time getting through an economics paper (sometimes I manage, and I do actually try to go through the whole paper). I’m pretty good at math. Just not that good. (Disclaimer: I’m writing this for my own benefit, and have no deep understanding of this part of economics.)

I sympathize with both sides, and I think Paul Krugman is right in emphasizing math as a way of preventing sloppy thinking. This is especially true for a subject like economics, where there are only relative, not absolute, microfoundations. Economists get to choose what’s endogenous and what is not in a way that illuminates a particular point. Caplan says math doesn’t reveal anything that’s not already obvious. That’s because economists, unlike any other scientists, create their own world. If I assume a world without gravity, then I won’t be surprised to mathematically prove my ability to fly.

Economics jumps several logical levels in its use of math. Normally, in a reductionist framework, the farther you get away from small microfoundations, the less mathematical a subject becomes. It is possible, a hypothetically extreme reductionist might say, to reduce macroeconomics to microeconomics to psychology to neurobiology to biochemistry to physical chemistry to atomic physics to quantum physics. As the level of abstraction increases, the use of math falls – to the point where psychologists use math almost solely for empirical examination. But economics is an outlier: the principal charge of neoclassical econ is atomizing the individual. In doing so, we lose the foundations from “lower” disciplines, and hence must assume our own. Economists, in other words, may create the world.

In that sense, asking a neoclassical to get beyond the math is like asking a physicist to talk in English. Bryan Caplan – who genuinely believes in modern insights – cannot ignore the math any more than he can abandon the “neo” in “neoclassical”.

Math is important for the same reason it (can be) useless. Since economics starts from scratch in a way no science other than physics does, math tells us precisely what assumptions are made to reach a certain conclusion. Normally we think assumptions first, conclusion later, but let’s say I make a statement like:

Deficit spending can never be expansionary.

By itself, this means nothing. It might be an axiom for all I know. But let’s say it’s not, and we’re talking about a model with representative, rational agents. With math, you can “fill in” all the “holes” (find all necessary assumptions) to reach this conclusion. In that sense, it helps us backward engineer what a model specifically dictates.

You know this conclusion is true when we consider immortal agents which maximize intertemporal consumption. And given a more complex setup, there are many ways in which this particular conclusion may be reached.

You can explain this in English after the fact. This is slightly different from Paul Krugman’s point, which is about working from the assumptions to a conclusion. The importance of math comes in seeing all the assumptions necessary for a conclusion. I don’t see how English does this. It’s not all about explanation.

Perhaps my confusion is reconciling Caplan’s comments on math with his comments on neoclassical economics:

Here are a few of the best new ideas to come out of academic economics since 1949:

  1. Human capital theory
  2. Rational expectations macroeconomics
  3. The random walk view of financial markets
  4. Signaling models
  5. Public choice theory
  6. Natural rate models of unemployment
  7. Time consistency
  8. The Prisoners’ Dilemma, coordination games, and hawk-dove games
  9. The Ricardian equivalence argument for debt-neutrality
  10. Contestable markets

This can all easily be explained in English. But in the process of reaching a conclusion, we can trip over our own English unless the world has no frictions. And once it does, math is like a guard rail against crazy thinking.

In some ways, mathematics philosophically echoes rational-agent economics itself: there are many ways one can be irrational (or wrong), but only one way in which he can be rational (or right).

Caplan the blogger or teacher can be aghast at some of the math he sees. But Caplan the neoclassical surely agrees that an economics without math would be like physics written in iambic pentameter?

P.S. Caplan challenges Krugman to identify a subject in economics where the insights cannot be explained in English to someone not brainwashed by math-econ. I wonder if he can describe to me a full neoclassical model with all the best things about modern economics, without any complicated math. I would be grateful, since I’m not good enough to understand some of the papers I’d like to.

Macroeconomists have a big problem. There’s basically no way to quantitatively measure their most important constructs – aggregate demand and aggregate supply as functions. Most measurable quantities – like employment, labor force churn, or gross domestic product – fall under the influence of both, making it difficult to ascertain whether important changes are dominated by one or the other. In practice, we know that the recent recession was most likely the result of a crash in demand, which (theoretically) governs the business cycle and is coincident with low inflation.

Since 2000, with the JOLTS dataset from the Bureau of Labor Statistics, we have deeper insight into both aggregate demand and supply. With it, there is reason to believe demand – unlike supply – has benefitted from relatively rapid growth and a recovery to pre-recession normals. I have discussed the importance of structural factors before, but feel the need to stress the return of demand.

My analysis is predicated on some logical assumptions backed up by sound data. It is still important to accept the limitations of such “assumptions”. The JOLTS data set provides us with, among many other things, the level of openings and the level of hires. Here’s a graph of both, with the 2007 business cycle peak as the base year:

[Figure: levels of job openings (blue) and hires (red), indexed to the 2007 business cycle peak]
It is not a stretch to suggest that openings (blue) are highly correlated with aggregate demand for labor, whereas hires (red) are modulated by a mix of both demand and supply. While this is crude in many ways, a job opening is the most literal example of labor “demand”. (Since a lot of commenters mention it, I will reiterate: recruiting intensity – while correlated with the business cycle – does not change substantially in a downturn and, in any case, has recovered since 2009. Therefore arguments like “these are all fake openings requiring unreasonable perfection” are fine, but irrelevant, as we’re talking about the change.)

What we see is a “V-type” recession for openings. That is, they rapidly crashed during the depths of the recession, but recovered at a pace proportional to the fall. On the other hand, hires evince a more “L-type” recession, characterized by a quick fall without a similar recovery.
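A concrete sketch of what “indexed to the 2007 peak” means here – using made-up numbers, not actual JOLTS figures – rebasing both series to 100 at the pre-recession peak makes the V-versus-L contrast easy to read off:

```python
import numpy as np

# Hypothetical monthly levels in thousands (NOT actual JOLTS data):
# openings crash and then recover (V-type); hires crash and stay low (L-type).
openings = np.array([4500.0, 3800, 3000, 2400, 2800, 3400, 4000, 4400])
hires = np.array([5200.0, 4600, 3900, 3600, 3700, 3800, 3900, 4000])

def rebase(series, base=0):
    """Index a series to 100 at the chosen base period (here, the 2007 peak)."""
    return 100 * series / series[base]

openings_idx = rebase(openings)
hires_idx = rebase(hires)

# Openings nearly regain their peak; hires remain well below it.
print(round(openings_idx[-1], 1))  # 97.8
print(round(hires_idx[-1], 1))     # 76.9
```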

Of course, “openings” do not map perfectly onto demand. The level of recovery must be adjusted for desire to fill an opening. The best way to measure this would be to ask employers the maximum wage rate they are willing to pay for each opening. Some openings are fake – America’s ridiculously moronic immigration laws require employers to place an ad in the newspaper to “prove” no American can satisfy said needs. (My mom’s sponsor placed an ad so specific to her that by design no one else in the country could fill the job. There is no reason to believe this is an isolated practice.)

However, most jobs aren’t meant for immigrants, and most openings are honest. More importantly, errors are systematic rather than random. That is, even if there is a degree of false openings, we care not about the absolute levels but the rate of change thereof. In fact, fairly conclusive evidence shows that while “recruiting intensity” does fall during a recession, it only vacillates between 80 and 120% of its average, and we’ve made up most of that loss at this point.

Hires represent a natural amalgam of supply and demand. Each position filled requires a need for services rendered (demand) and the ability of a newly employed person to productively serve that need (supply). If we accept that growth in aggregate demand is healthy, given the V-shape of openings, then supply-side problems in the labor force are worse than the L-shaped recovery in hires suggests: because the curve is governed by both supply and demand, the little recovery we do see derives from recovering demand acting on already existing supply.

At this point, it becomes overwhelmingly clear that the standard AS-AD framework is woefully inadequate to understand the current economic dynamic. On the one hand, if we consider Price Level and Employment (as in the textbook models), positive inflation with any level of demand suggests a contraction in supply that’s too deep to reconcile with slow but steady gains in productivity. If nothing else it suggests we are at capacity, which most commenters dispute.

A better framework – one implicitly accepted by most commenters – would consider Inflation and Growth Rates. In this case, extremely low inflation by any standard suggests either a fall in demand – which, as argued above, is no longer supported by the data – or an expansion in supply. But the increase in supply predicted by this model, while explaining unemployment through a labor-mismatch hypothesis, is far too great to square with low growth rates in productivity and income unless demand is highly inelastic – which then contradicts the well-established presence of nominally sticky wages.

If demand is at capacity, there is no general configuration of the AS-AD model that even broadly captures the current state. The one exception may be rapidly rising supply coincident with rapidly falling demand. Unless job openings are a complete mirage, this is unlikely to be the case. We may, of course, backward-engineer a particularly contrived model, but it would fail to offer any insight into the necessary fiscal or monetary policy.

As I’ve argued before, the labor-mismatch hypothesis of unemployment is very appealing. The idea that fiscalism is the province of “demand-side” policies is a dangerous one. Paul Krugman has probably never read my blog, but if he read this post I would surely be accused of VSP-ism – mentioning the preponderance of “structural issues” and saying little else. But if supply has increased, it suggests that demand, while recovering faster than Krugman would accept, is still slack.

In this case, there is a deep role the Federal government can play in moderating the unemployment from mismatched skills while elevating aggregate demand. Low interest rates suggest the United States government can bear far more debt than current deficits imply and with an appallingly high child poverty rate, there’s no reason we can’t vastly improve children’s health, education, and comfort at a national level. Now is a better time than ever to cancel payroll taxes indefinitely and to test a basic income.

Demand could be higher, but it is not nearly as low as it was in the troughs of the recession – compare Europe and the United States, for example. The end of depression economics does not mean the role of government is over, nor does it herald sunnier days for America’s lower-middle class. I’m very confident that large-scale stimulus will not spark hyperinflation, but less sure about the role pure stimulus can have on long-term employment prospects for the poor without a well-thought-out Federal job guarantee.

It was our responsibility to stimulate the economy far more than we did. It was our responsibility to engage in monetary easing far sooner than we did. The depression of demand lasted far longer than it ought to have under any half-smart policy. But now that we’ve crawled our way out of the hole, it is not clear that demand is lacking.

Perhaps the role of government is more important than it ever was.

Paul Krugman recently explained the contradiction of a statement like:

I don’t believe in sticky prices, or at least not except for very brief periods, and therefore I believe that the economy is almost always close to full employment.

It is important to note that the observational error of not believing in sticky prices – and evolution, too? – is orders of magnitude worse than the analytical flaws that follow. Krugman’s IS-LM framework suggests that, because falling prices can’t reduce interest rates at the zero lower bound, there’s no reason to believe aggregate demand is higher at a lower price level – marginalizing the case for wage flexibility.

The case against flexibility, though, can be extended into normal times. The argument comes from three papers by the DeLong-Summers duo – the first empirically grounding the theoretical results of the second, and the third serving as the basis for my own somewhat unorthodox interpretation. Altogether, there’s good reason to believe the sterile, representative-agent models from DeLong and Summers will make Krugman “eat [his] microfoundations”.

Before I continue, let’s explain why understanding the effect of price flexibility is key not just to formal discourse, but to policy action. There’s no reason to believe the government has any fundamental authority over inherent rigidities in the labor market – aside from long-run, secular adjustments such as systematic deunionization. Rather, this conversation is directed at the numerous conservative critics who question Keynesians for citing sticky wages as a cause of recession on the one hand, and supporting the minimum wage on the other. More generally, questioning flexible wages outside of the zero lower bound is an important counterpoint to structural reforms such as right-to-work laws, abolishment of the minimum wage, or reluctance to provide unemployment insurance.

The first relevant paper from Brad DeLong and Larry Summers, “The Changing Cyclical Variability of Economic Activity in the United States”, concludes that:

the two principal factors promoting economic stability [are Keynesian auto-stabilizers] and the increasing rigidity of prices. We attribute the latter development to the increasing institutionalization of the economy.

They also suggest a rather stylized (and ultimately unconvincing) model in which wages are set in negotiations where workers are backward-looking. Still, the Keynesian framework here brings into question the assumption that flexibility is everywhere and always stabilizing. Another very important empirical conclusion, which will become relevant soon, is that the autocorrelation of quarterly output growth rose from 0.4 before WWI to 0.8 by 1985.

This finding allowed DeLong and Summers to drop the assumption of serially uncorrelated changes in output stipulated in John Taylor’s staggered contract model. In their 1985 paper (written in a profoundly unreadable font) they develop a key insight showing the destabilizing effects of increased wage flexibility among perfectly rational agents. To reach this conclusion, the staggered contract model is amended in two important ways:

  1. As mentioned, serially uncorrelated output growth is a senseless assumption that ought to be dropped – allowing for a persistent nominal income shock.
  2. The treatment of output is elaborated to include real interest rates, by relaxing the assumption that money demand is interest-inelastic.

We can extend the latter point: if the real interest rate determines aggregate demand and the nominal interest rate clears the money market, expected inflation creates a wedge between the two, altering output by shifting the solution of the IS-LM system. We can state the instability of flexibility qualitatively:

While a lower price level is expansionary, the expectation of falling prices is contractionary.

And while their model “resisted” analytical solution, the numerical results confirmed that hypothesis. They determined this by measuring the variance of steady-state output in response to random demand shocks modeled by unit-variance white noise. They find that price flexibility – modeled by a parameter g, which measures the responsiveness of prices to changes in demand – “is destabilizing at the margin in almost all cases”. Indeed, the result is so striking that only as demand shocks approximate random noise is an increase in flexibility at the margin stabilizing, and even then only when prices are near perfectly rigid to begin with. That both conditions hold is unlikely.
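The flavor of that numerical exercise can be reproduced in a toy simulation – my own sketch, not the actual DeLong-Summers model: nominal demand follows a random walk, prices respond to the output gap with flexibility g, and expected inflation feeds back into demand with strength b (the real-rate channel, which I am inventing as a stand-in for their interest-elastic money demand). With b = 0 more flexibility stabilizes output; with a strong expectations channel it destabilizes:

```python
import numpy as np

def output_variance(g, b, T=50000, seed=0):
    """Variance of the output gap in a toy flexible-price economy.

    m_t: log nominal demand, a random walk (persistent nominal shocks).
    p_t: log price level, adjusting to the gap: p_{t+1} = p_t + g*y_t.
    y_t: output gap. Under one-step perfect foresight, expected inflation
         is g*y_t; it raises demand through the real-rate channel with
         strength b, giving y_t = (m_t - p_t) / (1 - b*g).
    """
    assert b * g < 1, "expectations feedback must not be explosive"
    rng = np.random.default_rng(seed)
    m = np.cumsum(rng.normal(size=T))   # persistent nominal demand shocks
    p = 0.0
    ys = np.empty(T)
    for t in range(T):
        y = (m[t] - p) / (1 - b * g)    # expectations channel amplifies the gap
        p += g * y                      # price response: flexibility parameter g
        ys[t] = y
    return ys[T // 10:].var()           # drop burn-in

# No expectations channel: flexibility is stabilizing.
print(output_variance(0.6, b=0.0) < output_variance(0.2, b=0.0))  # True
# Strong expectations channel: flexibility is destabilizing at the margin.
print(output_variance(0.6, b=0.9) > output_variance(0.2, b=0.9))  # True
```

The point of the sketch is only qualitative: the same increase in g that closes nominal gaps faster also makes expected deflation after a negative shock larger, which is exactly the wedge described above.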

To demonstrate the robustness of their result, DeLong and Summers show that it holds even if extended to Taylor’s more complete 1980 model which includes a more generalized contract length. We learn that even if price flexibility is increased by decreasing the relative length of contracts, or number of overlapping commitments, steady state variance increases. The importance of this result is hard to overstate as it suggests that even with optimizing agents, long contract lengths do not explain variability in the business cycle.

Penultimately, we are shown that this result holds except “very near” the Walrasian limit (a mind-game, as we are obviously not near it). They do this by altering the “time” covered by each time step t. Without explaining the math:

Keeping the parameter g [flexibility] the same in the transformed and untransformed model implies that the transformation to a period that covers half as much time involves a doubling of the responsiveness of the price level to output deviations.

They suggest this is a preferable method of studying increased flexibility because of the vast inherent difference between one-, two-, and many-period contracts arising from their naturally discrete nature. While varying g is possible, it results in meaninglessly sensitive variations at the upper bound. Altering the “time length” of each period therefore allows a continuous movement towards Walrasian conditions.

Quantitatively, they find that, given an initial parameter set, only by shortening the periods by more than 87.5% can any meaningful increase in stability from flexibility be noticed. They explain this effect qualitatively as:

The effect of a shortened period in causing more rapid price changes – and thus more of an incentive to postpone or accelerate spending by one period – approximately balances the stabilizing effects of more rapid price flexibility.

And finally, the most interesting part of the paper vindicates the claim that flexibility is destabilizing against critics who noted that, with durable capital that is costly to adjust, price flexibility leads to less erratic long-run interest rates, so output follows suit. They go on to show that even if output is determined by long-run investment, which is itself determined by Tobin’s Q, under an empirically founded estimate of the equity-risk premium with capital depreciation at 0.20, flexibility is destabilizing.

They show all this within the highly stylized world of perfect rationality and atomistic agents. In reality, flexibility is likely even more destabilizing, considering the psychological trauma of significant deflation and cyclical adjustment. There is a lot of leeway within representative-agent setups to model – if you will – bullshit. But that this exercise is developed in the context of strong empirical foundations – a rise in the serial correlation of output, and increases in rigidity moving with increases in stability and prosperity – increases my confidence in its validity. Correlation is not a good reason to believe in causation. Theory founded on rigorous mathematics is not a good reason to believe in causation. But correlation with theory is a strong force.

The third paper from the duo, “Fiscal Policy in a Depressed Economy”, doesn’t mention price flexibility at all, and I’m not sure either DeLong or Summers would endorse my interpretation. That said, at the heart of their paper is the importance of hysteresis effects from fiscal consolidation, where the resulting demand shortfall can create long-run unemployment, decreasing future tax revenues and increasing the deficit.

I suggest that hysteresis as defined in this paper will be more dangerous with greater flexibility. Let’s say all workers suddenly disemployed by the recession make a choice – every day – whether to remain in the labor market or not. Let’s say both wages and prices are flexible, with the latter falling more. Because workers suffer from money illusion during the short-run adjustment, they will not notice that (even with automatic stabilizers) real wages have not fallen as much. Therefore, the perceived opportunity cost of exiting the labor market falls, and many more will fall into long-run unemployment traps.

Conversely, because of money illusion and normal irrationalities, fewer people will be tempted to reenter the labor market than if prices had remained at artificially high levels. 

Altogether, this is unorthodox: while most (not, unfortunately, all) economists agree that real wage decreases via inflation are preferable to deflation, few question the idea that flexibility is good outside of a liquidity trap. Many accept that wage decreases via inflation are better, but note that deflation (flexibility) is preferable to nothing.

However, a more creative (and granted more liable to misinterpretation) read of DeLong and Summers (2012) suggests that wage flexibility may have perverse consequences on labor market churn, which is key to economic health. 

So ultimately I have stronger convictions than Krugman,

Even in a liquidity trap, deflation could be expansionary if it is perceived as temporary, so that deflation now gives rise to expectations of future inflation.

that wage flexibility isn’t all that important. First, it’s important to note a difference: DeLong and Summers argue about the variance of steady-state output, which isn’t necessarily related to contractionary or expansionary effects per se. (Though in almost any sense, stability dampens the business cycle, which is good.) But if you accept my final point that deflation has adverse effects on labor market entry, it’s possible to imagine a situation wherein expectedly temporary deflation is contractionary.

Also note that I have a really tough time constructing a world where deflation is perceived as temporary in any meaningful sense. In the long run, very few people outside of Japan believe their central bank will tolerate deflation. So it’s basically a non-statement. In the short run, recessionary-deflationary expectations are self-confirming, which makes it difficult for future expectations to clash with present ones.

For example, if I expect an increase in prices in some future time period, I won’t decrease production now. But if I don’t decrease production now, aggregate demand doesn’t fall and the demand-side foundations for deflation erode.

Anyway, it’s hard to believe I wrote this long a piece in response to people who “don’t believe in sticky prices”. Nonetheless, repeat after me: sticky is stable.

I just read a very interesting new paper (via Mark Thoma) from the Center for Financial Studies at Goethe University, titled “Complexity and monetary policy”. The paper probably filled a “critical gap” in someone’s knowledge toolbox, but failed to consider certain “meta-level” deficiencies in methodology. Furthermore, certain implicit assumptions with regard to modeling philosophy were ignored. The authors certainly acknowledge limitations of the DSGE mindset, but do not consider the rich and interesting consequences thereof. I will try to do that, but first, context.

A summary

The authors, Athanasios Orphanides and Volker Wieland, set out to test a general policy rule against specific designs for 11 models. The models examined fall under four categories, broadly:

  1. Traditional Keynesian (as formalized by Dieppe, Kuester, and McAdam in 2005)
  2. New Keynesian (less rigorous without household budget optimization)
  3. New Keynesian (more rigorous monetary business cycle models)
  4. DSGEs built post-crisis

So mostly DSGEs, and DSGE-lites.

The general monetary policy rule considered is stated as,

i_t = ρi_t−1 + α(p_t+h − p_t+h−4) + βy_t+h + β'(y_t+h − y_t+h−4)

where i is the short-term nominal interest rate and ρ is a smoothing parameter on its lag. p is the log of the price level at time t, so p_t+h − p_t+h−4 captures the continuously compounded four-quarter rate of inflation, and α is the policy sensitivity to inflation. y is the deviation of output from its flexible-wage level, so β represents the policy sensitivity to the output gap, and β’ represents that to its growth rate. h is the horizon under consideration (limited to multiples of 2).
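Translated into code – a sketch, with the series indexing and names mine rather than the paper’s – one step of the general rule looks like:

```python
def policy_rate(i_prev, p, y, t, h, rho, alpha, beta, beta_prime):
    """i_t = rho*i_{t-1} + alpha*(p_{t+h} - p_{t+h-4}) + beta*y_{t+h}
             + beta'*(y_{t+h} - y_{t+h-4})

    p: log price level by period, y: output gap by period,
    h: forecast horizon (h = 0 targets current outcomes).
    """
    inflation = p[t + h] - p[t + h - 4]      # four-quarter inflation at horizon h
    gap = y[t + h]
    gap_growth = y[t + h] - y[t + h - 4]     # growth rate of the output gap
    return rho * i_prev + alpha * inflation + beta * gap + beta_prime * gap_growth

# Taylor-rule benchmark: rho = beta' = 0, h = 0 (current inflation and gap only).
p = [0.01 * k for k in range(12)]            # toy numbers: steady inflation
y = [0.0] * 12
i = policy_rate(0.0, p, y, t=6, h=0, rho=0.0, alpha=1.5, beta=0.5, beta_prime=0.0)
print(round(i, 4))  # 0.06
```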

The model-specific optimal parameters are considered against three benchmark rules:

  • The well-known Taylor rule (where policy responds only to current inflation and the output gap, i.e. ρ = β’ = 0, h = 0)
  • A simple first-differences rule (ρ = 1, α = β’ = 0.5, β = 0)
  • Gerdesmeier and Roffia (GR: ρ = α = 0.66, β = 0.1, β’ = h = 0)

The “fitness” of each policy rule for a given DSGE is measured by a loss function defined as,

L_m = Var(π) + Var(y) + Var(Δi)

or the weighted total of the unconditional variances of inflation’s deviation from target, the output gap, and the change in the interest rate. The best parameters by L for each model, as well as their losses compared against the standard policy rules, are noted below:

[Table: optimal parameters and loss L for each model, compared against the benchmark rules]
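As a sketch (with equal weights on the three terms, a simplification; the names are mine), the loss is just a sum of sample variances:

```python
import numpy as np

def loss(pi, y, i):
    """L_m = Var(pi) + Var(y) + Var(delta_i), equally weighted.

    pi: inflation deviations from target, y: output gaps,
    i:  interest rate path (differenced to penalize rate volatility).
    """
    return np.var(pi) + np.var(y) + np.var(np.diff(i))

# A perfectly steady economy and rate path incurs zero loss.
print(loss([0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [2.0, 2.0, 2.0]))  # 0.0
```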
However, as the authors note, the best parametric set for one model is far from optimal in another, producing explosive and erratic behavior:

[Table: cross-model performance of each model-specific rule]
To “overcome” this obstacle, Orphanides and Wieland use Bayesian model averaging, starting with flat priors, to minimize L over all models. That is, they pick the rule that minimizes:

(1/M) · Σ L_m for m = 1 to M, where M is the total number of models.
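In code, the flat-prior averaging step is a one-liner over a set of models, each of which here is just any function mapping rule parameters to its loss L_m (a hypothetical interface for illustration, not the paper’s implementation):

```python
def average_loss(rule, models):
    """Flat-prior model averaging: mean of the rule's loss across all models."""
    return sum(model(rule) for model in models) / len(models)

def best_rule(candidates, models):
    """Pick the candidate rule minimizing the average loss."""
    return min(candidates, key=lambda rule: average_loss(rule, models))

# Toy example: two "models" scoring a (rho, alpha, beta, beta') tuple.
model_a = lambda r: (r[1] - 0.30) ** 2        # prefers alpha near 0.30
model_b = lambda r: (r[0] - 0.96) ** 2        # prefers heavy smoothing
candidates = [(0.96, 0.30, 0.19, 0.31), (0.0, 1.5, 0.5, 0.0)]
print(best_rule(candidates, [model_a, model_b]))  # (0.96, 0.3, 0.19, 0.31)
```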

Under this procedure, the optimal average policy is:

i_t = 0.96i_t−1 + 0.30π_t + 0.19y_t + 0.31(y_t − y_t−4)

Indeed, as we would expect, it performs fairly well measured against the model-specific policies, exceeding the optimal L by more than 50% in only two cases.

The authors then similarly derive optimal averages within subsets of models, and perform out-of-sample tests on them. They further consider the effects of output-gap mismeasurement on L. The paper describes this in further detail, and it is – in my opinion – irrelevant to the ultimate conclusion: the simple first-differences rule, giving equal weight to inflation and the growth rate of the output gap, is fairly robust so long as it targets outcomes rather than forecasts.

A critique

More than anything, this paper silently reveals the limitations of model-based policy decisions in the first place. Here’s the silent-but-deadly assertion in the paper:

The robustness exhibited by the model-averaging rule is in a sense, in sample. It performs robustly across the sample of 11 models that is included in the average loss that the rule minimizes. An open question, however, is how well such a procedure for deriving robust policies performs out of sample. For example, what if only a subset of models is used in averaging? How robust is such a rule in models not considered in the average loss?

The operative “what if” propels the authors to test subsets within their arsenal of just eleven models. They never mention that their total set is a minuscule part of an infinite set S of all possible models. Of course, a whopping majority of this infinite set will be junk models with idiotic assumptions – Calvo pricing, say, or perfectly rational utility monsters, or intertemporal minimization of welfare – which aggregate into nonsense.

Those uncomfortable with predictive modeling, such as myself, may reject the notion of such a set altogether; however, the implicit assumption of this paper (and of all modelers in general) is that there is some near-optimal model M that perfectly captures all economic dynamics. To the extent that none of the models in the considered set C meets this criterion (hint: they don’t), S must exist, and C ipso facto is a subset thereof.

The next unconsidered assumption, then, is that C is a representative sample of S. Think of each model as occupying a point in an n-dimensional space; the procedure implicitly treats C as a random selection from S. But C is actually just a corner of S, for three reasons:

  • They all by-and-large assume perfect rationality, long-run neutrality, immutable preferences, and sticky wages.
  • Economics as a discipline is path dependent. That is, each model builds on the last. Therefore, there may be an unobservable dynamic that has to exist for a near-ideal model, which all designed ones miss.
  • S exists independent of mathematical constraints. That is, since all considered models are by definition tractable, they may all miss certain aspects necessary for an optimal model.

But if the eleven considered models are just a corner of all possible models, the Bayesian average means nothing. Moreover, I think it’s fundamentally wrong to calculate this average with equal priors on each model. There are four classes of models concerned, within which many of the assumptions and modes of aggregation are very similar. Therefore, to the extent there is some correlation within subsets (and the authors go on to show that there is), the traditional Keynesian model is unfairly underweighted, because it is the only one in its class. There are many more than 11 formalized models. What if we used 5 traditional models? What if we used 6? What if one of them was crappy? This chain of thought illustrates the fundamental flaw with “Bayesian” model averaging.
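A toy illustration of the underweighting problem, with entirely hypothetical numbers: suppose one traditional Keynesian model and four closely related New Keynesian models score a candidate rule. Flat priors over models let the correlated block dominate the verdict; flat priors over classes do not:

```python
# Hypothetical losses for one candidate rule, grouped by model class.
losses_by_class = {
    "traditional_keynesian": [2.0],           # lone member of its class
    "new_keynesian": [8.0, 8.2, 7.9, 8.1],    # near-duplicates of each other
}

all_losses = [l for ls in losses_by_class.values() for l in ls]
flat_model_prior = sum(all_losses) / len(all_losses)      # weight each MODEL equally

class_means = [sum(ls) / len(ls) for ls in losses_by_class.values()]
flat_class_prior = sum(class_means) / len(class_means)    # weight each CLASS equally

print(round(flat_model_prior, 3))  # 6.84  -- dominated by the correlated NK block
print(round(flat_class_prior, 3))  # 5.025 -- the lone traditional model counts more
```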

And by the way, Bayesian thinking requires that we have some good way of forming priors (a heuristic, say) and some good way of knowing when they need to be updated. As far as models are concerned, we have neither. If I give you two models, both with crappy out-of-sample correlation, can you seriously form a relative prior on efficacy? That’s what I thought. So the very intuitional premise of Bayesian updating is incorrect.

I did notice one thing the authors ignored. It might be my confirmation bias, but the best-performing model was also completely ignored in further analysis: not surprisingly, the traditional Keynesian formalization. Go back to table 3, which shows how each model-specific policy rule performs in each model. You see a lot of explosive behavior or equilibrium indeterminacy (indicated by ∞). But see how well the Keynesian-specific policy rule does (column 1): it has the best worst-case across all of the considered models. Its robustness does not end there – consider how all the other model-specific rules do on the Keynesian model (row 1), where it again “wins” in terms of average loss and best worst-case.

The parameters for policy determination across the models look almost random. There is no reason to believe there is something “optimal” about a figure like 1.099 in an economy. Human systems don’t work that way. What we do know is that thirty years of microfoundations have perhaps enriched economic thinking and refinement, but not at all policy confidence. Right now, it would be beyond ridiculous for the ECB to debate the model – or the set of models on which to conduct Bayesian averages – used to decide interest rate policy.

Why do I say this? As late as 2011, the ECB was increasing rates when nominal income had flatlined for years and heroin addiction was afflicting unemployed kids. The market monetarists told us austerity would be offset, while the ECB asphyxiated the continent’s growth. We know that lower interest rates increase employment. We know that quantitative easing does at least a little good.

And yet it does not look like the serious economists care. Instead we’re debating nonsense like the best way to average a bunch of absolutely unfounded models in a random context. This is intimately connected with the debate over the AS-AD model all over the blogosphere in recent weeks. It is why we should teach AS-AD and IS-LM as a means rather than an end in and of themselves. AS-AD does not pretend to possess any predictive power, but it maps the thematic movements of an economy. To a policymaker, that is far more important. IS-LM tells you that when the equilibrium interest rate is below zero, markets cannot clear and fiscal policy will not increase yields. It also tells you that the only way for monetary policy to reach out of a liquidity trap is to “credibly commit to stay irresponsible”. Do we have good microfoundations on the way inflationary expectations are formed?

It was a good, well-written, and readable paper. But it ignored its most interesting implicit assumptions, without which we cannot ascribe a prior to its policy relevance.