
Monthly Archives: June 2013

Matt Yglesias reminds us that “an awful lot of capital [income] today is actually rent”. It’s a point he’s been making for a while, but it’s unfortunately treated as a foregone conclusion rather than an engineered reality. Here he is:

Land is just scarce because it’s scarce, and as returns to Manhattan land ownership increase the size of the island doesn’t expand. Intellectual property is deliberate government-created scarcity out of concern that were it not possible for Bill Gates to become so wealthy nobody would write word processing programs (don’t tell these guys).

I’m of a mixed mind about patents, but before we consider Yglesias’ point, we need to define our terms, specifically “property”. The real classical liberals go for something called the “labor theory of property”, proposed by John Locke in his Second Treatise of Government, which supposes that property comes about through “the exertion of labor upon natural resources”. I’m not of this view, but it offers rich insight into how we consider intellectual property, which I’ll get to.

The competing, and dominant, theory comes from Jeremy Bentham, who held that property is not prior to government:

Property and law are born together, and die together. Before laws were made there was no property; take away laws, and property ceases.

Furthermore, in this view, property is not absolute. My favorite example is à la carte vs. buffet at a given restaurant. In the former, you own the right to eat your meal, but also to give it to your friend or take it home. In the latter, the restaurant has given you only the right to eat the food; it reserves the right to all further distribution. Or take your own home. (In Texas) you have the right to shoot trespassers thereupon, but you may not inject cocaine within. The law has not ceded you that right.

Now, the idea of “rent” is closely connected with property rights. As far back as the early 19th century, David Ricardo noted that rents – loosely, “unfair income” – spring from differences in the productivity of property:

It was from this difference in costs [between productive and unproductive land] that rent springs. For if the demand is high enough to warrant tilling the soil on the less productive farm, it will certainly be profitable to raise grain on the more productive farm. The greater the difference, the greater the rent.
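
To make Ricardo’s point concrete, here is a toy numerical version of the quote above; every figure is hypothetical, chosen only to show where the differential rent comes from.

```python
# Toy Ricardian rent: the grain price is set where the marginal (least productive)
# land just breaks even, so any land with lower unit costs earns the difference as rent.
price_per_bushel = 10          # hypothetical market price
cost_marginal_land = 10        # unit cost on the worst land still worth tilling
cost_fertile_land = 6          # unit cost on the more productive land

rent_per_bushel = cost_marginal_land - cost_fertile_land
profit_fertile = price_per_bushel - cost_fertile_land      # equals the rent: 4 per bushel
print(f"rent on the fertile plot: {rent_per_bushel} per bushel (profit: {profit_fertile})")
```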

Therefore, I think Yglesias’ point that intellectual property is “deliberate government-created scarcity” is a truism at best. Property exists because of government, and to the extent that any property is scarce, it is “government engineered”. You may not like the nuts and bolts of the broken patent system, but the idea itself is not so alien.

Ultimately, I agree that intellectual property earns a fancy rent, but it’s not nearly so simple. I think about “intellectual property” in much the same way as physical land. That is, when it comes to physical capital, I believe someone truly creates something. When it comes to intellectual property, I think that all the world’s intellectual pursuits already exist; they only remain to be discovered. And just as the price of land falls as supply increases, so too does that of a broadly defined “intellectual property”. Of course, most of us don’t “buy” this property, but the “IP share” per good, if you will, falls.

However, this isn’t always the case. Just as some land is more fertile or valuable, some intellectual landscapes are more productive and useful. In the intellectual world, until the industrial revolution, we were walking on barren deserts of no value. The first industrial revolution was like finding soil, the second like finding the rich riverbanks of the ancient Nile. The technology revolution was like finding Alaska, Qatar, Norway, and Texas all at once.

In the meantime, we’ve found barren land and cheap soil in such abundance that we ignore formal property altogether and revert to communal ownership thereof. Books on Project Gutenberg and the quadratic formula fall under this purview. But the new intellectual land is so fertile and rich that Ricardo’s principle of rent applies perfectly, and because it is owned by so few, rent-derived inequality is of critical importance.

But there is one big difference between land and intellectual property. What are your priors that – by and large – we have discovered all the minerals and habitable land in the world? Until intergalactic travel becomes a real possibility, my priors are high. Therefore, steep land value taxes are a brilliant way of financing redistribution without distortion.

And what, then, is your prior that we have discovered all the profitable intellectual land God created? Perhaps it is similarly high. Oh, how wrong you would be. See the brilliant Lord Kelvin at the turn of the last century:

The future truths of physical science are to be looked for in the sixth place of decimals.

Or Albert Michelson, famous for the experiment that disproved the luminiferous ether:

The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.

This is a question of philosophy, but more a question of defining one’s terms. If you believe men create intellectual designs, then we can treat such property in a similar vein to physical capital. However, if you believe in an infinite (or unobservable) landscape of profit-delivering discoveries, the only way to incentivize explorers is the way of a bygone era: gold and silver.

There are two ways to achieve this. Many support crippling the patent system and supporting researchers and entrepreneurs with government tax credits instead. But relative to intellectual property, this approach suffers from adverse selection arising from asymmetric information. Given a set of researchers and a central funding authority, each researcher has a hidden prior on how good he believes his work will be. He will do his best to prove to the government agency that this prior is high. The government has some test T to check his application, but the test trades precision for tractability (bureaucrats are limited in time, brains, and money).

On the other hand, a patent is obtained – broadly – after the researcher/entrepreneur discovers the idea for his device. Only those with truly (and not merely demonstrably) high priors will actually take the capital risk of pursuing the idea. In similar fashion, the kings of yore did not always pay explorers to go on grand voyages, but promised them ownership of the wealth thereupon to accommodate the informational asymmetries.
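
A toy simulation may help fix ideas. It is only a sketch of the selection argument above – the test, the payoff, and every number are my own assumptions – but it shows why self-selection under capital risk can screen better than a noisy bureaucratic test.

```python
# Grant regime: a noisy bureaucratic test T decides who gets funded.
# Patent regime: researchers self-select, risking their own capital only if
# their privately known prior makes the gamble worthwhile.
import random

random.seed(0)
N = 100_000
CAPITAL_COST = 1.0          # cost of pursuing an idea (hypothetical)
PRIZE = 3.0                 # payoff if the idea works, i.e. the patent rents (hypothetical)
TEST_NOISE = 0.6            # imprecision of the bureaucratic test (hypothetical)

researchers = [random.random() for _ in range(N)]   # hidden prior p that the idea works

# Grant regime: the agency observes p + noise and funds the top half of applicants.
scores = [(p + random.gauss(0, TEST_NOISE), p) for p in researchers]
scores.sort(reverse=True)
grant_funded = [p for _, p in scores[: N // 2]]

# Patent regime: a researcher invests only if p * PRIZE > CAPITAL_COST.
patent_funded = [p for p in researchers if p * PRIZE > CAPITAL_COST]

print(f"avg quality, grant regime:  {sum(grant_funded)/len(grant_funded):.3f}")
print(f"avg quality, patent regime: {sum(patent_funded)/len(patent_funded):.3f}")
```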

I am no fan of inequality and hence I support socially-owned patents. The government should finance large-scale research projects, and all patents thereof would be owned by American society itself. For this to work in practice, there needs to be strong and credible independence between the patent authorities and the government research apparatus. Private researchers must not believe that the government will prefer itself in competing applications. At the end of each year, the government would auction each patent on an electronic marketplace for a five-year term. This creates a rich source of revenue for social redistribution, but also ensures that intellectual property finds itself in the hands of the firm that will use it most productively. Every five years, the patent would be re-auctioned, until its value drops below a certain inflation-indexed threshold and it falls into communal ownership.
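
A minimal sketch of that lifecycle, just to make the proposed mechanism concrete; the patent name, reserve price, and bids below are all hypothetical.

```python
from dataclasses import dataclass

TERM_YEARS = 5

@dataclass
class Patent:
    name: str
    reserve_price: float      # inflation-indexed threshold below which it goes communal
    communal: bool = False

def run_auction(patent: Patent, best_bid: float) -> float:
    """Auction a five-year term; returns revenue raised for redistribution."""
    if best_bid < patent.reserve_price:
        patent.communal = True    # value has decayed: release into the commons
        return 0.0
    return best_bid               # highest-value bidder holds the patent for TERM_YEARS

# Example: bids decay over successive auctions until the patent goes communal.
p = Patent("dry-ice process", reserve_price=1.0)
for bid in [10.0, 6.0, 2.5, 0.8]:
    revenue = run_auction(p, bid)
    print(f"bid={bid:>4}: revenue={revenue}, communal={p.communal}")
    if p.communal:
        break
```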

The implicit assumption here is the labor theory of property. If intellectual ideas already exist, property is allocated to whoever cultivates them: the scientist who discovers dry ice, or the programmer who creates Microsoft Office. It strikes me that a libertarian should be most accommodating of this argument.

And, ultimately, rents earned from intellectual property are spent on natural resources (land, oil, beachfront property, and fancy restaurants in Manhattan). If we tax that, we will be taxing the extractive uses of intellectual capital. That seems like the smarter way.


I just read a very interesting new paper (via Mark Thoma) from the Center for Financial Studies at Goethe University, titled “Complexity and Monetary Policy”. The paper probably filled a “critical gap” in someone’s knowledge toolbox, but failed to consider certain “meta level” deficiencies in methodology. Furthermore, certain implicit assumptions with regard to modeling philosophy were ignored. The authors certainly acknowledged limitations to the DSGE mindset, but did not consider the rich and interesting consequences thereof. I will try to do that, but first, some context.

A summary

The authors, Athanasios Orphanides and Volker Wieland, set out to test a general policy rule against specific designs for 11 models. The models examined fall under four categories, broadly:

  1. Traditional Keynesian (as formalized by Dieppe, Kuester, and McAdam in 2005)
  2. New Keynesian (less rigorous without household budget optimization)
  3. New Keynesian (more rigorous monetary business cycle models)
  4. DSGEs built post-crisis

So mostly DSGEs, and DSGE-lites.

The general monetary policy rule considered is stated as,

i_t = ρi_t−1 + α(p_t+h − p_t+h−4) + βy_t+h + β'(y_t+h − y_t+h−4)

where i is the short-term nominal interest rate and ρ is a smoothing parameter thereon. p is the log of the price level at time t, so p_t+h − p_t+h−4 captures the (continuously compounded) four-quarter rate of inflation, and α is the policy sensitivity to inflation. y is the deviation of output from its level under flexible wages (the output gap), so β represents the policy sensitivity to the level of the gap and β′ the sensitivity to its growth rate. h is the horizon in consideration (limited to multiples of 2).

The model-specific optimal parameters are compared against a set of benchmark rules (a code sketch of the general rule and these special cases follows the list):

  • The well-known Taylor rule (policy responds only to current inflation and the output gap, i.e. ρ = β′ = h = 0)
  • A simple differences rule (ρ = 1, α = β’ = 0.5, β = 0)
  • Gerdesmeier and Roffia (GR: ρ = α = 0.66, β = 0.1, β′ = h = 0)
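
Here is a minimal sketch of the generalized rule and the benchmark special cases above. The benchmark coefficients are the ones quoted in this post; the 0.5 inflation and output-gap weights on the Taylor rule, and the toy series at the end, are my own assumptions.

```python
def policy_rate(i_prev, p, y, t, h, rho, alpha, beta, beta_prime):
    """i_t = rho*i_{t-1} + alpha*(p_{t+h} - p_{t+h-4}) + beta*y_{t+h} + beta_prime*(y_{t+h} - y_{t+h-4})

    p is a log price level series and y an output gap series, indexable by period."""
    inflation = p[t + h] - p[t + h - 4]        # four-quarter inflation at horizon h
    gap = y[t + h]
    gap_growth = y[t + h] - y[t + h - 4]
    return rho * i_prev + alpha * inflation + beta * gap + beta_prime * gap_growth

# Benchmark rules as parameter settings (h = 0: rules respond to outcomes, not forecasts).
BENCHMARKS = {
    "taylor":      dict(h=0, rho=0.0,  alpha=0.5,  beta=0.5, beta_prime=0.0),   # assumed 0.5 weights
    "differences": dict(h=0, rho=1.0,  alpha=0.5,  beta=0.0, beta_prime=0.5),
    "GR":          dict(h=0, rho=0.66, alpha=0.66, beta=0.1, beta_prime=0.0),
}

# Example with toy series (outcome-based, period t = 4):
p = [0.000, 0.005, 0.010, 0.015, 0.020]   # log price level
y = [0.0, -0.5, -0.3, -0.1, 0.2]          # output gap
print(policy_rate(i_prev=0.25, p=p, y=y, t=4, **BENCHMARKS["differences"]))
```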

The “fitness” of each policy rule for a given DSGE is measured by a loss function defined as,

L_m = Var(π) + Var(y) + Var(∆i)

or the (equally weighted) sum of the unconditional variances of inflation’s deviation from target, the output gap, and the change in the interest rate. The best parameters by L for each model, as well as the corresponding losses compared against the standard policy rules, are shown below:

[Table: model-specific optimal parameters and losses versus the benchmark rules]

However, as the authors note, the best parameter set for one model is far from best in another, sometimes producing explosive or erratic behavior:

[Table: performance of each model-specific rule when applied to the other models]

To “overcome” this obstacle, Orphanides and Wieland use Bayesian model averaging, starting with flat priors, to minimize L over all models. That is, they find the parameter set that minimizes the average loss,

(1/M) Σ_{m=1…M} L_m, where M is the total number of models.
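
A sketch of what this averaging step amounts to, assuming we already have a loss function L_m for each model; the three “models” below are toy stand-ins for the paper’s DSGEs, not reconstructions of them.

```python
import itertools

def toy_model(weight_pi, weight_y):
    # Hypothetical stand-in: penalizes deviation of the rule's coefficients from this
    # model's preferred response. The real L_m comes from simulating the DSGE itself.
    def loss(rho, alpha, beta, beta_prime):
        return weight_pi * (alpha - 0.5) ** 2 + weight_y * (beta - 0.5) ** 2 + (1 - rho) ** 2
    return loss

models = [toy_model(1.0, 0.2), toy_model(0.3, 1.5), toy_model(0.8, 0.8)]  # M = 3 stand-ins

def average_loss(params):
    # Flat priors: each model gets weight 1/M.
    return sum(m(*params) for m in models) / len(models)

grid = [x / 10 for x in range(0, 11)]   # coarse grid on each coefficient
best = min(itertools.product(grid, repeat=4), key=average_loss)
print("rho, alpha, beta, beta_prime =", best)
```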

Under this procedure, the optimal average policy is:

i_t = 0.96i_t−1 + 0.30π_t + 0.19y_t + 0.31(y_t − y_t−4)

Indeed, as we would expect, it performs fairly well against the model-specific optima, exceeding the optimal L by more than 50% in at most two cases.

The authors then similarly determine optimal average rules within subsets of models, and perform out-of-sample tests on the models left out. They further consider the effects of output gap mismeasurement on L. The paper describes this in further detail, and it is – in my opinion – not central to its ultimate conclusion: the simple first-differences rule, giving equal weight to inflation and output gap growth, is fairly robust as long as it targets outcomes rather than forecasts.

A critique

More than anything, this paper silently reveals the limitations of model-based policy decisions in the first place. Here’s the silent-but-deadly assertion in the paper:

The robustness exhibited by the model-averaging rule is in a sense, in sample. It performs robustly across the sample of 11 models that is included in the average loss that the rule minimizes. An open question, however, is how well such a procedure for deriving robust policies performs out of sample. For example, what if only a subset of models is used in averaging? How robust is such a rule in models not considered in the average loss?

The operative “what if” propels the authors to test subsets within their arsenal of just eleven models. They never even mention that their total set is just a minuscule part of an infinite set S of all possible models. Of course, a whopping majority of this infinite set will be junk models with idiotic assumptions – Calvo pricing, perfectly rational utility monsters, intertemporal minimization of welfare – which aggregate into nonsense.

Those uncomfortable with predictive modeling, such as myself, may reject the notion of such a set altogether; however, the implicit assumption of this paper (and of modelers in general) is that there is some near-optimal model that perfectly captures all economic dynamics. To the extent that none of the models in the considered set C meets this criterion (hint: they don’t), S must exist and C is ipso facto a subset thereof.

The next unconsidered assumption, then, is that C is a representative sample of S. Think of each model as a point in an n-dimensional space; for the averaging to mean anything, C should be something like a random selection from S. But C is actually just a corner of S, for three reasons:

  • They all, by and large, assume perfect rationality, long-run neutrality, immutable preferences, and sticky wages.
  • Economics as a discipline is path dependent. That is, each model builds on the last. Therefore, there may be an unobservable dynamic that has to exist for a near-ideal model, which all designed ones miss.
  • S exists independent of mathematical constraints. That is, since all considered models are by definition tractable, it may be that they all miss certain aspects necessary for an optimal model.

But if the eleven considered models are just a corner of all possible models, the Bayesian average means nothing. Moreover, I think it’s fundamentally wrong to calculate this average based on equal priors for each model. There are four classes of models concerned, within which many of the assumptions and modes of aggregation are very similar. Therefore, to the extent there is some correlation within subsets (and the authors go on to show that there is), the traditional Keynesian model is unfairly underweighted because it is the only one in its class. There are many more than 11 formalized models; what if we used 5 traditional models? What if we used 6? What if one of them was crappy? This chain of thought illustrates the fundamental flaw with “Bayesian” model averaging.

And by the way, Bayesian thinking requires that we have some good way of forming priors (a heuristic, say) and some good way of knowing when they need to be updated. As far as models are concerned, we have neither. If I give you two models, both with crappy out-of-sample performance, can you seriously form a relative prior on efficacy? That’s what I thought. So the very intuitive premise of Bayesian updating does not hold here.

I did notice one thing that the authors ignored. It might be my confirmation bias, but the best-performing model was also completely ignored in further analysis: not surprisingly, the traditional Keynesian formalization. Go back to table 3, which shows how each model-specific policy rule performs in each model. You see a lot of explosive behavior or equilibrium indeterminacy (indicated by ∞). But see how well the Keynesian-specific policy rule does (column 1). It has the best worst-case across all of the considered models. Its robustness does not end there: consider how all the other model-specific rules do on the Keynesian model (row 1), where it again “wins” in terms of average loss or best worst case.

The parameters for policy determination across the models look almost random. There is no reason to believe there is something “optimal” about figures like 1.099 in an economy. Human systems don’t work that way. What we do know is that thirty years of microfoundations have perhaps enriched economic thinking, but have done little for policy confidence. Right now, it would be beyond ridiculous for the ECB to debate which model, or which set of models to run Bayesian averages over, should be used to decide interest rate policy.

Why do I say this? As late as 2011, the ECB was increasing rates when nominal income had flatlined for years and heroin addiction was afflicting the unemployed kids. The market monetarists told us austerity would be offset, while the ECB starved the continent of growth. We know that lower interest rates increase employment. We know that quantitative easing does at least a little good.

And yet, it does not look like the serious economists care. Instead we’re debating nonsense like the best way to average a bunch of absolutely unfounded models in a random context. This is intimately connected with the debate over the AS-AD model all over the blogosphere in recent weeks. This is why we should teach AS-AD and IS-LM as ends in themselves rather than as means to something else. AS-AD does not pretend to possess any predictive power, but it maps the thematic movements of an economy. To a policymaker, that is far more important. IS-LM tells you that when the equilibrium interest rate is below zero, markets cannot clear and fiscal policy will not increase yields. It also tells you that the only way for monetary policy to reach out of a liquidity trap is to “credibly promise to be irresponsible”. Do we have good microfoundations on the way inflationary expectations are formed?

It was a good, well-written, and readable paper. It also ignored its most interesting implicit assumptions, without which we cannot form a prior on its policy relevance.

Noah Smith argues (quite convincingly) that Japan can benefit from public default:

  1. Default doesn’t have to be costly in the long-run.
  2. It would clear the rot in Japan’s ‘ancien regime’.
  3. Creative destruction would ensue.

Point (1) has a lot of empirical support. An IMF paper I’ve linked to before from Eduardo Borensztein and Ugo Panizza suggests that “the economic costs are generally significant but short-lived, and sometimes do not operate through conventional channels.” Argentina is a great example of this point, and so are other South American countries like Paraguay and Uruguay. I was comfortable using this in defense of Greek default, where heroin addiction and prostitution are at an all-time high because of unemployment. I feel less empathy for the highly-employed Japanese, and hence my gut conservative suspicion of borrowers kicks in. But idiotic gut feelings should not guide policy; there are other reasons to tread carefully.

Successful defaulter countries can be described thusly:

  • Poor (low GDP)
  • Developing (high post-default growth or in strong convergence club)
  • Forced, without choice, to default (high cost of capital)

Incidentally, this describes countries like Pakistan, Russia, and the Dominican Republic; not Japan. As I discussed in this (surprisingly) hawkish post, the reason for default or inflation plays a key role in market reaction. Ken Rogoff rightly suggested that America should embrace “4-6% inflation” – which I’m all for – but said this is “the time when central banks should expend some credibility to take the edge off public and private debts”.

America’s debt is very stable and safe, and if the government, with Ben Bernanke’s support, inflated away its value only to bring it down, we would be telling the world that “we default on debt we can repay”. (Read the whole post if you want to jump on me, because I think there are many other fantastic reasons to inflate – but the market should know that.) The central bank’s credibility would be in international ruin if the market perceived Rogoffian default as the reason for inflation.

And Noah’s proposal for Japan is of the same ilk. Japan’s debt might become unsustainable, but its dirt-cheap cost of capital implies that the market does not feel default is imminent. This is in strong contrast to the feedback loops which compelled the successful defaulters to default. There would be a long-run reputational cost to this action and, as it turns out, Borensztein and Panizza find the same thing:

A different possibility is that policymakers postpone default to ensure that there is broad market consensus that the decision is unavoidable and not strategic. This would be in line with the model in Grossman and Van Huyck (1988) whereby “strategic” defaults are very costly in terms of reputation—and that is why they are never observed in practice—while “unavoidable” defaults carry limited reputation loss in the markets. Hence, choosing the lesser of the two evils, policymakers would postpone the inevitable default decision in order to avoid a higher reputational cost, even at a higher economic cost during the delay.

What if Japan defaults when markets think it can handle its debt service (i.e. when the cost of capital is very low)? The question here is between “austerity-induced stagnation” and default. This is a very tricky question for developed countries, and the results would be interesting. I don’t think Japan will ever be on the brink of stagnation, default, or hyperinflation but, if it is, then unlike for poor countries I am unconvinced that austerity cannot be an option.

Would fiscal consolidation hurt growth? Yes. But would the Japanese standard of living remain far above most of the world and other defaulters? Absolutely, especially if it’s implemented carefully with high taxes on the rich. The biggest risk would be an era of labor protectionism from international competition, but that’s a cost Japan (and the world, indeed) must bear. On this point, I’m also very doubtful of Noah’s optimistic Schumpeterian take on removing the “rot”. Is a default like a good, cold douche? Yes, if banks are able to extend credit to small businesses again. Why are we so confident that a defaulter Japan (especially one that isn’t believed by the market to be on the brink of default) will have that luxury? Especially when even the most accommodating IMF paper on default acknowledges deep, short-run costs. And we know that short-run brutality carries well into the long-run…

Again, I doubt Japan will be in this dire position – I’m more confident about Abenomics than Noah – but if it is, I cannot support default on the basis of “other countries did it well”. I am a strong proponent of general debt forgiveness, like any leftist, but Japan is no Pakistan or Argentina. It’s a rich country with a history of innovation and remarkable recovery; not a post-communist wreck or terrorist stronghold. We should act like it.

Fareed Zakaria has noted that the Federal government spends four dollars on those over 65 for every dollar on those under 18. This is the inevitable byproduct of an aging society with AARP as its robust lobbying apparatus. Incidentally, the situation will only get worse as our “demographic decline” continues.

Short of disenfranchising the old, I’ve seen no good solutions to the problem (actually, I’m hard-pressed to call this a “problem” but that’s another story). I have (what I think is) a neat idea that, at least in theory, would create an “intergenerationally efficient” political representation. It’s not politically possible, and requires a radically different concept of what democracy is.

Americans vote nationally every two years, for an average lifetime count of (78.5 – 18)/2 ≈ 30 elections. If we think about “voting” as a token, each eligible citizen is “granted” a token for each election, immediately after which it expires. But what if the government endowed voters with all 30 (untradable) tokens at the age of majority?

I can now temporally optimize my voting position based on my personal discount rate. Let’s back up a second and talk about what that means. Other things equal, I would like to spend a dollar today rather than tomorrow. I bet most of you would much rather have $100 in your checking account today than $500 in 50 years. This is one reason savings rates are so low.
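
As a concrete illustration of that preference (the 5% annual discount rate is my own assumption): at even a modest discount rate, the $500 is worth well under $100 today.

```python
def present_value(amount, years, annual_rate):
    """Value today of `amount` received `years` from now at a constant discount rate."""
    return amount / (1 + annual_rate) ** years

pv = present_value(500, years=50, annual_rate=0.05)
print(f"$500 in 50 years is worth about ${pv:.2f} today")   # roughly $43.61, less than $100
```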

Now consider an election where there are two, broad voting positions: (A) reform entitlements and increase spending on infrastructure, education, and climate or (B) maintain entitlements financed by cuts to education and a tax hike on the working young.

Today, voting power is proportional to the demographic distribution. That means as the population over 65 increases in size, so does its voting power. Let’s further stipulate that most voters are approximately rational. This means that “old” and “near old” voters will go for position B, “young” and “recently young” voters for A, and the “middle aged” will split somewhere in the middle.

However, if you had all your tokens upfront, unless everyone’s personal discount rate is 0, voting power is no longer proportional to the demographic distribution. That’s because we value utility today over tomorrow.

In this case, voters between 20 and 40 will have a lot more voice because they’re just a little bit more careless with their tokens. Before writing this post I thought hard about my “electoral discount rate”. Perhaps I’d vote twice for a candidate I really liked in a swingier state. However, this isn’t really a correct thought experiment. If everyone else is voting only once per election, the political positions fail to reflect the discount rate, and it doesn’t matter.

There’s a good chance our voting positions won’t change much. Yet, if this doesn’t create any changes it means the “we’re burdening our kids with future taxes” crowd doesn’t have much to stand on, because those same kids have shown a revealed preference towards not caring.

Problems with this model are both theoretical and practical. In practice, we ought to worry deeply about people who “vote and run”, that is, use all their votes in one election without planning to retain citizenship. Assume this away for argument’s sake. (Maybe you can require some sort of contract, at least enforcing tax positions for some number of years past each vote rendered. Or, even better, you pay taxes as far into the future as the votes you use.)

Does this alleviate intergenerational injustice? To some extent, yes. Let’s think about climate change. I’m going to make the rather ridiculous assumption that the “cost of climate change” is binary, and starts in 2100. Think about an election in 2075. Aside from the fact that it might already be too late (this is a scientific, and not theoretical, inconvenience), only a discount-rate-positive system can make the “right” choice. In 2075, the 65+ population (which expects to die by 2100) holds the majority and just can’t understand why we need a carbon tax.

Recent evidence suggests that we purposely answer survey questions like “did inflation increase under GWB” incorrectly to signal political allegiance. I think climate change is the same thing. However, if we’re paid, we make the right choice. If you’re young in 2075, the tax on this “cheap talk” is imminent climate change. None of us today face that, so we can never really know the underlying opinions.

However, under this system young people can coalesce and take extraordinary action to halt climate change, because they have far more “token wealth”: a) they’re younger and hence have used fewer tokens, and b) the elderly temporally optimized their votes, leaving them with disproportionately lower token savings for their retirement.

All said and done, I’m inclined to think that this will not change our political structure much, for better or worse. That means that we the young care about helping the old and they care, in principle, about education and infrastructure. Normally, high discount rates are seen as too deferential to the “here and now” over “then and there”. Interestingly, in this scenario, over the long-run, a high discount rate results in just the opposite.

In a 1963 paper Robert Mundell first argued that higher inflation had real effects. He challenged the classical dichotomy by suggesting that nominal interest rates would rise less than proportionally with inflation, because higher price levels would induce a fall in money demand, thereby increasing velocity and capital formation which, in turn, would bring real rates down. The most interesting part of my argument comes from a model designed by Eric Kam in 2000, which I’ll get to.

And as Japan emerges from a liquidity trap, the Mundell-Tobin effect (named, too, for James Tobin, who proposed a similar mechanism) should anchor our intellectual framework. I don’t see any of the best bloggers (I may be wrong, but see the self-deprecation there via Google) arguing along these lines, though Krugman offers a more sophisticated explanation of the same thing through his 1998 model; this can only strengthen our priors.

Paul Krugman, Noah Smith, Brad DeLong, and Nick Rowe have each replied to a confused suggestion from Richard Koo about monetary stimulus. Smith, as Krugman points out, was restricting his analysis to a purely nominal scope and notes that DeLong captures the risk better, so here’s DeLong:

But if Abenomics turns that medium-run from a Keynesian unemployment régime in which r < g to a classical full-employment régime in which r > g, Japan might suddenly switch to a fiscal-dominance high-inflation régime in which today’s real value of JGB is an unsustainable burden…

Moreover, to the extent Abenomics succeeds in boosting the economy’s risk tolerance, the wedge between the private and public real interest rates will fall. Thus Paul might be completely correct in his belief that Abenomics will lower the real interest rate–but which real interest rate? The real interest rate it lowers might be the private rate, and that could be accompanied by a collapse in spreads that would raise the JGB interest rate and make the debt unsustainable.

I’ll address the latter concern first. Let’s consider the premise “to the extent Abenomics succeeds in boosting the economy’s risk tolerance”. If the whole scare is about Japan’s ridiculously high debt burden, and we’re talking about the cost of servicing that debt, then as far as investors are concerned isn’t Japan’s solvency itself a “risk”? I don’t think it is – I certainly don’t see a sovereign default from Japan – but that’s the presumed premise DeLong sets out to answer. So with that clause, the question becomes self-defeating, as increased risk tolerance would convince investors to lend Japan more money. Note the implicit assumption I’m making here is that it’s possible for a sovereign currency issuer to default. I make it because there are many cases where restructuring (“default”) would be preferable to hyperinflation.

Even ignoring the above caveat, the fall in interest rate spreads can come from both private and public yields falling, with the former falling more rapidly. A lot of things “might be”, and do we have any reason to believe it “might be” that inflation does nothing to real public yields?

Well, as it turns out, we have good reason beyond Krugman’s model to believe that inflation raises only nominal, not real, yields:

  • Mundell-Tobin argue that the opportunity cost of holding money increases with inflation, resulting in capital creation and decreased real rates. This is as simple an explanation as any, but it would be rejected because it’s a “descriptive” (read: non-DSGE) model.
  • So comes along Eric Kam arguing that: The Mundell-Tobin effect, which describes the causality underlying monetary non-superneutrality, has previously been demonstrated only in descriptive, non-optimizing models (Begg 1980) or representative agent models based on unpalatable assumptions (Uzawa 1968). This paper provides a restatement of the Mundell-Tobin effect in an optimizing model where the rate of time preference is an increasing function of real financial assets. The critical outcome is that monetary superneutrality is not the inevitable result of optimizing agent models. Rather, it results from the assumption of exogenous time preference. Endogenous time preference generates monetary non-superneutrality, where the real interest rate is a decreasing function of monetary growth and can be targeted as a policy tool by the central monetary authority.

[Caution: be warned that we can probably create a DSGE for anything under the sun, but I will go through the caveats here as well.] Note that he’s not imposing any (further) remarkable constraints to prove his point, just relaxing the previous assumption that time preference is exogenous. An earlier paper (Uzawa, 1968) followed a similar procedure, but made the strong, questionable, and unintuitive assumption that the “rate of time preference is an increasing function of instantaneous utility and consumption”, which implies that savings are a positive function of wealth – contradicting the Mundell-Tobin logic.

Kam, rather, endogenizes time preference as a positive function of real financial assets (capital plus real money balances). He shows non-superneutrality with the more intuitive idea that savings are a negative function of wealth. (So anticipated inflation would result in higher steady-state levels of capital.)

Look, in the long run even Keynesians like Krugman believe in money neutrality. By then, however, inflation should have sufficiently eroded Japan’s debt burden. DeLong’s worry about superneutrality in the medium term, where debt levels are still elevated, seems unlikely even without purely Keynesian conditions. That is, no further assumptions other than an endogenous time preference are required to move from superneutrality to non-superneutrality. DSGEs are fishy creatures, but here’s why this confirms my prior vis-à-vis Abenomics:

  1. Let’s say superneutrality is certain under exogenous time preference (like Samuelson’s discounted utility).
  2. Non-superneutrality is possible under endogenous time preference. Personally, I find a god-set discount rate/time preference rather crazy. You can assign a Bayesian prior to both possibilities in this scenario. But note, the models which support neutrality make many more assumptions.

Now we need a Bayesian prior for (1) vs. (2). My p for (1) is low for the following reasons:

  • It just seems darned crazy.
  • Gary Becker and Casey Mulligan (1997) – ungated here – quite convincingly discuss how “wealth, mortality, addictions, uncertainty, and other variables affect the degree of time preference”.

I’d add a few other points to DeLong’s comment – “the real interest rate it lowers might be the private rate, and that could be accompanied by a collapse in spreads that would raise the JGB interest rate and make the debt unsustainable” – even if the idea of falling real public yields is not plausible. A fall in the private cost of capital should be associated with a significant increase in wages, capital incomes, and, at the least, profits. This by itself generates more revenue, and likely a greater portion of the earned income will fall into the higher tax brackets, suggesting a more sustainable debt. Of course my belief is that both private and public debt will be eroded quicker, but even if that’s too strong an assumption, there’s no reason to believe that falling spreads per se are a bad thing for government debt so long as the government maintains tax authority.

However, the Mundell-Tobin and similar effects derive from a one-time increase in anticipated inflation. It remains to be seen whether Japan will even achieve 2%, and that’s a problem of “too little” Abenomics. On the other hand, if Japan achieves 2% and then tries to erode even more debt by moving to 4%, it will lose credibility. Therefore, Japan should – as soon as possible – commit to either a 6% nominal growth target or a 4% inflation target.

This is preferable because it increases the oomph of the initial boost, but primarily because it extends the duration of the short run in which the monetary base is expanding and inflation expectations are rising. A longer short run minimizes DeLong’s tail risk of a debt-saddled long run, even if you reject all the above logic to the contrary.

DeLong concludes:

Do I think that these are worries that should keep Japan from undertaking Abenomics? I say: Clearly and definitely not. Do I think that these are things that we should worry about and keep a weather eye out as we watch for them? I say: Clearly and definitely yes. Do I think these are things that might actually happen? I say: maybe.

 I agree with all of it except the last word. I’d say, “doubtful”.

Paul Krugman has a post correctly bashing the nonsensical criticisms to unemployment insurance (UI):

Here’s what is true: there’s respectable research — e.g., here — suggesting that unemployment benefits make workers more choosy in the search process. It’s not that workers decide to live a life of ease on a fraction of their previous wage; it’s that they become more willing to take the risk of being unemployed for an extra week while looking for a better job.

 His analogy:

One way to think about this is to say that unemployment benefits may, perhaps, reduce the economy’s speed limit, if we think of speed as inversely related to unemployment. And this suggests an analogy. Imagine that you’re driving along a stretch of highway where the legal speed limit is 55 miles an hour. Unfortunately, however, you’re caught in a traffic jam, making an average of just 15 miles an hour. And the guy next to you says, “I blame those bureaucrats at the highway authority — if only they would raise the speed limit to 65, we’d be going 10 miles an hour faster.” Dumb, right?

His analogy is actually too easy on the anti-UI crowd. Here’s why, from the Massachusetts government:

Unemployment Insurance is a temporary income protection program for workers who have lost their jobs but are able to work, available for work and looking for work.

Receipt of UI benefits is contingent on one staying in the labor force. So when people tell you “UI increases unemployment” they may be right in a technical sense (and Krugman suggests why that’s wrong today). Even if so, the U3 measure has a numerator and a denominator, broadly:

  • Numerator: People in the labor force who a) don’t have a job but b) have actively looked for the past four weeks
  • Denominator: People in the labor force.

The biggest “increase” in unemployment from UI, then, comes through the “have actively looked for the past four weeks” requirement, by decreasing the number of discouraged workers. While demographic shifts are one cause of labor force exit, studies suggest they explain only about 50% of the phenomenon.
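
A toy arithmetic example of that channel, with made-up numbers: the same number of people lack jobs in both scenarios, but measured U3 is higher when UI’s search requirement keeps them counted in the labor force.

```python
def u3(unemployed_searching, employed):
    labor_force = unemployed_searching + employed
    return unemployed_searching / labor_force

employed = 900
jobless = 100            # people without jobs, identical in both scenarios

# Without UI: say 30 of the jobless give up searching and drop out of the labor force.
print(f"no UI:   U3 = {u3(jobless - 30, employed):.1%}")   # 70 / 970, about 7.2%
# With UI: benefits require active search, so all 100 stay in the labor force.
print(f"with UI: U3 = {u3(jobless, employed):.1%}")        # 100 / 1000 = 10.0%
```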

Therefore, I think Krugman’s gone too easy. Even to the extent UI increases measured unemployment, it’s a “good” thing: it increases labor market flexibility, which has huge supply-side dividends. I have a high prior that it decreases hysteresis, and chances are the only people who don’t are those who reject hysteresis outright (no comment there).

Most of us agree that worker protections are a good thing. Even “pragmatic libertarians” like Megan McArdle support unemployment insurance on “humanitarian” grounds. There are two ways we can help workers: either through the ridiculous French system of making it illegal to fire workers (which also makes employers reluctant to hire them) or through a “flexicure” system where the state provides generous unemployment insurance and reemployment credits.

The former shrinks labor supply; the latter increases it by creating a healthier labor market. So if I could modify Krugman’s analogy, I would go thusly:

In a recession, cars at the front start moving slowly, which makes the whole pack slower. The government can’t convince the first cars to go faster, so it decreases, literally, the friction of the road ahead by icing it. Now the first cars can’t control themselves and start going faster, allowing cars at the back to do the same. As they speed up, the heat generated increases, and the ice starts melting rather quickly.

There you have the last benefit of UI, too. It’s an automatic stabilizer. Politicians couldn’t screw it up even if they wanted to.

Tyler Cowen offers an optimistic theory about the longevity of the surveillance state:

Let’s say that everything is known about everybody, or can be known with some effort.  The people who have the most to lose are powerful people who have committed some wrongdoing, or who have done something which can be presented as wrongdoing, whether or not it is.  Derelicts with poor credit ratings should, in relative terms, flourish or at least hold steady at the margin.

It is not obvious that the President, Congress, and Supreme Court should welcome such an arrangement.  Nor should top business elites.  More power is given to the NSA, or to those who can access NSA and related sources, and how many interest groups favor that?

Therein lies a chance for reform.

I’m not as sanguine, and I don’t think Mancur Olson would have been either. The security state is a perfect, if subtle, example of Olson’s “dispersed costs, concentrated benefits” thesis. A classic application might be America’s Farm Bill (which costs each of us cents each year, and gives large agricultural interests many millions, providing every incentive to lobby).

Here’s a place to start for this thought experiment: how much would you pay to avoid government surveillance? This isn’t a realistic heuristic, at least to the extent that those willing to pay the most are also most likely to be the ones we want to target. But it’s sound enough to build a public-choice logic on. I wouldn’t pay more than $10 a year, and even that small change is a reflection of my distaste for the program rather than any rational calculation. Frankly, I just don’t care if all my information is read by a jobless bureaucrat at the NSA.

On the other hand, a whopping 98% of Booz Allen Hamilton’s revenue comes from the security state. Here’s the nonlinearity: the big “business interests” Cowen talks about perhaps have a Bayesian prior of p = 0.005 that government surveillance will affect their profits negatively – that is, the joint probability that information regarding the company surfaces, that it is detrimental, that the government would be willing to use it against them, and that it actually affects long-run profits. A long chain. (My point here isn’t restricted to a binary consideration of profits, but that makes the abstraction a little simpler.) And that’s an overestimate, if you ask me. I do believe businesses powerful enough can evade state scrutiny through a plethora of tools, including encrypted networks. Strong corporations also have good reason to believe that, were sensitive material to emerge, they could strong-arm the government into compliance.

But Booz’s prior that they will benefit from the military-intelligence-industrial complex (MIIC) is asymptotic to p = 1. A better way to put it is their prior that they will be destroyed without the MIIC is p = 1. And there’s a huge number of companies in the most politically connected corners of the country who will have this prior.

Therefore, for Cowen’s suggestion to ring true, we would need 200·E[cost of MIIC to big business] > E[benefit from MIIC]. That seems extremely improbable to me unless, as some have suggested, the MIIC directly hurts technology companies in foreign countries.
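
To see how lopsided that inequality is, here is a toy expected-value comparison; the p = 0.005 prior comes from above, while every dollar figure (and the 200-firm count used in the inequality) is a hypothetical stand-in.

```python
p_harm = 0.005                 # prior that surveillance ever dents a big firm's profits
harm_if_realized = 1_000       # hypothetical cost to that firm ($ millions)
p_contractor_benefit = 1.0     # a Booz-like contractor's prior that it needs the MIIC
contractor_revenue = 5_800     # hypothetical annual revenue at stake ($ millions)

expected_cost_per_firm = p_harm * harm_if_realized                        # 5
expected_benefit_contractor = p_contractor_benefit * contractor_revenue   # 5,800

print(f"E[cost] per big firm:        ${expected_cost_per_firm:,.0f}m")
print(f"200 firms' combined E[cost]: ${200 * expected_cost_per_firm:,.0f}m")  # 1,000
print(f"E[benefit] to a contractor:  ${expected_benefit_contractor:,.0f}m")
```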

Also remember, we’re discussing metadata aggregation here. This is far more relevant to understanding political trends; indeed, it’s what made the 2012 Obama campaign so technologically robust. It seems very unlikely that specific, detrimental messages – of, say, an affair – would surface beyond the intestinal gears of whatever mining algorithm the NSA uses.

This is, of course, so long as the NSA doesn’t target businessmen in particular. But even if lobbies can prevent it from doing that, until Americans decide to place a non-negligible monetary value on their privacy, I don’t see how we avoid a permanent security state.

By the way, we know that Americans don’t trust their government. A recent study showed, however, that people respond to surveys with “cheap talk”, as a means to display political affiliation. What if our indifference towards surveillance is a revealed preference that we do, indeed, trust our government?