
Monthly Archives: June 2013

Helicopter money has received a lot of praise, from left and right, as sound monetary policy. Some have gone as far as to say a universal basic income, or broad tax cuts, should be financed by printing money. I don’t like this as an argument for monetary policy for several reasons:

  • Financing tax cuts or a basic income is decidedly political, and blurs central bank independence.
  • It’s very difficult to granularly tune any such “drop” or broader financing plan. It must be either a discrete choice – a one-time expansion of the monetary base by emailing $1,000 to each household (stimulus) – or a coordinated policy decision.
  • Cash handouts that are big enough can have perverse consequences.

But there’s a much better device that can be finely tuned: the credit card. James Tobin remarked that “the linking of deposit money and commercial banking is an accident of history”. It is this “accident”, to say the least, that compelled the United States to waste billions feeding AIG bonuses. Banks would not be “systemically important” if their collapse did not cause deep ruin.

I propose that the Federal Reserve, just like Bank of America and Citigroup, issue each citizen of majority age a lifetime credit card. During boom times, the daily interest rate on this credit will be pegged to the inflation rate plus a default risk premium – users will prefer their normal card, which offers a 30-day window to pay before interest accrues.

However, if nominal GDP ever falls off path, the Fed directly lowers the rate until the FOMC’s forecast hits the target (also, board salaries are docked by the extent to which the forecast misses the target and the actual result misses the forecast). In this counterfactual, after Wall Street imploded, TARP would have been completely unnecessary, because the Fed would have been providing incredibly cheap credit directly to consumers to reflate the economy (otherwise they get no salary!)

This policy isn’t plagued by a “zero lower bound”, either. Instead of highly questionable quantitative easing, the Fed can choose to inflate its balance sheet by bringing direct credit rates below zero, to increase consumer spending. This has the following benefits:

  • QE works through “hot potato” and “wealth” effects, and the scale of monetary base expansion is wildly out of proportion to the resulting inflation and job growth figures. Mostly because a lot of QE ends up in interest-paying reserves, or because the Fed mangled expectations. Likely both.
  • Any and all expansions of the monetary base directly increase consumption or investment. There is ZERO debate among economists about whether this increases nominal GDP.
  • The card simply cannot be used to repay other debts (allowing that would encourage private creditors to artificially raise rates in expectation of a Fed cut, turning this into an indirect QE: more profit for banks).
  • It also can’t be used on the stock market; we don’t want the average Joe conducting a leveraged buyout with his card. It is a physical card, with the same limitations as any other: no fancy finance stuff with it.

Ben Bernanke would have seen this year’s job reports, thought to himself “wow, this is shit”, and decreased the interest rate on direct credit. He would also have the New York Times, Wall Street Journal, and ZeroHedge advertise this rate cut everywhere (because he can), and this would get consumers to go buy more stuff. If rates were -10% (and that’s consistent, for a period of time, with the kind of QE we have) who wouldn’t buy that new Mac Pro?

Note that if I buy something nice, I can’t just wait for the negative rates to cut my burden to zero. As the economy picks up – and it will, much more rapidly with this policy – rates again hit market levels. This will allow the Fed to wind down its balance sheet more naturally than QE does. Here’s why I’m confident this has to bring NGDP, inflation, and hence unemployment back on track: if it does not, we’ve basically found a forever free lunch. Because this cannot happen, there will be a time when credit rates rise, and that’s good.

That’s prong uno. As we’ve noticed, credit crunches after financial crises leave small businesses – which lack access to corporate bond markets – asphyxiated. The Federal Reserve will offer special corporate cards to entities earning less than whatever threshold the tax code uses to define a “small business”. (I’ll pass on actually reading it.) These will offer ultra-low-rate (0%, or maybe less) long-run credit for expansionary investment. Cities and states will be given special rates to finance infrastructure or education (the mayor and governor will get diamond-studded platinum cards with free Air America miles).

Feedback loops will reflate the economy. Not only does direct credit access now evade the rotten financial system altogether, but businesses will expect increased demand (because of consumer spending) that will decrease their fear of low profits, and hence convince them that expansion is necessary. Further, more people will be encouraged to open new businesses, and this will keep markets competitive in a recession.

(Okay, there’s a big problem I’m not talking about, which is default. I used to be against this stuff, but since this is the opposite of predatory lending, I’d note that the government has your Social Security number and far more information than normal lenders, and can withhold tax refunds or garnish income (and even threaten penalties) on default. A little default is okay, and even good.)

Finally, this plan can be augmented with its own Evans Rule (call it Rao’s Rule?): “So long as unemployment remains above 5% or inflation is under 3%, the Federal Reserve will decrease direct financing rates by k*(u-5)% a month”, where k is some multiplier and u is the unemployment rate.
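
To make the rule concrete, here is a minimal sketch of the monthly adjustment it implies; k, the unemployment rate, and the inflation figure below are all made-up illustrative values, not anything from the post.

    def raos_rule_cut(u, inflation, k=0.25, u_target=5.0, inflation_ceiling=3.0):
        """Monthly change (in percentage points) to the Fed's direct financing rate
        under Rao's Rule: ease by k*(u - 5) points per month so long as unemployment
        is above 5% or inflation is under 3%."""
        if u > u_target or inflation < inflation_ceiling:
            return -k * (u - u_target)   # negative value = a rate cut
        return 0.0

    # Illustrative only: with k = 0.25 and unemployment at 7.5%,
    # the direct credit rate falls by 0.625 percentage points this month.
    print(raos_rule_cut(u=7.5, inflation=1.5))   # -0.625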

This does a few things the Evans Rule does not. First, it provides a condition for easing, not tightening, which is what we need in a recession. It also achieves what I call “informational neutrality”: good news on the job market won’t scare investors into “TAPER TAPER!!!”, because any and all tightening of policy will be linked with a proportional decrease in unemployment, and hence an increase in aggregate demand. Also, this is not linked to the stock market in the way QE is. That’s a good thing, in case you were wondering.

Under this, the Fed can much more easily commit to be irresponsible (that is, tolerate above-trend inflation in the future), which is the only thing that can gain traction in a liquidity trap, as Paul Krugman famously puts it. All Bernanke needs to do is promise to keep credit card rates negative, or very low, until NGDP is back on its level path (not just its growth path). He needs to bully the FOMC into saying the same thing (and gag Richard Fisher) and there is no way markets won’t believe it.

Problem solved. Oh, and the best thing? We could have let the damn banks fail in 2007. Competitive finance, here we come! We even managed to slip in a bit of sneaky fiscal policy with credit-financed bridges and education. Might I even say this would retire the phrase “zero lower bound”? (The economy’s not picking up? Well, lower the rate to -500%, for god’s sake!) Or, if you want to be safe, just stick with Rao’s Rule.

P.S. A rather obvious omission is, of course, a credit limit. This would likely be determined by income. The standard refrain might be queasiness at the idea that rich people get access to more cheap (negative) credit, but just consider this compared with the dynamics of QE. Further, most of the rebalancing of the Fed’s balance sheet during the recovery and boom will come from the “rich”.

“There are rents. Look around.” would inaugurate my ideal essay. Since I learned about the economic definition of supernormal profit, I’ve been fascinated by the idea of rentiers. This informs my admiration for David Ricardo and Henry George, as well as my broad disdain towards aspects of Wall Street. I recently heard that Thomson Reuters sells its report early to elite traders; where Paul Krugman and Kevin Drum see a vindication of wasteful finance, I see the model for an extremely effective tax on rents. I want markets to work, and I see this as a robust financial transactions tax. I’m excited!

My previous post, arguing for the sale of government information as a tax on rents, has received some attention. I left the idea half-baked, and I want to clarify specifically why this would be socially beneficial. Roughly, I want firms to “buy” their rents. Think about a government-issued permit sold for $p that promised you $n in risk-adjusted profit. By definition, (n − p) is your rent. If the permit provides access to early information, an increased demand for the permit does two things: it increases p (because, other things equal, an upward shift in demand raises the market price) and decreases n (because, other things equal, if someone else has access to early information, your access is that much less valuable).
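
To see which way p and n move as traders pile in, here is a toy numeric sketch; the demand and dilution curves are entirely made up, and only the direction of the result matters.

    # Toy illustration (all curves invented): as more traders bid for permits,
    # the permit price p rises while the early-information profit n each holder
    # can extract falls, squeezing out the rent (n - p).
    def permit_price(buyers):
        return 1.0 + 0.9 * buyers          # hypothetical demand-driven price

    def early_info_profit(buyers):
        return 10.0 / (1 + buyers)         # hypothetical dilution of the early edge

    buyers = 0
    while early_info_profit(buyers + 1) > permit_price(buyers + 1):
        buyers += 1                        # entry continues while the next entrant still profits

    print(buyers, permit_price(buyers), early_info_profit(buyers))
    # Entry stops once p has caught up with n; the rent has been competed away
    # and accrues to the auctioneer (the government) instead.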

If, somehow, the demand for such permits were a positive feedback loop – that is, if increased demand induced a further increase in demand – then a competitive market would clear at a point where p = n, eliminating all rents from the information. However, there are two components of profit from a government-issued report:

  1. The value of the information itself – perhaps the implication of a payroll jobs report on the state of aggregate demand.
  2. The value of knowing the information before the market. We know that for the University of Michigan consumer sentiment report, this is at least in the millions.

It’s curious that people think (2) is more morally questionable than (1). Actually, (2) is just firms eating each other’s profits in a way that doesn’t really hurt or help society. (If J.P. Morgan pays off Barack Obama to learn who the next Fed chair will be, this will just increase its profits relative to Goldman Sachs.) On the other hand, (1) allows highly-scaled trading firms – with a cutting-edge, proprietary algorithmic and technological advantage – to earn a profit on publicly financed information.

Think about the jobs report, which costs money, and scares some Republicans about privacy and individual rights. The value of its publication is quite minuscule to the average American. But it is highly valuable to Wall Street which will then bet on things like overall market health or chances of a QE taper. It’s a classic example of Mancur Olson’s “dispersed costs and concentrated benefits”.

What if we could design a mechanism that uses (2) to capture the rents from (1)? Enter the following information auction:

  1. The SEC will run two auctions on the open market. One for permits granting the right to early information, and the other for the extent by which each permit-holder will be granted said early information.
  2. Call the second auction the “market for milliseconds”. The SEC secretly sets the maximum number of “milliseconds” by which someone can access information before the public release. Traders then compete to buy the number of milliseconds that maximizes expected profit.
  3. Then the SEC auctions a limited number of permits, for which there is a captive demand as milliseconds are useless without at least one permit. (Rents from the rentiers).
  4. The difference between this and a classic auction, as (somewhat sarcastically) supposed by Neil Irwin, is the ability to buy the precise extent of information expedition. This is critical for automatic rent elimination.

While the dual auction may seem like an unnecessary complexity, it both increases revenues and is required to bind profit from the information itself to profit from early information, which is in some sense rivalrous, though not technically so (the value of my time premium is inversely proportional to the extent of yours). It “discretizes” the market into “buy” and “don’t buy” rather than “buy a little”: if you bought just one millisecond, you would then realize it is not worth it to buy a whole permit. But if you buy a millisecond at the margin, it is, ceteris paribus, more worthwhile for me to follow suit. This is the necessary positive feedback loop. Firms will now keep buying milliseconds until the profit from the information itself is gone. And between “buy” and “don’t buy”, the former will be the dominant strategy.

This is a prisoner’s dilemma for traders. Ideally, they would all collude to ignore the auction altogether, waiting for the public release of information. But if everyone else colludes, it’s extremely lucrative for me to buy many milliseconds and just one permit. They will all “defect”, creating a competitive market for early information.
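
Here is a toy payoff table for that dilemma, with invented numbers, just to show why “buy” dominates even though mutual abstention would preserve the rent.

    # Toy payoffs (all values made up) for one trader deciding whether to buy
    # milliseconds plus a permit, given what the rest of the market does.
    # Values are trading profits net of auction payments.
    payoffs = {
        # (my_choice, others_choice): my_profit
        ("abstain", "abstain"): 5,   # everyone waits for the public release; the rent survives
        ("buy",     "abstain"): 12,  # I alone trade early on a cheap permit: very lucrative
        ("abstain", "buy"):     0,   # others front-run me; my late trade earns nothing
        ("buy",     "buy"):     1,   # everyone pays up; the auction captures most of the rent
    }

    for others in ("abstain", "buy"):
        best = max(("abstain", "buy"), key=lambda me: payoffs[(me, others)])
        print(f"if others {others}: best response is {best}")
    # "buy" is the best response either way, so all traders defect and
    # the auctioneer, not the traders, ends up with the information rent.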

The government can do this for the release of every bit of market-important information:

  • Job numbers
  • Barack Obama’s nomination for Treasury Secretary
  • Who will replace Ben Bernanke?
  • Would Obama have signed Obamacare? Dodd-Frank?

I don’t think the market should earn profits from government reports or policy decisions. You can think of this as an automatically-tuned financial transactions tax. This would ideally be very low during most times, but whenever the government releases information, all trades induced by said release should be taxed at a firm’s Bayesian prior that it is accurate. (From the government, this would naturally be certain). The dual auction accomplishes precisely this.

The maximum time period by which the information can be expedited is the total number of milliseconds on the open market. But it will always be less than that unless one person purchases the whole market. Even in that case, the government can still capture all rents: in the second market, for the permit to use that information, the SEC should just keep bidding in its own auction, driving the market price up to the point where the buyer is indifferent. (The structure of an auction makes it quite possible to determine the reservation price.) Of course, this is highly unlikely to happen, but it is a good illustration of why two markets are required.

It is natural that the government should command the quantity, and not the price, of information. Given a certain price, the government cannot limit the maximum time by which information can be expedited, which is not a desirable uncertainty. Further, auctions in this case better lend themselves to market-controlled rent elimination.

I hope this clarifies the process, and I’m eager to see where else a similar methodology can be used to cancel rents.

Several bloggers over the past week have commented on lethargic labor market movements as a cause of economic decline. Adam Ozimek here argues that the docile job market is a possible cause of stagnation. More acutely, Evan Soltas suggests that slow churning increases economic frictions and deepens the long-term unemployment crisis. Most interestingly, Ryan Decker notes that “churning is costly [and] if churning is declining for good reasons, we should applaud it. But that may not be the case.”

What I see lacking across this discussion – in the long view – is a consideration of how new structures and the Internet have permanently altered what “economic dynamism” is, and how it can be measured. We might call the switch to this new world a “reset”, as per Tyler Cowen. As I discuss here, the most notable economic development seems to be an increase in the stock (as opposed to bond) component of human capital.

To understand the potential impact of the Internet, start with this seemingly irrelevant 2007 paper from Hoyt Bleakley and Jeffrey Lin, “Thick Market Effects and Churning in the Labor Market: Evidence from U.S. Cities”. Bleakley and Lin suggest that thick labor markets, by encouraging deeper and broader churning, are causally linked with agglomeration effects and hence wealth:

These results provide evidence in favor of increasing-returns-to-scale matching in labor markets. Results from a back-of-the-envelope calibration suggest that this mechanism has an important role in raising both wages and returns to experience in denser areas. 

I don’t see many bloggers considering the Internet as an urban agglomeration. In the old days of manufacturing, agglomeration benefits derived primarily from physical proximity. However, the idea of urban scaling is theoretically captured (see the Santa Fe Institute on cities, or Geoffrey West on the surprising math thereof) by the observation that as the number of inhabitants doubles, the total number of interactions more than doubles. This generates superlinear scaling in opportunities for innovation, creativity, and dissemination of information (as well as violence and pollution).
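
A rough numeric illustration of that superlinearity; the ~1.15 exponent is the figure commonly cited in the urban scaling literature and is used here purely for illustration.

    # Pairwise interactions grow roughly with N*(N-1)/2, so doubling a city's
    # population more than doubles the number of possible connections; the
    # urban-scaling literature reports socioeconomic outputs scaling like N**beta
    # with beta around 1.15 (value assumed here only as an illustration).
    def possible_interactions(n):
        return n * (n - 1) // 2

    def scaled_output(n, beta=1.15):
        return n ** beta

    for n in (1_000_000, 2_000_000):
        print(n, possible_interactions(n), round(scaled_output(n)))
    # Doubling N roughly quadruples the possible interactions and raises the
    # scaled "output" by a factor of about 2**1.15 ≈ 2.2, not 2.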

But we don’t need physical proximity for creative collaboration anymore. Twitter murders the intellectual distance between two parties, allowing for rich propagation of information, as well as creative speculation thereof. I see job offerings for remote positions advertised on my timeline, and I also see the violence and incivility one would expect of a large physical gathering.

Cities allowed grand old factories to capitalize on economies of scale. But as the factory share of our economy shrinks, the Internet will become the driving force of most economic agglomeration. (I’m not as confident that the Internet will ever replace San Francisco or Manhattan as the hubs of social agglomeration).

Take etsy.com. I live in India, and know someone here who weaves traditional handicrafts and sells them at a good markup to an American market. The distributed apparatus handles shipping, handling, and any similar frictions. Outlets like this are making labor as mobile as capital. Indeed, part of Tyler Cowen’s “reset” world is deep factor price equalization in some industries. (A plumber’s service is not mobile, for example.)

But as the labor market moves online and jobs are sourced through Craigslist, the concept of quitting, firing, and hiring is shaken completely. Ozimek’s measure of dynamism through labor market churn will begin to capture an increasingly smaller aspect of the American economy. If you read the Harvard Business Review, you’re probably familiar with “supertemps” – high-end professionals who are, by definition, in constant churn. Networked labor markets (on steroids via the Internet) make this a reality.

That’s the top 1%. But I believe cheap forms of Craigslist-style service work are on the rise among the poor, too. Incorporated self-employment is on the decline, but what of forced entrepreneurship and dogsitting made possible by the Internet? What of the sharing economy?

There are both cyclical and structural policy implications:

  • Flexible labor markets suffused with social insurance should take the form of strong unemployment benefits and reemployment credits, the latter scaled by length of unemployment.
  • Minimum wage restrictions will become increasingly irrelevant. A basic minimum income, on the other hand, less so. (This is another post, but I don’t think America is yet rich enough for the UBI we need, but will be soon).
  • Increasing frictional unemployment encouraged by a discount version of unemployment benefits offered to “quitters”.
  • The emergence of government-handled job auction markets as a means of depression management.

The Internet is a city that’s growing faster than New York or Mumbai ever did or will. This is the reset. For not much longer will quits+separations be a reliable statistic on job market health. Indeed, the Beveridge curve shows a weak economy in need of rejuvenation, but this is the dreamtime for the old labor market. Something new is coming. I think for the better.

Update: Just realized Neil Irwin considers a somewhat-similar idea here, though I think they’re ultimately quite different in design; I discuss it more at the end.

Earlier this week, we learned that the University of Michigan sells its popular consumer sentiment figures to Thomson Reuters, which delivers elite customers the information 5 minutes before it is made public, and the super-elite 2 seconds earlier still. As you might imagine, the liberal blogosphere is up in arms. “Insider trading” is trending, at least on my timeline.

But this is a blessing and opportunity in disguise. So long as traders are just rational and not super-rational (also known as renormalized rationality; no one thinks they are), the prisoner’s dilemma will trap them in a brilliant bind. The United States government – through the Bureau of Labor Statistics, the Federal Reserve, and a deep network of public universities – frequently and freely releases rich and extremely valuable information. Take the payroll report, released the first Friday of every month and available to all free of charge. As a rational human being, it is almost worthless to me. For a trader on Wall Street, however, the report conveys crucial information on aggregate demand and shapes expectations of Federal Reserve action.

This asymmetrical response generates significant rent, in the form of incredible consumer surplus to the financial services industry, especially high-frequency traders. How much would you pay for the unemployment figures? I’d pay no more than $5 a month, and that’s out of personal interest. For a trader, on the other hand, the value is far greater – and more so the earlier he receives it relative to his fellow traders.

One ought to wonder why the public purse is used for information gathering that creates surplus for a small segment of society. The government stands to make a deep profit, which can finance progressive redistribution and increase both private and social welfare. But it requires abandoning our gut aversion towards “financialization” or “insider trading” as if it were something yucky.

Rather, I suggest a process that will work thusly:

  1. Some practical time period, t, before an important government report is released, an electronic auction house opens. The SEC (secretly) decides how many “early” permits to auction so as to maximize revenue.
  2. The SEC further runs an auction selling “milliseconds”. You can buy as many “milliseconds” as you want, from the regulated auction market. The SEC does not disclose how many such segments are being auctioned.
  3. Key: The “milliseconds” auction starts before the primary “early permit” auction.

So, the secondary auction starts and a bunch of traders buy their fancy little milliseconds. Mechanism design tactics must be employed to prevent collusion, and the SEC will hire Al Roth to figure something out. A millisecond is rivalrous (not technically, but in principle): that is, if I know information one second earlier, that knowledge becomes less valuable if you have it as well. But no one knows how many total milliseconds there are, and hence each trader will buy what they expect will maximize profit.

Then the primary auction for “early” permits starts, without which milliseconds are useless. Because many traders have purchased milliseconds, the government has created a captive demand for “early” permits, increasing demand and revenue thereof (and also decreasing rent).

Traders cannot opt to wait for the public information arrival, because they know their competitors will just buy it up front. It would be most profitable if they all just waited for the public announcement, but since they all stand to gain hugely from defecting, they will all defect. Therefore, their rents are lost because of their own rationality.

I suggest all important economic reports be auctioned this way, like:

  • FOMC minutes,
  • The Beige Book,
  • News of Osama Bin Laden’s death
  • Income reports,
  • Anything and everything markets will like, including,
  • Election results.

Just realized Neil Irwin suggests a similar idea, but I ultimately feel they’re quite different. Mine is designed to snare rentiers into a prisoner’s dilemma that creates a captive demand for information, rather than a nice auction per se. Irwin ultimately thinks this violates the responsibility of a government; but I think all this data collection is useless for the whopping 99.9% of us who don’t look at trading terminals all day. And yet our tax dollars finance this boring nonsense, creating rents for a small minority. I like Jeremy Bentham a lot, and he’s always inspired my tax philosophy. I say, when in need of revenues, just create artificially new property rights and auction them to the highest bidder.

P.S. Some have responded that traders do indeed collude, as with Libor. That is a failure of SEC criminal (= jail) enforcement. Anyway, mechanisms like this, where defecting is so profitable, are unlikely to create a collusive equilibrium. Trust between traders would need to be too significant. If demand for permits crashed, someone would buy many for a pittance to make a huge profit. Collusion is not stable.

But say they all did collude not to buy until the public disclosure. Well, that’s not too different from today…

Matt Yglesias reminds us that “an awful lot of capital [income] today is actually rent”. It’s a point he’s been making for a while, but it’s unfortunately treated as a foregone conclusion rather than an engineered reality. Here he is:

Land is just scarce because it’s scarce, and as returns to Manhattan land ownership increase the size of the island doesn’t expand. Intellectual property is deliberate government-created scarcity out of concern that were it not possible for Bill Gates to become so wealthy nobody would write word processing programs (don’t tell these guys).

I’m of a mixed mind about patents, but before we consider Yglesias’ point, we need to define our terms, specifically “property”. The real classical liberals go for something called the “labor theory of property”, proposed by John Locke in his Second Treatise of Government, which supposes that property comes about by “the exertion of labor upon natural resources”. I’m not of this view, but it offers rich insight into how we consider intellectual property, which I’ll get to.

The competing, and dominant, theory comes from Jeremy Bentham, who held that property is not prior to government:

Property and law are born together, and die together. Before laws were made there was no property; take away laws, and property ceases.

Furthermore, in this view, property is not absolute. My favorite example is a la carte vs. buffet at a given restaurant. In the former, you own the right to eat your meal, but also to give it to your friend or take it home. In the latter, the restaurant has only given you the right to eat the food; it reserves the right of all further distribution. Or take your own home. (In Texas) you have the right to shoot trespassers thereupon, but may not inject cocaine within. The law has not ceded you that right.

Now, the idea of “rent” is closely connected with property rights. As far back as the early 19th century, David Ricardo noted that rents – loosely, “unfair income” – come from property misallocation:

It was from this difference in costs [between productive and unproductive land] that rent springs. For if the demand is high enough to warrant tilling the soil on the less productive farm, it will certainly be profitable to raise grain on the more productive farm. The greater the difference, the greater the rent.

Therefore, I think Yglesias’ point that intellectual property is “deliberate government engineered scarcity” is a truism at best. Property exists because of government, and to the extent that any property is scarce, it is “government engineered”. You may not like the nuts and bolts of the broken patent system, but the idea itself is not so alien.

Ultimately, I agree that intellectual property earns a fancy rent, but it’s not nearly so simple. I think about “intellectual property” in a very similar way to physical land. That is, when it comes to physical capital, I believe someone truly creates something. When it comes to intellectual property, I think all the world’s intellectual pursuits already exist and only remain to be discovered. And just as the price of land falls as supply increases, so too does that of a broadly defined “intellectual property”. Of course, most of us don’t “buy” this property, but the “IP share” per good, if you will, falls.

However, this isn’t always the case. Just like some land is fertile or valuable, some intellectual landscapes are more productive and useful. In the intellectual world, until the industrial revolution, we were walking on barren deserts of no value. The first industrial revolution was like finding soil, the second like finding the rich riverbanks on the ancient Nile. The technology revolution was like finding Alaska, Qatar, Norway, and Texas altogether.

In the meantime, we’ve found barren land and cheap soil in such abundance that we ignore formal property in total and revert to communal ownership thereof. Books on Project Gutenberg and the quadratic formula fall under this purview. But the new intellectual land is so fertile and rich that Ricardo’s principle of rent is perfectly applicable; and because it is owned by so few, rent-derived inequality is of critical importance.

But there is one big difference between land and intellectual property. What are your priors that – by and large – we have discovered all the minerals and inhabitable land in the world? Until intergalactic travel becomes a real possibility, my priors are high. Therefore, steep land value taxes are a brilliant way of financing redistribution without distortion.

And what then is your prior that we have discovered all the profitable intellectual land god created? Perhaps it is similarly high. Oh how wrong you will be. See the brilliant Lord Kelvin at the turn of the last century:

The future truths of physical science are to be looked for in the sixth place of decimals.

Or Albert Michelson, famous for disproving luminiferous ether:

The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.

This is a question of philosophy, but more a question of defining one’s terms. If you believe men create intellectual designs, then we can treat such property in a similar vein to physical capital. However, if you believe in an infinite (or unobservable) landscape of profit-delivering discoveries, the only way to incentivize explorers is the way of a bygone era: gold and silver.

There are two ways to achieve this. Many support crippling the patent system and supporting researchers and entrepreneurs with government tax credits instead. But relative to intellectual property, this suffers from adverse selection arising from asymmetric information. Given a set of researchers and a central funding authority, each researcher has a hidden prior on how good he believes his work will be. He will do his best to convince the government agency that this prior is high. The government has some test T to screen his application, but the test trades precision for tractability (bureaucrats are limited in time, brains, and money).

On the other hand, a patent is obtained – broadly – after the researcher/entrepreneur discovers the idea for his device. Only those with truly (and not demonstrably) high priors will actually take the capital risk in pursuing this idea. In similar fashion, the kings of yore did not always pay explorers to go on grand voyages, but promised them ownership of wealth thereupon to accommodate for informational asymmetries. 

I am no fan of inequality, and hence I support socially-owned patents. The government should finance large-scale research projects, and all patents thereof should be owned by American society itself. For this to work in practice, there needs to be a strong and credible independence between the patent authorities and the government research apparatus. Private researchers cannot believe that the government will prefer itself in competing applications. At the end of each year, the government must auction each patent on an electronic marketplace for a five-year term. This creates a rich source of revenue for social redistribution, but also ensures that intellectual property finds itself in the hands of the firm that will use it most productively. Every five years, the patent will be re-auctioned, until its value drops below a certain inflation-indexed threshold and it falls into communal ownership.

The implicit assumption here is the labor theory of property. If intellectual ideas already exist, property is allocated to whoever cultivates them: like the scientist who discovers dry ice, or the programmer who creates Microsoft Office. It strikes me that a libertarian should be most accommodating of this argument.

And rents earned from intellectual property are ultimately spent on natural resources (land, oil, beachfront property, and fancy restaurants in Manhattan). If we tax those, we will be taxing the extractive uses of intellectual capital. That seems like the smarter way.

I just read a very interesting new paper (via Mark Thoma) from the Center for Financial Studies at Goethe University, titled “Complexity and monetary policy”. The paper probably filled a “critical gap” in someone’s knowledge toolbox, but failed to consider certain “meta-level” deficiencies in methodology. Furthermore, certain implicit assumptions with regard to modeling philosophy were ignored. The authors certainly acknowledged limitations to the DSGE mindset, but did not consider the rich and interesting consequences thereof. I will try to do that, but first, context.

A summary

The authors, Athanasios Orphanides and Volker Wieland, set out to test a general policy rule against specific designs for 11 models. The models examined fall under four categories, broadly:

  1. Traditional Keynesian (as formalized by Dieppe, Kuester, and McAdam in 2005)
  2. New Keynesian (less rigorous without household budget optimization)
  3. New Keynesian (more rigorous monetary business cycle models)
  4. DSGEs built post-crisis

So mostly DSGEs, and DSGE-lites.

The general monetary policy rule considered is stated as,

i_t = ρi_t−1 + α(p_t+h − p_t+h−4) + βy_t+h + β'(y_t+h − y_t+h−4)

where i is the short-term nominal interest rate and ρ is a smoothing parameter thereof. p is the log of the price level at time t, so p_t+h − p_t+h−4 captures the (continuously compounded) four-quarter inflation rate, and α is the policy sensitivity to inflation. y is the deviation of output from its flexible-wage level, so β represents the policy sensitivity to the output gap and β' the sensitivity to its growth rate. h is the forecast horizon in consideration (limited to multiples of 2).
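
To make the notation concrete, here is a minimal sketch of the generalized rule; the series and coefficients below are placeholders I made up, not the paper’s estimates.

    # Sketch of the generalized policy rule; all data and coefficients are illustrative.
    def policy_rate(i_prev, p, y, t, h, rho, alpha, beta, beta_prime):
        """i_t = rho*i_{t-1} + alpha*(p_{t+h} - p_{t+h-4})
                 + beta*y_{t+h} + beta_prime*(y_{t+h} - y_{t+h-4})"""
        inflation = p[t + h] - p[t + h - 4]       # four-quarter log price change
        gap_growth = y[t + h] - y[t + h - 4]      # change in the output gap
        return rho * i_prev + alpha * inflation + beta * y[t + h] + beta_prime * gap_growth

    # Placeholder quarterly series: log price level and output gap.
    p = [0.00, 0.005, 0.010, 0.016, 0.021, 0.027, 0.033, 0.040]
    y = [-2.0, -1.8, -1.5, -1.2, -1.0, -0.7, -0.5, -0.3]
    print(policy_rate(i_prev=0.5, p=p, y=y, t=4, h=0,
                      rho=0.9, alpha=0.5, beta=0.25, beta_prime=0.25))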

The model-specific optimal parameters are considered against a set of benchmark rules:

  • The well-known Taylor rule (where policy responds only to current inflation and the output gap, i.e. ρ = β' = 0 and h = 0)
  • A simple differences rule (ρ = 1, α = β’ = 0.5, β = 0)
  • Gerdesmeier and Roffia (GR: ρ = α = 0.66, β = 0.1, β' = h = 0)

The “fitness” of each policy rule for a given DSGE is measured by a loss function defined as,

L_m = Var(π) + Var(y) + Var(∆i)

or the weighted sum of the unconditional variances of the inflation deviation from target, the output gap, and the change in the interest rate. The parameters that minimize L for each model, along with how that loss compares against the standard policy rules, are noted below:

[Table: model-specific optimal policy parameters and the associated losses, compared against the benchmark rules]

However, as the authors note, the best parametric set for one model is far from best in another, sometimes producing explosive and erratic behavior:

[Table 3: performance of each model-specific rule across all the models, with explosive or indeterminate cases marked ∞]

To “overcome” this obstacle, Orphanides and Wieland use Bayesian model averaging, starting with flat priors, to minimize L over all models. That is, they find the parameter set that minimizes

(1/M) Σ_{m=1…M} L_m, where M is the total number of models.
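
In code, the flat-prior average is just the mean of the model-specific losses; a minimal sketch, with invented losses standing in for the paper’s table:

    # Flat-prior "Bayesian" model averaging: with equal weight 1/M on each of the
    # M models, the objective is simply the mean loss. Numbers are invented placeholders.
    losses_by_model = {
        "traditional_keynesian": 2.1,
        "new_keynesian_small":   1.4,
        "new_keynesian_mbc":     1.9,
        "post_crisis_dsge":      2.6,
    }

    average_loss = sum(losses_by_model.values()) / len(losses_by_model)
    print(average_loss)  # the quantity a candidate (rho, alpha, beta, beta') would minimize

A full version would recompute each L_m for every candidate parameter vector and search over parameters; the point here is only that “flat priors” reduces the objective to a simple mean.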

Under this procedure, the optimal average policy is:

i_t = 0.96i_t−1 + 0.30π_t + 0.19y_t + 0.31(y_t − y_t−4)

Indeed, as we would expect, it performs fairly well measured against the model-specific policies, exceeding the optimal L by more than 50% in no more than two of the models.

The authors then similarly derive optimal averages within subsets of models, and perform out-of-sample tests thereof. They further consider the effects of output gap mismeasurement on L. The paper describes this in further detail, and it is – in my opinion – irrelevant to the ultimate conclusion of the paper: that the simple first-differences rule, giving equal weight to inflation and output gap growth, is fairly robust as long as it is targeted on outcomes rather than forecasts.

A critique

More than anything, this paper silently reveals the limitations of model-based policy decisions in the first place. Here’s the silent-but-deadly assertion in the paper:

The robustness exhibited by the model-averaging rule is in a sense, in sample. It performs robustly across the sample of 11 models that is included in the average loss that the rule minimizes. An open question, however, is how well such a procedure for deriving robust policies performs out of sample. For example, what if only a subset of models is used in averaging? How robust is such a rule in models not considered in the average loss?

The operative “what if” propels the authors to test subsets within their arsenal of just eleven models. They never even mention that their total set is just a minuscule part of an infinite set S of all possible models. Of course, a whopping majority of this infinite set will be junk models with idiotic assumptions – Calvo pricing, or perfectly rational utility monsters, or intertemporal minimization of welfare – which aggregate into nonsense.

Those uncomfortable with predictive modeling, such as myself, may reject the notion of such a set altogether; however, the implicit assumption of this paper (and of all modelers in general) is that there is some near-optimal model M that perfectly captures all economic dynamics. To the extent that none of the models in the considered set C meets this criterion (hint: they don’t), S must exist and C ipso facto is a subset thereof.

The next unconsidered assumption, then, is that C is a representative sample of S. Think about it as if each model occupies a point in an n-dimensional space; for the average to mean anything, C should be a random selection from S. But C is actually just a corner of S, for three reasons:

  • They all by-and-large assume perfect rationality, long-run neutrality, immutable preferences, and sticky wages.
  • Economics as a discipline is path dependent. That is, each model builds on the last. Therefore, there may be an unobservable dynamic that has to exist for a near-ideal model, which all designed ones miss.
  • S exists independent of mathematical constraints. That is, since all considered models are by definition tractable, it may be that they all miss certain aspects necessary for an optimal model.

But if the eleven considered models are just a corner of all possible models, the Bayesian average means nothing. Moreover, I think it’s fundamentally wrong to calculate this average based on equal priors for each model. There are four classes of models concerned, within which many of the assumptions and modes of aggregation are very similar. Therefore, to the extent there is some correlation within subsets (and the authors go on to show that there is), the traditional Keynesian model is unfairly underweighted, because it is the only one in its class. There are many more than 11 formalized models: what if we used 5 traditional models? What if we used 6? What if one of them was crappy? This chain of thought illustrates the fundamental flaw with “Bayesian” model averaging.

And by the way, Bayesian thinking requires that we have some good way of forming priors (a heuristic, say) and some good way of knowing when they need to be updated. As far as models are concerned, we have neither. If I give you two models, both with crappy out-of-sample performance, can you seriously form a relative prior on efficacy? That’s what I thought. So the very intuitive premise of Bayesian updating is incorrect.

I did notice one thing the authors ignored. It might be my confirmation bias, but the best-performing model was also the one completely ignored in further analysis: not surprisingly, the traditional Keynesian formalization. Go back to table 3, which shows how each model-specific policy rule performs across the models. You see a lot of explosive behavior or equilibrium indeterminacy (indicated by ∞). But see how well the Keynesian-specific policy rule does (column 1): it has the best worst case of all the considered rules. Its robustness does not end there; consider how all the other models’ best-case rules do on the Keynesian model (row 1), where it again “wins” in terms of average loss or best worst case.

The parameters for policy determination across the models look almost random. There is no reason to believe there is something “optimal” about figures like 1.099 in an economy. Human systems don’t work that way. What we do know is that thirty years of microfoundations have perhaps enriched economic thinking and refinement, but not at all policy confidence. Right now, it would be beyond ridiculous for the ECB to debate which model – or which set of models over which to conduct Bayesian averages – should be used to decide interest rate policy.

Why do I say this? As late as 2011, the ECB was increasing rates when nominal income had flatlined for years and heroin addiction was afflicting the unemployed kids. The market monetarists told us austerity would be offset, while the ECB starved the continent of growth. We know that lower interest rates increase employment. We know that quantitative easing does at least a little good.

And yet, it does not look like the serious economists care. Instead we’re debating nonsense like the best way to average a bunch of absolutely unfounded models in a random context. This is intimately connected with the debate over the AS-AD model all over the blogosphere in recent weeks. This is why we teach AS-AD and IS-LM as ends in themselves rather than as means to something else. AS-AD does not pretend to possess any predictive power, but it maps the thematic movements of an economy. To a policymaker, that is far more important. IS-LM tells you that when the equilibrium interest rate is below zero, markets cannot clear and fiscal policy will not increase yields. It also tells you that the only way for monetary policy to reach out of a liquidity trap is to “credibly commit to stay irresponsible”. Do we have good microfoundations on the way inflationary expectations are formed?

It was a good, well-written, and readable paper. But it ignored its most interesting implicit assumptions, without which we cannot ascribe a prior to its policy relevance.

Noah Smith argues (quite convincingly) that Japan can benefit from public default:

  1. Default doesn’t have to be costly in the long-run.
  2. It would clear the rot in Japan’s ‘ancien regime’.
  3. Creative destruction would ensue.

Point (1) has a lot of empirical support. An IMF paper I’ve linked to before, from Eduardo Borensztein and Ugo Panizza, suggests that “the economic costs are generally significant but short-lived, and sometimes do not operate through conventional channels.” Argentina is a great example of this point, and so are other South American countries like Paraguay and Uruguay. I was comfortable using this in defense of Greek default, where heroin addiction and prostitution are at an all-time high because of unemployment. I feel less empathy for the highly-employed Japanese, and hence my gut conservative suspicion of borrowers kicks in. But idiotic gut feelings should not guide policy; there are other reasons to tread carefully.

Successful defaulter countries can be described thusly:

  • Poor (low GDP)
  • Developing (high post-default growth or in strong convergence club)
  • Forced, without choice, to default (high cost of capital)

Incidentally, this describes countries like Pakistan, Russia, and the Dominican Republic; not Japan. As I discussed in this (surprisingly) hawkish post, the reason for default or inflation plays a key role in market reaction. Ken Rogoff rightly suggested that America should embrace “4-6% inflation” – which I’m all for – but said this is “the time when central banks should expend some credibility to take the edge off public and private debts”.

America’s debt is very stable and safe, and if the government, with Ben Bernanke’s support, inflated away its value just to bring the debt down, we would tell the world that “we default on debt we can repay”. (Read the whole post if you want to jump on me, because I think there are many other fantastic reasons to inflate – but the market should know that.) The central bank’s credibility would be in international ruin if the market perceived Rogoffian default as the reason for the inflation.

And Noah’s proposal for Japan is of the same ilk. Japan’s debt might become unsustainable, but its dirt-cheap cost of capital implies that the market does not feel default is imminent. This is in strong contrast to the feedback loops which compelled the successful defaulters to act. There would be a long-run reputational cost to this action and, as it turns out, Borensztein and Panizza find the same thing:

A different possibility is that policymakers postpone default to ensure that there is broad market consensus that the decision is unavoidable and not strategic. This would be in line with the model in Grossman and Van Huyck (1988) whereby “strategic” defaults are very costly in terms of reputation—and that is why they are never observed in practice—while “unavoidable” defaults carry limited reputation loss in the markets. Hence, choosing the lesser of the two evils, policymakers would postpone the inevitable default decision in order to avoid a higher reputational cost, even at a higher economic cost during the delay.

What if Japan defaults when markets think it can handle debt service (i.e. when the cost of capital is very low)? The question here is between “austerity-induced stagnation” and default. This is a very tricky question for developed countries, and the results would be interesting. I don’t think Japan will ever be on the brink of {stagnation | default | hyperinflation} but, if it is, unlike with poor countries I am unconvinced that austerity cannot be an option.

Would fiscal consolidation hurt growth? Yes. But would the Japanese standard of living remain far above most of the world and other defaulters? Absolutely, especially if it’s implemented carefully, with high taxes on the rich. The biggest risk would be an era of labor protectionism from international competition, but that’s a cost Japan (and the world, indeed) must bear. On this point, I’m also very doubtful of Noah’s optimistic Schumpeterian take on removing the “rot”. Is a default like a good, cold douche? Yes, if banks are able to extend credit to small businesses again. Why are we so confident that a defaulter Japan (especially one that isn’t believed by the market to be on the brink of default) will have that luxury? Especially when even the most accommodating IMF paper on default acknowledges deep short-run costs. And we know that short-run brutality carries well into the long run…

Again, I doubt Japan will be in this dire position – I’m more confident about Abenomics than Noah – but if it is, I cannot support default on the basis of “other countries did it well”. I am a strong proponent of general debt forgiveness, like any leftist, but Japan is no Pakistan or Argentina. It’s a rich country with a history of innovation and remarkable recovery; not a post-communist wreck or terrorist stronghold. We should act like it.

Fareed Zakaria has noted that the Federal government spends four dollars on those over 65 for every dollar on those under 18. This is the inevitable byproduct of an aging society with AARP as its robust lobbying apparatus. Incidentally, the situation will only get worse as our “demographic decline” continues.

Short of disenfranchising the old, I’ve seen no good solutions to the problem (actually, I’m hard-pressed to call this a “problem” but that’s another story). I have (what I think is) a neat idea that, at least in theory, would create an “intergenerationally efficient” political representation. It’s not politically possible, and requires a radically different concept of what democracy is.

Americans vote nationally every two years for an average total vote count of (78.5 – 18)/2 = 30.25. If we think about “voting” as a token, each eligible citizen is “granted” a token for each election, immediately after which it expires. But what if the government endowed voters with all 30 (untradable) tokens at the age of majority?

I can now temporally optimize my voting based on my personal discount rate. Let’s back up a second and talk about what that means. Other things equal, I would rather spend a dollar today than tomorrow. I bet most of you would much rather have $100 in your checking account today than $500 in 50 years. This is one reason savings rates are so low.
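
To put a rough number on that intuition, here is a back-of-the-envelope sketch of the break-even discount rate implied by preferring $100 today to $500 in 50 years.

    # Back-of-the-envelope: at what constant annual discount rate r is
    # $500 received in 50 years worth exactly $100 today?
    # Solve 500 / (1 + r)**50 = 100  =>  r = 5**(1/50) - 1.
    r = 5 ** (1 / 50) - 1
    print(round(r * 100, 2))  # ~3.27% a year; anyone more impatient prefers the $100 now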

Now consider an election where there are two, broad voting positions: (A) reform entitlements and increase spending on infrastructure, education, and climate or (B) maintain entitlements financed by cuts to education and a tax hike on the working young.

Today, voting power is proportional to the demographic distribution. That means that as the population over 65 increases in size, so does its voting power. Let’s further stipulate that most voters are approximately rational. This means that “old” and “near old” voters will go for position B, “young” and “recently young” voters for position A, and the “middle aged” will split somewhere in the middle.

However, if you had all your tokens upfront, unless everyone’s personal discount rate is 0, voting power is no longer proportional to the demographic distribution. That’s because we value utility today over tomorrow.

In this case, voters between 20 and 40 will have a lot more voice because they’re just a little bit more careless with their tokens. Before writing this post I thought hard about my “electoral discount rate”. Perhaps I’d vote twice for a candidate I really liked in a swingier state. However, this isn’t really a correct thought experiment. If everyone else is voting only once per election, the political positions fail to reflect the discount rate, and it doesn’t matter.

There’s a good chance our voting positions won’t change much. Yet, if this doesn’t create any changes it means the “we’re burdening our kids with future taxes” crowd doesn’t have much to stand on, because those same kids have shown a revealed preference towards not caring.

Problems with this model are both theoretical and practical. In practice, we ought to be deeply worried about people who “vote and run” – that is, use all their votes in one election without planning to retain citizenship. Assume this away for argument’s sake. (Maybe you could require some sort of contract, at least enforcing tax obligations for some number of years past each vote rendered. Or, even better, you pay taxes as far into the future as the votes you use.)

Does this alleviate intergenerational injustice? To some extent, yes. Let’s think about climate change. I’m going to make the rather ridiculous assumption that the “cost of climate change” is binary, and starts in 2100. Think about an election in 2075. Aside from the fact that it might already be too late (this is a scientific, and not theoretical, inconvenience), only a discount-rate-positive system can make the “right” choice. In 2075, the 65+ population (which expects to die by 2100) is the majority and just can’t understand why we need a carbon tax.

Recent evidence suggests that we purposely answer survey questions like “did inflation increase under GWB” incorrectly, to signal political allegiance. I think climate change is the same thing. However, if we’re paid, we make the right choice. If you’re young in 2075, the tax on this “cheap talk” is imminent climate change. None of us today face that, so we can’t really know the underlying opinions.

However, under this system young people can coalesce and take extraordinary action to halt climate change, because they have far more “token wealth”: a) they’re younger and hence have used fewer tokens, and b) the elderly temporally optimized their voting preferences, leaving disproportionately lower token savings for their retirement.

All said and done, I’m inclined to think that this will not change our political structure much, for better or worse. That means that we the young care about helping the old and they care, in principle, about education and infrastructure. Normally, high discount rates are seen as too deferential to the “here and now” over “then and there”. Interestingly, in this scenario, over the long-run, a high discount rate results in just the opposite.

In a 1963 paper, Robert Mundell first argued that higher inflation has real effects. He challenged the classical dichotomy by suggesting that nominal interest rates would rise less than proportionally with inflation, because higher inflation would induce a fall in real money demand, thereby increasing velocity and capital formation, which, in turn, would bring real rates down. The most interesting part of my argument comes from a model designed by Eric Kam in 2000, which I’ll get to.
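
A tiny numeric sketch of that non-neutrality via the Fisher identity; the pass-through coefficient and rates below are assumptions chosen purely for illustration.

    # Fisher identity: real rate ≈ nominal rate − expected inflation.
    # Mundell-Tobin says the nominal rate rises by less than the rise in inflation
    # (pass_through < 1 here is an illustrative assumption), so the real rate falls.
    def real_rate(nominal, expected_inflation):
        return nominal - expected_inflation

    i0, pi0 = 2.0, 0.5          # starting nominal rate and inflation, in %, made up
    pass_through = 0.6          # assumed: nominal rate rises 0.6pp per 1pp of inflation
    for pi in (0.5, 1.5, 2.5):
        i = i0 + pass_through * (pi - pi0)
        print(pi, i, real_rate(i, pi))
    # Real rate: 1.5%, 1.1%, 0.7% - falling as anticipated inflation rises.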

And as Japan emerges from a liquidity trap, the Mundell-Tobin effect (named, too, for James Tobin, who proposed a similar mechanism) should anchor our intellectual framework. I don’t see any of the best bloggers (I may be wrong, but see the self-deprecation there via Google) arguing along these lines, though Krugman offers a more sophisticated explanation of the same thing through his 1998 model, which can only strengthen our priors.

Paul Krugman, Noah Smith, Brad DeLong, and Nick Rowe have each replied to a confused suggestion from Richard Koo about monetary stimulus. Smith, as Krugman points out, was restricting his analysis to a purely nominal scope and notes that DeLong captures the risk better, so here’s DeLong:

But if Abenomics turns that medium-run from a Keynesian unemployment régime in which r < g to a classical full-employment régime in which r > g, Japan might suddenly switch to a fiscal-dominance high-inflation régime in which today’s real value of JGB is an unsustainable burden.

Moreover, to the extent Abenomics succeeds in boosting the economy’s risk tolerance, the wedge between the private and public real interest rates will fall. Thus Paul might be completely correct in his belief that Abenomics will lower the real interest rate–but which real interest rate? The real interest rate it lowers might be the private rate, and that could be accompanied by a collapse in spreads that would raise the JGB interest rate and make the debt unsustainable.

I’ll address the latter concern first. Let’s consider the premise “to the extent Abenomics succeeds in boosting the economy’s risk tolerance”. If the whole scare is about Japan’s ridiculously high debt burden, and we’re talking about the cost of servicing that debt, then as far as investors are concerned, isn’t Japan’s solvency itself a “risk”? I don’t think it is – I certainly don’t see a sovereign default from Japan – but that’s the presumed premise DeLong sets out to answer. With that clause, the question becomes self-defeating, as increased risk tolerance would convince investors to lend Japan more money. Note the implicit assumption I’m making here: that it is possible for a sovereign currency issuer to default. I make it because there are many cases where restructuring (“default”) would be preferable to hyperinflation.

Even ignoring the above caveat, the fall in interest rate spreads can come from both private and public yields falling, with the former falling more rapidly. A lot of things “might be”, and do we have any reason to believe it “might be” that inflation does nothing to real public yields?

Well, as it turns out, we have good reason – beyond Krugman’s model – to think that inflation raises only nominal yields, not real ones:

  • Mundell and Tobin argue that the opportunity cost of holding money increases with inflation, resulting in capital creation and decreased real rates. This is as simple an explanation as any, but it would be rejected because it’s a “descriptive” (read: non-DSGE) model.
  • So along comes Eric Kam, arguing that: “The Mundell-Tobin effect, which describes the causality underlying monetary non-superneutrality, has previously been demonstrated only in descriptive, non-optimizing models (Begg 1980) or representative agent models based on unpalatable assumptions (Uzawa 1968). This paper provides a restatement of the Mundell-Tobin effect in an optimizing model where the rate of time preference is an increasing function of real financial assets. The critical outcome is that monetary superneutrality is not the inevitable result of optimizing agent models. Rather, it results from the assumption of exogenous time preference. Endogenous time preference generates monetary non-superneutrality, where the real interest rate is a decreasing function of monetary growth and can be targeted as a policy tool by the central monetary authority.”

[Caution: be warned that we can probably construct a DSGE for anything under the sun, but I will go through the caveats here as well.] Note that he’s not imposing any (further) remarkable constraints to prove his point, just relaxing the previous assumption that time preference is exogenous. An earlier paper (Uzawa, 1968) followed a similar procedure, but made the strong, questionable, and unintuitive assumption that the “rate of time preference is an increasing function of instantaneous utility and consumption”, which implies that savings are a positive function of wealth – contradicting the Mundell-Tobin logic.

Kam, rather, endogenizes time preference as a positive function of real financial assets (capital plus real money balances). He shows non-superneutrality with the more intuitive idea that savings are a negative function of wealth. (So anticipated inflation would result in higher steady-state levels of capital.)

Look, in the long run even Keynesians like Krugman believe in money neutrality. By then, however, inflation should have sufficiently eroded Japan’s debt burden. DeLong’s worry about superneutrality binding in the medium term – while debt levels are still elevated – seems unlikely even without purely Keynesian conditions. That is, no assumption beyond an endogenous time preference is required to move from superneutrality to non-superneutrality. DSGEs are fishy creatures, but here’s why this confirms my prior vis-à-vis Abenomics:

  1. Let’s say superneutrality is certain under exogenous time preference (like Samuelson’s discounted utility).
  2. Non-superneutrality is possible under endogenous time preference. Personally, I find a god-set discount rate/time preference rather crazy. You can assign a Bayesian prior to each possibility in this scenario; but note that the models which support superneutrality make many more assumptions.

Now we need a Bayesian prior for (1) vs. (2). My p for (1) is low for the following reasons:

  • It just seems darned crazy.
  • Gary Becker and Casey Mulligan (1997) – ungated here – quite convincingly discuss how “wealth, mortality, addictions, uncertainty, and other variables affect the degree of time preference”.

I’d add a few other points in response to DeLong’s comment – “the real interest rate it lowers might be the private rate, and that could be accompanied by a collapse in spreads that would raise the JGB interest rate and make the debt unsustainable” – even if you find the idea of falling real public yields implausible. A fall in the private cost of capital should be associated with a significant increase in wages, capital incomes, and certainly profits. That by itself raises revenue, and a greater portion of the earned income will likely fall into higher tax brackets, suggesting a more sustainable debt (a toy progressive-tax example follows this paragraph). Of course my belief is that both private and public debt will be eroded more quickly, but even if that is too strong an assumption, there is no reason to believe that falling spreads per se are a bad thing for government debt, so long as the government maintains its tax authority.
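Here is that toy example – my own made-up two-bracket schedule, not Japan’s actual tax code or anything from DeLong – showing why a broad rise in nominal incomes lifts revenue more than proportionally when the marginal income lands in the higher bracket.

    # Hypothetical two-bracket schedule; rates and threshold are illustrative only.
    def revenue(income, threshold=5_000_000, low=0.10, high=0.30):
        """Tax `low` on income up to `threshold` and `high` on the remainder."""
        return low * min(income, threshold) + high * max(income - threshold, 0)

    before, after = 6_000_000, 6_600_000   # a 10% rise in nominal income
    growth = revenue(after) / revenue(before) - 1
    print(f"income +10% -> revenue +{growth:.0%}")
    # Revenue rises by over 20% here -- more than double the income gain --
    # because every marginal yen is taxed at the top rate.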

However, the Mundell-Tobin and similar effects derive from a one-time increase in anticipated inflation. It remains to be seen whether Japan will even achieve 2%, and that’s a problem of “too little” Abenomics. On the other hand, if Japan achieves 2% and then tries to erode even more debt by moving to 4%, it will lose credibility. Therefore, Japan should – as soon as possible – commit to either a 6% nominal growth target or a 4% inflation target.

This is preferable because it increases the oomph of the initial boost, but primarily it extends the duration of the short run in which the monetary base is expanding and inflation expectations are rising. A longer short run minimizes DeLong’s tail risk of a debt-saddled long run, even if you reject all of the above logic to the contrary.
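For a sense of the magnitudes, here is a back-of-the-envelope sketch with stylized numbers – the 230% starting ratio is only roughly Japan’s, and it ignores new deficits and interest costs entirely – of how nominal growth alone erodes a fixed stock of nominal debt.

    # Stylized debt-erosion arithmetic: hold nominal debt fixed and let nominal
    # GDP compound. Ignores deficits, interest, and everything else.
    def debt_to_gdp(initial_ratio=2.3, nominal_growth=0.02, years=10):
        return initial_ratio / (1 + nominal_growth) ** years

    for g in (0.02, 0.04, 0.06):
        print(f"{g:.0%} nominal growth: 230% of GDP -> {debt_to_gdp(nominal_growth=g):.0%} after 10 years")
    # Roughly: 2% -> 189%, 4% -> 155%, 6% -> 128% of GDP after a decade, so the
    # faster nominal path buys considerably more erosion before the long run arrives.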

DeLong concludes:

Do I think that these are worries that should keep Japan from undertaking Abenomics? I say: Clearly and definitely not. Do I think that these are things that we should worry about and keep a weather eye out as we watch for them? I say: Clearly and definitely yes. Do I think these are things that might actually happen? I say: maybe.

I agree with all of it except the last word. I’d say, “doubtful”.

Paul Krugman has a post correctly bashing the nonsensical criticisms of unemployment insurance (UI):

Here’s what is true: there’s respectable research — e.g., here — suggesting that unemployment benefits make workers more choosy in the search process. It’s not that workers decide to live a life of ease on a fraction of their previous wage; it’s that they become more willing to take the risk of being unemployed for an extra week while looking for a better job.

His analogy:

One way to think about this is to say that unemployment benefits may, perhaps, reduce the economy’s speed limit, if we think of speed as inversely related to unemployment. And this suggests an analogy. Imagine that you’re driving along a stretch of highway where the legal speed limit is 55 miles an hour. Unfortunately, however, you’re caught in a traffic jam, making an average of just 15 miles an hour. And the guy next to you says, “I blame those bureaucrats at the highway authority — if only they would raise the speed limit to 65, we’d be going 10 miles an hour faster.” Dumb, right?

His analogy is actually too easy on the crowd. Here’s why that’s the case, from the Massachusetts government:

Unemployment Insurance is a temporary income protection program for workers who have lost their jobs but are able to work, available for work and looking for work.

Receipt of UI benefits is contingent on staying in the labor force. So when people tell you “UI increases unemployment” they may be right in a technical sense (and Krugman suggests why even that is wrong today). Even if so, remember that the U3 measure has a numerator and a denominator, broadly:

  • Numerator: People in the labor force who a) don’t have a job but b) have actively looked for work in the past four weeks
  • Denominator: People in the labor force.

The biggest “increase” in unemployment from UI, then, comes through the “actively looked for work in the past four weeks” condition: by decreasing the number of discouraged workers, it keeps people counted in the labor force. While demographic shifts are one cause of labor-force exit, studies suggest they explain only about 50% of the phenomenon. A toy version of the arithmetic follows.
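Here is that toy version – the numbers are made up purely for illustration – showing how pulling discouraged workers back into active search raises measured U3 even though not a single job has been lost.

    # Hypothetical labor force, in thousands of workers (illustrative only).
    employed, unemployed, discouraged = 930, 70, 20

    u3_before = unemployed / (employed + unemployed)
    # UI's search requirement moves the discouraged workers back into the labor
    # force, where they count as unemployed in both numerator and denominator.
    u3_after = (unemployed + discouraged) / (employed + unemployed + discouraged)

    print(f"U3 before: {u3_before:.1%}   U3 after: {u3_after:.1%}")
    # 7.0% -> 8.8%: the "extra" unemployment is just re-engaged searchers.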

Therefore, I think Krugman’s gone too easy. Even to the extent UI increases unemployment, it’s a “good” thing: it increases labor market flexibility, which has huge supply-side dividends. I have a high prior that it decreases hysteresis, and chances are the only people who don’t are those who reject hysteresis outright (no comment there).

Most of us agree that worker protections are a good thing. Even “pragmatic libertarians” like Megan McArdle support unemployment insurance on “humanitarian” grounds. There are two ways we can help workers: through the ridiculous French system of making it illegal to fire workers (which also makes firms reluctant to hire them in the first place), or through a “flexicure” system where the state provides generous unemployment insurance and reemployment credit.

The former shrinks labor supply; the latter increases it by creating a healthier labor market. So if I could modify Krugman’s analogy, it would go like this:

In a recession, cars at the front start moving slowly, which makes the whole pack slower. The government can’t convince the first cars to go faster, so it decreases – literally – the friction of the road ahead by icing it. Now the first cars can’t help themselves and start going faster, allowing the cars at the back to do the same. As they speed up, the heat they generate increases, and the ice starts melting rather quickly.

There you have the last benefit of UI, too. It’s an automatic stabilizer. Politicians couldn’t screw it up even if they wanted to.