Monthly Archives: August 2013

The interesting story of the day is definitely a rapidly “collapsing” Rupee haunted by abnormally high volatility over the past month. I scare quote “collapse” because it’s a term that reflects the Western bias of this conversation. If you’re an American who invested in emerging market funds hoping for real, index-beating gains intended for domestic consumption, well, then you’ve been screwed. As far as India is concerned, the depreciation is of less concern. Still, Raghuram Rajan has his work cut out for him, and I’ve discussed the Indian Rupee in this context before. However, since Paul Krugman blegs to learn what he is missing, I’ll offer a few things that worry me at the moment.

Primarily, as in any other developing country with rentier bureaucrats, fuel subsidies are important to India in two ways: first as a stimulant for middle-class growth that demands transportation and electricity (generators are big); second as a key ingredient in important fertilizers, without which the Indian farming model would fail.

Fuel subsidies pose an interesting problem for a country that will meet 90% of its oil needs through imports. (I mistakenly noted that 90% of oil is currently imported. That said, the greater the difference between the current figure and the future one, the worse one unit of depreciation will be.) Basically all growth in India’s most important market will be financed through imports henceforth. On the one hand, the decisive role government plays in the energy market almost guarantees that deficits will face upward pressure at a time when yields are already rising. The sensible solution would unfortunately involve curtailing the provision of a good necessary for political success.

More importantly, as energy becomes the dominant theme in Indian trade – as it undoubtedly will – India loses the primary benefit of depreciation: exports. When trade structure is such that the price effect of depreciation (costlier imports) trumps the quantity effect (more exports, fewer imports), the beneficial nature of depreciation is removed entirely. This is the Marshall-Lerner condition: a country technically faces this dilemma when the summed price elasticities of demand for its exports and imports fall below one.
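In symbols, the standard textbook statement of the condition (my paraphrase, not drawn from any particular paper) looks like this:

```latex
% Marshall-Lerner condition: with trade initially balanced, a depreciation of
% the home currency (a rise in e) improves the trade balance TB only if the
% price elasticities of export and import demand sum to more than one.
\frac{\partial \, TB}{\partial e} > 0
\quad \Longleftrightarrow \quad
\lvert \varepsilon_X \rvert + \lvert \varepsilon_M \rvert > 1
```

When the sum falls below one, the higher import bill swamps the volume response, and depreciation worsens rather than improves the balance.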

Usually, since goods tend to be price inelastic in the short run, devaluations are not immediately successful but work over time. Not so for dollar-priced oil. As the price of oil is already buoyed by demand from emerging markets, each point of depreciation for the Rupee is that much worse for India’s trade balance and budget. Using data from the International Financial Statistics and Direction of Trade Statistics from the IMF, Yu Hsing estimates that India may not significantly meet the Marshall-Lerner condition. Since the paper might be gated for some of you, I’ll copy the relevant result:

As the US real income declines due to the global financial crisis, the trade balance for Japan, Korea, Malaysia, Pakistan, Singapore, or Thailand will deteriorate whereas the trade balance for Hong Kong or India may or may not deteriorate depending upon whether the relative CPI or PPI is used in deriving the real exchange rate.

India – to my surprise – weathered a depreciation better than some of the other countries studied, but a statistically significant success was predicated on the deflator chosen to derive real exchange rates.

While I am not confident that India fails to meet the condition, rising oil prices and domestic demand suggest the cost and quantity effects may be a little too close for comfort.

That, of course, only considers the total trade balance. As mentioned, the government is unlikely to weather the necessary inflation well, especially if Raghuram Rajan decreases liquidity reserve ratios (as he has wanted to do for a while), which would put upward pressure on already rising yields. The feedback loops formed from a rising deficit, stalling growth, and decreased demand for Rupee bonds will result in unfortunately high interest payments.

A tangential point concerns rentiers like Reliance – owner of the world’s largest refinery – which benefit from rapidly rising prices in an inelastic-demand environment. Its influence in government, along with political concerns, will make handling these ridiculously useless subsidies hell for any democratic government predicated on shaky coalitions.

India has a lot going for it. A falling Rupee hardly highlights any structural problem insofar as its own domestic economy is concerned (but brings to bear important questions about international monetary systems, a discussion for another day). I am largely with Paul Krugman that this is nothing to fret about – we are still talking about a country where people cry about 5% growth – but am only cautiously optimistic regarding the political ramifications from such rapid depreciation. Krugman is right in principle, and sometimes that is not enough.


Macroeconomists have a big problem. There’s basically no way to quantitatively measure their most important constructs – aggregate demand and aggregate supply as functions. Most measurable quantities – like employment, labor force churn, or gross domestic product – fall under the influence of both, making it difficult to ascertain whether important changes are dominated by one or the other. In practice, we know that the recent recession was most likely the result of a crash in demand, which (theoretically) governs the business cycle and is coincident with low inflation.

Since 2000, with the JOLTS dataset from the Bureau of Labor Statistics, we have deeper insight into both aggregate demand and supply. With it, there is reason to believe demand – unlike supply – has benefitted from relatively rapid growth and recovery to pre-recession normals. I have discussed the importance of structural factors before, but feel the need to stress the return of demand.

My analysis is predicated on some logical assumptions backed up by sound data. It is still important to accept the limitations of such “assumptions”. The JOLTS dataset provides us, among many other things, the level of openings and the level of hires. Here’s a graph of both, with the 2007 business cycle peak as the base year:


It is not a stretch to suggest that openings (blue) are highly correlated with aggregate demand for labor, whereas hires (red) are modulated by a mix of both demand and supply. While this is crude in many ways, a job opening is the most literal example of labor “demand”. (Since a lot of commenters mention it, I will reiterate: recruiting intensity – while correlated with the business cycle – does not change substantially in a downturn and, in any case, has recovered since 2009. Therefore arguments like “these are all fake openings requiring unreasonable perfection” are fine, but irrelevant, as we’re talking about the change.)

What we see is a “V-type” recession for openings. That is, they rapidly crashed during the depths of the recession, but recovered at a pace proportional to the fall. On the other hand, hires evince a more “L-type” recession, characterized by a quick fall without a similar recovery.

Of course, “openings” do not map perfectly onto demand. The level of recovery must be adjusted for desire to fill an opening. The best way to measure this would be to ask employers the maximum wage rate they are willing to pay for each opening. Some openings are fake – America’s ridiculously moronic immigration laws require employers to place an ad in the newspaper to “prove” no American can satisfy said needs. (My mom’s sponsor placed an ad so specific to her that by design no one else in the country could fill the job. There is no reason to believe this is an isolated practice.)

However, most jobs aren’t meant for immigrants, and most openings are honest. More importantly, errors are systematic rather than random. That is, even if there is a degree of false openings, we care not about the absolute levels, but rate of change thereof. In fact, some conclusive evidence shows that while “recruiting intensity” does fall during a recession, it only vacillates between 80 and 120% of the average, and we’ve made up most of that loss at this point.

Hires represent a natural amalgam of supply and demand. Each position filled requires a need for services rendered (demand) and the ability of a newly employed person to productively serve that need (supply). If we accept, given the V-shape of openings, that growth in aggregate demand is healthy, then supply-side problems in the labor force are worse than the L-shaped recovery in hires suggests: the curve is governed by both supply and demand, so what little recovery we do see derives from recovering demand drawing on already existing supply.

At this point, it becomes overwhelmingly clear that the standard AS-AD framework is woefully inadequate to understand the current economic dynamic. On the one hand, if we consider Price Level and Employment (as in the textbook models), positive inflation with any level of demand suggests a contraction in supply that’s too deep to reconcile with slow but steady gains in productivity. If nothing else it suggests we are at capacity, which most commenters dispute.

A better framework – one implicitly accepted by most commenters – would consider inflation and growth rates. In this case, extremely low inflation by any standard suggests either a fall in demand – which, as argued above, is no longer supported by the data – or an expansion in supply. But the increase in supply predicted by this model, while explaining unemployment through a labor-mismatch hypothesis, is far too great to square with low growth rates in productivity and income unless demand is highly inelastic, which then contradicts the well-established presence of nominally sticky wages.

If demand is at capacity, there is no general configuration of the AS-AD model that even broadly captures the current state. The one exception may be rapidly rising supply coincident with rapidly falling demand. Unless job openings are a complete mirage this is unlikely to be the case. We may, of course, backward engineer a particularly contrived model which would fail to have any insight into necessary fiscal or monetary policy.

As I’ve argued before, the labor-mismatch hypothesis of unemployment is very appealing. The notion that fiscalism is the province of “demand-side” policies alone is a dangerous one. Paul Krugman has probably never read my blog, but if he read this post I would surely be accused of VSP-ism – mentioning the preponderance of “structural issues” and saying little else. But if supply has increased, it suggests that demand, while recovering faster than Krugman would accept, is still slack.

In this case, there is a deep role the Federal government can play in moderating the unemployment from mismatched skills while elevating aggregate demand. Low interest rates suggest the United States government can bear far more debt than current deficits imply and with an appallingly high child poverty rate, there’s no reason we can’t vastly improve children’s health, education, and comfort at a national level. Now is a better time than ever to cancel payroll taxes indefinitely and to test a basic income.

Demand could be higher, but it is not nearly as low as it was in the troughs of the recession – compare Europe and the United States, for example. The end of depression economics does not mean the role of government is over, nor does it herald sunnier days for America’s lower-middle class. I’m very confident that large-scale stimulus will not spark hyperinflation, but less sure of the role pure stimulus can have on long-term employment prospects for the poor without a well-thought-out Federal job guarantee.

It was our responsibility to stimulate the economy far more than we did. It was our responsibility to engage in monetary easing far sooner than we did. The depression of demand lasted far longer than it ought to have under any half-smart policy. But now that we’ve crawled our way out of the hole, it is not clear that demand is lacking.

Perhaps the role of government is more important than it ever was.

Tyler Cowen sends us to an essay by Gary Marcus arguing that modern artificial intelligence (AI) has failed – not only because it has failed to live up to its goals, but also because it has the wrong goals. At the lowest common denominator – between computer scientists and everyone else – AI’s chief mandate is to pass the so-called “Turing Test”, wherein a human “judge” has a natural language conversation with a machine and cannot tell that it is not a human. Modern AI agents, like Siri, Watson, or your neighborhood NSA, basically mine troves of data trolling for correlations to find something significant. Marcus argues this is not “intelligence” because even the smartest and most powerful machines cannot answer simple, commonsensical questions like,

“Can an alligator run the hundred-metre hurdles?”


Joan made sure to thank Susan for all the help she had given. Who had given the help?

a) Joan
b) Susan

because billions of Google searches will not yield any patterns to this effect, since nobody has asked this question before. He also points out that most machines that “pass” a Turing Test do so with misdirection and deception, not signs of innate brilliance.

At a theoretical and conceptual level, maybe Marcus is right. There is something profoundly unintelligent about using “big data” to solve human problems. That is because, theoretically, by definition, anything that fails to use theory derived from axiomatic laws is not intelligent. It is why mathematical economists – even after their time has come and perhaps gone (with apologies to Al Roth, a mechanism designer who has used theory to profoundly improve our lives) – hold their noses high toward the empiricists. It is why theoretical computer scientists have, mid-career, forgotten how to actually program.

Theory is brilliant for some things. Thank god we are not using randomized experiments and inductive reasoning to conclude that, for a right-angled triangle, the sum of the squares of the two legs does indeed equal the square of the hypotenuse. But a practitioner, capitalist, or your average Joe would look at Marcus’ critique another way. Who really cares that computers cannot answer questions that nobody has asked? Computers that are deeply commonsensical by definition are not targets for artificial intelligence.

I would normally not write about computer science. Contrary to my choice of major, I’m quite a bit more confident in my command of economics than of abstract philosophy or computer science. But I do understand the Turing Test. And any computer witty enough to “trick” humans is smart enough for me. I do not recall Alan Turing issuing an exception for remarkably sarcastic computers.

I also know a bit about capitalism and why it works. We may experience any sort of “market failure”. Maybe it’s too cheap to pollute. Maybe money demand is too high in a liquidity trap. But, by and large, markets work. That means useless companies go out of business and good ones stay. It means Apple’s iPad brings in treasure chests of profit while Microsoft’s Surface does… I do not know what.

It means artificial intelligence answers questions people give a shit about. Private enterprise has done wonders for the tech world. And the tech world is busy fixing problems that substantially improve our lives. 

Marcus does not consider the flip side of his claim. He is embittered by an industry that attempts to “trick” humans (not in general, but only when specifically asked to), yet is upset that computers cannot answer heroically contrived questions similarly designed to trick insightful algorithms by exploiting the ambiguity of a non-formal language such as English. In fact, computers can understand basically every colloquially important part of English. It would take a computer scientist to design a question that computers cannot effectively answer.

Markets work. Artificial intelligence may not change the world as once did steam engines or double-entry bookkeeping. But it is answering questions profound to our social existence. Some, like Marcus, are upset that the artificial intelligentsia is keen on making programs only to trick human readers. He believes this does not take Turing’s argument in good faith, for he surely could not have foreseen the preponderance of big, evil data! Yet, at a theoretical level, Turing’s very insight demanded that machines trick minds, for if an algorithm convincing a judge that it is a human is not a “trick”, I do not know what is. More importantly, passing a silly test hardly defines the profession any more. It is about answering questions that generate mass profit, and hence mass welfare.

Contemporary AI has “forgotten” about the question of philosophical intelligence because it is not a well-defined phenomenon. An essay in the New Yorker a definition of intelligence does not make; indeed, there is a far more philosophically elegant beauty in the Turing Test than in querying a computer about the habits of alligators. And I have a theory for that.

Peter Orszag, Barack Obama’s former OMB director, argues that structural factors – like labor market mismatch, globalization, and automation – are unlikely to explain much of the recent shift in the Beveridge Curve. That is, why we have a historically low hiring rate given the number of job openings and unemployment.

While it would be foolish to discount cyclical dynamics, we ought to pay heed to certain structural inconveniences. Some economists like to classify recessions into letter shapes: “V” (rapid fall, rapid recovery), “U” (fall, stagnate, recover), “W” (double-dip and recover), and “L” (fall and stagnate). Europe and much of the United States (with apologies to North Dakota) are somewhere between the latter, crappier two. Measured by income or unemployment, at least.

But job openings tell a fundamentally different story:


What we see is an (approximately) L-shaped recovery for hires and a very V-shaped recovery for openings. Openings are showing a rapid recovery because they fell twice as hard. (So it’s unfair for Orszag to claim that openings are recovering much faster without noting the respective falls). While this data set does not go far back, we can see this tension of shape only in this recession. Previously, as theory and common sense would expect, both followed a similar path, if to different magnitudes. This time, one might say, there are two different recessions. (And as I tackle in a later post, openings should map very well to aggregate demand as a whole, suggesting a much stronger demand-side recovery than many suggest.)

This suggests the skill mismatch thesis has more support than Orszag believes. He makes an interesting point: the opening-hire ratio is abnormally high even for Retail Trade, which isn’t a particularly skill intensive sector. While that’s a fair characterization, retail is also a remarkably noisy data set:


Now, if we wanted to draw any inference, there clearly is a negative trend since 2011. That’s important because this deterioration doesn’t go as far back as 2008 – or even 2009 – as would be suggested by a cyclical downturn. This is an important distinction from ferment in the labor market as a whole. It’s entirely fair to argue that this inference derives from too-noisy data: but it’s the only inference one can make. We certainly may not conclude that the opening-hire ratio in retail trade speaks against mismatch theories as a whole.

For example, there seem to be no problems for workers in “Accommodation and Food Services”, another historically low-skill, low-wage market:


In fact, we should note that not only the magnitude but also the shape of hires and openings are in harmony – in 2001 as well as 2009. This tells us something has changed between then and now in the market as a whole that has not changed in the minimum-wage market. A best guess might be skills.

Research and common sense suggest workers at the bottom crust of our labor market are somewhat more resistant to globalization and automation than those in the middle. Technology and globalization tend to hollow out jobs open to middle-class, blue-collar builders and factory workers: jobs that made the 20th century American. On the other hand, it’s markedly more difficult to outsource your fry cook to China.

As I’ve noted before, many of the problems caused by sticky wages have likely evaporated. Aggregate demand is well short of where it should be, not helped by front-loaded spending cuts, but it’s becoming harder to cast it as the primary cause. Paul Krugman recently cited evidence that the industries hit hardest are recovering the quickest, which is great evidence in favor of the cyclical-downturn hypothesis.

However, while it’s certainly clear that there was an incredible aggregate demand shortfall between 2008 and 2012, it’s harder to argue that this deficit continues today. Let me be clear: I think smart stimulus – the type that boosts supply, demand, and low-income welfare – is good policy today. However, the fact that many of the less desirable shifts to which Orszag refers began years after the trough tells me that structural adjustment may be an important component of the recovery.

Some on the right, at this point, throw their hands up and claim “we’ve done what we can”. I’m not so complacent. America lost jobs that aren’t coming back. A recession only made it easier to plow through the industrial hurdles of such a change. This is precisely the time to offer high-quality, technical education through free two-year colleges targeted at our broad middle class. This is precisely the time to roll back payroll taxes completely. This is precisely the time to expand the earned income credit.

There’s one very important argument in Orszag’s column that deserves more attention: internal recruiting. There’s some more evidence that this might be true:


A healthy labor market “churns” a lot: a lot of people get fired, a lot get hired, and still others quit. But we’re not seeing a recovery in fires, hires, and quits. Fewer people are fired if they can be internally redeployed to do something more important. One wrinkle in this argument is that “layoffs and discharges” are already down to the levels of a healthy market’s trough. Regardless, the problem with internal job markets is that only the employed benefit: a rise in internal recruiting would suggest painful problems for the long-term unemployed. Economists know employers look for many “signals” that provide information about a candidate (in other words, it’s not the amazing education you get at Harvard that gets you hired). One potentially important signal is employment itself. That’s a big problem.

Orszag notes that “over the past three years, the number of job openings has risen almost 50 percent, but actual hiring has gone up by less than 5 percent. Companies are advertising a lot more jobs, in other words, but not filling them.” That’s fair enough, but it’s somewhat tricky to look at the recovery without scrutinizing the downturn. The real problem is the L-shaped hires graph and V-shaped recovery graph. And if we don’t do something about it soon enough, jobs Americans never had will go overseas.

I’ve seen many posts today about our sluggish jobs recovery. Most people are pointing to data that show most of the job creation is in ultra low-wage, crappy sectors like fast food. (Here are James Pethokoukis, Tyler Cowen, and Mark Thoma on the matter.)

I think there’s a silver lining to this. Without considering part time jobs (which are not relevant to this post) there are three types of households: dual income, single income, and no income. The latter two indicate that one or both earners, respectively, cannot hold their job consistently, have high turnover, and are living off insurance.

Dual-income families have higher median incomes not only because there are two earners, but each individual earns more on average. Educated people from healthier backgrounds are more likely to get married – and stay married.

Let’s say I’m a genie. I can create jobs as I want. The process is simple: I specify the income said job will earn, and the market efficiently allocates my magical capital into productive labor. So I can wish for the marginal job to command $100,000, $50,000, or $20,000. Or less. Naturally, within this framework anyone with a genie would just hit some uselessly large number until all of America is rich and prosperous. But let’s stay within a reasonable bound.

Let me pose a question: what creates more marginal welfare – an efficiently allocated job pulling $20,000 or an efficiently allocated job earning $40,000? Most people will be quick to say the second job, of course.

But here’s the thing. Better jobs – like the ones we want to create – are more likely to go to families that already have an earner. That’s because the spouse of someone earning $75,000 a year may be “unemployed” because she (or maybe he) can’t get a great position, but will hold out instead of becoming a fry cook. She’ll take the executive assistant position paying $40,000, though.

As you can see, if your intention is to help America’s poorest – and that is precisely where the marginal dollar generates the most utility – you want to create jobs that are suitable to America’s poorest. Obviously there’s a point at which my tenuous argument breaks down. I would, hands down, rather disemploy a fry cook than an engineer or scientist (but not doctors – definitely not doctors).

There’s another, subtler point. Two $20,000 jobs are better than one $40,000 job – sure, one shows up better in the output-per-hour productivity statistics, but the other employs two people. Furthermore, you may not believe wage flexibility is good (I don’t), but if you do, the best way to achieve said flexibility is not through deflating the wages of existing jobs but through creating crappier ones. That’s basic arithmetic.
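To make the marginal-utility arithmetic concrete, here is a toy calculation with log utility. All the numbers are illustrative assumptions of mine – the $5,000 subsistence floor, the $75,000 spouse income, and the $15,000 of insurance income are made up, not data:

```python
import math

def utility(income, subsistence=5_000):
    """Toy concave utility: log of income above an assumed subsistence floor."""
    return math.log(income - subsistence)

# One $40,000 job going to a household that already has a $75,000 earner...
one_good_job = utility(75_000 + 40_000) - utility(75_000)

# ...versus two $20,000 jobs going to two households whose only income is
# an assumed $15,000 of insurance each.
two_modest_jobs = 2 * (utility(15_000 + 20_000) - utility(15_000))

# Diminishing marginal utility favors the two modest jobs by a wide margin.
print(two_modest_jobs > one_good_job)
```

Any concave utility function tells the same story: the closer a household sits to the subsistence floor, the larger the welfare gain from each marginal dollar of earned income.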

This is not just a cyclical story. As many have claimed, automation and globalization will hurt the solid middle before they touch the bottom of the barrel. America is divided as much on income as it is on culture. A solidly middle-class housewife or husband will – to a first approximation – remain unemployed, because she can afford to, rather than “stoop” to the jobs that are being created.

We may see more single-earner middle-class families alongside a dual-earning poor who need it. This isn’t consolation for the grim path of our economic future, but it suggests there is reason to be optimistic at this stage in the recovery. These dynamics and frictions cannot be ignored: sometimes a higher-paying job is not what the economy needs, hard as that is to believe.

Update: I just came across this fantastic post from Nick Rowe explaining why exactly fiat money isn’t a liability to the central bank.

Paul Krugman has a new, mostly-great post on the Pigou Effect. I have one pretty big quibble:

One way to say this — which Waldmann sort of says — is that even a helicopter drop of money has no effect in a world of Ricardian equivalence, since you know that the government will eventually have to tax the windfall away. Of course, you can invoke various kinds of imperfection to soften this result, but in that case it depends very much who gets the windfall and who pays the taxes, and we’re basically talking about fiscal rather than monetary policy. And it remains true that monetary expansion carried out through open-market operations does nothing at all.

Now, Krugman has said this before. Brad DeLong called him out on the fact that fortunately we don’t believe in Ricardian equivalence. But let’s say we do. Let’s say we are operating in a world of rational expectations without any ad hoc “imperfections to soften this result”. Krugman claims that a drop is effectively a lump-sum tax cut, and representative agents would save it all in expectation of future financing efforts.

A common refrain across the blogosphere holds that Treasuries are effectively high-powered money at the zero lower bound. There is a cosmetic difference – redeemability – that plays an important role within the highly stylized, unrealistic thought experiments that are representative agent models.

Fiat money is a final transaction; a bond is not. Even when the coupon rate is zero, the principal on an outstanding bond must be “redeemed” by the government. Therefore, outstanding government debt does not constitute net wealth in either the government’s or the household’s budget constraint.

I’ve been toying with this distinction in my head for a while now, but Willem Buiter got there almost a decade ago. In this little-cited paper (according to RePEc it has only self-citations, which is odd given the important result), Buiter shows that a helicopter drop does not function as a tax cut. The result derives from the pithy, contradictory, but fair assumption that fiat monies are an asset to the private holder but not – meaningfully – a liability to the public issuer.

Therefore, a dissonance between the household’s and the government’s perception of the net present value (NPV) of the terminal fiat stock results in discordant budget constraints in the model. In this sense, the issuance of money can relax the household’s budget constraint in a way open-market operations cannot, increasing consumption and, transitively, aggregate demand. (For those interested, the math is presented in the previously linked paper as well as, in better font, this lecture.) The so-called “real balances effect” is, for lack of a better word, real.
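A stripped-down sketch of the asymmetry, in my own simplified notation rather than Buiter’s (c is consumption, y income, τ taxes, R the cumulative discount factor, M the money stock, B bonds):

```latex
% Household lifetime budget constraint: the discounted terminal money stock
% enters as net wealth, because money never has to be handed back.
\sum_{t} \frac{c_t}{R_t} \;\le\; \sum_{t} \frac{y_t - \tau_t}{R_t}
  \;+\; \lim_{T \to \infty} \frac{M_T}{R_T}

% Government intertemporal constraint: bonds B_0 must be redeemed out of
% taxes and seigniorage, but no terminal-money term appears on this side.
\sum_{t} \frac{\tau_t + \Delta M_t}{R_t} \;\ge\; B_0
```

A money-financed transfer (lower τ today, higher M forever) leaves the government’s constraint satisfied without any future tax, while the household’s perceived wealth rises – which is exactly why the drop is not Ricardian-equivalent to a bond-financed tax cut.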

We don’t have to assume any sort of friction or “imperfection” that mars the elegance of the model to achieve this result, but Krugman is right: it very much is about who gets the windfall and who pays the taxes. For every asset there need not exist a corresponding liability.

Without resorting entirely to irrational expectations (what some might term “reality”), there is a further game-theoretic equilibrium in which helicopter drops have expansionary effects. Douglas Hofstadter (whose name I can never spell) coined the idea of “superrationality”. It’s very much an unconventional proposition in the game-theoretic world. But it’s very useful. Wikipedia synopsizes it as:

Superrationality is an alternative method of reasoning. First, it is assumed that the answer to a symmetric problem will be the same for all the superrational players. Thus the sameness is taken into account before knowing what the strategy will be. The strategy is found by maximizing the payoff to each player, assuming that they all use the same strategy. Since the superrational player knows that the other superrational player will do the same thing, whatever that might be, there are only two choices for two superrational players. Both will cooperate or both will defect depending on the value of the superrational answer. Thus the two superrational players will both cooperate, since this answer maximizes their payoff. Two superrational players playing this game will each walk away with $100.

Superrationality has been used to explain voting and charitable donations – where rational agents balk, reasoning that their individual contribution will not count, but superrational agents look at the whole picture. They endogenize into their utility functions the Kantian categorical imperative, if you will.
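The contrast can be sketched in a few lines of code. The payoff numbers below are assumed for illustration (only the $100 cooperative payoff is borrowed from the quoted example):

```python
# Row player's payoff in a symmetric prisoner's dilemma (assumed numbers):
PAYOFF = {
    ("C", "C"): 100,  # both cooperate
    ("C", "D"): 0,    # I cooperate, you defect
    ("D", "C"): 150,  # I defect, you cooperate
    ("D", "D"): 10,   # both defect
}

def rational_choice(opponent):
    """A classically rational player best-responds to a fixed opponent move."""
    return max("CD", key=lambda me: PAYOFF[(me, opponent)])

def superrational_choice():
    """A superrational player assumes symmetry - both will play the same
    move - so she simply picks the move with the highest symmetric payoff."""
    return max("CD", key=lambda move: PAYOFF[(move, move)])

# Rational players defect whatever the other does; superrational players,
# reasoning over symmetric outcomes only, cooperate.
print(rational_choice("C"), rational_choice("D"), superrational_choice())
```

With these payoffs, defection strictly dominates for the rational player, yet the superrational pair each walk away with 100 rather than 10 – which is the structure of the helicopter-drop argument: spend-the-cheque is the symmetric strategy that maximizes everyone’s payoff.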

In this case, superrational agents note that the provision of helicopter money will not be expansionary if everyone saves their cheque, and note the Kaldor-Hicks efficient solution would be for everyone to spend the cheque, thereby increasing prices and aggregate demand.

This may be too rich an argument – in a superrational world we would not have the Paradox of Thrift, for example – but is more robust against imperfections. For example, as an approximately superrational agent who understands the approximately superrational nature of my friends, I know that they will probably spend their money (I mean they’ve been wanting that new TV for so long). I know that will create an inflationary pressure, and while I would like to save my money, I know they will decrease its value and I’d rather get there before everyone else.

I see this as a Nash Equilibrium in favor of the money-print financed tax cut.
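The spend/save game above can be sketched as a symmetric coordination problem. The payoff numbers here are my own illustrative assumptions, not from any model: if everyone else spends, inflation erodes a saved cheque, so spending is the best response; if everyone else saves, prices stay flat and saving narrowly wins.

```python
# Illustrative payoffs for a representative agent, given what everyone else does.
# Assumed numbers: others spending erodes a saved cheque (60 < 90);
# others saving keeps prices flat, so saving narrowly beats spending (100 > 95).
PAYOFF = {
    ("spend", "spend"): 90,
    ("spend", "save"): 95,
    ("save", "spend"): 60,
    ("save", "save"): 100,
}

def best_response(others):
    return max(["spend", "save"], key=lambda a: PAYOFF[(a, others)])

def is_nash(profile):
    """A symmetric profile is a Nash equilibrium if it is a best response to itself."""
    return best_response(profile) == profile

equilibria = [p for p in ["spend", "save"] if is_nash(p)]
print(equilibria)  # → ['spend', 'save']
```

Under these assumed payoffs both symmetric profiles are equilibria, which is exactly why expectations matter: the inflation-expectations argument in the text is what selects the everyone-spends equilibrium over the everyone-saves one.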

Paul Krugman, though, is worried that accepting the existence of the Pigou effect undermines the case for a liquidity trap:

What caught me in the Waldmann piece, however, was the brief discussion of the Pigou effect, which supposedly refuted the notion of a liquidity trap. The what effect? Well, Pigou claimed that even if interest rates are up against the zero lower bound, falling prices will be expansionary, because the rising real value of the monetary base will make people wealthier. This is also often taken to mean that expansionary monetary policy also works, because it increases money holdings and thereby increases wealth and hence consumption.

And that’s where I came in (pdf). Looking at Japan in 1998, my gut reaction was similar to those of today’s market monetarists: I was sure that the Bank of Japan could reflate the economy if it were only willing to try. IS-LM said no, but I thought this had to be missing something, basically the Pigou effect: surely if the BoJ just printed enough money, it would burn a hole in peoples’ pockets, and reflation would follow.

What Krugman wants to say is that the liquidity trap cannot be a rational expectations equilibrium if monetary policy can reflate the economy at the zero lower bound. In New Keynesian models, if the growth rate of the money supply exceeds the nominal rate of interest on base money, a liquidity trap cannot be a rational expectations equilibrium. The natural extension of this argument is that if the central bank commits to any policy of expanding the monetary base at the zero lower bound, we cannot experience a liquidity trap (as we undoubtedly are).

It's crucial to note that this argument – while relevant to rational expectations – has nothing to do with Ricardian Equivalence. In short, the government may want to do any number of things after the issuance of fiat currency – like a future contraction – but it is not required under its intertemporal budget constraint to do anything. This is fundamentally different from the issuance of bonds, where the government is required to redeem the principal even at a zero coupon rate. Therefore, in the latter scenario, Ricardian Equivalence dictates that deficits are not expansionary.

The argument follows because the NPV of the terminal money stock is infinite under this rule, which implies consumption exceeding the physical capacity of everything in this world. Therefore, rational agents would expect that the central bank will commit to a future contraction to keep the money stock finite. They do not know when and to what extent, but by the Laws of Nature and God they are barred from being fully rational.
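The divergence claim can be checked with toy numbers (the rates here are my own illustrative assumptions, and the function name is mine): if the money stock grows at rate g and is discounted at the nominal rate i on base money, the discounted stock M₀·((1+g)/(1+i))ᵗ grows without bound whenever g > i.

```python
def discounted_money_stock(m0, growth, interest, periods):
    """Present value of a money stock that grows at `growth` for `periods`,
    discounted back at the nominal rate `interest` on base money."""
    return m0 * ((1 + growth) / (1 + interest)) ** periods

# Illustrative: the base grows 5% per period against a 0% rate at the zero lower bound.
m0, g, i = 100.0, 0.05, 0.0
horizons = [10, 100, 1000]
values = [discounted_money_stock(m0, g, i, t) for t in horizons]

# Each longer horizon yields a strictly larger present value: the terminal
# stock's NPV diverges, which is what forces agents to expect a contraction.
assert values[0] < values[1] < values[2]
```

With g ≤ i the same expression is bounded, which is why the knife-edge is the growth rate of money against the own-rate on base money, not the level of either.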

In this world of bounded rationality, we must think of agents as Bayesian-rational rather than economically rational. That means there is a constant process of learning, where representative agents revise their beliefs that the central bank will not tighten prematurely. In fact, the existence of a liquidity trap is predicated on the prior distribution of the heterogeneous agents, along with their confidence that a particular move by the central bank signals future easing or tightening.

Eventually, beliefs concerning the future growth of the monetary base must (by Bayes' Law) equilibrate, providing enough traction to escape the liquidity trap. But enough uncertainty on the part of the market, and mismanaged messaging on the part of the central bank, can entrench the liquidity trap.
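A cartoon of that learning process, under assumptions of my own (the prior and the likelihoods are purely illustrative): agents hold a prior that the bank is "hawkish" – that it will tighten prematurely – and each observed period of continued easing drags that belief down by Bayes' rule.

```python
def bayes_update(p_hawk, p_ease_given_hawk=0.3, p_ease_given_dove=0.9):
    """Posterior probability the central bank is hawkish after observing one
    period of continued easing. Likelihoods are illustrative assumptions:
    a hawkish bank eases 30% of the time, a dovish one 90% of the time."""
    num = p_ease_given_hawk * p_hawk
    den = num + p_ease_given_dove * (1 - p_hawk)
    return num / den

belief = 0.7  # assumed prior: agents strongly suspect premature tightening
history = [belief]
for _ in range(8):  # eight consecutive periods of observed easing
    belief = bayes_update(belief)
    history.append(belief)

# Beliefs in premature tightening fall monotonically toward zero -- the
# "traction" needed to escape the trap -- but only after repeated, consistent signals.
assert all(a > b for a, b in zip(history, history[1:]))
assert history[-1] < 0.05
```

The same mechanics run in reverse: a single credible hawkish signal would push the posterior back up, which is the sense in which mismanaged messaging can entrench the trap.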

This is all a rather tortuous thought experiment. Unless one really believes that all Americans will save all of the helicopter drop, this conversation is an artifact. More importantly, a helicopter drop is essentially fiscal policy, so it doesn't discredit the Keynesian position against market monetarism to begin with.

Ultimately, there is one thought experiment that trumps. Helicopter a bottomlessly large amount of funding into real projects – infrastructure, education, energy, and manufacturing. Build real things. Either we're blessed with inflation, curtailing the ability to monetize further expansionary fiscal spending, or we've found a free and tasty lunch. Because if we can keep printing money, buying real things, without experiencing inflation, we are unstoppable.

Here's a good liberal's plan for saving Detroit: Deregulate Drastically. (Sorry, I have a penchant for pointless alliteration). We generally consider expectations in the context of monetary policy, but they have crucial implications for capital inflows as well. Imagine the Federal Government – in concert with Detroit – stipulated the following rule:

Until Detroit’s per capita income reaches 85% of the national level, all local and federal regulations governing labor and capital transactions are suspended.

These include, but are not limited to, minimum wages, severance, Fair Labor Standards, and all immigration restrictions. (That is, if an employer located in the City of Detroit can vouch for two years of employment for any immigrant, it will not need to operate under all the crazy guest worker provisions).

The only regulation will require Detroit employers to hire two unemployed residents of the City, or long-term unemployed Americans, for each immigrant. This institutes a de facto wage floor against prevailing global wages, which is necessary to increase the per capita income of the city. While a steady stream of cheap labor will always make everyone better off in aggregate, after a point it will decrease the per capita wages of each country even while increasing the average of both.
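The hiring rule amounts to a simple ratio constraint. A minimal sketch, assuming the 2-for-1 ratio from the proposal above (the function names are mine, purely for illustration):

```python
def required_local_hires(immigrant_hires, ratio=2):
    """Minimum number of unemployed City residents or long-term unemployed
    Americans an employer must hire under the proposed 2-for-1 rule."""
    return ratio * immigrant_hires

def complies(local_hires, immigrant_hires, ratio=2):
    """Whether an employer's hiring mix satisfies the rule."""
    return local_hires >= required_local_hires(immigrant_hires, ratio)

# A firm sponsoring 10 immigrants must also hire at least 20 locals.
assert required_local_hires(10) == 20
assert complies(25, 10) and not complies(15, 10)
```

The ratio is the policy lever: raising it tightens the de facto wage floor, lowering it moves the scheme closer to pure open immigration.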

However, unlike Adam Ozimek at Modeled Behavior, I don't think immigration should be the centerpiece of any plan to save Detroit when there are better regional options at the ready. If firms are given a Federal credit to relocate the long-term unemployed from afar, Detroit has a large group of people who have atrophied skills, poor social habits (through no fault of their own), and extremely low marginal value to a scaled firm.

Now, in general I don’t think regulations are killing the American economy, I support reasonable increases in the minimum wage, and genuinely believe work and child safety laws have made the country a better place. But if Detroit is the only locality to benefit (or suffer) from relaxed regulations, the benefit comes from the difference in cost of production in Detroit vs. elsewhere. This is usually a Prisoner’s Dilemma-esque race to the bottom for the poor (to the extent we’re talking about sensible regulations), but in tight and rare situations provides a deficit-free way of giving Detroit breathing space.

All sorts of firms will want to capitalize on this opportunity, which will both repopulate Detroit and cap long-term national unemployment. But much of the recovery will take place before the law even takes effect. Let's say we stipulate a mass deregulation that begins in exactly one year. Once we've credibly convinced the market of this commitment, firms will begin shifting operations today, significantly increasing employment and wage expectations by the time the law takes effect. The argument against the minimum wage goes that it prevents people who want to work for less from doing so, curtailing personal liberty and decreasing employment. Just by virtue of mutating expectations, there's a case to be made that even if many are willing to work below the minimum wage today, after accounting for expectations of future growth, aggregate demand will rise sufficiently that by the time deregulation occurs few will want to make use of it.

There are two possibilities: either this works (increases employment and incomes) or it doesn't. Shocker, I know. Pro-regulation liberals should be happy either way. If it doesn't work, we learn that the costs of deregulation are not all that high, which lends further support for a broader program of sensible, national regulations. If it does work, many of Detroit's poorest are brought out of poverty, repopulation saves pensioners, and employment soars.

In fact, if the Federal Government credibly convinces the market of such a move – and guarantees long-term unemployment credits – there's a non-negligible chance that by the time of deregulation the "rule" will already require re-regulation. Let's say I want to produce cheap shoes, and I learn about this program. I will begin preparatory operations today, increasing both employment and wages.

This is unfair for all the other cities that must abide by Federal law, but that’s the point. We wish to give an “unfair advantage” to the poor. The only other viable option, that does not include a complete shutdown, requires massive Federal stimulus which will hurt the rest of America far more than a distorted regulatory code.

In fact, capitalizing on distortions should become a key lever of interregional stabilization. One great way to save Detroit – and the environment – at the same time would be to institute a rigorous cap-and-trade program and provide Detroit a disproportionately high number of permits (redistributed from the rest of the country). Each of these solves a key efficiency gap that is lost through simple fiscal stimulus, which can be ineffective because of bureaucratic concerns.

Furthermore, regulatory policies often affect the young more than the old – minimum wages are a great example. The best way to repopulate a dying city is with fresh, young talent. A mix of deregulation and supply-side stimulus, including good public education and retraining programs, might once again make Detroit a model city. A model for dying cities across the rich world.