Tag Archives: finance

Update: Mark Schieritz writes in the comments:

Thanks for reporting on my interview! I believe however you are confusing my role as a journalist conducting an interview with my own view on things. I do believe that the ECB should act more forcefully and I have said that on numerous occasions (e.g. on my blog http://blog.zeit.de/herdentrieb). When interviewing officials however it is my duty to challenge their actions and as I write for a German audience it is my duty to challenge them from a German perspective. Peter Praet has enough room to defend himself.

(I think that’s fair. As I told him, I think this only further implicates the European elite by serving a certain “German perspective” that requires, as a deep prior, that any and all monetary stimulus is always and everywhere evil. And while journalists are responsible to their audience, they are also beholden to the truth: at least given what I know, the perspective peddled in this interview is an unfair representation of reality. To a layman reading the interview, the tone is dominated not by the clarity of Praet’s thought but by the forceful austerity of Schieritz’s questions.

In any case, read this post and substitute “elite” for “Mark”.

That said, since I believe this was originally written in German, I concede some things are lost in translation. But where are the elite “challenging the actions” of Jens Weidmann?)

It’s no secret that the European Union is collapsing under the weight of its own bind. The political situation across the continent looks not unlike what it did before a great war more than half a century ago and, relatedly, the economy doesn’t look much better. Last Sunday, John Oliver lambasted the fascist politics taking over the union. While it may be hard for many Americans, including Oliver, to understand why Europe hates its elite so much, you need look no further than the austerity complex at Die Zeit, described on Wikipedia as:

 well-regarded for its journalistic quality. With a circulation of 504,072 for the second half of 2012 and an estimated readership of slightly above 2 million, it is the most widely read German weekly newspaper […] The paper is considered to be highbrow. Its political direction is centrist and social-liberal, but has oscillated a number of times between slightly left-leaning and slightly right-leaning.

The most widely read weekly paper, it might be to Germany what Thomas Friedman is to the Beltway elite. Enter our protagonist, Mark Schieritz, in conversation with Peter Praet of the ECB in what may be the most spectacularly absurd interview of the year.

For context, inflation in Germany rolled in this month below its own low expectations at a barely-discernible-from-deflation 0.5% (consensus was 0.7%). Investors around the world are on the edge of their seats for the ECB meeting this Thursday, when Draghi is widely expected to announce Europe’s first round of quantitative easing.

Mark starts the interview wondering why the ECB isn’t hiking rates, given that “business activity is picking up in Europe”. When Praet accurately notes that inflation is well below target – that, indeed, 0.7 is less than 2 – he dismisses that as “formalistic”. This defines the psychology of establishment economic thought across the pond: not only a requirement that the stipulations of rules-based policy themselves be hawkish, i.e. “close to, but below, 2%”, but that even when we are far from meeting that goal, following the rules is dismissed as merely “formalistic”.

In the German mind, the one and only purpose of monetary policy must be further pain and austerity.

When told that persistently below-target inflation would harm the ECB’s credibility, Mark notes that many “experts” expect inflation to soon again rise of its own accord. This statement is a great window into the minds of the European elite, motivating the idea that inflation is something that exists of its own accord, outside the monetary authority’s control. This is not surprising for a country scarred by years of hyperinflation when, indeed, political and fiscal theories of the price level dominated. In most times, however, inflation is “always and everywhere a monetary phenomenon”. European policymakers for some reason believe supply, not demand, is the problem (how else would inflation come “of its own accord”?). Of course, there is debate among sensible people about whether the central bank can generate inflation in a liquidity trap, but journalists at Die Zeit have nowhere near the level of economic sophistication to actually understand that debate.

The story gets worse. When Praet gives Mark the canonical reasons why low inflation is deadly, he concludes that we cannot “allow inflation to be too low for too long”. You would think that even if people disagreed about what constitutes “too low” and “too long”, they would agree that, given their own reference points, inflation should not fall below that threshold.

Not so for Mark, who wonders “why would that be dangerous?” In fact, the German elites want inflation to be too low for too long. And, unfortunately, this is probably not a logical paradox. The core elite know that austere policy is terrible for Europe as a whole. Indeed, what ignorant fools like Jens Weidmann want is inflation that is devastatingly low for Europe and Spain, but just enough to keep the uninformed but vengeful German electorate appeased.

The interview has not climaxed yet. Praet makes a very sensible claim that moderate inflation is an insurance policy and goes on to note that it is necessary to facilitate economic adjustment. Since Mark has not read an economics textbook, he is yet again left wondering why. He disagrees with Praet’s correct answer about internal devaluation, noting that

Prices and wages in the crisis countries had risen far too rapidly over many years. The low rate of inflation helps companies there to regain competitiveness vis-à-vis rivals in the north. Why do you want to counter that?

And here, for the first time, I think Praet gets it wrong. (You can see the whole interview below). Since European trade is such an important part of peripheral growth – indeed Germany maintained a surplus only by exporting its savings to weaker countries – what matters isn’t the absolute rate of inflation of Europe as a whole so much as the relative rate of inflation between the periphery and the core. While higher inflation in general would help reduce the real burden of debt, what we need is higher inflation in the core to make peripheral labor more competitive. It makes one whole hell of a lot more sense for Germany to tolerate slightly above-trend inflation for a few years instead of forcing Greece to deflate.

Then there’s some nonsense about low rates hurting savers, which is a dumb, but forgivable, mistake, before we get to the most remarkable statement in economic journalism:

If the banks have to pay penalty interest [negative rates], you may well be making loans dearer. What happens then?

In the eyes of the European elite, the cost of borrowing increases when interest rates decrease! There is a perverse world in which this makes emotional, though not logical, sense, and that is from the perspective of a saver for whom making a loan pays less at a lower rate. Of course, that is not what the sentence means, nor would it be economically valuable information even if it were, but it gives us a gory insight into how European policymakers think.

Then there’s some huffing and puffing about low interest rates causing a “property price bubble in Germany” – though I wouldn’t trust Mark to monitor soap bubbles, let alone one of the world’s largest economies. It’s interesting to note that he is worried about both “dearer loans” and bubbles, which are almost mutually exclusive concerns.

The sad part is how Praet ends the interview, discussing more accommodative policy:

That possibility, too, was discussed. But I believe that such purchases would only be made if business activity and inflation develop along lines that are significantly worse than expected.

Even if you don’t believe QE works, and there may be good reasons to hold this belief, it is scary that officials don’t think they need it. Apparently “business activity and inflation” – nonexistent as they are – have not yet reached levels “significantly worse than expected”.

Paul Krugman is right – the Germans are masters who want the beatings to continue until morale improves. I’m no fan of fascism or nutty right-wing racism, but let’s hope these asshats get thrown out of office, and soon.

Addendum: In the sleepy haze in which I wrote this post, I forgot to mention an important point. Among smart commentators in the US, the ECB is viewed largely as an archaically-tight institution, governed by bad economics and a misunderstanding of monetary policy. That isn’t quite right: the ECB is damned by a cultural and emotional – not economic – unwillingness to follow the right policy. The German elite couldn’t give a shit about the logical validity of their argument; indeed, Schieritz hates inflation not for any monetary reason, but as a presupposition of his worldview.

Read the whole interview:

Mr Praet, business activity is picking up in Europe. Hasn’t it become time to prepare increases in interest rates – rather than to ease monetary policy further, as the European Central Bank (ECB) indicated rather clearly last week?

That would not be in line with our mandate. The ECB is required to keep the value of money stable. We understand this to mean an inflation rate of close to, but below, 2% over the medium term. Incidentally, this definition dates back to Otmar Issing …

… formerly chief economist of the Bundesbank and your predecessor in office …

… The rate of inflation in the euro area currently stands at 0.7%. Such a small increase in prices cannot be regarded as satisfactory over the medium term if we want to attain what we have announced.

That is formalistic.

Why formalistic? What is at stake is credibility, an issue of great importance for a central bank. People must be able to rely on our keeping the annual rate of inflation at close to, but below, 2%. That is important for them to take business decisions. That is why we cannot allow inflation to deviate lastingly from our designated target figure, irrespective of whether to the upside or the downside.

Many experts, however, expect inflation to soon again rise of its own accord.

We are assuming that prices will increase only gradually. According to our current projections – new ones will be presented in June – it is only at the end of 2016 that inflation will approach the mark of 2%. And what we have observed recently are rather surprises to the downside, which means that inflation has tended to be slightly lower than expected over the past few months. The longer this increase in inflation is delayed, the greater is the risk of a change in inflation expectations. This would cause firms and households to take very low inflation rates for granted, and to behave accordingly. That is why we cannot allow inflation to remain too low for too long.

Why would that be dangerous?

There is no central bank that would aim for an inflation rate of zero over the medium term – not even the Bundesbank has done so in the past. Normally, the goal of monetary policy is to have a moderate rate of inflation. This could be regarded as a margin of safety to avoid the risk of deflation, which would have grave repercussions for growth and employment. In a monetary union, moderate inflation would also facilitate necessary economic adjustment.

Why?

If there is a need to adjust wage and salary levels, it is an accepted fact that wage cuts are difficult to push through, while wage moderation could well be achieved. Expressed in a simplified manner, moderate inflation thus ensures that wages and salaries fall in real terms even when they remain nominally unchanged.

Prices and wages in the crisis countries had risen far too rapidly over many years. The low rate of inflation helps companies there to regain competitiveness vis-à-vis rivals in the north. Why do you want to counter that?

It is true that there were adverse developments of this kind. In the meantime, however, most of the countries concerned have made significant progress in adjusting prices, and the adjustment process will continue. What we want to prevent, however, is that this turns into a lasting change in inflation expectations for the euro area.

The President of the ECB has indicated that the Governing Council might take action at its next meeting in June. What precisely could you do?

We are preparing a number of measures. We might again lend banks money over an extended period of time, possibly subject to certain conditions. We could also lower interest rates still further. Even the combined use of several monetary policy instruments is conceivable.

As things stand today, when banks deposit surplus funds at the central bank, they receive no interest. The ECB would thus have to impose penalties in future.

Negative interest rates on deposits are a possible part of a package of measures.

That was attempted in Denmark, with a rather mixed outcome, so that the central bank there put an end to the experiment.

The situation is not comparable. The negative interest rates helped mitigate the appreciation of the Danish kroner. In the prevailing environment of low euro area inflation, any appreciation of the currency there is problematic for the whole area because a strong euro would make imports cheaper and push inflation down even further.

Paris will be glad to hear that. France has long urged the ECB to take action against the appreciation of the euro.

Various governments are calling for a number of different measures. We as the central bank are independent and will not be influenced by such demands. Moreover, our focus is not on weakening the euro in order to help exporters in Europe. We are interested, first and foremost, in the impact of the exchange rate on inflation.

You are nevertheless entering uncharted territory.

Doing nothing would also pose risks. We are currently observing that demand for loans is gradually picking up again. For the recovery in economic activity, it is extremely important that the banks actually satisfy that demand.

The ECB’s low interest rates are already proving to be detrimental for all savers. Interest rate cuts would exacerbate the problem.

I have a great deal of understanding for the concerns of savers – my money, too, lies in the bank. We must, however, resolve the crisis now. That will also be to the benefit of savers because interest rates would then rise again in future. It may well seem paradoxical, but a further easing of monetary policy in the prevailing environment could help in this respect.

If the banks have to pay penalty interest, you may well be making loans dearer. What happens then?

Given the orders of magnitude we have been discussing, I do not expect that to occur.

That said, experts are warning that the low interest rates could give rise to a property price bubble in Germany.

Low interest rates are an incentive to seek alternatives to classic savings books or time deposits at banks. However, that may certainly not lead to speculative excesses.

You intend to reduce interest rates nonetheless!

In the case of Germany, there is no justification to speak of a general property price bubble, even though prices have risen rather sharply in individual market segments, such as those for popular locations in a number of major cities. In the event of problems in a specific country, it would be the responsibility of the competent national authorities – in Germany, the Bundesbank and the Federal Financial Supervisory Authority – to take appropriate countermeasures and, for instance, to compel banks to be more cautious in extending credit. That having been said, we must see the euro area as a whole – and property prices are continuing to fall in many of the countries there.

In the event of your indeed cutting interest rates in June, will there be a broad majority in favour thereof in the ECB’s management?

We had a very good discussion at our meeting last week, both with respect to the assessment of the current situation and with regard to the conclusions to be drawn.

Controversial, by contrast, are bond purchases of the kind undertaken by the US Federal Reserve.

That possibility, too, was discussed. But I believe that such purchases would only be made if business activity and inflation develop along lines that are significantly worse than expected.

 

A lot of people seem pretty sure that student loans are behind a slow economy. Elizabeth Warren got there first. Then Vox posted a few charts confirming that, yes, those with student debt are less likely to own homes. Today I read that Larry Summers and Joe Stiglitz endorse this idea. Rick Rieder from BlackRock agrees. 

Let me play devil’s advocate.

The facts are clear – home ownership among those with student loan debt is falling much more quickly than for those without.

[Chart: home ownership rates for those with and without student loan debt]

It is difficult to reach the conclusion, at least from this chart alone, that student loan debt decreases home ownership. For one, both lines follow the same trend over the same period of time; one simply has a steeper slope since the financial crisis. And this isn’t surprising: student debt is, after all, debt, which is poison over a period of deflation (or lower-than-expected inflation). That is not unique to students.

But more importantly, those jumping at the correlation between student loans and home ownership over a few years of data are not paying enough attention to large, structural shifts in economic geography over the past fifty years. For nearly a century, suburban growth eclipsed urban. Millennials, the fraught group in question, are changing that. Two-thirds of young graduates now want to move to cities for a better job, compared to a fraction not too long ago. Not to mention the more than 80% that are willing to move to any city if needed.

Dad is no longer a company man, nor mom a housewife. Rather, graduates are likely to opt for shorter commitments focused on training and, in a number of cases, with a higher probability of relocation in the future. Not to mention the logistical, locational difficulty of maintaining a dual-income family (especially outside of urban centers).

And that’s the demand side. Jobs that cater to college graduates are slowly migrating from middle America toward coastal centers that capitalize on economies of scale and network effects. Vox notes that the age at which graduates first purchase a house is rising. True, but not necessarily relative to household formation itself – something happening later across the country, driven by graduates. (Not to mention, as a commenter on Twitter points out, the increasing necessity of a post-baccalaureate degree).

Here’s the thing about those with “student debt” – they are much more likely to be “students” than those “without student debt”. In the latter group, you either don’t have a degree – in which case none of the above qualifications apply – or you’re wealthy enough to go through college without any debt. Neither is a representative group.

The bottom line is that students increasingly want to live in areas where homeownership is unaffordable. (And it’s not like twenty-somethings even should be able to afford a place in New York City). There is increasing evidence that homeownership is probably not the best investment for many people. The returns on real estate are dwarfed by those on the stock market and other carefully orchestrated investment plans, especially without the (clearly excessive) boom years of the past decade. Maybe millennials are paying more attention and making smarter investments.

Of course, jubilees are almost always a good thing (and it’s not clear that lower rates, as Elizabeth Warren wants, would even achieve lower debt – they may just encourage poorer people, or those who otherwise would not have borrowed, to borrow more). Deleveraging, especially in a time of low inflation, will improve the economy through simple wealth effects and encourage capital formation via a higher savings rate (and perhaps investment in domestic equity).

But it does not strike me as a particularly equitable, or necessarily economically optimal, use of our budget to help those who were, are, or would be students – or, in other words, those near the top of the income distribution. An expansion of the EITC with the same money would be more equitable, would more directly encourage job creation, and would give more bang for the buck.

Of course, the best thing for debtors and the economy would be inflation above expectations.

 

Brad DeLong notes that lack of consumption is not especially responsible for currently low levels of aggregate demand. I am not so sanguine. To a first approximation, this is hardly surprising. There is some truth to the Austrian principle that recessions arise from a decline in investment during the boom. (There are many flaws to this theory, not the least of which is that spending on consumption goods increases only in relative, not absolute, terms). Since most consumers smooth their spending over time, to the extent liquidity is not a problem (and it was), what happened is exactly what you would expect.

Unfortunately, to the extent this stagnation is secular, we can’t ignore consumption. Take a stylized accelerator model which says that I/Y = (K/Y)·g, where I/Y and K/Y are the ratios of investment and of the capital stock to potential income, respectively, and g is the potential growth rate. Let’s state secular stagnation as the state where an economy is (for the foreseeable future) demand constrained and g, as a function of r, drops by some constant amount. (That is, the real interest rate necessary to maintain a given level of growth is lower than it used to be).

So, for whatever reason, when the long-run potential growth of the economy falls, firms are not as driven to invest in future profits and therefore the level of investment has to fall. This must be accompanied either by a corresponding increase in consumption or by a decrease in total income.

The question we should be answering then isn’t whether consumption has increased as a share of GDP, but whether it has increased enough given a lower potential growth rate. Even under generous assumptions, this is probably not the case:

[Chart: consumption as a share of GDP]

The growth in consumption as a share of GDP – which we will generously define as max_t(C/Y)/min_t(C/Y) − 1 – is just over 18%. Over the same period, real income grew by an average of 3.2%. Taking this as the former potential, and 2% as the new normal, g has fallen to around 60% of its former level. Even with an optimistic assumption that the economy will grow at 2.5%, the fall in potential g still outweighs the increase in consumption. (This is all under the assumption that K/Y has remained and will remain fairly constant. Piketty says no. I don’t buy that this will be significant enough to outweigh everything else, but that is for another post).
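As a sanity check, here is a minimal back-of-envelope sketch of the arithmetic above. It mirrors the crude proportional comparison in the text (it ignores the relative weights of consumption and investment in GDP) and assumes a constant K/Y, so I/Y falls in proportion to g; the growth figures are the ones quoted above.

```python
# Back-of-envelope check of the accelerator arithmetic above.
# Assumption: K/Y is constant, so I/Y = (K/Y) * g falls in proportion to g.
# The 18% consumption-share rise and the 3.2% / 2% / 2.5% growth rates are the
# figures quoted in the text; the comparison is the text's own crude one.

consumption_share_growth = 0.18        # max(C/Y) / min(C/Y) - 1

g_old = 0.032                          # former potential growth
for g_new in (0.020, 0.025):           # "new normal" and the optimistic case
    fall_in_investment = 1 - g_new / g_old     # proportional fall in I/Y
    enough = consumption_share_growth >= fall_in_investment
    print(f"g = {g_new:.1%}: I/Y falls {fall_in_investment:.1%}, "
          f"C/Y rose {consumption_share_growth:.0%} -> "
          f"{'offsets it' if enough else 'does not offset it'}")
```

Under either growth assumption the proportional fall in investment exceeds the 18% rise in the consumption share, which is the point made above.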

This is, of course, a simplistic assumption. The accelerator model is naturally stylized and investment may not fall nearly as much as suggested. Increased consumption of capital (“wear and tear”) may be one such reason, though that seems ever more unlikely in an economy increasingly oriented towards investments in intellectual property and information rather than coal mines. So if it is not the case that increasing consumption is necessary to maintain a certain level of income, it would certainly be interesting to see the assumptions and model under which that is so.

The United States is simultaneously too much like China, and not enough like China. On the one hand, falling potential growth in both countries necessitates a decreasing reliance on private investment. On the other hand, unlike China, there is much the United States can and should do to increase public investment in green technology, basic research, and stronger infrastructure.

Earlier today Matt Yglesias borrowed a chart from Thomas Piketty’s new book noting that public assets remain above public debt. This chart actually underestimates the strength of our balance sheet.  A more striking image is the ratio of our gross national product (GNP) to our gross domestic product (GDP).
 
[Chart: ratio of US GNP to GDP over time]
 
 
The left axis here is a pretty narrow range, so the dynamics aren’t as extreme as they first seem, but what you’re seeing is that the GNP-to-GDP ratio is at its highest point in history and continues to rise. What does that mean? Remember from Econ 101 that GNP = GDP + American income on foreign assets – foreign income on American assets. In essence, the ratio I’m graphing shows that over the past decade net inflows have grown substantially faster than GDP, despite the skyrocketing debt. (Since GNP/GDP = 1 + (net factor payments)/GDP.) In fact, you would expect precisely the opposite phenomenon, more so than ever before, as our deficits are increasingly financed by foreign savings (interest payments to domestic pension funds cancel out). As Wikipedia recites basic economic wisdom:
 
Similarly, if a country becomes increasingly in debt, and spends large amounts of income servicing this debt this will be reflected in a decreased GNI but not a decreased GDP. Similarly, if a country sells off its resources to entities outside their country this will also be reflected over time in decreased GNI, but not decreased GDP. This would make the use of GDP more attractive for politicians in countries with increasing national debt and decreasing assets.
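To make the GNP/GDP identity above concrete, here is a minimal sketch with purely hypothetical numbers (neither figure is taken from the chart):

```python
# GNP = GDP + income residents earn abroad - income paid to foreign owners,
# so GNP/GDP = 1 + (net factor income from abroad) / GDP.
# Both numbers below are hypothetical, chosen only to illustrate the identity.

gdp = 17_000                 # hypothetical GDP, in $bn
net_factor_income = 250      # hypothetical net income from abroad, in $bn

gnp = gdp + net_factor_income
print(f"GNP/GDP = {gnp / gdp:.3f}")   # ~1.015: the ratio sits above one, and it
                                      # rises whenever net inflows grow faster than GDP
```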
 
It’s worthwhile asking yourself what exactly national debt represents – a claim on future income. This graph suggests that our claim on everybody else’s future income is way higher than their claim on our income. You should be scratching your head – why are emerging markets throwing money at us when the return on capital in their own economies is far more attractive? They are lending us really cheap credit so that we can recycle it to our firms, which then invest in foreign equities earning a huge premium. That should sound pretty familiar to you, because it’s basically how Wall Street made all of its money. America, basically, is a huge hedge fund.
 
But for the Federal Government, this isn’t a problem. We can always roll over our debt since the dollar is sovereign. We can actually make some pretty insightful observations by taking the perspective that the government is a big bank. We could theoretically sell our foreign assets to repurchase our own debt (or, since we don’t have a sovereign wealth fund, tax our citizens more, which is the same thing), which is less volatile than emerging market equities (and isn’t subject to any currency risk). But we’d then be short emerging markets, which means we would pay the risk premium rather than earn it.
 
The built-in short-term volatility isn’t actually a big deal, either. In the long run, equity indices are pretty well correlated with economic growth, and everybody else will grow faster than us on average. That leaves liquidity risk, but because the Federal Reserve exists, even that is negligible. The currency implications of this are more mind-bending still. For most countries, a depreciation has a negative wealth effect – that is, we can buy less on the international market than before in real terms. But the more our own inflows are denominated in foreign currencies, the less obvious this link becomes. That gives monetary policy more space, as higher inflation would be a lot less detrimental for us than for any other country.
 
People are happy to pay a premium for safety. We should be happy to earn that spread. 

The recent emerging markets crisis – one a long time coming, depending on who you ask – started on a cue from Ben Bernanke that the Fed would “taper” its unprecedented injection of liquidity into international markets. That emerging markets responded so poorly tells us something important: they expect the taper to come much too soon.

In principle, the Fed wants to continue with its “expansionary” program until the United States is growing quickly again. The “Evans Rule” ensures we will keep a zero interest rate policy until unemployment has fallen below 6.5%, though no such forward guidance exists for QE. 

In reality, if emerging markets expected the taper to come only after the US economy was healthy again, this kind of currency depreciation and stock market fall would be very unlikely. A robust US economy implies growing demand for exports denominated in EM currencies, boosting both the currency and the stock market. While the expectation of future growth probably wouldn’t entirely offset the effect of curtailed liquidity, it would likely have contained much of the depreciation.

Emerging markets are a more informative tool to this effect than American stocks. While  we can infer the same conclusion – that tightening will be premature – from our stocks’ sensitivity to QE, US markets are too closely connected with QE’s direct wealth effect, obscuring observation of expectations.

Of course, the massive liquidity from QE also plays a direct effect on emerging market currencies, but it’s fair to guess the relative sensitivity of EM currencies is lower than that of American stocks. (For example, a good jobs report might actually freak Wall Street out, whereas emerging markets are relatively insulated from that).

This is a somewhat contrived argument; still, the magnitude of currency deterioration across Asia should shift our beliefs toward the view that tapering is coming sooner than it should.

The interesting story of the day is definitely a rapidly “collapsing” Rupee haunted by abnormally high volatility over the past month. I scare-quote “collapse” because it’s a term that reflects the Western bias of this conversation. If you’re an American who invested in emerging market funds hoping for real, index-beating gains intended for domestic consumption – well, then, you’ve been screwed. As far as India is concerned, the depreciation is of less concern. Still, Raghuram Rajan has his work cut out for him, and I’ve discussed the Indian Rupee in this context before. However, since Paul Krugman blegs to learn what he is missing, I’ll offer a few things that worry me at the moment.

Primarily, like any other developing country with rentier bureaucrats, fuel subsidies are important to India in two ways: first, as a stimulant for middle-class growth that demands transportation and electricity (generators are big); second, as a key input to important fertilizers without which the Indian farming model would fail.

Fuel subsidies pose an interesting problem for a country that will meet 90% of its oil needs through imports. (I mistakenly noted that 90% of oil is currently imported. That said, the greater the difference between the current figure and the future one, the worse one unit of depreciation will be). Basically all growth in India’s most important market will be financed from imports henceforth. On the one hand, the decisive role government plays in the energy market almost guarantees that deficits will face upward pressure, at a time when yields are already rising. The sensible solution would unfortunately involve curtailing the provision of a good necessary for political success.

More importantly, as energy becomes the dominant theme in Indian trade – as it undoubtedly will – India loses the primary benefit of depreciation: exports. When the trade structure is such that the cost effect of depreciation (the higher price of imports) trumps the quantity effect (the shift in export and import volumes), the benefit of depreciation is removed entirely. This is the Marshall-Lerner condition: technically, a country faces this dilemma when the summed price elasticities of its imports and exports fall below one.
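A minimal sketch of that condition, under the standard textbook simplification that trade starts out balanced and the depreciation is small; the elasticity values below are hypothetical, not estimates for India:

```python
# Marshall-Lerner condition (textbook simplification: trade initially balanced,
# small depreciation): the trade balance improves only if the price elasticities
# of export and import demand sum to more than one.
# The elasticity values below are hypothetical, not estimates for India.

def depreciation_improves_balance(export_elasticity: float,
                                  import_elasticity: float) -> bool:
    """True if a small depreciation improves an initially balanced trade account."""
    return export_elasticity + import_elasticity > 1

print(depreciation_improves_balance(0.6, 0.7))  # True: the quantity effect dominates
print(depreciation_improves_balance(0.4, 0.3))  # False: the cost effect (dearer
                                                # dollar-priced imports like oil) wins
```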

Usually, since goods tend to be price inelastic in the short-run, devaluations are not always immediately successful but work over time. Not so for dollar-priced oil. As the value of oil is already buoyed by demand from emerging markets, each point of depreciation for the Rupee is that much worse for its balance and budget. Using data from the International Financial Statistics and Direction of Trade Statistics from the IMF, Yu Hsing estimates that India may not significantly meet the Marshall-Lerner condition. Since the paper might be gated for some of you, I’ll copy the relevant result:

As the US real income declines due to the global financial crisis, the trade balance for Japan, Korea, Malaysia, Pakistan, Singapore, or Thailand will deteriorate whereas the trade balance for Hong Kong or India may or may not deteriorate depending upon whether the relative CPI or PPI is used in deriving the real exchange rate.

India – to my surprise – weathered a depreciation better than some of the other countries studied, but a statistically significant success was predicated on the deflator chosen to derive the real exchange rate.

While I am not confident that India fails to meet the condition, rising oil prices and domestic demand mean the cost and quantity effects may be a little too close for comfort.

That, of course, only considers the total trade balance. As mentioned, the government is unlikely to weather the necessary inflation well, especially if Raghuram Rajan decreases liquidity reserve ratios (as he has wanted to do for a while), which would put further upward pressure on already rising yields. The feedback loops formed from a rising deficit, stalling growth, and decreased demand for Rupee bonds will result in unfortunately high interest payments.

A tangential point concerns rentiers like Reliance – owner of the world’s largest refinery – which benefit from rapidly rising prices in an inelastic-demand environment. Its influence in government, along with political concerns, will make handling these ridiculously useless subsidies hell for any democratic government predicated on shaky coalitions.

India has a lot going for it. A falling Rupee hardly highlights any structural problem insofar as its own domestic economy is concerned (but brings to bear important questions about international monetary systems, a discussion for another day). I am largely with Paul Krugman that this is nothing to fret about – we are still talking about a country where people cry about 5% growth – but am only cautiously optimistic regarding the political ramifications from such rapid depreciation. Krugman is right in principle, and sometimes that is not enough.

Update: I just came across this fantastic post from Nick Rowe explaining why exactly fiat money isn’t a liability to the central bank.

Paul Krugman has a new, mostly-great post on the Pigou Effect. I have one pretty big quibble:

One way to say this — which Waldmann sort of says — is that even a helicopter drop of money has no effect in a world of Ricardian equivalence, since you know that the government will eventually have to tax the windfall away. Of course, you can invoke various kinds of imperfection to soften this result, but in that case it depends very much who gets the windfall and who pays the taxes, and we’re basically talking about fiscal rather than monetary policy. And it remains true that monetary expansion carried out through open-market operations does nothing at all.

Now, Krugman has said this before. Brad DeLong called him out on the fact that fortunately we don’t believe in Ricardian equivalence. But let’s say we do. Let’s say we are operating in a world of rational expectations without any ad hoc “imperfections to soften this result”. Krugman claims that a drop is effectively a lump-sum tax cut, and representative agents would save it all in expectation of future financing efforts.

A common refrain across the blogosphere holds that Treasuries are effectively high-powered money at the zero lower bound. There is a cosmetic difference – redeemability – that plays an important role within the highly stylized, unrealistic thought experiments that are representative-agent models.

Fiat money is a final transaction. A bond, by contrast – even when the coupon rate is zero – is a liability whose principal must eventually be “redeemed” by the government. That is why outstanding government debt does not constitute net wealth in either the government’s or the household’s budget constraint, while the money stock, to the household, does.

I’ve been toying with this distinction in my head for a while now, but Willem Buiter got there almost a decade ago. In this little-cited paper (according to RePEc it has only self-citations, which is odd given the importance of the result), Buiter shows that a helicopter drop does not function as a tax cut. The result derives from the pithy, seemingly contradictory, but fair assumption that fiat money is an asset to the private holder but not – meaningfully – a liability to the public issuer.

Therefore, a dissonance between the household’s and the government’s perceptions of the net present value (NPV) of the terminal fiat stock results in discordant budget constraints in the model. In this sense, the issuance of money can relax the household’s budget constraint in a way open-market operations cannot, increasing consumption and, transitively, aggregate demand. (For those interested, the math is presented in the previously linked paper as well as, in a better font, in this lecture). The so-called “real balances effect” is, for lack of a better word, real.
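For the intuition, here is a stylized sketch of the asymmetry in the spirit of Buiter’s argument (my notation, not his): money shows up as wealth in the household’s lifetime constraint, but it never appears as something the government must redeem.

```latex
% Stylized sketch of the asymmetry, in the spirit of Buiter's argument
% (my notation, not his). Discount factors: D_t = \prod_{s\le t}(1+r_s)^{-1}.

% Household lifetime budget constraint: money, including a drop \Delta M,
% enters as net wealth:
\sum_t D_t\, c_t \;\le\; \sum_t D_t\,(y_t - \tau_t) \;+\; \frac{M_0 + \Delta M}{P_0}

% Government intertemporal budget constraint: only bonds must be redeemed;
% base money never has to be bought back:
\sum_t D_t\,\tau_t \;+\; \sum_t D_t\,\frac{\Delta M_t}{P_t} \;\ge\; \frac{B_0}{P_0}

% Because \Delta M never enters the government's constraint as a liability,
% the household does not expect an offsetting future tax of equal present
% value, and the helicopter drop raises perceived net wealth.
```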

We don’t have to assume any sort of friction or “imperfection” that mars the elegance of the model to achieve this result, but Krugman is right: it very much is about who gets the windfall and who pays the taxes. Not every asset is matched by a liability.

Without resorting entirely to irrational expectations (what some might term, “reality”) there is a further game theoretic equilibrium in which helicopter drops have expansionary effects. Douglas Hofstadter (whose name I can never spell) coined the idea of “super-rationality”. It’s very much an unconventional proposition in the game theoretic world. But it’s very useful. Wikipedia synopsizes it as:

Superrationality is an alternative method of reasoning. First, it is assumed that the answer to a symmetric problem will be the same for all the superrational players. Thus the sameness is taken into account before knowing what the strategy will be. The strategy is found by maximizing the payoff to each player, assuming that they all use the same strategy. Since the superrational player knows that the other superrational player will do the same thing, whatever that might be, there are only two choices for two superrational players. Both will cooperate or both will defect depending on the value of the superrational answer. Thus the two superrational players will both cooperate, since this answer maximizes their payoff. Two superrational players playing this game will each walk away with $100.

Superrationality has been used to explain voting and charitable donations – where rational agents balk because their individual contribution will not count, superrational agents look at the whole picture. They endogenize the Kantian universal imperative into their utility functions, if you will.

In this case, superrational agents note that the provision of helicopter money will not be expansionary if everyone saves their cheque, and that the Kaldor-Hicks efficient solution would be for everyone to spend the cheque, thereby increasing prices and aggregate demand.

This may be too rich an argument – in a superrational world we would not have the Paradox of Thrift, for example – but is more robust against imperfections. For example, as an approximately superrational agent who understands the approximately superrational nature of my friends, I know that they will probably spend their money (I mean they’ve been wanting that new TV for so long). I know that will create an inflationary pressure, and while I would like to save my money, I know they will decrease its value and I’d rather get there before everyone else.

I see this as a Nash equilibrium in favor of the money-printing-financed tax cut.
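A toy version of that equilibrium, with payoff numbers invented purely for illustration: they encode only the ordering “if others spend, my saved cheque is inflated away; if others save, I still get the TV now.”

```python
# Toy spend/save game for the helicopter-drop argument above.
# Payoffs are invented for illustration: they encode only the ordering
# "if others spend (inflation), saving loses real value; if others save,
#  spending still buys the TV you wanted".

payoffs = {                      # (my action, what most others do) -> my payoff
    ("spend", "spend"): 2,       # I buy before prices finish rising
    ("spend", "save"):  1,       # no inflation, but I get the TV now
    ("save",  "spend"): 0,       # my cheque is inflated away
    ("save",  "save"):  1,       # cheque keeps its value, economy stays slack
}

for others in ("spend", "save"):
    best = max(("spend", "save"), key=lambda mine: payoffs[(mine, others)])
    print(f"if others {others}: best response is to {best}")
# With these payoffs "spend" is a (weakly) dominant strategy, so everyone
# spending is a Nash equilibrium -- the expansionary outcome described above.
```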

Paul Krugman, though, is worried that accepting the existence of Pigou’s Effect undermines the cause for a liquidity trap:

What caught me in the Waldmann piece, however, was the brief discussion of the Pigou effect, which supposedly refuted the notion of a liquidity trap. The what effect? Well, Pigou claimed that even if interest rates are up against the zero lower bound, falling prices will be expansionary, because the rising real value of the monetary base will make people wealthier. This is also often taken to mean that expansionary monetary policy also works, because it increases money holdings and thereby increases wealth and hence consumption.

And that’s where I came in (pdf). Looking at Japan in 1998, my gut reaction was similar to those of today’s market monetarists: I was sure that the Bank of Japan could reflate the economy if it were only willing to try. IS-LM said no, but I thought this had to be missing something, basically the Pigou effect: surely if the BoJ just printed enough money, it would burn a hole in peoples’ pockets, and reflation would follow.

What Krugman wants to say is that the liquidity trap cannot be a rational expectations equilibrium if monetary policy can reflate the economy at the zero lower bound. In New Keynesian models, if the growth rate of the money supply exceeds the nominal rate of interest on base money, a liquidity trap cannot be a rational expectations equilibrium. The natural extension of this argument is that if the central bank commits to any policy of expanding the monetary base at the zero lower bound, we cannot be in a liquidity trap (as we undoubtedly are).

It’s crucial to note that this argument – while relevant to rational expectations – has nothing to do with Ricardian equivalence. In short, the government may want to do any number of things with the issuance of fiat currency – like a future contraction – but it is not required under its intertemporal budget constraint to do anything. This is fundamentally different from the issuance of bonds, where the government is required to redeem the principal even at a zero coupon rate. Therefore, in the latter scenario, Ricardian equivalence dictates that deficits are not expansionary.

The argument follows because the NPV of the terminal money stock is infinite under this rule, which implies consumption exceeding the physical capacity of everything in this world. Therefore, rational agents would expect the central bank to commit to a future contraction to keep the money stock finite. They do not know when or to what extent, and so, by the Laws of Nature and God, they are barred from being perfectly rational.

In this world of bounded rationality, we must think of agents as Bayesian-rational rather than economically rational. That means there is a constant process of learning in which representative agents revise their beliefs that the central bank will not tighten prematurely. In fact, the existence of a liquidity trap is predicated on the prior distribution of the heterogeneous agents, along with their confidence that a particular move by the central bank signals future easing or tightening.

Eventually, beliefs concerning the future growth of the monetary base must (by Bayes’ law) equilibrate, providing enough traction to escape the liquidity trap. But enough uncertainty on the part of the market, and mismanaged messaging on the part of the central bank, can entrench the liquidity trap.
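Here is a minimal sketch of that learning story. The prior, the signal likelihoods, and the sequence of observed signals are all invented for illustration; the point is only that beliefs about premature tightening move by Bayes’ rule as policy signals arrive.

```python
# Sketch of the belief-updating story above. All numbers are invented:
# a prior that the central bank is a "premature tightener", plus assumed
# probabilities of observing a hawkish signal under each type.

prior_tightener = 0.5            # initial belief: P(bank tightens prematurely)
p_hawkish_if_tightener = 0.7     # P(hawkish signal | premature tightener)
p_hawkish_if_committed = 0.2     # P(hawkish signal | committed to easing)

def update(belief: float, signal: str) -> float:
    """One step of Bayes' rule on a 'hawkish' or 'dovish' signal."""
    l_tight = p_hawkish_if_tightener if signal == "hawkish" else 1 - p_hawkish_if_tightener
    l_commit = p_hawkish_if_committed if signal == "hawkish" else 1 - p_hawkish_if_committed
    return belief * l_tight / (belief * l_tight + (1 - belief) * l_commit)

belief = prior_tightener
for signal in ["dovish", "dovish", "hawkish", "dovish"]:   # hypothetical sequence
    belief = update(belief, signal)
    print(f"after a {signal} signal: P(premature tightening) = {belief:.2f}")
# Only a long enough run of credibly dovish signals pushes this belief low
# enough for expectations of future money growth to gain traction.
```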

This is all a rather tortuous thought experiment. Unless one really believes that all Americans will save all of the helicopter drop, this conversation is an artifact. More importantly, a helicopter drop is essentially fiscal policy, so it doesn’t discredit the Keynesian position against market monetarism to begin with.

Ultimately, there is one thought experiment that trumps. Helicopter a bottomlessly large amount of funding into real projects – infrastructure, education, energy, and manufacturing. Build real things. Either we’re blessed with inflation, curtailing the ability to monetize further expansionary fiscal spending, or we’ve found a free and tasty lunch. Because if we can keep printing money, buying real things, without experiencing inflation, we are unstoppable.

A new paper from Jonathan Meer and Jeremy West at Texas A&M suggests that the minimum wage does have adverse consequences for the labor market – just not in the way most economists think. Meer and West argue that employment levels are a red herring in estimating the consequences of a price floor and hence we should look at net job growth. Indeed they conclude:

Using a long state-year panel on the population of private-sector employers in the United States, we find that the minimum wage reduces net job growth, primarily through its effect on job creation by expanding establishments.

I don’t want to critique this study as much as use it as a window into an equally curious debate: sticky wages and a Keynesian recession. The standard theory holds that downward wage inflexibility results in disemployment as wages are kept artificially high for any number of reasons.

In many ways, the sticky wage thesis posits the same transmission mechanism as the classical view that the minimum wage creates unemployment (the policy ramifications are unclear, and I support an increase in the minimum wage, but that’s because I think more unemployment of the right kind is good). Meer and West offer both empirical and theoretical reasons why observing employment growth, and not its level, is economically appropriate, which implies that employment growth – adjusted for population – is a better indicator of the potential gap than the unemployment rate.

This is important considering “potential income” is a remarkably abstract concept based on estimates of long-run supply-side factors that cannot be evident contemporaneously. Indeed, if the Keynesian argument is to be accepted in whole, contraction of aggregate supply is one possible reason we are not experiencing outright deflation.

Consider the historical relationship between change in the potential gap and population-adjusted employment growth:

[Chart: change in the potential output gap vs. population-adjusted employment growth]

This is a pretty scary graph because it looks like we’ve caught up with our pre-recession rate of employment growth. Unemployment will continue to fall, but will not accelerate if the wage rate is again appropriate. Historically, at every other such point, the pace of recovery slows down – because it’s over. This paints a markedly different picture than the one suggested by unemployment levels:

[Chart: level of unemployment]

And, indeed:

[Chart]

(Edit: I want to point out here that this pattern isn’t new, and I’m not the first to point it out. I only find it suddenly relevant in the context of the paper I cited, which tells us that if stickiness and minimum wages act similarly, we’re now at the point where the sticky effect is less relevant. This can be a good thing since it implies job growth is faster, but it also means the recovery will not be accelerating. That’s a big “if”, but it has theoretical appeal in concert with the fact that, historically, other points at which job creation reaches its pre-recession peak represent an adjusted labor market. One way to interpret this is a continuing recovery without the acceleration our output gap might suggest. I’ve suggested below a few reasons why this may not be the case).

While the level of unemployed workers remains uncomfortably high, the rate at which that level is falling is back at its pre-recession high – and there is little precedent for it to rise any further.

This does not mean the recession is over, or that aggregate demand is sufficiently high. It does mean that, were we to use the same logic as Meer and West, sticky wages and a dis-equilibrated labor market cannot sufficiently explain our troubles.

This has five possible implications:

  1. The Meer and West employment dynamic cannot be translated to a Keynesian “sticky minimum wage”, if you will. This may well be the case – the empirical foundation of their paper considers the economy at all times. The conclusion we strive to make concerns economies specifically mired in recession, which may be a confounding variable.
  2. Wages are not as sticky as we believed and even the low dose of price level inflation since 2008 has been sufficient to adjust the market.
  3. Aggregate supply has contracted more than we would like to admit.
  4. An amalgam of points (2) and (3) suggest that the potential gap is not as high as estimated by the CBO.
  5. Excess capacity and other Keynesian forces are far stronger than we previously anticipated.

As far as policy is concerned, the result in its crudest form suggests we are no longer in a Keynesian short run, if sticky wages are the only important factor. This means that while inflation will have its traditionally expansionary effect through the money illusion and the wealth and “hot potato” effects, it will no longer move the labor market towards equilibrium. In other words, the marginal benefit of a change in the price level is falling towards zero.

The evidence is all too clear that further deficit spending will not spark the bond vigilantes and that debt monetization will not give rise to runaway inflation; therefore, the United States should not engage in any further austerity by prolonging the sequester and the January payroll tax hikes.

However, we should perhaps question the modality of sticky wages vis-a-vis the unemployment rate, and focus on employment growth which suggests a more robust recovery relative to supply. This is not a call for a smaller government. Indeed with interest rates at historical lows, now is as good a time as ever for the government to engage in sound investments like green energy, smart grids, or infrastructure. Now is as good a time as ever to risk monumental tax reform that brings in too little, rather than too much.

Ultimately, the evidence for large output gaps and lacking demand is far greater than the evidence against – and perhaps in the throes of fiscal crisis we should have engaged in more aggressive stimulus. Indeed, just one paper suggesting a relationship elsewhere unfound and theoretically unprecedented should not change our interpretation of the economy too seriously.

But Meer and West have given us a unique prism to consider not just minimum wages, but maximum employment.

(That last sentence had more cadence than content: the point is not maximum employment so much as a maximum rate of recovery – to many, that is disheartening).

It is difficult to begin a post about Larry Summers – and his suitability for Chairman of the Federal Reserve – due to the sheer volume of current commentary. I’ve refrained from writing too much about Summers because I don’t know much more than the average pundit and therefore cannot add much.

However, I recently read Larry Summers’ decades-old, prescient analysis of the emergent possibility of a financial crisis in the modern monetary system, and the role of central bankers therein. It would not be an overstatement to claim that every ounce of thought and analysis in this paper flies in the face of Summers’ contemporary detractors vis-a-vis his position on both financial deregulation and monetary imperatives. Larry Summers likes regulation mutatis mutandis – he supports it at the core with necessary alterations on the side.

Before I continue, I anticipate that a common response from detractors will be that Summers’ revealed preference for deregulation contradicts the views he expressed in 1991, and that his views must clearly have evolved since. Two points:

  • It is well within the realm of reason that Summers’ views toward regulation did evolve due to the illusory stability of American finance between 1995 and 2006. However, we can be fairly confident that, as a rational Bayesian agent, he had the intellectual and analytical foundation to revise his priors as a result of the 2007 crash, whose course he presciently anticipated in 1991. His support of Obama’s regulatory regimen in recent years lends support to this argument.
  • Summers’ aversion to the harsh – even crude – style of derivatives regulation proposed by Brooksley Born does not confirm the argument that Larry Summers opposes any and all financial regulation. It confirms the argument that Larry Summers opposes Brooksley Born. But that’s just not as sexy.

Larry Summers’ unfortunate response to Raghuram Rajan’s warning – in which regulators are accused of Ludditery – is at the heart of a liberal backlash against Summers. Also unfortunately, it does not capture his opinion on regulation. From “Planning for the Next Financial Crisis” (1991, linked above), Summers argues:

Kindleberger’s preconditions for crisis are as likely to be satisfied today as they ever have been in the past. It is probably now easier to lever assets than ever before and the combination of reduced transactions costs and new markets in derivative securities make it easier than it has been in the past for the illusion of universal liquidity to take hold. Asset price bubbles are now as likely as they have ever been. Bubbles eventually burst. The increased speed with which information diffuses and the increased use of quantitative-rule-based trading strategies make it likely that they will burst more quickly today than they have in the past.

The thrust of Summers’ discourse is that the risk of financial crisis had not decreased over the decades leading into the 1990s and may well have increased. While he accepted the contemporary establishment opinion that the risk of panic to the real economy had subsided, he rejected the notion that this emerges from any fundamental efficiency of markets; rather, it emerges from Keynesian-style economic stabilizers:

If financial crisis is less likely now than it used to be, the reason is the firewalls now in place that insulate the real economy from the effects of financial disruptions. Most important in this regard is the federal government’s acceptance of the responsibility for stabilizing the economy. Automatic stabilizers that are now in place cushion the response of the economy to changes in demand conditions.

Indeed, Summers goes further to suggest that the risk of financial panic per se has increased with the dominance of the new, derivative-driven financial system:

I conclude that technological and financial innovation have probably operated to make speculative bubbles which ultimately burst more likely today than has been the case historically.

Therefore, critics like Dean Baker – and, yes, Paul Krugman – should be cautious when accusing Summers of not foreseeing the housing bubble and crash in the 2000s. As Krugman himself has argued, we must judge the analytical value of a position not by specific predictions or bets – on which count Larry Summers summarily fails – but by the analytical model behind a prediction. I am now more confident that Summers, like Krugman and others on the left, had the foresight and firepower to incorporate the events of 2007 into his intellectual framework. (I’m personally more impressed by someone using an analytical model to suggest that something big is possible than by someone who says something will happen for donkey’s years only to be proved right by the Law of Large Numbers).

Before I go on, let me provide the context in which Summers’ paper was written. The early ’90s were in many ways a time of free banking revival on the right and of monetarist conceptions of business-cycle moderation in the mainstream. It was not in vogue to advance the idea of an activist central bank whose role extends beyond the blanket provision of liquidity and ensuring steady growth in the monetary base.

In 1991, conceiving of a 2007-esque crisis was nearly unthinkable. Yet Summers carefully builds a fictional scenario of financial panic – from its animal-spirit antecedent to its dystopian consequent – describing the following:

The result was the worst recession since the Depression. Unemployment rose to 11 percent and real GNP declined by 7 percent. For the first time since the war, there was a decline from year to year in the consumption of nondurable goods.

I don’t cite this paragraph to describe something so mundane as a recession, but for the uncannily close resemblance it bears to our own reality. Again, Summers never predicted that this crisis would occur, but notes that within his intellectual framework it could occur. Something, again, that sets him apart from his contemporaries in the early noughties, let alone the nineties.

Summers discusses four possible paradigms for responding to a financial crisis that so deeply affects the real economy:

  • Free banking.
  • Monetarist lender-of-last-resort.
  • Classical (Bagehot, 1873) lender-of-last-resort.
  • Modern Pragmatic View.

It’s a detailed discussion, and I want to keep my remarks concise. (Needless to say, Summers politely declines the train-wreck of an idea that is free banking.) Ultimately, he supports what he coins a “modern pragmatic view” which includes broadly:

  • Keynesian stabilizers.
  • Targeted (TBTF) bailouts in the case of financial crisis.
  • Regulation. (Read this one again, if you must).
  • Absolute provision of liquidity.

While a lot of this is mainstream stuff that shouldn’t surprise anyone, I want to highlight a few nuggets that are certainly relevant today:

A minimalist view of the function of the central bank would hold that, in the face of a major disturbance, it should use open market operations to make sure that the money stock, somehow defined, is not allowed to decline precipitously; a more activist view would seek to insure that it rises rapidly enough to offset any decline in velocity associated with financial panic. On this monetarist view, there is no need for the Fed to make use of the discount window or moral suasion in the face of crisis. It suffices to make enough liquidity available.

In the latter, more activist view, Larry Summers effectively endorses the benefits of nominal income targeting – were we to stipulate that he is an “activist”. (His recent columns, editorials, and speeches on the importance of employment and the danger of hysteresis convince me that he is.) At the time, a standard monetarist argument held that targeting money supply growth would sufficiently stabilize the business cycle and insulate the real economy from financial panic. The equation of exchange states that:

mv = pq

where m is the money supply, v is the velocity of money, p is the price level and q is the real output. This simplifies to:

y_nominal = mv

Monetarists therefore argue that the central bank should target m. However, Larry Summers correctly argues that during a financial panic there is a flight to safe assets and hence money demand increases, diminishing the velocity v. Adjusting the money supply to offset such swings in velocity – that is, stabilizing mv rather than m alone – is what we now simply call targeting nominal income.
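
To make the arithmetic concrete, here is a minimal sketch – my own toy numbers, not anything from Summers’ paper – of what happens to nominal income under a fixed money-supply rule versus a nominal income target when velocity falls in a panic:

```python
# A toy comparison (my numbers, not from the paper): a fixed money-supply rule
# versus a nominal income (mv) target when velocity falls during a panic.

def nominal_income(m, v):
    """Equation of exchange: nominal income = money supply * velocity."""
    return m * v

m, v = 100.0, 2.0                      # illustrative starting values
target = nominal_income(m, v)          # 200.0

v_panic = 1.6                          # money demand rises in a panic, so velocity falls

# Fixed money-supply rule: m is unchanged, so nominal income falls with v.
fixed_rule = nominal_income(m, v_panic)          # 160.0, a 20% shortfall

# Nominal income target: expand m to offset the decline in velocity.
m_adjusted = target / v_panic                    # 125.0
ngdp_rule = nominal_income(m_adjusted, v_panic)  # back to 200.0

print(fixed_rule, ngdp_rule)
```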

I have said many times that I don’t know what Summers’ views on monetary policy are. While this doesn’t increase my confidence too much, it is incumbent on us to accept that Summers has, at least here, considered, acknowledged, and supported the benefits of a nominal income target.

We also know little about Summers’ current attitude toward the central bank asset purchases known as quantitative easing (QE). Japan first engaged in this “exotic” policy in the early 2000s after hitting the zero lower bound. Most economists would not support this policy, and certainly did not a decade ago. Here is Summers two decades ago:

Yet another [possible treatment to financial panic] is direct intervention to prop up asset prices. If this is possible, it will serve to increase confidence in the financial system and reduce the need for reductions in interest rates that would otherwise lead to a currency collapse. Journalistic accounts such as Stewart and Hertzberg (1987) suggest that manipulation of a minor but crucial futures market played an important role in preventing a further meltdown on Tuesday, 20 October 1987. They also assign a prominent role to orchestrated equity repurchases by major companies. Hale (1988) argues that the primary thrust of Japanese securities regulation in general, and especially in the aftermath of the crash, is raising the value of stocks rather than maintaining a “fair” marketplace.

In this light, it’s not too hard to believe that Larry Summers anticipated something close to the new “market monetarist” position. Indeed:

Quite apart from whatever it does or does not do to back up financial institutions that get in trouble, the Federal Reserve has the ability to alter the money stock through open market operations. In the face of a deflationary crisis like the one described above, it is hard to see why it would not be appropriate to pursue an expansionary monetary policy that would prevent the expectation of deflation from pushing real interest rates way up. The use of such a policy would at least limit the spillover consequences of financial institution failures. Whether it would be enough to fully contain the damage is the issue of whether a lender of last resort is necessary, the subject of the next section.

In fact, the “market monetarist” movement has two wings: those occupying the sturdy, monetarist-activist position, like Scott Sumner, who argue that fiscal policy is entirely irrelevant; and those, like Paul Krugman and Brad DeLong, who occupy a nexus of Keynesian-monetarist beliefs, arguing for both monetary and fiscal easing along with regulation.

It is very clear that, at least in the above paper, Larry Summers falls solidly with Krugman and DeLong, noting the dangers of budget balancing during a financial crisis:

[Emerging stability] is largely the result of the expansion of government’s role in the economy. When the economy slumps, government tax collections decline and government transfer payments increase, both of which cushion the decline in disposable income. The mirror image of stability in disposable income is instability in the government deficit. Hence, automatic stabilizers cannot work if the government seeks to maintain a constant budget deficit in the face of changing economic conditions.

Perhaps the most vicious criticism of Larry Summers comes from an ultra-left wing of commenters and economists who allege a conspiratorial tie between Summers and Wall Street, embodied by his support for big bailouts (which I do not support) and supposed deregulation. In fact, this is just not the case:

Lender-of-last-resort policy is probably an area where James Tobin’s insight that “it takes a heap of Harberger triangles to fill an Okun gap” is relevant. It may well be that the moral hazard associated with lender-of-last-resort insurance is better controlled by prudential regulation than by scaling the insurance back. This at least is the modern pragmatic view that has worked so far.

The reference to Harberger triangles and Okun gap is an old quip in favor of Keynesian stimulus: suggesting the gains from employment (a closing Okun gap) are orders of magnitude more important than deadweight losses emergent from taxation (Harberger triangles).

Summers only suggests that during a crisis it is dangerous to let big banks fail – that it is better to take the vaccine than to unscientifically reject a cure for fear of its side effects. While this position certainly creates moral hazard and invites excessive risk-taking among big banks, Summers is clear that this is better handled with regulation (!!!) than with failure, phrasing the argument in terms many on the mainstream left must accept:

 It is difficult to gauge the price of this success. Almost certainly, the subsidy provided by the presence of a lender of last resort has led to some wasteful investments and to excessive risk taking. I am not aware of serious estimates of the magnitude of these costs. Estimates of the cost of bailouts, which represent transfers, surely greatly overestimate the ex ante costs of inappropriate investments. If the presence of an active lender of last resort has avoided even one percentage point in unemployment sustained for one year, it has raised U.S. income by more than $100 billion. It would be surprising if any resulting misallocation of investment were to prove nearly this large.

I fall to the left of many fellow mainstream critics of Summers and, for other reasons, reject the purported pro-bailout stance myself. Regardless, with the sort of sane regulatory policies that Summers clearly supports above, I would rest easier.

Summers carefully notes why the other three policy paradigms (free banking, monetarist, and classical) fail. In this, he anticipates many of the too-optimistic arguments made by those like Scott Sumner or David Beckworth in favor of a rules-only nominal income target. Indeed, in another paper, he questions the value of rules-based policy entirely:

[I]nstitutions do the work of rules, and monetary rules should be avoided…Unless it can be demonstrated that the political institutional route to low inflation — to commitment that preserves the discretion to deal with unexpected contingencies and multiple equilibria — is undesirable or cannot work, I don’t see any case at all for monetary rules.

But in this paper itself, he notes that the biggest reason for contemporary economic stability comes not from more efficient markets, monetary rules, or even activist lenders-of-last-resort, but from deep, automatic stabilizers. He notes:

A major difference between the pre- and post-World War II economies is the presence of automatic stabilizers in the postwar economy. Before World War II, a $1 drop in GNP translated into a $.95 decline in disposable income. Since the war, each $1 change in GNP has translated into a drop of only $.39 in disposable income. This change is largely the result of the expansion of government’s role in the economy.

He furthers a powerful case against crude monetarism by noting the reputational and counterparty externalities of bank failures:

But the analysis of the potential difficulties with a free banking system suggests that support of specific institutions, rather than just the money stock, may be desirable. Declines in the money stock are just one of the potential adverse impacts of bank failures. Bank failures, or the failure of financial institutions more generally imposes external costs on firms with whom they do business and through the damage they do to the reputations of other banks. Private lenders have no incentive to take account of these external benefits, and so there is a presumption that they will lend too little.

The point here may be put in a different way. Because of the relationship-specific capital each has accumulated, reserves at one bank are an imperfect substitute for reserves at another. Maintaining a given aggregate level of lending is not sufficient to avoid the losses associated with a financial disturbance.

I thought that last point was especially powerful.

I would summarize Larry Summers’ opinion on regulatory institutions as follows: in the tension between discouraging risky behavior and resolving a crisis, resolve the crisis and contain the resulting moral hazard with regulation. I would summarize his opinion on monetary policy as erring on the side of expansion and liquidity in the tension between restraining inflation and promoting employment.

Miles Kimball argues that any potential candidate to head the Fed must note their position on the following three items:

  • Eliminating the “Zero Lower Bound” on Interest Rates.
  • Nominal GDP Targeting.
  • High Equity Requirements for Banks and Other Financial Firms.

I cannot know Summers’ position on the first – though I will note the e-money argument is still very heterodox, and hence I find it impossible that Janet Yellen will support it and only highly improbable that Larry Summers will.

I noted earlier that Summers accepts that any activist monetary policy must increase the money supply to offset a fall in velocity, which is effectively a nominal GDP target. It is for the reader to judge where that places him on the second item, and the extent to which he has revised his views since.

Finally, we can read Kimball’s last item specifically as support for higher equity requirements, but more broadly as a belief that regulation is important. While equity requirements are distinct from capital requirements, they share some similarity, and Summers wrote here that “Raising bank capital requirements would seem to be an obvious approach”, signaling support. Raising equity requirements (that is, forcing banks to finance themselves with stock instead of debt) is better in every way and hence should receive even more support.

On the topic of regulation – the irony that Larry Summers is the bête noire of the Left vis-à-vis regulation is becoming very clear. If anything, Larry Summers supported a form of deregulation in the ’90s but had the correct intellectual framework and has hence updated his priors correctly since 2007. There is no evidence that Gramm-Leach-Bliley (the repeal of Glass-Steagall) was the cause of the recent crisis, and even less evidence that Summers’ critique of Brooksley Born’s push to regulate over-the-counter derivatives can be translated into opposition to smart derivatives regulation overall.

This paper convinces me that Larry Summers was well ahead of the curve in a way more respectable than just guessing at a random crisis (like Steve Keen – in retrospect, as John Aziz points out, this was a poorly chosen jab; Keen, like Summers, had a model, and others did not). Rather, he teases out the very specific way in which such a panic might occur and analytically understands exactly what we would need to insulate the real economy and employment.

In 1991, few others could have so presciently described the reasons why the economy is more stable today, how the financial system is becoming riskier, and what role the central bank can or should play to fix that.

What more do you want from your Chairman of the Federal Reserve?

(Oh, by the way, for the record and for the little my two cents are worth, I’m officially noting Larry Summers as my top choice for the job).

Ritwik Priya sent me an intriguing paper from Philip Maymin arguing that an efficient solution to NP-complete problems is reducible to efficient markets, and vice versa. In other words, the weak-form efficient market hypothesis holds true if and only if P = NP. I first thought this result was not published in a peer-reviewed journal despite purporting a remarkable discovery – my bad, it turns out it appears in Algorithmic Finance, which has Ken Arrow and Myron Scholes on its editorial board. I’m still surprised NBER doesn’t point me to any forward citations. As Maymin himself notes, he seems to have married the definitive question in finance (are markets efficient?) with the holy grail of computer science (does P = NP?).

There are reasons to be skeptical, and a lot of questions to be asked, but first I want to summarize the paper. I Googled around to see where the paper showed up and, maybe not surprisingly, this MarginalRevolution post was a top hit. But Tyler Cowen seems unfairly dismissive when he says, “Points like this seem to be rediscovered every ten years or so; I am never sure what to make of them. What ever happened to Alain Lewis?”

I don’t know much about this Alain Lewis. I can see he has written papers like “On turing degrees of Walrasian models and a general impossibility result in the theory of decision making”. I can’t even pretend to understand the abstract. On the other hand, reading Maymin’s paper didn’t really change my mind about efficient markets, but it gives an intriguing example of what markets can compute. Anyone with thirty minutes and a mild interest in computer science should read the whole paper, because I think it gives a very good heuristic for understanding the debate over the EMH itself, even if it does not resolve it.

Indeed, some commenters on the net (at MarginalRevolution and elsewhere) joke that this paper is just another guy hating on the EMH. They say this, of course, because they have an incredibly high subjective belief that P ≠ NP (more on this later). But they clearly have not read the paper, since the jibe disregards the fact that the author is an avowed libertarian who cites Hayek favorably within it.

Before I give a brief explanation of Maymin’s proof, I will add that I am skeptical, as this result seems not to have been replicated (with regard to its empirical evidence) in any prominent journal, economic or mathematical. While one may cite the “natural conservatism” of the former profession as an explanation, the proof is simply too cool not to receive more attention. My understanding of theoretical computer science is limited, but to the extent that I am a good judge, the paper makes sense on first read. (Strike one against comparing him to Alain Lewis, whose very titles make me shiver?) I do have some quibbles, which I note along the way.

A Summary

Maymin ventures to prove a biconditional between the weak form of the EMH and P = NP. He notes this would be an interesting result, as the majority of financial economists have a fair degree of belief that markets are efficient, in contrast with computer scientists, who very much doubt that P = NP. (It is this relationship that I will critique, but more on that later.) The first part of the proof shows that efficient markets imply that P = NP.

The weak-form of the EMH asserts the following:

  • If a certain pattern – such as Amazon always moving with Google at a seven-day lag – is observed, it will immediately disappear as the market incorporates this information into prices.
  • Because markets are informationally efficient, said pattern will be found immediately.

Richard Thaler calls these, respectively, the “no free lunch” and “price is right” claims of the EMH. Maymin’s result suggests that for the latter to be true, there must exist polynomial-time algorithms for NP-complete problems (specifically, the Knapsack problem). We assume there are n past price changes, (1) for UP and (0) for DOWN. We take it that a given investor can hold a long, short, or neutral position at each price change; therefore, the total number of strategies is 3^n. We note that verifying whether a given strategy earns a statistically significant profit requires only a linear pass through the n past price changes, so the problem is in NP. (That is, given some model, is there a 95% chance that your strategy beats a monkey throwing darts at the WSJ over coffee each morning?) Remember, this whole thought experiment is an exercise in finding some pattern of past ups and downs, associated with future ups and downs, which reliably holds and hence may be exploited.
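
Here is a minimal sketch of that asymmetry (my own toy code, not Maymin’s): verifying any one strategy is a single linear pass over the price history, while the space of candidate strategies grows as 3^n:

```python
# A strategy here is simply a position (+1 long, 0 neutral, -1 short) held into
# each of the n past price changes, so there are 3^n candidate strategies, but
# verifying any one of them takes a single linear pass over the history.

from itertools import product

def profit(strategy, changes):
    """changes[i] is 1 for UP, 0 for DOWN; strategy[i] is the position held
    into change i. Profit is +1 for a correct long/short call, -1 otherwise."""
    return sum(pos * (1 if up else -1) for pos, up in zip(strategy, changes))

changes = (1, 0, 1, 1, 0, 1, 0, 0)            # toy price history, n = 8

# Verifying one candidate strategy: O(n).
candidate = (1, -1, 1, 1, -1, 1, 0, 0)
print(profit(candidate, changes))             # 6

# Exhaustive search over all 3^n strategies: fine for n = 8, hopeless for real n.
best = max(product((-1, 0, 1), repeat=len(changes)),
           key=lambda s: profit(s, changes))
print(profit(best, changes))                  # 8
```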

Maymin notes that, in practice, popular quantitative strategies are based on momentum and hence on some fixed lookback window t. He notes the joint-hypothesis problem from Fama (1970): the EMH only says that we cannot, given some equilibrium model with a normal profit K, earn in excess of K for a sustained period of time. He also resolves quite well what I find to be an important debate among EMH skeptics: how we reasonably search across the 3^n possible strategies. Some argue that we should stick specifically to economic theory, others submit blind data mining, and others still machine learning. Maymin notes that this is irrelevant to the question at hand, as the search strategy employed is endogenous to the determination of K.

Maymin agrees that determining whether a strategy iterated on one asset can earn supernormal profits takes polynomial time. However, he notes that under the assumptions that (a) investors do not have infinite leverage and (b) they operate under a budget constraint, answering the question “does there exist a portfolio of assets earning supernormal profits within our budget” is akin to solving the Knapsack problem.

For those who do not know, the Knapsack problem – a canonical introduction to discrete optimization and intractable problems – asks one, given n items each represented as a {value, size} pair, to maximize the total value while keeping the total size under a constraint C. In the analogy, size is the price of an asset following a t-length time series where t is the lookback window, value is the future return of the asset following the same time series, and the strategy on each asset is picked from a t-sized set U with {long, neutral, short} as possible options. Hence, in Maymin’s words, “the question of whether or not there exists a budget-conscious long-or-out strategy that generates statistically significant profit relative to a given model of market equilibrium is the same as the knapsack problem, which itself is NP-complete. Therefore, investors would be able to quickly compute the answer to this question if and only if P = NP.”
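
For readers who haven’t seen it, here is a toy sketch of the decision version of the Knapsack-style question Maymin has in mind – my own framing and made-up numbers, not his notation. Given assets with a price (“size”) and an expected future return (“value”), does some subset fit the budget while clearing a profit threshold? Brute force is exponential in the number of assets, which is the point:

```python
from itertools import combinations

def profitable_portfolio_exists(assets, budget, threshold):
    """assets: list of (price, expected_return) pairs. Returns True if some
    subset fits within the budget and its summed return meets the threshold."""
    for r in range(1, len(assets) + 1):
        for subset in combinations(assets, r):        # 2^n subsets in total
            total_price = sum(price for price, _ in subset)
            total_return = sum(ret for _, ret in subset)
            if total_price <= budget and total_return >= threshold:
                return True
    return False

assets = [(30, 4), (20, 3), (50, 8), (10, 1)]   # (price, expected return)
print(profitable_portfolio_exists(assets, budget=60, threshold=9))   # True: {50, 10}
```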

Maymin concludes that this algorithm is exponential in t, the size of the lookback window. He suggests that because t grows linearly with n (the length of the entire price history), markets become intractable rapidly. I must quibble, theoretically if not empirically (the empirics seem soundly in Maymin’s favor). Is there reason to assume that t ~ n? Is it not possible that, for asymptotically large n, t ~ log n? If that were the case, an algorithm exponential in t would be merely polynomial – indeed, roughly linear – in n for the market as a whole. Empirically, however, linearity seems to be a fair assumption. I might add that time series analyses are restricted by the assumption of stationarity; in the future, the window over which stationarity can reasonably be assumed might grow more than linearly relative to today, which would work in Maymin’s favor.

I have not yet explained why this means markets are efficient if and only if P = NP. Let’s say there is a group of investors searching through the total strategy set U, which is 3^n in size, for a supernormally profitable strategy. Let’s say that, by miracle, on one of my first guesses I happen to find one such strategy. If P = NP, theory suggests that most everyone else will also immediately find this strategy, and hence it will be “priced into the market”.

However, if P ≠ NP, it might take years for someone else to find the same strategy, allowing me to earn a supernormal profit for a long period of time. This would render even the weak form of the EMH false. What are the empirics in favor of this idea? Well, this is something that probably deserves further research, and I’m not happy with what’s provided, but Maymin cites Jegadeesh and Titman (1993) as a plausible example. Jegadeesh and Titman are credited with developing an investment strategy based on market momentum. Their strategy was largely unknown during the experiment window (1965 – 1989) and therefore not priced into the market. Maymin’s result would suggest that such a strategy becomes increasingly effective against the market as other participants contend with a linearly growing data set fed to an exponential-time search. He offers this as evidence:

[Chart omitted: returns to the momentum strategy, from Table 1 of Maymin’s paper.]

I don’t see it as such. First, assuming stationarity from 1927 to 1989 is incredibly iffy. Second, backtesting a current strategy onto historical trends tells us what? I am positive I can also find some strategy (not momentum-6) that shows just the opposite. So what? Rather, Maymin touches on the empirical evidence that would actually work in his favor. That is, NASDAQ was added to the data set in 1972, vastly increasing the number of data points. If some strategy earned supernormal profits, it would be exponentially harder to mine after the inclusion of that data. To the extent that the strategy remained broadly unknown, its performance against the market should increase relative to baseline after 1972. But he doesn’t cite this data.

On the one hand, I’m glad he offers the correct framework on which to make his prediction falsifiable. On the other, presenting the above data from “Table 1” as support for his hypothesis seems somewhat sly. I read this part quite favorably on my first parse, but employing this dataset is obviously incorrect for the hypothesis he attempts to prove.

The Corollary

Just as interestingly, and more convincingly, Maymin argues that an efficient market implies that P = NP by “programming” the market itself. To do this, he assumes that markets allow participants to place order-cancels-order transactions. I can say that I want to buy the Mona Lisa if it falls to $10, or sell David if it hits $15, but as soon as market conditions are such that one order is fulfilled, the other is automatically cancelled. We must actually assume that such orders with three items may be placed. Computer science nerds will know where this is going. Maymin wants to program the market to efficiently solve 3-SAT, quite literally the mother of NP-complete problems. It is beyond the scope of this post to explain the problem’s dynamics, but it is enough to know that an efficient solution to 3-SAT yields efficient solutions to many other intractable problems, including factoring large numbers and hence breaking into your bank account.

The logical form of the problem is as follows:

Let y = (a | b | !c) & (b | !d | a) & (z | m | a) & … & (a | b | !m), where all variables are boolean

Within the literature, this is known as “conjunctive normal form”. Each parenthetical phrase is a clause, which consists of a disjunction of three “literals” (a variable or its negation). Solving 3-SAT means finding an assignment of true or false to each variable such that the whole statement is true, or determining that no such assignment exists. No known algorithm does this in better than exponential time in the worst case.
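
As a concrete illustration – my own toy code, nothing from the paper – here is a brute-force 3-SAT checker; trying all 2^n assignments is exactly the exponential blow-up described above:

```python
from itertools import product

def satisfies(assignment, clauses):
    """A clause is satisfied if any of its literals is true under the assignment.
    A positive integer i stands for variable i; -i stands for NOT variable i."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

def solve_3sat(clauses):
    """Brute force over all 2^n truth assignments; returns one satisfying
    assignment as a dict, or None if the formula is unsatisfiable."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if satisfies(assignment, clauses):
            return assignment
    return None

# (a | b | !c) & (!a | c | d) & (!b | !c | !d), with a=1, b=2, c=3, d=4
clauses = [(1, 2, -3), (-1, 3, 4), (-2, -3, -4)]
print(solve_3sat(clauses))
```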

We can think of each clause as an order-cancels-order (OCO) group consisting of three possible transactions. A literal can imply a sale and a negated literal a purchase, or vice versa. Now let us price each asset (literal) at the midpoint of the bid-ask spread. Each order therefore yields a supernormal expected profit and will be immediately arbitraged away if markets are efficient.

Once we place the set of OCOs, they should all be executed within an arbitrarily small time period, as each by itself is a contradiction of the “no free lunch” condition of efficient markets. In fact, each of the OCOs must be executed so as to maximize profits, and that is what proponents of the EMH suppose the market does. Maymin does not state his implicit assumption that the time it takes to clear these transactions on the open market may not be instantaneous, only within the weak EMH’s bounds of “as quickly as possible”. I would say this is a big ding to the theoretical equivalence of the two efficiencies (he does not offer a mathematical or logical reason why one must be the other, but equivocates on the terminology), and I wish he had made it clearer. But I still think it’s an important result, because the EMH would be a toothless truth if “as quickly as possible” included the years it would take to solve a sufficiently large 3-SAT instance. Even without the theoretical equivalence, the structural similarities are striking.

Note that the example I provided above is rather easy, as there are almost as many variables as there are clauses; in reality, the instances in question are a lot harder. A market mechanism that “solved” such a set of OCOs in polynomial time would therefore have done something remarkable. If I want to solve a given 3-SAT problem, I just pick a stock for each variable, encode negated literals as sell orders and plain literals as buy orders, and place the clauses as OCO transactions, which must be executed in the profit-maximizing manner.
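
To make the bookkeeping of that encoding concrete, here is a purely illustrative sketch – my own, with made-up tickers – that captures only the clause-to-OCO mapping, not Maymin’s full construction:

```python
def clause_to_oco(clause, tickers):
    """clause: three signed integers, e.g. (1, 2, -3) for (a | b | !c).
    Returns one OCO group of three orders; a plain literal becomes a BUY
    and a negated literal a SELL on that variable's stock."""
    return [(tickers[abs(lit)], "BUY" if lit > 0 else "SELL") for lit in clause]

tickers = {1: "AAA", 2: "BBB", 3: "CCC", 4: "DDD"}   # one made-up stock per variable
formula = [(1, 2, -3), (-1, 3, 4), (-2, -3, -4)]     # same toy formula as before

for clause in formula:
    print(clause_to_oco(clause, tickers))

# Reading an execution back as an assignment: an executed BUY on a stock is
# "variable = true", an executed SELL is "variable = false". Executing one
# profitable order per OCO group, consistently across all groups, is the
# satisfiability question the efficient market is being asked to settle.
```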

I found this result far more compelling than the first; perhaps that reflects my propensity for computer science over finance.

Discussion

I’ve read quite a bit of work on how Walrasian equilibria are NP-hard, and the like. There seems to be a lot of literature relating economic games and equilibria to computational tractability. The question of the EMH is inherently different, and logical – not mathematical – in nature. The symmetry between the two fields here is one-to-one, even self-evident. So I disagree with Cowen’s quip that stuff like this comes up once every ten years. I can’t put my finger on it, but the previous literature suggesting such similarities had more to do with solving for some equilibrium that happens to be hard, rather than with the market’s own processing.

Specifically, this is why I find the second result (encoding 3-SAT in the market) to be mind-boggling.

Regardless, something in my gut rejects the logical biconditional between P = NP and the EMH. However, I think this result supports the idea that one should form one’s priors on both with a similar heuristic (which may yield a different result for either, depending on the heuristic used).

For example, Maymin notes the contradiction between most finance professionals believing the EMH to be true and most computer scientists rejecting that P = NP. Let’s take the latter case. How exactly can one form a prior that P ≠ NP? Well, one can believe that when the problem is solved, it will be resolved in favor of P ≠ NP. But that’s just turtles all the way down. A better way of expressing such a prior would be: “I believe that if I asked a magical black box ‘does P = NP?’, the answer would be in the negative”. You agree that’s a fair way of forming a subjective belief, right? This belief can be formed on any number of things, but for most computer scientists it seems to rest on the radical implications of a positive result (breaking RSA cryptosystems, etc.).

But to form such a prior, you must accept that, in a Popperian world, such black boxes can exist. However, the existence of any such magical truth-teller is ipso facto a stranger and more absurd reality than P = NP. Therefore, this question is not in any normal sense falsifiable (other than by the standard turtles-all-the-way-down route, which only speaks to the resolution of the problem rather than its true implications).

I would argue that even if the biconditional between P = NP and the EMH does not hold, for whatever reason, the structural analogy does. That is to say, submitting that the EMH is falsifiable would be akin to believing in such magical black boxes. It is better, as a general rule, not to form any priors regarding efficient markets in the abstract. Better to ask whether certain forms of markets are efficient.

The analogy to computer science holds even here. Though the best known algorithms for NP-complete problems are exponential in the worst case, heuristics and randomized approximations allow computer scientists to design remarkably efficient solutions for most instances, or for the average case. This is scientifically falsifiable, and the correct question to ask. Similarly, we may talk about the informational efficiency of a practical market, within certain bounds, granted certain approximations – and, crucially, granted some margin of error and the risks thereof. What are the chances a randomized input to a backtracking Knapsack solver will degrade to exponential time? What are the chances a market will fail to weed out inefficiencies given a certain level of technology? Indeed, Maymin suggests that it is such approximations and shortcuts that make a market perhaps inefficient, but a government even more so. He compares this result to the economic calculation problem famously associated with Hayek, which suggests something similar.

To me, this is an absolutely non-trivial comparison. After reading this paper, I genuinely believe it is futile to guess either whether P = NP (or, practically, whether, when the question is resolved, P will equal NP) or whether markets are efficient. However, according to the St. Louis Fed, this paper has never been cited. (I intend to change that, one day.)

Within appropriate bounds, Maymin’s paper illuminates not necessarily how we view efficient markets, but how we view the debate thereof.