
Monthly Archives: June 2013

Rarely is a book’s subtitle, in this case How Institutions Decay and Economies Die, so well-suited to describing the book itself. Let me start by noting that I agree with Niall Ferguson’s principal charge: that institutions, more than geography and culture, explain global economic disparity, and that their decay – especially in the West – should be of concern. At least more than it is right now. Also, despite the fact that he seems to know almost nothing about monetary policy, I found The Ascent of Money to be invaluable. Never, however, have I read a book – no less from an endowed chair at Harvard – that is so blatant in its fraudulent claims, vindicated only by a lawyerly interpretation of grammar: something, as any reader of the book knows, Ferguson does not like.

This book is clearly written for a lay audience, given its understandably pedantic explanation of what exactly an “institution” is (no, friends, not a mental asylum, Ferguson reminds us). Therefore, it is fair to assume Ferguson does not expect his audience to have read the papers cited throughout. It is his responsibility as a member of elite academia to represent those papers with honesty and scholarship. As page 100 of the Allen Lane copy suggests, Ferguson does not agree:

It is startling to find how poorly the United States now fares when judged by these criteria [relating to the ease of doing business]. In a 2011 survey, [Michael] Porter [of Harvard] and his colleagues asked HBS alumni about 607 instances of decisions on whether or not to offshore operations. The United States retained the business in just ninety-six cases (16 per cent) and lost it in all the rest. Asked why they favoured foreign locations, the respondents listed the areas where they saw the US falling further behind the rest of the world. The top ten reasons included:

1. the effectiveness of the political system;

2. the complexity of the tax code;

3. regulation;

4. the efficiency of the legal framework;

5. the flexibility in hiring and firing.

As it happens, I have read this paper. And, like I said, the only way Ferguson’s cherry-picked nonsense can be justified is through the emphasized grammar. Indeed the “top ten” reasons did (in different words) include the above. Take a look for yourself:

The reader is led to believe that the five listed reasons are at or near the top of such a list created by Porter and his team. To the contrary, the biggest reason, almost twice as prevalent as anything else, was “Lower wage rates (in the destination country)”. That would seem to suggest that “globalization” and “technology” play a role, which Ferguson shrugs off as irrelevant. The tax system is sixth on the list and cannot in any way be considered primary. There is a chance that Ferguson was using another, more defensible, graph in defense of his claim:

However, this division is not directly emergent from the study of HBS alumni themselves, as Ferguson suggests, but from later analysis. These are not “responses” at all. Furthermore, even though the higher skill and education of foreign labor were core components of offshoring, Ferguson does not cite from the above graph America’s lagging “K-12 education system”. Or take the “flexibility in hiring and firing” which, from this graph, isn’t definitively a weakness or deteriorating at all. (Edit: if it is this graph, “flexibility in hiring and firing” is still a very clear strength – and looks stable. So it would be purely dishonest to list it as a reason for decay. It’s like cherry-picking cherries from an apple orchard.) He also fails to cite “logistics infrastructure” and “skilled labor”, whose deterioration leads the authors to note that “investments in public goods crucial to competitiveness came under increasing pressure”. They do not gel with his hypothesis that the “state is the enemy of society”.

What of Reinhart and Rogoff’s famous paper, “Growth in a Time of Debt”? This study has much-documented methodological and computational errors, but even before these were furiously publicized in the past three months – in academic circles to which Ferguson supposedly belongs – the results were highly disputed. Even the authors, explicitly at least, warned against confusing correlation with causation. But Niall Ferguson notes that “Carmen and Vincent Reinhart and Ken Rogoff show that debt overhangs were associated with lower growth”. “Associated” with? Okay, cool. But then he, gleefully, argues against deficits “because the debt burden lowers growth”. And here I thought they teach historians about correlation and causation at Oxford. Also, with gay abandon, he ignores Reinhart and Rogoff’s consistent claim that inflation is a necessary tool today, arguing instead that higher inflation will lead to no good and signing his name to letters warning of hyperinflation.

Already, there is a pattern of citing microscopic elements of large, nuanced (in the former case), or debunked (in the latter case) studies without providing the appropriate context, or indeed the truth at all. Again, grammar – and that alone – can vindicate Ferguson’s convoluted logic.

The confusion doesn’t end with the misrepresentation of facts and opinions, however. He even misrepresents the legends. Most educated but lay readers know very little of Adam Smith beyond something about the free market and the famously invisible hand of the price mechanism. So they can be easily lied to about his beliefs. Ferguson inaugurates his book with a tribute to Smith via his writings on “the Stationary State” (the “West” today), noting:

I defy the Western reader not to feel an uneasy sense of recognition in contemplating these two passages.

He pastes Smith’s lucid argument that the standard of living of labor is high only in the “progressive” state, “hard” in the “stationary” state, and “miserable” in the “declining” state.

This Smithian motif continues throughout the book. What readers are not told is that Adam Smith believed all growth eventually became stationary. Famous phrases like “the division of labor is limited by the extent of the market” aren’t catchy for nothing. He believed that as wages increased, so too would the population, pushing wages back down, in a perpetual negative feedback loop toward a steady-state population.

Adam Smith did believe that deregulation and free trade – contrary to the mercantilist principles of the day – would increase that maximum steady state. But he, like so many classical economists around him, did not believe in the power of human ingenuity to lift human living conditions. Permanently. And he certainly did not qualify his argument with modern prescriptions of “institutional change” as advanced by Acemoglu and Robinson, as Ferguson would have you believe.

And if the misrepresentation of philosophers (sorry, Professor Ferguson, but John Stuart Mill bordered on socialist), economists, facts, and opinions is not enough, perhaps the sheer internal inconsistency of the whole thing is. He casually notes:

Nor can we explain the great divergence [between the West and the Rest] in terms of imperialism; the other civilizations did plenty of that before Europeans began crossing oceans and conquering.

In the next paragraph, I kid you not, he cites the “ghost acres” of enslaved Caribbean farmers,

which were soon providing the peoples of the Atlantic metropoles with abundant sugar, a compact source of calories unavailable to most Asians.

SUCKERS! He also argues that the massive buildup of British debt “was a benign development”. But there was one big difference between them and us. They were imperialists:

Though the national debt grew enormously in the course of England’s many wars with France, reaching a peak of more than 260 per cent of GDP in the decade after 1815, this leverage earned a handsome return, because on the other side of the balance sheet, acquired largely with a debt-financed navy, was a global empire […] There was no default. There was no inflation. And Britannia bestrode the globe.

Alright, folks, so this is what tenured professors at by far the world’s most prestigious university want you to know: debt is okay so long as it finances a navy and enslaves other people on whom you may force a market. Or if it fuels a war (throughout the book Ferguson talks about “peacetime” debt, as if using your credit card to kill and maim is chill). This explains why Niall Ferguson kept hush about George Bush’s credit-fueled war on Iraq, and presumably the lovely occupation thereafter. But when debt finances education or infrastructure, food stamps or healthcare, unemployment relief or development aid, we must “be cautious of inflation and slower growth”. Thereupon we develop Fergusonian Inequivalence: Ricardian equivalence, by economic law, holds only for peacetime debt. What?

In fact, contrary to his initial claim, the whole damn chapter is about the benefits of imperialism. But here’s a little secret: just as everyone can’t run a trade surplus, everyone can’t imperialize the shit out of other countries. In fact, Niall Ferguson’s relationship with imperialism runs deeper than it appears at first glance. Throughout the book, he pays tribute to the responsible “capital accumulation” that so helped 19th century European economies. However, as we know from John A. Hobson, one of England’s best economists, imperialism is the direct and necessary outgrowth of such accumulation. Capitalists could no longer realize sufficient profit from domestic consumption alone, requiring large and bountiful export markets. Furthermore, domestic industry would no longer absorb capital at a rate commensurate with high profits, and the surplus would need to be invested somewhere. Contrast, hence, the domestic versus national products of British colonies to see just what Niall Ferguson believes was a good thing.

I could devote paragraphs to every other flaw within, but then I’d have to write something longer than the book itself, so here’s a summary:

  • Ferguson repeatedly invokes Edmund Burke’s “partnership of the generations” in the context of fiscal irresponsibility. He’s also delighted about the failures of the “green fantasy” and more than once criticizes those worrying about degrading the “environment”. I will leave it as an exercise to the reader to find the irony.
  • He keeps talking as if central banks should control something called “asset price inflation”. First of all, huh? Second of all, we tried that once upon a time. It’s called the “gold standard”. Remember how that went? (If you need a history lesson, take a look at Europe today)
  • He talks a lot about America’s broken regulatory system and how it’s “consistently” beaten by Hong Kong. Yes.
  • He cites the World Bank’s Ease of Doing Business indicators, and yet fails to tell the reader – shock! – that the only three countries ahead of America (Singapore, Hong Kong, and New Zealand) have a combined population smaller than New York’s. He has no problem, however, citing Heritage’s far more subjective “Economic Freedom” index, which places America far behind. By the way, Heritage is a far more scholarly organization than the World Bank, folks.
  • He tells us in a footnote that Iran’s appearance on the IFC’s “Doing Business” report “is a reminder that such databases must be used with caution”. So apparently his only inclusion criterion for “further consideration” is the Boolean variable “is this country part of an Axis of Evil”.
  • As Daniel Altman has suggested, British legal institutions may not be all that great.
  • He consistently confuses the reader with stocks and flows. Sometimes China is this amazing story of incredible growth of which America should be afraid. He devotes pages and pages to the West’s relative decline against the “Rest”. But suddenly, against a purported counterpoint to his argument regarding state capitalism, he also notes that America is far more productive than China.
  • On this note, he waxes eloquent about the “Rest’s” amazing sovereign wealth funds valued in the trillions. On the same page, he talks about the ills of state capitalism. Huh? Where exactly does he think China and Saudi Arabia get their money from? Selling software?
  • For that matter, he talks about the “artificial” purchase of American debt from China keeping American interest rates unfairly low. What exactly does he think China should do with its trade surplus? Burn it?
  • (A retrospective edit from my notes.) He dismissively passes off Steven Pinker’s claim that violence has decreased over time, saying he hasn’t seen “statistics”; does he know his colleague has written a 700-page book on the subject? This also casts doubt on his whole “this is peacetime” framing.
  • He devotes a whole chapter to the erosion of civil society. Beneath the veneer of an actually valuable point is an argument that we should replace progressive taxes with volunteerism. Yeah, okay, that’s new. But anyway, an overwhelming portion of civil charities believe that government provision of goods and services to the poor is complementary to their work.
  • He bemoans the fact that even as the average donation amount has increased, the number of people who donate has fallen. This bothers me too. Except I don’t call it the battle between “Civil and Uncivil Societies”. I call it “inequality”. As the divergence between mean and median wages continues, this is to be expected. It is, by definition, the result of an increase in relative poverty.

But nothing so far comes close to measuring the deepest flaw in this book, and indeed his worldview. The whole argument behind his “fall of the West” thesis comes from relative decline. Indeed, the first graph in the whole book shows the historical ratio between British and Indian per capita income levels. Maybe the recent fall in that ratio bothers neocolonialists like Niall Ferguson, but to me it is empowering. To me it is the amazing comeuppance (as commenter Julian points out, I’ve used this word incorrectly) – the rise of my country, which, mind you, was suppressed for so long by his. I would not make it personal if he shared the accepted, and indeed correct, view that colonialism was largely extractive, with a few piecemeal benefits on the side.

Let me tell you, Niall Ferguson, that I will not be happy until the damned ratio in your primary graph falls to, and below, one. What you see as the fall of some stupid and glorious ideology is what over three billion people of this Earth see as the final coming of dignity and prosperity.

As an American I share with you the concern about decaying institutions. But I’m looking at pictures of Detroit and of New York City after Hurricane Sandy. As a true believer in market institutions I am looking with concern, but full understanding, at the Occupy movement. I am looking at a world which now mistakes America for a warmonger and projects onto us a mantle you wear so elegantly: “imperialist”.

There are different levels of argument. There is that of fact. Then of analysis. And then of meta-analysis and beyond. Debunking this book requires little more than the first. For it is nothing more than the whimper of a dying idea.

As evidenced by the huge interest in – and, indeed, rebuke of – the so-styled “Dynamic Stochastic General Equilibrium” models (DSGEs), we know that economists desperately want a predictive framework with which to guide policy. Setting aside the epistemological problems associated with prediction, of which there are many, this is a laudable goal. However, unlike the stock market, there seems to be little financial reward for “getting it right”. To the extent many of these models are developed for no-name technocrats within a central bank, the intellectual glory also seems small. Maybe I’m wrong about this, but intellectual glory in economics seems well-divorced from prediction per se* (especially when our prior on the ability to do so is small). Without well-aligned incentives, as those in the profession certainly know, the results are poor. This must be changed; yes, we must apply economics to itself.

I initially believed that Federal Reserve-sponsored “contests” for the best model would create a fertile environment for development. Models would not be restricted to (what I think are) dubious DSGEs, but might include the more promising agent-based models (ABMs), which rely not on analytics but on computation. There seems to be hope in DSGEs with financial frictions (see this), but that just strikes me as a flavor-of-the-month type of deal. I think, now, there is a better way: the prediction market.

A DSGE has, broadly, two components: the model specification (incorporation of wage rigidities, the financial system, etc.) and parametric estimation (invariant discount rates, the cost of adjusting a firm’s capital stock, etc.). The former is endogenous, the latter exogenous. A little digression here. While the emergence of DSGEs is linked with the Lucas Critique, which warns against the use of aggregate economic variables in policy action, it is questionable to suggest that DSGEs are in any meaningful sense immune to it, as almost every proponent claims. It is much more accurate to say that DSGEs assume the Lucas Critique away, by believing in policy-invariant parametric estimations. Indeed, many are estimated by “historical averages”, which would earn Robert Lucas’ ire. More succinctly: microfoundations mutate.

Back to the idea. A full model can be captured by its internal specification and parametric estimations. The former are usually various iterations of neoclassical theory (rational expectations, sticky prices, and such). The latter are generally taken as Bayesian priors formed from quite sophisticated statistical methods.

Imagine now a marketplace wherein participants may “bet” on various models. The bet would be specified in the form of an investor’s odds that a given model will deviate from the realized outcome over n years by less than k% (that is, by the total area between the projected and actual paths). This ensures the model succeeds on out-of-sample projection. Model designers may “upload” their product onto the market, with acceptance from regulators. Tweakers may take an existing model and create another iteration thereof by adjusting its parameters. The site will show each model with every associated mutation. (Important to note: a prediction market for models is quite different from one on outcomes. For example, long-term Treasuries are a good bet on growth expectations but tell us nothing about economic structure.)
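
To make the bet concrete, here is a minimal sketch of how such a wager might be scored. The function names and the simple “sum of gaps” deviation metric are my own illustrative choices, not a specification of how the market would actually settle.

```python
# Illustrative sketch (my own, hypothetical): scoring a bet that a model's
# projected path stays within k% of the realized path over n years.

def deviation_share(projected, actual):
    """Total absolute gap between the two paths, as a share of the actual path.
    Both inputs are equal-length lists of, say, annual GDP levels."""
    assert len(projected) == len(actual)
    gap = sum(abs(p - a) for p, a in zip(projected, actual))
    scale = sum(abs(a) for a in actual)
    return gap / scale

def bet_pays_out(projected, actual, k=0.02):
    """The bet wins if the model deviated from reality by less than k (e.g. 2%)."""
    return deviation_share(projected, actual) < k

# Example: a model that projected steady 2% growth against a path that stalled.
projected = [100.0, 102.0, 104.0, 106.1]
actual    = [100.0, 101.5, 101.7, 103.0]
print(deviation_share(projected, actual))   # ~0.0145
print(bet_pays_out(projected, actual))      # True at k = 2%
```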

If such a market existed, and were thick, it would have policy relevance. For example, central banks currently employ statistical techniques to measure discount rates and capital adjustment costs. That’s great, but these are hard to measure, resulting in weak priors on their veracity. By allowing investors to tweak the model framework – which will grow large incredibly fast – the market will estimate the best guess of the expected future parameters. This would better overcome the Lucas Critique, as predictions from this market are not simple statistical aggregations, but rational expectations.

(Quick sidebar: the market might be illiquid, so it’s clearly not something on which central banks can form definitive rules, but it still aggregates information more efficiently and creates more lasting incentives – it is, as I will explain, best seen as a source for metamodeling).

The market would also tell us the overall confidence investors have in one model over the other. While in-sample (past) accuracy would obviously be incorporated into part of the expectation, this would also include more important intangibles that only a whole market can know. Therefore, this prediction market would tell a central banker a) the expectedly best model and b) the expected parametric estimates for said model.

This has incredible policy relevance, but it also tells economists what the market estimates wage rigidities to be. There is no prediction market to measure this today, as the answer to the question is itself tethered to a host of theoretical and statistical estimates. A prediction market for a complete DSGE, on the other hand, would yield a more fruitful answer. I am not being fully honest. If the market predicts that the parameters for the best model are {a, b, c, d…}, we can only know that it believes a is the closest estimate given the other parameters. Therefore, we can only form a prior on wage rigidity by assuming many other coefficients as well. This is still hugely useful; I can’t think of an economist who wouldn’t want to know these numbers.

There is also the obvious reality that the parameter set for one model will not be the same as another’s. This is tautological, as models have different parameters to begin with (like financial market frictions in post-crisis DSGEs). More importantly, if investors believe a certain model places incorrect emphasis on a certain parameter (like wage rigidities), it will skew the bet towards said parameter without necessarily revealing anything nontrivial. Therefore the best parameters for the best model are the key. Compare polls versus prediction markets!

The market would then create for us a better model than has ever existed, which would itself enter the marketplace and take bets. Just like a stock market, it would be in constant ferment, but hopefully show broad underlying trends which may themselves be incorporated into future designs. Modelers might learn that the market doesn’t think price rigidities are important and hence focus on other frictions, making more efficient use of a model’s limited dynamics, as analytical methods become intractable after a point.

The prediction market would also be open to ABMs and to pure statistical estimates. The former are still quite new to economics, and hence underdeveloped, and the latter have been shown to be less accurate than DSGEs in the long run. And to the extent all models endogenize central bank behavior, as they should, the market will be immune to government failure from a surprise change in monetary regime.

I have explained how we can use the market to tell us the right DSGE. But we can also use it to incentivize the creation of better models, microfounded or not. Think about how Nate Silver made election-forecast history in 2008 and again in 2012. He relied a little bit on polls (the statistical estimates, or individual DSGEs, of this analogy) and a bit on Intrade (the prediction market). He decided he knew better than the market as a whole – and way, way better than some idiotic estimates from Rasmussen or Gallup. He profited. Of course, had he bet on a prediction market, his winnings would have been limited by the fact that everyone else thought Obama would win as well.

An election forecast is discrete (though it may be analogized in the form of distributions, etc.). The growth rates of productivity, wages, capital, and GDP are not. If I’m a Nate Silver, and I feel the current favorite of the prediction market can be improved, I go home and create a nice little DSGE. I’m also the first one to know about this DSGE, by definition. I form a subjective prior on the veracity of my product, which ipso facto should be higher than my odds that the market pick is right, upload it to the market, and place my (relatively) high odds on it: a nice, juicy bet.

If it turns out my model does “beat the market”, I’ll earn a healthy sum until other models and the market pick tweak it into the average (weak EMH-ish). Suddenly, model designers across the world are incentivized not just to create DSGEs that will most beautifully grace the pages of the AER or impress friends, but those that will best predict out-of-sample trends. So not only do we have a better aggregation of existing models and parameters, which is itself hugely useful, but we have, without any cost, incentivized people to create a better model. Furthermore, you may not be a model designer but an econometrician. You now have the incentive to gather the best estimates of future parameters and tweak the DSGE of your choice accordingly. Clearly such a market will also encourage cleaner and better statistics, as the goal is now accuracy, not publication.

Of course, there is one flaw. Say I don’t have much money of my own to begin with, and it’s certainly not disposable. I might have a beautiful DSGE, but I wouldn’t bet much on it due to risk aversion. There are three reasons why this doesn’t matter much:

  • The variance of wealth and attitudes towards risk will be small among the sample of people who can actually create a better DSGE (largely academics, but almost certainly all upper-middle-class folks).
  • To the extent the work is sponsored by a university, the risk (and reward) will be shared institutionally.
  • At worst I will not be incentivized, and chances are I will be. If I don’t have the confidence in myself to bet on the market, I will just publish my DSGE as I do today, and receive the same credit that I do today. So even at its worst (and this is highly unlikely), this market only better aligns incentives.

I believe that if we were to create such a market, with the usual expectations of completeness and competitiveness, the economic prediction landscape would mirror Kuhnian paradigms, just like a real science. This will be so because, shortly after the emergence of such a market, some academic will create a highly successful metamodel (vis-à-vis the market, if not reality). This would run some, presumably complex, algorithm on variables from the world and the prediction market itself, and create a DSGE therefrom. This is a Nate Silver.

Suddenly, someone will discover an even better metamodel, either because it was hitherto undiscovered or because it reflects a changing economic landscape that would have been too complicated to endogenize. This is the paradigm shift, the revolution. Over a long period of time, this market will develop and more closely approximate the efficient market hypothesis. The emergence of complex financial instruments on this market (such as, but not limited to, options on parameter values) will yield estimates of future trends more sophisticated than any individual person could design. Within a paradigm, economic theorists can learn general trends among successful DSGEs (maybe the top 100 DSGEs all have wage stickiness).

We would, quite literally, be evolving models. Economists nod to the difficulty of creating a model whose specification is limited to in-sample data. Prediction markets are the mecca of everything future; everything out-of-sample. Excitingly, this is not limited to economics, but extends to modeling of any kind. While such problems are most acute in the social sciences, where microfoundations are not well understood, a model market could play a crucial role in geophysical models of earthquake prediction and climate change. Indeed, there is no reason this model marketplace (“model” the adjective and “model” the noun) should be restricted to economics at all.

Brad DeLong calls big finance a big parasite, which it totally is. But there’s no reason it has to be, and this is an example to that effect. If I could, I’d be willing to bet on that. Shame that I can’t.

 

*At least when predictions are right (see: Austrians)

Last week I wrote about the puzzling phenomenon of an increasing wage share for the top 1% against the backdrop of a falling wage share for the country as a whole. The first important takeaway is that labor share isn’t as relevant to stagnating wages or emerging inequality as many believe.

There is, I realize, a deeper connection between this fact and the broader debate about the 1% and their “fair share”. We know that in America, as in many other rich countries, capital’s share of income has increased substantially since 1970. (This is also around the year the Gini uptick begins, which, I think, is an accident.) We also know that capital is disproportionately owned by the rich: in 2009 the top 1% of Americans owned over 35% of all stock.

We would extrapolate from these stylized facts that non-wage income would play a disproportionately important role in the incomes of the wealthy. And we would be wrong:

[Image: shares of top incomes coming from wages versus capital gains, over time]

(I should also include income from interest and land rent in the latter group, but these, in general, followed the same trend and were relatively insignificant.)

This surprised me. Not only do we fail to see what would be an expected trend, but we see broadly the opposite: wages play an increasingly important role for America’s elite. To believe that every bit of the elite income advantage derives from productivity differences, as Greg Mankiw clearly does, would be to believe in not just a rapid divergence of human capital, but one so rapid as to make up for the emerging importance of capital gains.
I wanted to find out more about the distribution of wages over time, but my site of choice, “The World Top Incomes Database”, doesn’t have such granular data. Nonetheless, given the 1% share of total income, the labor share of GDP, and the wage share of the 1%, this isn’t hard to estimate with basic probability:

p(1% | wage) = p(1%) * p(wage | 1%) / p(wage).

The quantity we want, p(1% | wage), is the share of total wages captured by the 1%. The prior p(1%), the probability that a random dollar of income goes to the 1%, is the 1% share of total income. The income database from which my above graphs were created gives us p(wage | 1%), the chance that a random dollar allocated to the 1% comes from earned income. And p(wage) is labor’s share of income, which I estimated using the FRED series (WASCUR/GDP).
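
For concreteness, here is the arithmetic spelled out in a few lines of code. The input numbers are made up for illustration and are not taken from either database.

```python
# Bayes' rule applied to "a random dollar of income":
#   p(1% | wage) = p(1%) * p(wage | 1%) / p(wage)
# Illustrative, hypothetical inputs (not the actual series):

p_top            = 0.18   # 1% share of total income (Top Incomes Database)
p_wage_given_top = 0.60   # share of 1% income that is earned (wages, bonuses, options)
p_wage           = 0.55   # labor share of income (e.g. FRED WASCUR/GDP)

# Share of total wages captured by the 1%:
p_top_given_wage = p_top * p_wage_given_top / p_wage
print(round(p_top_given_wage, 3))            # 0.196

# The series plotted below is the gap between the 1% income share and the
# 1% wage share: positive means the 1% advantage comes mostly from capital.
print(round(p_top - p_top_given_wage, 3))    # -0.016
```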

This gives me a time series of the wage distribution, and I found the most curious thing:
[Image: the gap between the 1% share of total income and the 1% share of total wages, over time]

The y-axis is the difference between the share of total income captured by the 1% and the share of total wages captured by the 1%. It remained above zero till the late ’70s, implying capital gains, land rents, and other unearned income were disproportionately allocated to the wealthy relative to the country. That’s what you would expect. But increasingly since the ’80s, the capital discrepancy seems to be falling in favor of a wage discrepancy (both cannot increase at once, as they must all average out to the 1%’s share of total income).

And here’s why this is the case:

Wage income includes wages, bonuses, and exercised stock options

Ay, there’s the rub! If you look at the first two graphs, you’ll see that the top 5% and 1% move very closely with the .05%. This is, of course, because most income going to the “affluent plus” category really goes to the super-rich. Members of the top .05% earn, on average, almost 2 million dollars annually, most of which comes as bonuses. This lends support, I believe, to new evidence from Josh Bivens and Lawrence Mishel suggesting that the supply-side tax cuts of the ’90s and noughties simply increased the chance and extent to which top executives negotiated for a higher salary.

If you define labor rent as income over and above that which is required to keep an employee in his current position, nothing comes closer than negotiating a higher salary just because the tax rate is lower. (I have other issues with the paper; for example, might this not indirectly support the claim that lower tax rates create a desire for higher income, whether realized through rentiership or productivity?)

Corporate governance is hard to measure. I do think, however, that the growing blur between the chief executive and the chairman creates a severe principal-agent dilemma insofar as it allows the allocation of unnecessary rents to top CEOs. By this I don’t mean celebrity CEOs, who number less than a handful, but the unnamed mass who run public corporations.

Are we to believe that this CEO, just through genetic assortment (as Mankiw believes) and disproportionate access to opportunity (only 40% of Harvard is on financial aid, for example, which means over 60% of kids come from families that can afford to pay $60,000 a year), has increased his wages not only thus, but to the extent that his increased wage dominates his increased income from capital? This means that if wages really tracked productivity, the rich are at singularity. And probably beyond.

That is the question, I believe, so-called “defenders of the 1%” must answer. And other liberals, who love a good ol’ battle between labor and capital, should probably accept a falling labor share – for good – and try to promote a system that most equitably distributes opportunities given this constraint.

P.S. I’m in the process of crunching the same numbers for European countries. I’ve found the increase in wages for the French elite to be far more muted. Indeed, the French elite have seen an increase in the share of their income from dividends. Overall, the wage-capital discrepancy (accounting for changes in labor share, etc.) seems to be tamer. That said, the countries I’d most like to compare with America are Germany and the United Kingdom, neither of which has the granularity of data (through the Top Incomes Database) available for less relevant countries like Italy and Spain.

Peter Orszag here considers the interesting and recent phenomenon of taxi drivers with college degrees. Tyler Cowen adds comment here. Let’s consider a few stylized facts about the emerging labor market:

  • There will be stark inequality.
  • College grads tend to be underemployed or unemployed unlike ever before.
  • Without good luck, you can expect to graduate with not insignificant debt.
  • There’s something about this “knowledge economy” thing. It’s probably good if you can take the derivative of a function, and such.

There are two ways to have this conversation. We either talk about the “new economy”, where paying the big bucks for an English degree isn’t worth it, but also where you can get a full computer science degree from a top school for less than $8,000. Or we talk about the “old economy”, where people still care about the vanilla Bachelors. For signaling, if nothing else.

I think Orszag, and too many others, speak in a hybrid construct: a mongrel of the old education system and the new labor market. That’s very understandable, because this is where we are today. Still, it’s broadly not a very useful discussion. The point of this post isn’t to pontificate about the future, but to give some concrete predictions on college education vis-à-vis the labor market.

Here’s the money word: bundling. College, the experience, is, contrary to popular belief, mostly a consumption good. We’ve successfully disguised it as investment because the relatively small portion of it that was investment yielded huge dividends between 1950 and 1980. Bundling worked in this context because the experience aggregated many different goods (English 101, Econ 102, etc.) into one. The consumption value of college exists today, but the economic fundamentals behind bundling do not. Here are the features of a successful bundle (like pre-Netflix TV):

  1. Relative economies of scale and scope.
  2. Opportunity cost of bundling is low.
  3. Very high barriers to entry.
  4. There’s value from the bundling itself.
  5. And, most importantly, heterogeneous demand. Technically, this means that for two agents, demand for components within the bundle is inversely correlated (you like Math relative to Physics, and I vice versa).

(1), (3), and to an extent (4) probably will still exist in some form. But the opportunity cost of bundling will no longer be low. Bundling means a university has to congregate PhDs in about 100 different fields within a physical locale, provide them with good salaries, a good number of them with tenure, and all of them with good research activities. For teaching schools it means somehow convincing a PhD to get off the “publish or perish” mindset. For research schools it means ensuring strong research opportunities exist, for all disciplines.

Then you have to add agglomeration effects, which work deeply in favor of Harvard and MIT over a random university. If I’m a hotshot with a PhD in Math from Princeton, where do you think I want to go? (Hint: not Utah.) All of this makes the non-established university, meant for the non-established student, an incredibly expensive operation. It was worth it, until about a decade ago.

But things are worse today. Young whippersnappers don’t want to learn from a third-rate professor in a lecture hall of 500. They can learn, quite literally, from the ustads of a field through online ventures. Yes, there’s a lot of skepticism about that today. But to the extent the new “knowledge economy” is everything people fear, online education in its best form will exist. In the next ten years, highly versatile technologies will allow students to interface across the globe in ways we can’t imagine.

How can a current bundler, with its non-top-tier professors, compete with the best future version of edX? Khan? Minerva? The bundlers will begin their slow descent into glorified credentialing services, and then into test centers with Gothic halls.

With education debundled, students will no longer undergo a whole four years of education, but will present disaggregated credentials for various courses. Tyler Cowen sees the future in a way that is, I think, not dissimilar:

The more likely scenario is that the variance of the return to having a college education has gone up, and indeed that is what you would expect from a world of rising income inequality.  Many people get the degree, yet without learning the skills they need for the modern workplace.  In other words, the world of work is changing faster than the world of what we teach (surprise, surprise).  The lesser trained students end up driving cabs, if they can work a GPS that is.  The lack of skill of those students also raises wage returns for those individuals who a) have the degree, b) are self-taught about the modern workplace, and c) show the personality skills that employers now know to look for.  All of a sudden those individuals face less competition and so their wages rise.  The high returns stem from blending formal education with their intangibles.

Except it’s not the variance that’s important. I see it this way. Every college student’s future earning potential, after controlling for background and genetics, is normally distributed. The uncertainty arises from intangibles (call this the Charles Murray factor) but also from dumb luck. On Cowen’s account, it is the variance of means that increases when, in fact, it is the variance of variances. There’s no way to forecast this, or even to form a Bayesian prior on what one’s own normal distribution is. But we can assert that it exists, and that it will change.

It’s a little tricky to explain why I think this. An increase in variance, other things equal, means there’s a higher chance that someone with a lower mean (because they came from a poor family and such) will end up in the top decile. That’s clearly not the case. So we can take the inequality of prior means to be a background condition, the symptom of an unequal country. Within this context, the debundling of education will increase the variance by allowing students to more accurately target their strengths. There’s also much more room for falling below the curve without the guidance of a structured environment. The variance of variances increases because certain people, poor and not, will choose to take a “safer” path that doesn’t lend itself to mobility.

Net net, you will see more poor kids surprisingly make it “out” in the debundled world than under the counterfactual of our current education system. And you’ll see more people do even worse. So Cowen is right, in a sense, that the variance of the aggregate normal distribution will increase. However, this is predicated on a background trend (inequality) that almost guarantees that result.
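
A toy simulation (entirely my own, made-up numbers) illustrates the distinction: widening either the spread of individual means or the spread of individual variances fattens the aggregate distribution of outcomes, but only the latter changes how uncertain any one student’s future is.

```python
# Toy simulation of the "variance of means" vs "variance of variances" point.
# Each student's earnings are drawn from a personal normal distribution.
import random

random.seed(0)

def aggregate_sd(mean_spread, sd_spread, n=50_000, base_mean=50, base_sd=10):
    """Draw one outcome per student; means and sds vary across students."""
    draws = []
    for _ in range(n):
        m = random.gauss(base_mean, mean_spread)           # student's own mean
        s = abs(random.gauss(base_sd, sd_spread)) or 1e-9  # student's own sd
        draws.append(random.gauss(m, s))
    mu = sum(draws) / n
    return (sum((x - mu) ** 2 for x in draws) / n) ** 0.5

print(aggregate_sd(mean_spread=5,  sd_spread=0))   # baseline
print(aggregate_sd(mean_spread=15, sd_spread=0))   # wider spread of means
print(aggregate_sd(mean_spread=5,  sd_spread=8))   # wider spread of variances
# Both changes widen the aggregate distribution; only the last one raises
# the uncertainty facing each individual student.
```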

Variance across the aggregate normal is bad if it emerges from variance in individual means and a concentration of uncertainty among the bottom quintile. But variance for the individual, to the extent it is equally distributed, is a great thing. Variance kills the despondence of the poor. It kills the complacence of the rich. Indeed, we should do everything in our power to make the prior normal distribution of a one-year-old just about as uncertain as possible.

That is the goal of a competitive society. It’s a well-known story that soon after the American Revolution, productivity stateside skyrocketed versus Britain. The absence of a state-protected aristocracy gave the American farmhand huge incentive to work hard, and get rich. His variance was high.

The “Great Unwinding” is better understood as not just a divergence of mean but also a contraction of variance. My certainty that I’ll never be “on the dole” – as an eerily high number of Americans, at one time or another, are – is a sign of societal ill. And the debundling of education can fix that.

A certain group will be exempt. Indeed, I expect that Harvard and Yale will last longer than America itself. The intense agglomeration economies within Ivy League and similar schools, and the incredibly different population that attends them, are removed from the scope of this observation. But take comfort that the top 25 schools graduate less than 1% of the population. Indeed, once the variance of variances of the population increases, competition to attend the Ivy League will follow suit, and next time around rich kids may not have the upper hand they do today.

Alas, I am not predicting an end to inequality. But I do see in the Internet’s evolution a huge potential for more equal opportunity. I fear that too many will still be slaves to genetics. Whether of looks, brains, or charm. What’s my most absurd belief? That I will die in a Rawlsian society. At age 18, I will also say it is my most optimistic belief.

Ritwik Priya sent me an intriguing paper from Philip Maymin arguing that an efficient solution to NP-complete problems is reducible to efficient markets, and vice versa. In other words, the weak-form efficient market hypothesis holds true if and only if P = NP. This result is not published in a peer-reviewed journal (as far as I can tell), but purports to be a remarkable discovery. My bad, it looks like it’s published in Algorithmic Finance, which has Ken Arrow and Myron Scholes on its editorial board. I’m still surprised NBER doesn’t point me to any subsequent citations. As Maymin himself notes, he seems to have married the definitive question in finance (are markets efficient?) with the holy grail of computer science (does P = NP?).

There are reasons to be skeptical, and a lot of questions to be asked, but first I want to summarize the paper. I Googled around to see where the paper showed up and, maybe not surprisingly, this MarginalRevolution post was a top hit. But Tyler Cowen seems unfairly dismissive when he says “Points like this seem to be rediscovered every ten years or so; I am never sure what to make of them. What ever happened to Alain Lewis?”

I don’t know much about this Alain Lewis. I can see he has written papers like “On turing degrees of Walrasian models and a general impossibility result in the theory of decision making”. I can’t even pretend to understand the abstract. On the other hand, reading Maymin’s paper didn’t really change my mind about efficient markets, but it gives an intriguing example of market capabilities. Anyone with thirty minutes and a mild interest in computer science should read the whole paper, because I think it gives a very good heuristic for understanding the debate over the EMH itself, even if it does not resolve it.

Indeed, some commenters on the net (at MarginalRevolution and elsewhere) joke that this paper is just another guy hating on the EMH. They say this, of course, because they have an incredibly high subjective belief that P ≠ NP (I will discuss this later). They have not read the paper, because this charge disregards the fact that the author is a blatant libertarian who cites Hayek favorably within it.

Before I give a brief explanation of Maymin’s proof, I will add that I am skeptical, as this result seems not to have been replicated (with regard to its empirical evidence) in any prominent journal, economic or mathematical. While one may cite the “natural conservativeness” of the former profession as an explanation, the proof is simply too cool not to receive more attention. My understanding of theoretical computer science is limited, and to the extent that I am a good judge, the paper makes sense on first read. (Strike one against comparing him to Alain Lewis, whose very titles make me shiver?) I do have some quibbles, which I note along the way.

A Summary

Maymin ventures to prove a biconditional between the weak form of the EMH and P = NP. He notes this would be an interesting result, as the majority of financial economists have a fair degree of belief that markets are efficient, contrasted with computer scientists, who very much doubt that P = NP. (It is this relationship that I will critique, but later.) The first part of the proof shows that efficient markets imply that P = NP.

The weak-form of the EMH asserts the following:

  • If a certain pattern – such as Amazon always moves with Google with a lag of seven days – is observed, it will immediately disappear as the market incorporates this information into natural prices.
  • Because markets are informationally efficient, said pattern will be found immediately.

Richard Thaler calls these, respectively, the “no free lunch” and “price is right” claims of the EMH. Maymin’s result suggests that for the latter to be true, there must exist polynomial-time algorithms for NP-complete problems (specifically, the Knapsack problem). We assume there are n past price changes, (1) for UP and (0) for DOWN. We take it that a given investor can submit either a long, short, or neutral position at each price change. Therefore, the total number of strategies is 3^n. We note that verifying whether a given strategy earns a statistically significant profit requires only a linear pass through the n past price changes, and hence the problem is in NP. (That is, given some model, is there a 95% chance that your strategy beats a monkey throwing darts at the WSJ over coffee each morning?) Remember, this whole thought experiment is an exercise in finding some pattern of past ups and downs, associated with future ups and downs, which will always hold true and hence may be exploited.
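
A rough sketch of the setup as I read it (my own toy code, not Maymin’s): checking any single strategy against the history takes one linear pass, while the space of strategies to be searched grows as 3^n.

```python
# Toy version of the search problem: price history is a sequence of 1s (UP)
# and 0s (DOWN); a "strategy" takes a position of +1 (long), 0 (neutral),
# or -1 (short) ahead of each change. Verifying one strategy is O(n);
# there are 3**n strategies to search through.
from itertools import product

history = [1, 0, 1, 1, 0, 1, 0, 1]   # n = 8 past price changes (hypothetical)

def profit(strategy, history):
    """Linear pass: a long position gains 1 on an up move and loses 1 on a
    down move; a short position the reverse; neutral earns nothing."""
    return sum(pos * (1 if up else -1) for pos, up in zip(strategy, history))

n = len(history)
print(3 ** n)                         # 6561 candidate strategies at n = 8

best = max(product((-1, 0, 1), repeat=n), key=lambda s: profit(s, history))
print(best, profit(best, history))    # brute force is already exponential in n
```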

Maymin notes that in practice, popular quantitative strategies are based on momentum, and hence on some fixed lookback window t. He notes the joint-hypothesis problem from Fama (1970): the EMH says we cannot, given some equilibrium model with a normal profit K, earn in excess of K for a period of time. He resolves what I find to be an important debate among EMH skeptics quite well: namely, how we reasonably search across the 3^n possible strategies. Some argue that we should stick specifically to economic theory, others submit blind data mining, and others still machine learning. Maymin notes that this is irrelevant to the question at hand, as the employed search strategy is endogenous to the determination of K.

Maymin agrees that determining whether a strategy iterated on one asset can earn supernormal profits takes polynomial time. However, he notes that under the assumptions that a) investors do not have infinite leverage and b) they operate under a budget constraint, answering the question “does there exist a portfolio of assets earning supernormal profits within our budget” is akin to solving the Knapsack problem.

For those who do not know, the Knapsack problem – a canonical introduction to discrete optimization and intractable problems – asks one, given n items each represented as {value, size}, to maximize the total value while keeping the total size under a constraint C. In the analogy, size is the price of an asset following a t-length time series, where t is the lookback window; value is the future return of the asset following the same time series; and the strategy on each asset is picked from a t-sized set U with {long, neutral, short} as possible options. Hence, in Maymin’s words, “the question of whether or not there exists a budget-conscious long-or-out strategy that generates statistically significant profit relative to a given model of market equilibrium is the same as the knapsack problem, which itself is NP-complete. Therefore, investors would be able to quickly compute the answer to this question if and only if P = NP.”
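
For readers who haven’t seen it, here is the textbook 0/1 knapsack dynamic program in the shape the analogy suggests: “sizes” stand in for asset prices, “values” for backtested returns, and C for the budget. The numbers are hypothetical, and this is the standard solver, not Maymin’s construction.

```python
# Textbook 0/1 knapsack via dynamic programming. In the analogy, each item is
# an asset: size = its price under the lookback pattern, value = its future
# return, and C = the investor's budget. The DP is pseudo-polynomial in C,
# exponential in the bit-length of the inputs -- which is why the decision
# version of the problem is NP-complete.

def knapsack(items, capacity):
    """items: list of (value, size). Returns the best achievable total value."""
    best = [0] * (capacity + 1)
    for value, size in items:
        for c in range(capacity, size - 1, -1):   # downward: each item used once
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

# Hypothetical assets as (future return, price):
assets = [(60, 10), (100, 20), (120, 30)]
budget = 50
print(knapsack(assets, budget))   # 220: take the second and third assets
```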

Maymin concludes that this algorithm is exponential in t, the size of the lookback window. He suggests that because t grows linearly with n (the total time series of all history), markets become intractable rapidly. I must quibble, theoretically if not empirically (the empirics seem soundly in Maymin’s favor). Is there reason to assume that t ~ n? Is it not possible that for asymptotically large n, t ~ log n? If that were the case, 3^t would be merely polynomial in n, and the problem for the market as a whole would be tractable. Empirically, however, linearity seems to be a fair assumption. I might add that time series analyses are restricted by the assumption of stationarity. In the future, the window over which stationarity can reasonably be assumed might be more than linearly larger than it is today. This would work in Maymin’s favor.

I have not yet explained why this means markets are efficient if and only if P = NP. Let’s say there is a group of investors searching through the total strategy set U, which is 3^n in size, for a supernormally profitable strategy. Let’s say, by miracle, on one of my first guesses I happen to find one such strategy. If P = NP, theory suggests that most everyone else will also immediately find this strategy, and hence it will be “priced into the market”.

However, if P ≠ NP, it might take years for someone else to find the same strategy, allowing me to earn a supernormal profit for a long period of time. This would render even the weak form of the EMH false. What are the empirics in favor of this idea? Well, this is something that probably deserves further research, and I’m not happy with what’s provided, but Maymin cites Jegadeesh and Titman (1993) as a plausible example. Jegadeesh and Titman are credited with developing an investment strategy based on market momentum. Their strategy was largely unknown in the experiment window (1965 – 1989) and therefore not priced into the market. Maymin’s result would suggest that this strategy becomes increasingly effective against the market as other participants contend with a linearly growing input to an exponential-time algorithm. He offers this as evidence:

[Image: data from Maymin’s Table 1 – returns of the momentum strategy over time]

I don’t see it as such. First, assuming stationarity from 1927 to 1989 is incredibly iffy. Second, backtesting a current strategy on historical trends tells us what? I am positive I could also find some strategy (not momentum-6) which shows just the opposite. So what? Rather, Maymin touches on the empirical evidence that would work in his favor. That is, NASDAQ was added to the data set in 1972, vastly increasing the number of data points. If some strategy earned supernormal profits, it would be exponentially harder to mine after the inclusion of this data. To the extent that the strategy remained broadly unknown, its performance against the market should increase relative to baseline after 1972. But he doesn’t cite this data.

On the one hand, I’m glad he offers the correct framework with which to make his prediction falsifiable. On the other, presenting the above data from “Table 1” as support for his hypothesis seems somewhat sly. I read this part quite favorably on my first parse, but employing this dataset is obviously incorrect for the hypothesis he attempts to prove.

The Corollary

As interestingly, and more convincingly, Maymin argues that an efficient market can be programmed to solve NP-complete problems, again implying that P = NP. To do this, he assumes that markets allow participants to place order-cancels-order transactions. I can say that I want to buy the Mona Lisa if it falls to $10, or sell David if it hits $15, but as soon as market conditions are such that one order is fulfilled, the other is automatically cancelled. We must actually assume that such orders with three items may be placed. Computer science nerds will know where this is going. Maymin wants to program the market to efficiently solve 3-SAT, quite literally the mother of NP-complete problems. It is beyond the scope of this post to explain the dynamics of the problem, but it is enough to know that an efficient solution to it would yield efficient solutions to many other intractable problems, including factoring large numbers and hence breaking into your bank account.

The logical form of the problem is as follows:

Let y = (a | b | !c) & (b | !d | a) & (z | m | a) & … & (a | b | !m), where all variables are boolean

Within the literature, this is known as “conjunctive normal form”. Each parenthetical phrase is a clause, which must consist of a disjunction of three “literals”. Solving 3-SAT involves finding an assignment of true or false to each variable such that the whole statement is true (or determining that no such assignment exists). The best known algorithms for 3-SAT take time exponential in the size of the formula.

We can think about each clause as an order-cancels-order (OCO) option, consisting of three possible transactions. A literal can imply a sale and a negated literal a purchase, or vice versa. Now let us price each asset (literal) at the midpoint of the bid-ask spread. Therefore, it yields a supernormal expected profit for all participants (and will be immediately arbitraged if markets are efficient).

Once we place the set of OCOs, they should all be executed within an arbitrarily small time period, as each by itself is a contradiction of the “no free lunch” condition of efficient markets. In fact, each of the OCOs must be executed to maximize profits, and that is what proponents of the EMH suppose market participants do. Maymin does not state his implicit assumption that the time it takes to clear his transaction on the open market may not be instantaneous, but merely within the weak EMH bounds of “as quickly as possible”. I would say this is a big ding to the theoretical equivalence of the two efficiencies (as he does not offer a mathematical or logical reason why one must be the other, but equivocates on the terminology). I wish he had made this more clear. But I still think it’s an important result, because the EMH would be a toothless truth if “as quickly as possible” included the years it would take to solve a sufficiently large 3-SAT instance. Even then, without the theoretical equivalence, the structural similarities are striking. Note that the example I provided above is rather easy, as there are almost as many variables as there are literals. In reality, the question is a lot harder. Therefore, the market mechanisms that would “solve” such a set of OCOs in a polynomially small time period would have done something remarkable. If I want to solve some given 3-SAT problem, I just pick a stock for each variable, encode negated literals as sell orders and positive literals as buys, and place the total transaction, which must be completed in the profit-maximizing manner.
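
Here is a toy sketch of the encoding as I understand it; the buy/sell convention, the consistency check, and everything else in the code are my own illustrative choices rather than Maymin’s exact construction of order prices.

```python
# Toy encoding of a 3-SAT instance as order-cancels-order (OCO) groups.
# Each variable gets a (hypothetical) ticker; a positive literal becomes a BUY
# order on that ticker, a negated literal a SELL. Each clause is one OCO group
# of three such orders. A consistent set of executions -- never both buying
# and selling the same ticker, with at least one order filled per group --
# corresponds to a satisfying assignment.

clauses = [("a", "b", "!c"), ("b", "!d", "a"), ("!a", "c", "d")]

def to_oco_groups(clauses):
    groups = []
    for clause in clauses:
        group = []
        for lit in clause:
            var = lit.lstrip("!")
            side = "SELL" if lit.startswith("!") else "BUY"
            group.append((side, var))        # e.g. ("SELL", "c")
        groups.append(group)
    return groups

def satisfies(assignment, clauses):
    """assignment: dict var -> bool. A true literal <=> its order gets executed."""
    return all(
        any(assignment[l.lstrip("!")] != l.startswith("!") for l in clause)
        for clause in clauses
    )

print(to_oco_groups(clauses))
print(satisfies({"a": True, "b": False, "c": True, "d": False}, clauses))  # True
```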

I found this result far more compelling than the first; perhaps this reflects my propensity towards computer science over finance.

Discussion

I’ve read quite a bit of stuff on how Walrasian equilibria are NP-hard, this or that. There seems to be a lot of literature relating economic games and equilibria to computational tractability. The question of the EMH is inherently different, and logical – not mathematical – in nature. The symmetry between the two fields here is one-to-one, even self-evident. So I disagree with Cowen’s quip that stuff like this comes up once every ten years. I can’t put my finger on it, but the previous literature suggesting such similarities had more to do with solving for some equilibrium that is hard, rather than with the processing done by the market itself.

Specifically, this is why I find the second result (encoding 3-SAT in market orders) to be mind-boggling.

Regardless, something in my gut rejects the logical biconditional between P = NP and the EMH. However, I think this result supports the idea that one should form his priors on both with a similar heuristic (which may yield a different result for either depending on the heuristic used).

For example, Maymin notes the contradiction between most finance professionals believing the EMH to be true and most computer scientists rejecting that P = NP. Let’s take the latter case. How exactly can one form a prior that P ≠ NP? Well, one can believe that when the problem is solved it will be in favor of P ≠ NP. But that’s just turtles all the way down. A better way of explaining such a prior would be “I believe that if I asked a magical black box ‘does P = NP?’ the resulting answer would be in the negative”. You agree that’s a fair way of forming a subjective belief, right? This belief can be formed on any number of things, but for most computer scientists it seems to rest on the radical implications of a positive result (breaking RSA cryptosystems, etc.).

But, to form such a prior, you must accept that in a Popperian world such black boxes can exist. However, the existence of any such magical truth-teller is ipso facto a stranger and more absurd reality than P = NP. Therefore, this question is not in any normal sense falsifiable (other than by the standard turtles all the way down, which only speaks to the resolution of the problem rather than the true implications thereof).

I would argue that even if the biconditional between P = NP and the EMH does not hold, for whatever reason, the structural analogy does. That is to say, submitting that the EMH is falsifiable would be akin to believing in such magical black boxes. As a general rule, it is better not to form priors about efficient markets in the abstract; better to ask whether certain forms of markets are efficient.

The analogy to computer science holds even here. Though the best known algorithms for NP-complete problems are exponential in the worst case, heuristics and randomized approximations allow computer scientists to design remarkably efficient solutions for most cases, or for the average case. That is scientifically falsifiable, and the correct question to ask. Similarly, we may talk about the informational efficiency of a practical market, within certain bounds, granted certain approximations. And, crucially, granted some margin of error and the risks thereof. What are the chances a randomized input to a backtracking Knapsack solver will degenerate to exponential time? What are the chances a market will fail to weed out inefficiencies given a level of technology? Indeed, Maymin suggests that it is such approximations and shortcuts that make a market perhaps inefficient, but a government even more so. He compares this result to Hayek’s famous economic calculation problem, which suggests something similar.
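
To make the average-case point concrete, here is a toy branch-and-bound knapsack solver run on a random instance: the worst case is exponential, but the greedy (fractional) bound prunes most of the tree on typical inputs. This is purely illustrative and not drawn from Maymin’s paper:

    # Toy 0/1 knapsack branch-and-bound: exponential in the worst case, but a
    # simple fractional (greedy) upper bound prunes most of the search tree on
    # a typical random instance. Illustrative only.
    import random

    def knapsack(values, weights, capacity):
        # Consider items in order of value density (best first).
        items = sorted(range(len(values)),
                       key=lambda i: values[i] / weights[i], reverse=True)
        best = 0
        nodes = 0

        def bound(idx, cap, val):
            # Optimistic value: fill the remaining capacity fractionally.
            for i in items[idx:]:
                if weights[i] <= cap:
                    cap -= weights[i]
                    val += values[i]
                else:
                    return val + values[i] * cap / weights[i]
            return val

        def search(idx, cap, val):
            nonlocal best, nodes
            nodes += 1
            best = max(best, val)
            if idx == len(items) or bound(idx, cap, val) <= best:
                return  # prune: no better solution below this node
            i = items[idx]
            if weights[i] <= cap:
                search(idx + 1, cap - weights[i], val + values[i])  # take item
            search(idx + 1, cap, val)                               # skip item

        search(0, capacity, 0)
        return best, nodes

    random.seed(0)
    n = 30
    values = [random.randint(1, 100) for _ in range(n)]
    weights = [random.randint(1, 100) for _ in range(n)]
    best, nodes = knapsack(values, weights, sum(weights) // 2)
    print(best, nodes)  # far fewer than the 2**30 nodes of brute force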

To me, this is an absolutely non-trivial comparison. After reading this paper, I genuinely believe it is futile to guess either whether P = NP (or, practically, whether the question will be resolved in favor of P = NP) or whether markets are efficient. However, according to the St. Louis Fed, this paper has never been cited. (I intend to change that, one day.)

Within appropriate bounds, Maymin’s paper illuminates not necessarily how we view efficient markets, but how we view the debate thereof.

Brad DeLong summarizes the emerging case for monetary and fiscal austerity:

And so right now the austerians are deploying three arguments:

  1. The longer zero-interest-rate and quantitative easing policies continue, the more likely it is that banks somehow reach for yield in ways that will require another rescue–and this time to rescue the banks and so prevent total economic meltdown will be politically impossible.
  2. In any event, policies of extraordinary monetary ease are certain to fail because central banks cannot credibly promise to be incredibly irresponsible over the long-term.
  3. Further expansionary policies are unwise because our Keynesian short-run is going to be followed by a classical long run, and entering that long run with too high a debt to annual GDP ratio will cause the economy a world of hurt–although precisely how appears to be one of those pesky “unknown unknowns”.

How much substantive theoretical, empirical, and policy meat is there, really, on top of dry bones in these three arguments the Napoleons of austerity have now marshaled?

I seem to be somewhat out of harmony, in that I think that there may well be at least some meat here.

Except “summarizes” is the wrong word, since it implies there is something more substantive behind it. That’s all there is: the bones that DeLong tosses are the whole case for austerity; there is no “meat” to the story. In other words, it’s vegetarian-friendly. We can show that (1), (2), and (3) do not support either monetary or fiscal austerity. More importantly, we can show that (1), (2), and (3) are internally inconsistent with each other. But before we take a detailed look at why the three points fail to hold water, it is important to note that the new austerians purport to present a case against both monetary and fiscal stimulus. That is, they either represent the Bank for International Settlements or write for the Wall Street Journal editorial page. (Strike zero.)

The “reaching for yield” argument has run its course. For one, it is only an argument against monetary – not fiscal – stimulus. Regardless, we can’t judge monetary policy on bad credit dynamics. Plenty of us have offered fair alternatives to bypass the credit system altogether. To the extent these are politically infeasible, it is cruel and moralistic to subject many to long-term unemployment because the government cannot adequately regulate financial markets.

Indeed, whatever risks are generated as a byproduct of easier monetary policy can be amended with sound regulation: equity requirements and transparent exchanges come to mind. Furthermore, proponents of this theory must somehow sustain that the expected loss from a future recession causally related to easy money today is equal to or greater than the loss from continually anemic growth and the hysteresis effects thereof.

To the extent they are making an intellectual, and not practical, argument, they must include in the costs of this recession the counterfactual in which the Fed had pursued an “easy” policy at the onset of the recession; for example, a nominal income level target. Indeed, if nominal income had not crashed – whether due to monetary expectations or fiscal stimulus – and our unemployment rate were at 5%, there would be a guaranteed counterfactual benefit on the order of .02*.70*315000000*25000*3.
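
Spelled out, the arithmetic is roughly as follows; my reading of the factors (a two-point unemployment gap, a rough employment scaling of 0.70, the US population, $25,000 of output per job-year, over three years) is an assumption:

    # Rough magnitude of the claimed counterfactual benefit. The
    # interpretation of each factor is my assumption, not spelled out above.
    benefit = 0.02 * 0.70 * 315_000_000 * 25_000 * 3
    print(f"${benefit / 1e9:.0f} billion")  # about $331 billion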

These same arguments must be further sealed either with a guarantee that sound regulatory institutions will not emerge from smart governance, or with a claim that easy money is not only increasing risks by creating a reach for yield but fundamentally altering the financial landscape inter alia.

In the former case, proponents of the “reach for yield” thesis must tell moral philosophers why they are wasting their time arguing for policies that hurt the poor and stunt economic growth in service of a fixable risk instead of furthering the case for sound regulation that is Pareto-superior. They must also tell us why they do not support a stronger fiscal policy which is devoid of such risks.

Therefore, the argument for proposition (1) rests on an inordinate set of assumptions that only holds sway because a group of very smart academics believes it to be true. Whether such conviction, rather than hard fact, is enough to drive policy is a normative question beyond my pay grade.

DeLong’s second proposed case for austerity is actually just an argument for fiscal policy. Furthermore, “it might fail, so we should do nothing” is a terrible form of argument. There are ways to credibly commit to irresponsibility. Proponents of this view tend to lack imagination and hence cannot be trusted to give full view to the arsenal of monetary strength. Paul Krugman is not one such person; he simply believes – like me – that serious policymakers lack the imagination for good monetary policy, and hence supports fiscal policy as a less imaginative guarantee.

More importantly, let’s note that there is significant overlap between those making each of the above three arguments, and therefore significant overlap between those making the first two. The reach-for-yield crowd secretly wants higher interest rates, not just a tapering of asset purchases. For them, the zero lower bound critique is nonexistent, because in the world such austerians inhabit, a higher interest rate supports the all-noble saver and discourages profligacy by increasing rates on Treasuries. The two arguments are therefore intellectually inconsistent, and yet they are submitted by the same people.

The third argument is more curious still. It first requires that fiscal policies be purely demand side. However, smart job training programs, investment in education or infrastructure, financing of basic research, deployment of a smart grid, and building a large green energy program not only increase immediate aggregate demand (when aggregate supply is elastic) but also right-shift the long-run aggregate supply curve. Therefore, potential growth can be linked with smart fiscal policy today. Austerians somehow reject this.

The next implicit assumption by the classical crowd suggests that it is better for a large number of poorer Americans to remain unemployed for now, and hence forever, than for future debt to be financed by earnings from those with a high marginal propensity to save. Indeed, even if one accepts Ricardian equivalence (which we perhaps must in the longest of runs) he must still make a welfare judgement that the tax rate on the top 1% in the next generation is of more importance than the employment and poverty condition of the bottom 20% today. I reject this judgement.

But the assumptions do not end there. This future-classicist must believe that the jump from a Keynesian short run to a classical long run is abrupt. He must believe that interest rates will suddenly jump, making it difficult to service American debt, without accepting that such optimism implies higher tax revenue and lower safety-net outlays. He must believe the long-run aggregate supply curve has sharp curvature with a nonexistent medium run; that is, it becomes perfectly inelastic, rapidly. He must reject any significant cost of hysteresis to long-run labor competitiveness.

He must reject any path dependency of structural growth on cyclical conditions, and thereby somehow concede that the debt-to-GDP ratio in the classical long run is unaffected by GDP today. This is a strong assumption.

Finally, because the new austerian argues against both monetary and fiscal stimulus, he must accept either (1) and (3) or (2) and (3) together. (1) and (3) cannot coexist because a classical long run at the zero lower bound is an economic future with wildly different implications (the government can run debts and never pay them back, for example). (1) and (2) cannot coexist because you cannot both argue that “we should move to above-zero interest rates, that is, tighter money” and that “monetary policy is only useful at above-zero interest rates”. Not with a straight face, at least.

Therefore, the chance that this new argument is correct falls from the probability of [(1 or 2) and 3] to that of [2 and 3]. However, (2) and (3) cannot coexist either. Think about it: (2) holds that monetary policy cannot gain traction in a liquidity trap and that fiscal expansion is necessary to escape it. (3) holds that we should not enter the long run with a higher debt-to-GDP ratio. But if ZIRP and non-inflationary policy are the symptoms of the Keynesian short run, then (3)’s submission that we will one day be in a classical long run implies that fiscal policy was used to get us there.

See that? (3) is internally inconsistent with either (1) or (2). Therefore, the new austerian is compelled to pick one, and only one, of {1, 2, 3}. None of these on its own justifies both fiscal and monetary austerity. And if you agree with the thrust of my argument above, none on its own justifies either fiscal or monetary austerity.

Without R-R or A-A there is no case for the new austerians, other than a tricky morality play or a huge prior that a future financial calamity is at once imminent, not solvable by regulation, endogenous to easy money, and worse than the one in which we exist today. There is no meat on the argument, only the very seductive illusion thereof. Anyone who bites in hard will break his teeth on the bone.

P.S. I was going to say vegetarians have a second case for austerity: a larger unemployed poor means fewer people can buy expensive meats and will have to succumb to the damned vegetables for their meals. Then I realized I live in India, and we’re talking about America, where healthy living and vegetarianism are more expensive than McDonald’s or Tyson meats. Oh well.

Paul Krugman jots some very interesting thoughts about rents in the now-future economy. Brad DeLong notes that investment seems healthier than Krugman implies. But there’s more to the story.

Here are two stylized facts about the American economy:

  • Over the past 30 years the share of incomes captured by the top 1% has soared.
  • Wages’ share of income has fallen as capital’s share has grown preponderant.

You would gather from this that ownership of capital is highly concentrated in said top 1%. You would be right. Under capital-biased technological change you would further gather from this that a random dollar earned in the top 1% is increasingly likely to derive from capital. You would be very wrong:

[Graph: wage share of income, top 5% versus the economy as a whole]

Since the onset of labor’s decline in the ’70s, the wage share of the top 5% has gone up by as much as it has fallen for the country as a whole. There’s been some volatility in the last 20 years, though. On the other hand, in 1960 the top .01% earned almost 20% of their income from dividends. That figure stands at 7% today:

[Graph: dividend share of top .01% incomes, 1960 to today]

These data are from the highly useful World Top Incomes Database. By the way, it’s worth noting that the dynamics of the top 5% are driven by the 1%, and really by the top .05%, who control an inordinate amount of that income. Looking at the contrast between the national and top-end economies, we can deduce that either principal-agent problems have become far more pervasive over time or there’s a lot more to the story than Krugman’s model.

Look, I’m actually pretty favorable to the former explanation. Because the top 5% is driven by the top .05%, which disproportionately consists of CEOs who just happen to be their own chairmen, salary accruals have skyrocketed. But a lot of it has to do with two M’s and a J: MBA, MD, and JD. The majority of America’s rich aren’t actually entrepreneurs saving the world in Silicon Valley, as the Republican party may have you believe. Instead, they are largely boring, rentier doctors, lawyers, and managers earning huge excesses, either because of government subsidy (doctors and lawyers) or network effects (businessmen). (Note: I think lawyers play a crucial role in American society, but I do believe a certain subset has benefited enormously from government action.)

Before I go on, I want to clarify my qualms with Krugman’s sketch. As the economy monopolizes, income is diverted from labor into what Krugman argues are rents. But his consideration is limited to where the income goes, not where it comes from. Even within a labor-intensive class, as America’s rich surprisingly are, much of the wage income can be unproductive rent. Using a Dixit-Stiglitz framework leaves the reader with the implicit assumption that labor income is somehow more productive than anything else. In fact, the whole “labor share is falling” meme across the liberal blogosphere does this.

In better days – when the top 5% did not control so much of national product – this might have been true. But today, it is not. The cartelization of America’s professionals through elite networking organizations (known to some as the “Ivy League” and to others as the “AMA”) has sent consulting and medical wages well above their natural level.

And the subsidy to lawyers is even more infuriating. Indeed, every time the government passes a law, it ipso facto subsidizes legal practices across the country. Not the honest public defender in Omaha, mind you, but the corporate litigator in Manhattan. This is not a question of regulation, but of its type. Deregulation is never instituted in sweeping enough form to actually reduce the market power of lawyers. Rather, the axis of evil between firm, LLP, lobbyist, and legislature ensures maimed regulation in the form of loopholes only ultra-rich firms can exploit.

Unionization – of doctors, lawyers, and corporate America – is on the rise. Not surprisingly, the labor share of their income is as well. Therefore I don’t like to think of income as being diverted from labor (and capital) into monopolistic rents. The rents aren’t making it into corporate profits in the first place; they accrue in the form of “wages” which are really just rents. But why are profits going up, then? I think Paul Krugman gets the symptoms right. Capital-biased technical change doesn’t explain everything. Instead, because capital rents are falling, the owners of capital (the top 5%) are leasing it out at an increasingly low rate, allowing profit to accumulate.

At the end of the day, we’re both describing the same situation with a different transmission mechanism. But the difference isn’t superficial. Paul Krugman says Apple’s earning rents because it’s… well… Apple! What can the government do about that? Not very much. On the other hand, if you see the rising rents accruing in upper-income wages not as the symptom but as the cause, you can identify the disease and implement a swift cure:

  • Mandate better governance practices (the CEO cannot also be chairman).
  • Get rid of the AMA and implement a single-payer system. Or institute wage controls.
  • Write simpler (not weaker) regulatory law or, if this is impossible because of private interests, countersubsidize the market with a flood of corporate lawyers.
  • Focus on earned income, and not capital gains, taxes. Yes, most capital gains go to the top 1%, but all income earned over $400,000 goes to the top 1%.

I’ve also argued (and here) that contestable markets are a better framework for the tomorrow-economy. I’ll leave more detailed deliberations, beyond the linked posts, for later; suffice it to say that seeming giants like Google Reader have proved to be operating under nothing less than the threat of fierce competition, preventing the assumption of monopolistic rents.

Is ownership of intellectual property important? Yeah. But it is a longshot to assume that patents are wholly wasteful. They are definitely a strong incentive to create and invest. And the rents earned thereon aren’t permanent – indeed, Moore’s law compels rapid consumption of intellectual capital, if you will. Usually when I hear people complain about patents, I sense the underlying argument is a redistribution of brand. Could we let another company copy Coke’s logo or Nike’s slogan?

I can’t come to any grand conclusion here. But we are having the wrong conversation if we’re talking about the all-time-low wage share of the top 100% without talking about the all-time-high wage share of the top 1%. And if I am onto something, the problem is a lot more easily fixed than either DeLong or Krugman suggests. On that note, here is something we should correct:

[Graph: probability that a randomly selected wage dollar accrues to the top 1%]

The above graph depicts the chance that a randomly selected wage dollar goes to the top 1%. It’s a simple calculation. We know that:

p(1% | wage) = p(1%)*p(wage | 1%)/p(wage),

where p(1%) is the share of all income collected by the top 1%, p(wage | 1%) is the wage share of said income, and p(wage) is the wage share of all income. This is very interesting in how it compares to the dynamics of income overall:

[Graph: probability a random dollar accrues to the top 1%, wages versus all income]

A given dollar earned in wages (as opposed to land rents, interest, or dividends) is more likely to accrue to the rich than a dollar earned overall. I’m not sure about everyone else, but this surprises me. We need to stop talking about capital. A lot of inequality looks like it can be solved by fixing the principal-agent problem, and by breaking America’s ridiculous unions. Not those of autoworkers or teachers, but those of doctors, lawyers, and MBAs. Indeed, the rents earned by these three professions can be considered a risk-free return (guess what the risk of an MBA from HBS is!). To the extent that the talent of such professionals falls broadly in the same range as that of potential entrepreneurs (who may not even know their latent skill), rents earned in these industries not only increase inequality but asphyxiate entrepreneurial spirit by offering an easy way out.
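
To make the formula above concrete, here is a minimal sketch of the calculation with purely hypothetical shares; the numbers below are illustrative placeholders, not figures from the World Top Incomes Database:

    # Bayes' rule applied to income shares, as in the formula above.
    # All three inputs are illustrative placeholders, not actual data.
    p_top = 0.20             # p(1%): share of all income going to the top 1%
    p_wage_given_top = 0.60  # p(wage | 1%): wage share of top-1% income
    p_wage = 0.65            # p(wage): wage share of all income

    p_top_given_wage = p_top * p_wage_given_top / p_wage
    print(round(p_top_given_wage, 3))  # ~0.185: chance a wage dollar goes to the top 1%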

Ben Bernanke said recently:

The only way for even a putative meritocracy to hope to pass ethical muster, to be considered fair, is if those who are the luckiest in all of those respects also have the greatest responsibility to work hard, to contribute to the betterment of the world, and to share their luck with others. As the Gospel of Luke says (and I am sure my rabbi will forgive me for quoting the New Testament in a good cause): “From everyone to whom much has been given, much will be required; and from the one to whom much has been entrusted, even more will be demanded” (Luke 12:48, New Revised Standard Version Bible). Kind of grading on the curve, you might say.

I do not want to live in a society where Yale Law and then Wall Street is that which we demand of our best and brightest.

(I do understand there are some mind-blowingly excellent MBAs, MDs, and JDs out there. I also think that, on average, these professions earn more than they should via government protection.)

I’m no big Rand Paul fan. I think his politics of abortion are fundamentally inconsistent with his otherwise Lockean definition of property, and I think he would make a bad president, but I do not think he is racist. And I am perhaps duped into thinking he is more sincere than your average senator. I think, net-net, I am glad Rand Paul is an elected member of the Senate. (By the way, I feel no sympathy for his father, who is devious and racist.)

So I (think I) disagree with Brad DeLong. Elias Isquith’s critique of Conor Friedersdorf’s (sometimes rambling) defense of Rand was not satisfying. Friedersdorf argues that it’s natural to question democracy:

If a scholar of political thought said of ancient Athens, “I’m not a firm believer in democracy — it required slavery, war, or both, to subsidize the lower classes while they carried out their civic duties,” no one would think that a strange formulation — it is perfectly coherent to talk about democracy in places that didn’t extend the franchise universally, given how the term has been used and understood for two thousand years of political history.

To which Isquith responds:

Well, here’s the thing: Rand Paul is many things, but he is not “a scholar of political thought.” And he’s certainly not the senator from Athens. What he is, though, is a man who still can’t give a straight answer as to whether or not he finds the Civil Rights Acts constitutional, though he’s proved happy to brandish Jim Crow as a kind of shield against further inquiry.

Even on its own terms, the Jim Crow example falters. If you listen to Friedersdorf or Paul, you’d almost think that majoritarian democracy is what led to Jim Crow. One imagines it as if, after the Civil War, there was a big meeting in every city, town, and holler of the South, and there was a show of hands. Jim Crow: yea or nay?

But, of course, that’s far from the truth. Jim Crow wasn’t a product of a democratic process — of the kinds of democratic processes we think of as our own in the United States. Those institutional channels were the ones that passed the laws that broke Jim Crow. The American apartheid, on the other hand, was the product of terroristic violence, white supremacy, and Northern indifference; of the kind of evil Rand Paul’s father’s newsletters trafficked in.

So as I’m reading it, Isquith’s contention with Friedersdorf is predicated on the following:

  1. Rand Paul is no Aristotle, and his expression of doubt makes him illegitimate as a critic of democracy (?).
  2. Jim Crow America was not democratic.
  3. Directly elected legislatures are the only relevant facet of modern democracy.

I’m not sure if I parsed (1) correctly (needless to say, the comparison with Ancient Greece or Aristotle is confusing), but (2) and (3) are certainly wrong. And to whatever extent they hold, it was America’s least representative component of democracy that struck Jim Crow at its heart in Brown v. Board of Education.

If you consider the alienation of civil rights as inherently undemocratic, you will find (2) a natural statement. But taken to its logical conclusion, Isquith must accept that America in 2012 is somehow not “democratic” because of DOMA. Perhaps he would make that argument; he would be wrong. Democracy is a self-correcting system. As Bill Clinton put it, “there is nothing wrong with America that cannot be fixed by what is right with America”. An act of justice, like the Civil Rights Act, is not a sudden emergence of democracy, but a confirmation thereof. Those who take a discrete approach to political systems find themselves in quite a quandary.

To a first approximation, democracy in America – to use another man’s phrase – is defined not just by majoritarian elections (as Isquith would have you believe) but by a vibrant, activist, and independent judiciary. Indeed, it would have been fundamentally less democratic had majoritarian will crushed the civil rights movement. The Constitution via the justice system, and not direct election, started the movement.

It is, therefore, entirely appropriate that we may disdain a system that perpetuated racial discrimination for so long. We may disdain the fact that American democracy was too representative. That it did not give sufficient emphasis to natural rights, a concept to which Rand Paul presumably subscribes.

Therefore, I read Rand Paul’s skepticism of American politics in the mid-20th century not as a referendum on democracy so much as one on a democracy divorced from classically liberal beliefs. I am a harsh statist next to Rand, but I can appreciate a more nuanced depiction of democracy.

Indeed, I do not doubt that my grandchildren may one day wonder at the failures of a democracy that let George Bush lie his way into war. I can imagine the Isquiths of tomorrow telling us no, that was the product of too little democracy, too little oversight. And yet the media – a robust organ of any democracy – played sucker to an evil war effort. Directly elected representatives chose to cede good judgement in favor of god knows what.

I believe that if America had had a universal draft, Iraq would not have happened. I also believe a universal draft is at its core undemocratic. It is evil, but only the lesser of two.

In a similar vein, Rand Paul believes the redistribution of income and forced regulation are evil. But he also believes forcefully in natural rights. It is cheap to argue that John Locke would forbid discrimination in defense of “life, liberty, and property”. Indeed, Paul believes in the virtues of negative, over positive, liberty.

Therefore, it is entirely legitimate for Rand Paul to support a very active, somewhat undemocratic, judiciary that protected at all costs the right to life, liberty and property. It is further legitimate for him to see a divorce between contemporary perceptions of democracy and classical liberalism. And therefore, it is legitimate for him to oppose Jim Crow on grounds of a democratic excess.

(P.S. What makes him a quack, however, is his idiotic approach to the relationship between women and the womb, and his inability to apply his own Lockean logic thereto. I do not respect those who selectively choose classical liberty, and of this Rand Paul is guilty. But he is less guilty than his fellow Republicans.)

Evan Soltas is right: the United Kingdom needs to adopt a nominal income target, and Mark Carney is the man for the job. This is an idea that caught fire in the blogosphere and has elicited hints of support from Carney himself. Maybe most importantly, an NGDP target has deep support from across the political spectrum. Everyone sees it as a way to increase immediate employment. Conservatives argue that if the central bank is targeting aggregate demand, all other macroeconomic factors become classical in nature, shifting the priority onto supply-side growth. Liberals see higher inflation during a deep recession as a way of easing the burden on debtors and of reducing hysteresis by increasing the opportunity cost of leaving the labor force.

Before we further the case for an NGDP target, it’s important to consider the elegance of inflation targeting, a regime in place in the United Kingdom in its current form since 2003. Unlike many other potential targets (like the monetary base), once the central bank anchors inflationary expectations at the target, its job becomes a self-fulfilling prophecy. The logic is simple: if I expect 2% annualized inflation, I will sign a contract that increases my wage rate accordingly. But my wages are my employer’s inflation, which will be reflected in higher price levels across the nation. More importantly, nominal income is a measurable – and publicly understandable – figure, unlike the monetary base, inflation, or even real income. They call it the “money illusion” for a reason.

Without a futures market (which has many criticisms of its own), an NGDP target is unlikely to be so simple and beautiful. However, it may be one of the only ways for monetary policy to gain traction at the zero lower bound. Paul Krugman, now famously, argued that successful monetary policy today must “credibly promise to be irresponsible” tomorrow. An NGDP level target will convince the market that inflation will fall only after income is back on its previous trend, or the economy is at full employment. Granted, it is unlikely that during roaring booms or deep recessions contract formation will be as simple as under inflation targeting.

So far, Carney’s one objection to an NGDP target does not hold water: “As potential real growth changes over time, either the nominal target will have to change or else it will force an arbitrary change in inflation in the opposite direction”. Evan notes that this is a “small price to pay”, but I don’t see it as a “price” at all. Changes in potential growth reflect only supply-side movements and, hence, Carney’s statement would imply that the current inflation targeting regime is robust during supply shocks.

But it’s not. Consider a central bank targeting 2% inflation in a country that’s going through a shale gas revolution. Price levels are naturally bound to fall, as energy is a critical component of pretty much all output. However, if the central bank is to meet its mandate, it must artificially inflate the economy. The consequences of a negative supply shock are even worse. If OPEC agrees to an oil embargo, a pre-shale United States faces risk of recession. Making matters worse, because general price levels are increasing, the central bank must deliberately deflate the economy in a recession.

In both cases, a nominal GDP target would at least allow supply-driven movements in inflation to offset the change in real growth rates. Carney is right that, over time, current proposals of approximately 5% NGDP growth might be suboptimal. But even, or perhaps especially, in a supply shock, NGDP targeting trumps inflation targeting.
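
A stylized bit of arithmetic makes the point; the growth numbers are mine, purely for illustration, not Carney’s or Soltas’s:

    # Under a fixed nominal income target, inflation simply absorbs swings in
    # real growth: nominal growth = real growth + inflation.
    ngdp_target = 0.05
    for real_growth in (0.02, 0.04, 0.00):  # trend, shale boom, oil embargo
        implied_inflation = ngdp_target - real_growth
        print(f"real growth {real_growth:.0%} -> implied inflation {implied_inflation:.0%}")

An inflation-targeting bank, by contrast, would have to ease into the boom and tighten into the embargo to hold inflation at its target.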

It is overwhelmingly clear that rich-world economies need more inflation. But arguments to this effect, especially from a central banker, are not popular. Arguing for more income, however, makes intuitive sense and should hold broad appeal with the general public. Of course, the latter implies the former – my wages are your inflation – but it’s just a question of semantics…

Now is the time, if ever, for Carney to strike a bold new mandate. Inflation expectations in Britain are already dislodged. Indeed, with poor productivity growth, the United Kingdom is undergoing a negative supply shock under the policy regime least suited to handle it. It’s about time for the country to try something bold. Something fresh.

Have you been advocating that fiscal stimulus is unnecessary for the past four years? Do you want help in defending your position? Are you a die-hard monetarist? Are you annoyed at how right Paul Krugman has been? What follows is your best shot at disproving him. Tread with trepidation.

If there’s one rational expectation in economics, it’s that Paul Krugman is an inflationary, dollar-debasing, austerity-thumping, irresponsible fiscalist imp that needs to be controlled. And over a decade ago, in 1998, the good doctor wrote:

The way to make monetary policy effective, then, is for the central bank to credibly promise to be irresponsible – to make a persuasive case that it will permit inflation to occur, thereby producing the negative real interest rates the economy needs.

Back then, he had to end that phrase by noting that “this sounds funny as well as perverse”. That’s a sea change from today, when this is taken as a foregone conclusion in wonkonomics. I see petitions for Larry Summers, Janet Yellen, and Christina Romer as great prospective Fed chairs. That’s wonderful, and I’d be happy with any one of them (particularly Romer). But someone has to give me a detailed explanation of why there isn’t a roaring movement – from Scott Sumner to Brad DeLong – calling on Barack Obama to nominate Paul Krugman for the most prestigious job in the country.

In January, Krugman politely declined a loud call for his nomination as Treasury Secretary, preferring to remain an outside man. A latter-day Socratic gadfly, if you will. I agree: Paul Krugman in high political office would be a very bad thing. As I see it, such a position is best filled by a technocrat who can organize a willing coalition to frame the President’s economic policy into law. Paul Krugman is not that man.

Jack Lew is not involved with markets on a daily basis. We don’t much care how thick his briefcase might be. On the other hand, the Fed chair has the incredible burden of forming the most important expectations across international finance. We’ve all written many articles about how better monetary policy would slash deflationary expectations. We’ve talked about helicopter drops and QE infinity. We’ve talked about 4% inflation and nominal targets.

Krugman is the intellectual father of irresponsibility, quite literally. If there is one man in this world who can convince markets that America will tolerate above-trend inflation, it is Paul Krugman. If there is one man in this world who can falsify Krugman’s own theory that we need more fiscal stimulus, it is Paul Krugman. Indeed, if Paul Krugman cannot credibly commit to be irresponsible, no one can.

Markets will smoke if he is shortlisted. If he is nominated, they will all but catch fire. So if you are interested in disproving Paul Krugman’s many calls for fiscal policy in a liquidity trap, you had best champion his nomination as Fed chair.

(P.S. I did promise you “notes”, plural. Your next best shot is to abandon the scientific method and choose to believe in hyperinflation, hard money, and short-run superneutrality. This has been the option of choice for most.)

P.P.S. C’mon, is there a better “expectations channel” than krugman.blogs.nytimes.com?