
That’s the debate of the moment, folks. Evan Soltas tells us Brad DeLong and Miles Kimball – both accomplished economists, and Summers’ colleagues and collaborators – think he’s the way to go. Matt Klein, also for Bloomberg, thinks Larry Summers’ bet on interest rates in 2004 calls into question both his competence and his humility. Barry Ritholtz thinks Summers’ pro-deregulation politics were a train wreck and would prefer “anyone but Larry Summers”. Whatever the case, I’m sad this debate doesn’t include Christina Romer, but I’ll bitch and moan somewhere else. (Edit: Actually, that’s a lie, I want Paul Krugman. And a billion dollars, too).

The debate (as stated by others) can be captured almost totally by this matrix:

        Larry Summers                                     Janet Yellen
Pros    Dominating, cutting brilliance, superstar         Monetary credentials, dovishness, brilliance
Cons    Dominating, unflinching, arrogant, deregulator    Not dominating, not an alpha male

A few months ago, I wrote that Janet Yellen has the edge. I wrote that Larry Summers had what Walter Isaacson said of Steve Jobs: a “reality distortion field” – an uncanny ability to use his rhetoric and excellent debating skills to bend those around him to his view. Much of this view is informed by Ron Suskind’s Confidence Men, by what everyone says of Summers almost by way of epithet – that he is brilliant beyond the human norm – and by a personal experience.

I was once (not all too long ago) in a very small room, with a very small number of people, and Larry Summers was at the center. I think it was off the record, or something to that effect, but it was basically Larry Summers vs. 2007. I entered that room with Confidence Men fresh in my mind. But I left with an unmistakably altered view, both of the man and of the matter.

Suskind tells us he had a similar effect on Obama. But when I wrote that Summers’ dominating presence was overrated, I reasoned that Ashok Rao and Barack Obama are very different from the Open Market Committee. We were educated laypeople without PhDs in economics. I’m also sure I can convince someone sufficiently dumber than myself that austerity is a good thing. That doesn’t mean shit. I firmly believed that Larry Summers would face far stronger competition – from Yellen included – in the hallowed halls of Harvard or the Federal Reserve. I wrote:

So clearly I think Summers is a gifted scholar. For one, it’s kind of funny that Yellen’s experience in the central banking system is taken as a foregone conclusion, with far more emphasis placed on Summers’ “intellectual leadership”. The question is “leadership to whom?”. You take a few smart and relatively well-educated people. You put Larry Summers and Janet Yellen in a room with them. There’s probably a very good chance Summers would come out as the “more impressive” character.

But take two highly competent economists, and I’m willing to bet they’re equally confident in Yellen’s intellectual leadership. Now let’s actually talk policy for a second. I won’t dwell on this, because Yellen’s monetary credentials have been discussed in great depth for a while. She’s the rare Fed official who actually seems to realize that inflation targeting has been a disaster, and she has endorsed a nominal spending target in all but name. (Christina Romer, my preferred option, has explicitly supported the same).

Miles Kimball tweeted me that I underestimated how much Larry Summers can dominate a room of economists. Based on what I know, I can’t help but doubt this. But Kimball and I agreed that I probably face a severe selection bias, being exposed mostly to the econ bloggers, who are decidedly firmer in their beliefs than the median economist. And Summers’ ability to sway the FOMC is something both Kimball and DeLong cite as a relative strength over Yellen, and I’ll defer to that judgment.

But I can’t help but worry about the trajectory of this conversation. There’s basically no talk of Larry Summers’ monetary policy beliefs (and he even mentioned in passing something funny in the Financial Times about low interest rates and bubbles. Ugh.) That’s because the pro-Summers crowd writing op-eds are insiders. Brad DeLong is a frequent collaborator, Miles Kimball a colleague, and Ed Luce his former speechwriter. Miles Kimball tweets me: “I would expect Larry Summers to have similar views on monetary policy to those expressed by @delong on his blog”.

But I have no basis on which to form that expectation. Perhaps I’m not privy to privileged information circulating among elite economists. Maybe I just don’t know Larry Summers’ beliefs well enough. Whatever the case, I have no way of forming an opinion on what Larry Summers will do as Fed chair. Sure, we know that he’s expressed discontent with certain aspects of financial regulation, but it’s unlikely his private beliefs have gone unrevised. And what about his position on the zero interest rate policy? Quantitative easing? Nominal income targets?

On the other hand, while Larry Summers might be an academic-political superstar, Janet Yellen is a monetary genius. I think I summed it up well here:

It almost goes without saying that Yellen is far more established as an academic and policymaker insofar as monetary policy is concerned. All we need is a quick Google search to see the extent to which this is (perceived to be) the case. As a former Treasury chief and NEC chairman – and in general a brilliant academic – Summers is the more eminent personality, yielding 6,310,000 search hits to Yellen’s 467,000.

But change the query to “[Larry Summers/Janet Yellen] monetary policy”, and Yellen comes out ahead at 206,000 to Summers’ 131,000. Now, I’m not suggesting this is a particularly smart way to judge scholarship on a subject, but it gives a very visceral sense of Yellen’s online footprint insofar as monetary policy is concerned. Moreover, Yellen’s hits are almost entirely pages genuinely concerned with the relevant policy.

Owing to his comparative fame, even Larry Summers’ “monetary policy” search hits are of no relevance. At the top are links to his Wikipedia entry, a brilliant profile comparing Larry Summers and Glenn Hubbard, and something about healthcare. Now, please don’t get me wrong. Summers is probably one of the smartest economic policymakers alive today and would make a great choice for central banker. But Yellen’s history and deep erudition in this subject – as well as a functioning understanding that “full employment” is 50% of the Fed’s mandate, not just scribbles on paper – are unquestionably in her favor.

Larry Summers has without doubt engaged in private deliberations to which Barack Obama is privy. That means, ultimately, I have to resign my opinion to Obama’s judgment on monetary policy. History tells me this is not good. By contrast, I know with excellent confidence that Janet Yellen knows her monetary policy, and rightly believes that dovish expectations will lead to higher employment. I even think she secretly supports a nominal income target.

Google “Larry Summers monetary policy” – tell me what you get.

The asymmetry of this debate has shifted the conversation entirely from concrete monetary positions – on which I can offer my unsolicited opinions – to a sidebar about personality. I’m sorry, but that’s just not the most important thing for a Fed chair. We’re not talking about a Treasury Secretary embroiled in politics here.

And that’s why this conversation still gives Larry Summers the edge. When it comes to personality, academics tend to like the cutting alpha male. Genius subconsciously associates itself with everything Larry Summers represents. But the discussion of policy is vacuous. It’s taken as a foregone conclusion that Larry Summers’ analytical command implies he will follow the right monetary policy. In 2005 I would have said the same thing about Ben Bernanke – the analytical god of unconventional monetary policy. Come 2013, I don’t think Bernanke has lived up to his mandate, and many of us have updated our priors.

That said, I think Ritholtz, Klein, and friends level an unfair criticism at Summers. It’s always something about his choice of deregulation, or a bet he made at Harvard. Unfortunately, one micromotive (an interest rate swap) tells us nothing about his macrobeliefs. That he’s “pro Wall Street” says little about his command of monetary mechanics. That he’s arrogant says nothing about his private maturity in leading the world’s most important institution.

Here’s a heuristic. Larry Summers has – for many people – been a hated man for a long time. But the criticism this time is just a carbon copy of everything that’s been said before. You would expect that there would be specific concerns for a specific job. But there aren’t.

That’s a two-way street. The pro-Summers commenters also offer little in his support that couldn’t have been said in service of his appointment to the National Economic Council in 2009. That says a lot about the ambiguity of the rhetoric in his favor, and we need more recognition of that fact.

Larry Summers is an arrogant Bob Rubin acolyte? What is new? Larry Summers is a brilliant man and an even better debater? What is new?

I would rest easy if I thought we couldn’t go wrong either way, but I do not trust Barack Obama’s monetary acumen.


Brad DeLong sends us to another hushed warning about mass disemployment and future inequality, this time from the excellent Jim Tankersley. As frequent readers know, I don’t think we’re in for some techno-dystopian future and, while I fall on the right(ish) side of Marc Andreessen’s dichotomy – “The spread of computers and the Internet will put jobs in two categories: People who tell computers what to do, and people who are told by computers what to do.” – I think there’s a lot of cognitive dissonance in this conversation.

Let’s be clear. It is theoretically possible – yet improbable – to have either a fall in the median standard of living or a fall in the natural rate of employment. It is, in every way, impossible to have both. Understanding why isn’t all too complicated, and requires only a simple knowledge of supply and demand. Let’s think of the future utopian society as operating within a three-class system: the unemployed masses, the educated technocrats, and the rich capital owners.

Tankersley points to work from Frank Levy and Richard Murnane suggesting that America isn’t in a great position to educate people out of the would-be “unemployed masses” and into the “technocrats”. That’s a red herring. While in the realistic framework from which they argue – the medium term – more education for the poor is exactly what we need, in the long run the demand for technocrats is likely to be low.

But let’s take a moment to think about what a fall in the standard of living means. Forget my dollar income for a minute. A stereotypically middle-class American derives his high quality of life from free public education, good shelter, incredible consumer choice, copious quantities of food, and access to cheap but effective consumer durables. Americans also enjoy excellent public services – parks, free roads, cheap energy – from other institutional arrangements.

Americans own these goods and services because – by and large – Americans are involved in the production and distribution of these goods and services. (No, not the direct assembly, but most everything else.) But let’s say we enter a robot future. (Interlude: have you seen the Tesla factory?) Let’s say minimum-wage workers at McDonald’s are replaced with wage-free robots, and truckers are replaced with automated engines.

The Neo-Luddites tell us that all these middle-class workers will over time be disemployed and hence see a crash in their standard of living. Notwithstanding the fact that redistribution mechanisms solve this problem (which does not make it theoretically impossible, per se) – this cannot happen. If robots replace workers who were in the service of providing mass-market goods – toys, teaching, everything we listed above – that means those goods are being produced, and hence exist. Of course, we get the Keynesian problem of overproduction if for a time there is excess supply. But if robots keep producing these goods, there is no way they will not find their way into the hands of the American masses.

Okay, you say, “what if robots don’t keep producing these goods?” When Kevin Drum linked my previous article at Mother Jones, I found all kinds of liberals (a group with which I otherwise identify) telling me that I’m naive for not knowing that the bad capitalists will hog all the robots, that they won’t share their wealth, that political redistribution is impossible, and so on. That means the goods aren’t being produced. But let’s say there’s a town – Detroit? – of the “mass unemployed”. If robots supposedly “replace” their jobs and somehow they don’t get the rewards, these agents aren’t just going to sit there and go “oh. shit. I’m unemployed. Too bad”.

No. It’s not as if the demand for consumer goods vanished. The mass unemployed will create their own economy. Since I don’t have a car – because of, you know, “robots” – and you don’t have a microwave, I’ll specialize in microwave production and trade you microwaves for a car. And before you know it, unemployment solves itself, because demand for goods creates demand for labor, which creates employment.

Of course, this situation is highly unlikely to arise. There will be some disemployment, but it will not be associated with a falling standard of living. That’s impossible. This situation is unlikely to arise because:

  • It’s sad. It’s a harking back to an older economy. We should be using robots to consume more, and that should be the dream of every technologist: not exclusive ownership by a capitalist minority.
  • The first situation, where we have disemployment but higher standards of living – indeed, “post-scarcity” – is more probable. Profitable corporations need a large consumer base. The mass market is the best and, indeed, only such market. While some commenters tell me that “the rich will own the robots for their own uses”, there’s not much profit if all the one-percenters create goods only for other one-percenters. For one, there would be a huge excess capacity of robots, compounded by the fact that rich people like artisan work. Not crappy, mass-market Tupperware.

The former point is my philosophical belief, and I hold it as axiom. The latter has good empirical foundations. The 2000s – a decade of globalization, capital-biased technical change, and rising income inequality – actually saw consumption inequality rise by far less. Over the past four decades, income inequality increased 237% as much as consumption inequality. And even this understates my point, which is that lower-quintile consumption increased far more than income. (Many also suggest we overestimate consumption inequality, as measures fail to capture the surplus deriving from the Internet and other free goods.)

I think I’ve somewhat convincingly argued (if only to myself) that a fall in broad living standards – or even a deceleration in the rate of change thereof – is unlikely to coincide with disemployment. But this leaves open the possibility of either happening exclusive of the other. Let’s consider the first pair, “fall in living standards, but no disemployment”. I think this is the trickiest possibility, because I can’t really see a way in which it can happen, but also can’t see a way in which it’s impossible – outside of political-democratic institutions which, quite honestly, are failing. All I can argue is that technology is definitely going to increase consumption by the wealthy, which almost certainly will translate into higher wages – if at a far, far lower rate – for the poor. A decrease, or even stagnation, then seems improbable, barring a substantial disconnect between capital and labor.

The second case: “disemployment without any fall, and perhaps even an increase, in living standards”. Everyone fears this. But we think about unemployment in the narrow sense of U3. Americans are among the most “overworked” workers in the world. We’re the richest country not only because of our incredibly high “output per hour” [productivity], but also just “hours” [work ethic]. This works well for some people. Doctors slave it out as residents and then earn a criminally unfair killing later in life. (Oh, what I’d give to replace every damn doctor sucking money from the American middle class with a robot!)

But think of the single mother working two jobs just to have heat in the winter. Disemployment without a fall in living standards will serve her well. She can spend more time with her family. In a previous iteration of this post, I wrote that we’d have a “cornucopia of thought”, or something. I still think I got the principle right, but I was fairly bashed by a bunch of readers for being an idealistic idiot. (“Not everyone has a high IQ”, etc.) Regardless, there are a plethora of studies showing that engaged parents are crucial to someone’s future success, in society as well as in the economy. Today this works to the advantage of the affluent (if not the ultra-rich). But in the future I do hope that a middle-class worker can live very comfortably on six hours of work, getting more time to spend with his family – whether reading books or at a barbecue – and to sleep, relax, and ponder. Oh shit, I’m getting “idealistic” again.

That’s not to say that economic mobility or the condition of the poor in America today is anything great. Only that a sharp fall, or even an absence of elevation, in living standards is very, very unlikely. That is the fundamental dissonance of the anti-technologists: they assume a utopia in which robots provide everything, but at the same time a dystopia in which they provide nothing.

Subtitled “Why so many predictions fail – but some don’t”, Silver’s book is probably one of the only good “pop” statistics books out there. Silver has an engaging style that keeps even the informed reader alert, and brings philosophically profound concepts – like Bayesian reasoning – to the layman. I put “pop” in scare quotes because I want to deter immediate comparisons to Freakonomics or Blink. In the sense that the book is written by the veritable master of a field – rather than by engaging writers dabbling in curious, mind-bending topics – I’m more inclined to compare Silver with Thomas Schelling in Micromotives and Macrobehavior.

As this book has been thoroughly reviewed, I want to frame my response in the context of remarks from Kaiser Fung and Cathy O’Neil, both via a post by Andrew Gelman (who, at least as far back as December last year, had not read the book).

In short, the book is an investigative journey through the fallibility of human prediction, from economics and earthquakes to the environment. The thesis is at its core hopeful, and draws a silver lining around human error in the form of humility, doubt, and above all Thomas Bayes. But Silver never sells any such heuristic as a panacea for chaos and uncertainty, and is himself very measured in promoting a metasolution. I would say he follows his own rules in his 450-page forecast for forecasting.

Since I’m overall very positive about the book, let me start with the cons. Fung (and many others) note that Silver does a wonderful job bringing attention to the Bayesian worldview. They go on to suggest he might have oversold the concept; I see it another way. He knows that most statisticians would be furious if he started with the tautological identity “p(a|b) = p(a)*p(b|a)/p(b)”, which in its simplicity would, for a lay person, distill Thomas Bayes’ philosophical leap into sterile mathematics. But he takes the abstraction a little too far. While we see the extended formula – through a woman updating her prior that her husband cheated – there’s next to nothing by way of an explanation of why it holds (we never see the simple form, or hear about the Law of Total Probability). In fact, I’m not sure that removing every reference to Bayes would take much from his thesis – as the idea of “updating a prior” is not, per se, contingent on probability.

For a book that purports to tell us (and for the most part does) why the ghost of Thomas Bayes rules the world, the dearth of precise explanation of the mechanics is damaging. It is crucial to understanding why holding (near) absolute priors makes further revision against evidence (near) impossible – which is the bane of every failed forecaster.
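For the curious, here is the “simple form” and the Law of Total Probability in a few lines. The numbers are illustrative stand-ins in the spirit of the book’s cheating-spouse example, not necessarily Silver’s exact figures:

```python
# Bayes' rule via the Law of Total Probability, with illustrative numbers
# in the spirit of the cheating-spouse example (my stand-ins).

prior = 0.04            # p(cheating), before seeing any evidence
p_e_given_h = 0.50      # p(evidence | cheating)
p_e_given_not_h = 0.05  # p(evidence | not cheating)

# Law of Total Probability: the evidence marginalized over both hypotheses.
p_evidence = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# The simple form: p(h | e) = p(h) * p(e | h) / p(e)
posterior = prior * p_e_given_h / p_evidence

print(round(posterior, 3))  # 0.294 – a 4% prior jumps to roughly 29%
```

Note how a (near) absolute prior chokes the update: set `prior = 0.999` and the posterior barely moves, no matter what the evidence says.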

Another minor quibble – which is in all honesty dominated by Silver’s clarity and style – is that the book doesn’t at all times feel very “together”. With the exception of a few stray remarks, each chapter can be read independently of the others, as each tells the story of forecasting in a different discipline. The story of Bayes is then relegated to Silver explicitly reminding us of its power, rather than flowing naturally throughout. Again, this is minor, and included mostly so that you form a prior in favor of my impartiality!

Let me start with O’Neil’s review. The analytical counterpart of missing the signal for the noise is losing the forest for the trees (or twigs), and I think that accounts for a large part of the review. Don’t get me wrong – I respect her and love her blog – but I just can’t understand how O’Neil concludes that “Nate Silver confuses cause and effect [and] ends up defending corruption”:

The ratings agencies, which famously put AAA ratings on terrible loans, and spoke among themselves as being willing to rate things that were structured by cows, did not accidentally have bad underlying models. The bankers packaging and selling these deals, which amongst themselves they called sacks of shit, did not blithely believe in their safety because of those ratings.

Rather, the entire industry crucially depended on the false models. Indeed they changed the data to conform with the models, which is to say it was an intentional combination of using flawed models and using irrelevant historical data […]

In baseball, a team can’t create bad or misleading data to game the models of other teams in order to get an edge. But in the financial markets, parties to a model can and do.

This just doesn’t make sense to me, because Nate Silver a) shows that the models were so ridiculously stupid that a child could find their flaws and b) accepts that Wall Street financiers are smart. The only conclusion is that the reason to keep playing the fool to these models was to “keep the music playing”, so to speak. No sane reader could finish the first chapter without feeling disgust at the rot in the financial system. Nor is Silver happy about the bank bailouts that let AIG get away with murder and more.

The purpose of chapter one wasn’t even to “excuse” finance in any meaningful way, just to explain the fallibility of human models and, yes, exuberance. Silver is explaining why the models suck. O’Neil is getting angry that people were using models that suck. It’s not as if his “expertise” in finance is – in writing a layman’s book – any less than his intuition for earthquakes or the environment, so O’Neil’s dismissal of his knowledge seems unfair. Especially divorced from the point of his book, which has nothing to do with finance to begin with. Reading the review, one may not be so sure.

Anyway, Silver’s basic point concerns the criminality of using bad models and having confidence in their success – so, tautologically, it seems he hates the way Wall Street worked. I get the feeling O’Neil wanted him to bring his emotions, and other completely irrelevant topics, into the discussion, but that misses the point of the book entirely.

I’m not sure how other readers feel – this is up to interpretation, and I fully believe that O’Neil got this sense – but for what it’s worth, I don’t think this is right:

I’m not criticizing Silver for not understanding the financial system. Indeed one of the most crucial problems with the current system is its complexity, and as I’ve said before, most people inside finance don’t really understand it. But at the very least he should know that he is not an authority and should not act like one.

Personally, I never got the sense that Silver claimed to be an “authority” in the field. Again, a lot of this is up to subjective reading, but I can’t see how someone could reach this conclusion concretely, especially when the chapter is merely an introduction to the ways models can be used in the real world. Much of his explanation of the failures even derives from a professor at the University of Chicago who teaches a course on the financial crisis – a veritable expert. I can’t help but feel that O’Neil and others felt distaste from the start because Silver introduces Larry Summers without a string of qualifying epithets.

Of all the subjects discussed in the book, economics was the most familiar to me. (Which is not saying much for a kid right out of high school, to be fair.) And perhaps not surprisingly, I had the most trouble with his discussion of economic forecasting, particularly the prediction of recessions: Chapter Six, “How to Drown in Three Feet of Water”. He mentions several times that professional forecasters failed to rate recession as a serious possibility even after the United States was officially in a downturn, and he considers this mostly a flaw of overconfidence or bad modeling, much in tune with the rest of his book.

But I think there’s a disservice in not considering the epistemological impossibility of forecasting a recession, quite to the contrary of this passage:

In September 2011, ECRI predicted a near certainty of a “double dip” recession. “There’s nothing that policy makers can do to head it off,” it advised. “If you think this is a bad economy, you haven’t seen anything yet.” In interviews, the managing director of the firm, Lakshman Achuthan, suggested the recession would begin almost immediately if it hadn’t started already. The firm described the reasons for its prediction in this way:
“ECRI’s recession call isn’t based on just one or two leading indexes, but on dozens of specialized leading indexes, including the U.S. Long Leading index…. to be followed by downturns in the Weekly Leading Index and other shorter-leading indexes. In fact the most reliable forward looking indicators are now collectively behaving as they did on the cusp of full-blown recessions.”

There’s plenty of jargon, but what is lacking in this description is any actual economic substance. Theirs was a story about data – as though data itself caused recessions – and not a story about the economy.

Silver gets one thing absolutely right. This ECRI firm seems to be staffed by knaves, fools, and worse: a stupidly overconfident chief. That said, Silver’s dismissal of the emphasized text shocked me, because that is precisely the reason a recession is impossible to predict.

Now, Silver agrees that the best forecaster is one who gets it right. Rationally, the goal of any good forecaster is to be trusted and to serve as an important source of information for clients. As far as economic predictions go, the client is of course the free market. Here’s the problem: let’s say I’m a trusted forecaster and I publish a report stating that the American economy will shrink by 4% next quarter. If people trust me, my report will have caused a recession. Why? Because businesses across the country will note that consumer demand will crash in three months, halt expansions, and disinvest from the economy into safer instruments like US Treasuries. The fall in investment will precipitate a contraction of nominal spending and hence aggregate demand. Therefore, the market cannot believe that we are on the cusp of recession without actually being in recession at that point.
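To make that feedback loop concrete, here is a toy sketch – entirely my own construction, not Silver’s or any real macro model – in which firms plan next quarter’s spending as a trust-weighted average of current conditions and a published forecast:

```python
# Toy model of a self-fulfilling forecast: a believed prediction of
# contraction itself causes the contraction, because planned spending
# depends on expected demand.

def next_output(current, forecast_growth, trust):
    """Planned spending next quarter. `trust` in [0, 1] is the weight
    firms put on the forecaster's announced growth rate."""
    expected = current * (1 + forecast_growth)
    return (1 - trust) * current + trust * expected

ignored = next_output(100.0, -0.04, trust=0.0)   # nobody believes: ~100
believed = next_output(100.0, -0.04, trust=1.0)  # everybody does: ~96
print(ignored, believed)
```

The contradiction follows: the forecast is only informative if trusted, but once trusted it no longer describes an independent future event.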

Of course, when an idiotic firm makes such a prediction no one bats an eye, because no one trusts that firm: but that isn’t Silver’s point. I think readers lose a gem of an example here, because rarely is a prediction so intellectually contradictory as a forecast of recession. Now, Silver does talk about self-fulfilling prophecies in another context entirely, and notes that fear of an epidemic might result in precautionary measures which undo that very fear.

But that’s not an epistemological flaw. We can’t possibly know that a recession is coming without actually being in one. Single individuals can, but ipso facto they can’t be trusted by the market as a whole. It’s not self-fulfilling at all; it’s like asking “are we there yet?” after you’re already in the hotel.

The rest of the book is a smooth ride. Silver consistently packs the book with anecdotes from interesting interviews, a slew of useful data, and a trove of fascinating references. I suppose many readers would have been thrilled to learn about Knightian uncertainty in the context of climate change – I was somewhat surprised it was absent – but Silver does a lovely job of coordinating various viewpoints with all the important data there is.

Unfortunately, we can’t have too many books on one topic. That’s why reviewers like O’Neil think Silver – for reasons right or wrong – has done a disservice to his audience. I judge a book not by how good it is, but against how well it could have been written. On a careful read, I don’t think Silver misses that gold standard by far. I consider myself fairly knowledgeable about these topics, but I still learned a ton. Most importantly, this book will (hopefully) inspire a new generation of toy forecasters and model tinkerers to approach the world with a probabilistic mindset and to relish uncertainty. Because it’s a fun book, and it doesn’t sell out on substance to get there. Even if Silver isn’t an expert on finance, he has a unique window into that world vis-à-vis the general public, for whom this book is intended.

Oh, and the most important takeaway is that KPMG should have fired Nate Silver a long time ago.

Five stars, and I’d be hard pressed to update my prior.

Update: I just tweeted: “Ok seriously ‘bets reveal beliefs’ is a belief. And a pretty strong one too. I don’t believe you believe that. Reveal it. Now!” The impossibility of proving you believe in bets is a pretty good argument against expecting people to bet to reveal their beliefs: in such a world you can’t prove your own metabelief that bets reveal beliefs. Or you make an exception, and it’s turtles all the way down.

Since there are about fifty recent posts on wagers – each some permutation of “bet”, “belief”, and “portfolio” – I have no creative title for this post! The debate is mostly an outgrowth of a conversation among GMU economists (Tyler Cowen seems to be outnumbered) on whether bets reveal an individual’s beliefs and whether marking beliefs to market makes for higher-quality debate. Cowen doesn’t want to be locked into any particular viewpoint and sees bets working against a Bayesian updating of one’s beliefs. Alex Tabarrok, quite simply, thinks “bets are a tax on bullshit”. I have a few points to add.

Tabarrok, a libertarian economist, should perhaps consider the deadweight loss from such a tax. If the academic environment only respected predictions and models on which the designer is willing to wager a not-insignificant bet, we would naturally see a fall in the supply of predictions. (Note: I’m not in general against bets, and think they can play a huge theoretical role in economics; I’m only against requiring them as a condition for respect.) This might be a good thing, but I wonder: are he and other academics so bad at weeding out the crappy predictions? And what if the tax weeds out something truly cool and fascinating?

For one, it would put poorer academics at an unfair competitive disadvantage. A rich economist like Paul Krugman can easily place bets at rather high odds without any real risk. As I commented earlier:

Here’s the problem with thinking that bets reveal one’s true beliefs. Let’s say that I’m a prominent economist, and my career and theory have been suggesting significant hyperinflation for years now. Let’s say I take a 3:1 bet that inflation will remain below 5%. (Being generous in allowing even such people to bet against hyperinflation, just at far lower odds than rational people like Noah Smith, who takes 75:1.)

Is it 3 cents to 1? Or 3 thousand dollars to 1? I imagine the prestige and pride in making such a bet outweighs – for such top economists – the cost of losing it. That is, I think people like Niall Ferguson or Paul Krugman would be indifferent within an order of magnitude, except at very high values. (For example, if PK loses a 50:1 bet on a thousand dollars, what’s the worst that happens… he has to give another speech?)

Therefore, only at extreme levels of confidence *and* high bet values (with a bit of leniency between the two) can a bet actually reveal true preferences. Otherwise, if I were NF, for example, I might take a much bigger bet just to make a statement, because what’s the difference anyway?

Therefore, to the extent there is prestige in making the bet itself, we will experience “prediction market failure”. More importantly, the Krugmans and Cowens will find it a lot easier to advance their theories among fellow academics than a newcomer will. Alex Tabarrok’s “tax” – in line with his own market-oriented beliefs! – now deters competition and supports a monopoly of academic thought.

Of course, this is very unlikely to happen, because academics (Tabarrok included) will pay attention to new theories. But that would call into serious question the claim that bets are a tax on bullshit. Tabarrok, Caplan, and company naturally want us to place higher respect on theories in which the theoretician has a financial stake. Further, if academic respect derives from bets, that will further tangle market signals, as the value of a bet includes not only its discounted future return but also the social and professional value of having made it. If this were the case, academics would consistently bet at irrational odds in favor of their theories, and the market, knowing this, would simply short everything and earn a constant profit. (I really want to know whether, if Paul Krugman bet $50,000 on a Keynesian model, they would take him more seriously – especially knowing that costs all of one speech for the man!)
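To make the order-of-magnitude point concrete, here is a toy calculation (all numbers are invented, not drawn from any actual bet):

```python
# Toy illustration (made-up numbers): for a wealthy public figure, the
# expected monetary loss from a bet can be trivial next to the prestige
# value of making it, so the odds accepted reveal little about belief.

def expected_value(p_win, win_amount, lose_amount):
    """Expected monetary payoff of a bet won with probability p_win."""
    return p_win * win_amount - (1 - p_win) * lose_amount

# A PK-style bet: lay 50:1 against hyperinflation, risking $50,000 to
# win $1,000, with (say) 97% private confidence of winning.
ev = expected_value(0.97, 1_000, 50_000)
print(round(ev, 2))  # → -530.0
```

In pure monetary terms the bet has negative expected value, yet a prominent economist might take it anyway: if the professional value of the public statement exceeds a few hundred dollars, the odds he accepts tell us almost nothing about his actual probability.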

There is, however, an important and under-mentioned value to a bet. There are two layers on top of belief. The pro-bet economists argue that a bet aligns what I say (the topmost layer) not with what I believe, necessarily, but with what I think I believe. That is, they think that hyperinflationistas may be overselling their own confidence in their models, and that a bet would decrease the wedge. (This requires that the only value from a bet be the wager itself, but that’s another story.) But there’s another gap: between what I think I believe, and what I actually believe.

Here’s a bet I’m willing to make. If you asked me what I would bet on a series of events and, at an independent time, asked me to actually bet with my own money, my answers would be different. It’s not that I’m lying the first time; it’s just that it is impossible to know one’s actions until they are revealed – as much to the market as to the self. For example, in theory I would buy a $20,000 watch on sale for $1,000 – it’s entirely rational. I am less sure that this would actually happen given the choice. (There’s an epistemological impossibility in explaining this about myself, because self-doubt invalidates the initial prior. It’s like saying “I’m not as smart as I think I am” which, by definition, means I don’t think I’m that smart. But, just like this oxymoronic phrase, the example conveys my point.)

A bet forces me to reveal my belief to myself. Therefore, within small groups of academic friends, there might be value to making bets, as they compel a greater degree of introspection. A final note: I also think it’s important to distinguish the current advocacy for betting as a way to reveal actual beliefs from support for prediction markets. Like Cowen, I’m skeptical of the former; but while prediction markets really are just “bets”, they serve the far more useful purpose of aggregating information and hence are not really comparable.

My point? I’m somewhat skeptical that Bryan Caplan and Alex Tabarrok know their own hidden priors on betting. Because if they really respected the gambler more than the prude, they would be placing a tax not just on bullshit, but on intellectual discovery and thought itself. In other words, their position on betting seems at odds with their otherwise libertarian ideology.

If you follow basic news you have, no doubt, heard prognosticators speak of the “long”, “medium”, and “short” run possibilities for our economy. Theoretically, the “long run” is an abstract time period in which there are no fixed factors of production – or, in other words, the time period over which a theoretically competitive firm will not earn supernormal profit. The short run, then, is a time period over which at least one factor of production is fixed. Macroeconomists might say the long run is that over which expectations have adjusted to the fundamental state of the economy. I suggest a better heuristic: sensitivity to time-dependent forecasting. But first, background.

In practice, no one can actually observe such conceptual definitions. So when an “economist” enlightens you with his opinion of the future, “short run” is now, and “long run” is later. Some economists might further specify their belief by constraining the short run to factors governed by demand, and the long run to those governed by supply. While this is a conceptually more useful approximation, economists will dispute ad infinitum whether a shortfall exists. Indeed, the question and debate become largely tautological, as measures of demand shortfall – like the output gap – are measured against a historical “trend” growth rate.

As an intellectual framework, statistical estimates are unsatisfying, as they fail to capture the tectonics that moderate a dynamic economy. For example, an econometrician may note a significant number of peak-trough flows lasted fewer than n years. But do we believe there is something “real” about n? Not really.

On the one hand, then, we have the purely theoretical and hence rather useless. On the other, we have either ad hoc definitions – designed more to suit a particular economist’s own pet beliefs than to capture the business cycle – or impure statistical estimates.

However, what if we – for all intents and purposes – define the long run as “the point at which future forecasts will not affect current consumption and investment decisions”? Let’s conduct a thought experiment. Say I’m an economic forecaster and, by some stroke of dumb luck, the market actually trusts what I have to say. If I produce a report that promises, with high confidence, booming growth next year, the result is obvious: purchasing managers, entrepreneurs, and investors will suddenly update their confidence about demand tomorrow and hence increase their investment today. Au contraire, if I suggest a high likelihood of recession next year, the market will update its confidence negatively, decreasing investment today.

While I’m using this example as a thought experiment for another point altogether, it’s important to note this shows precisely why “recession predicting” is an idiot’s game (to the extent you want people to believe what you have to say). A forecast on the future is self-fulfilling today. It is epistemologically impossible to have a good forecast that is also credible insofar as recessions are concerned. 

Back to the experiment. What if I, magically, at the same confidence level produced a forecast for the economy fifteen years from today. The reaction to my report – whichever way – would be dampened. I’m not sure to what extent, but few of us would expect this sort of forecast to have much effect. This is not trivial, especially if you manage to convince yourself (it’s hard) that the market places the same Bayesian likelihood (“trust”) in this report as my short-term prediction. You find it hard to convince yourself of this, of course, because uncertainty is ipso facto correlated with the extent of the forecast.

A longer forecast fails to elicit the same energy from the market for the following theoretically-sound reasons:

  • If my forecast tells you little about the interim between the periods (that, after all, is the point of the thought experiment), then the longer the window, the greater the chance of an intervening recession.
  • Without knowing about the path to the future, capital depreciation makes immediate investments unprofitable.
  • If my forecast is ten years out, there is no reason to wait nine years on the expectation that my forecast will improve with improved information.

Of course, there will be some activity which derives from something economic theory has a harder time explaining. Investments take time to build. If I expect a huge increase in energy demand in ten years, I might invest in a nuclear energy plant as this would take about as long to build. This voids the above set of uncertainties by evaporating the relevance of the interim.

Now consider something called a “forecast yield” curve. This measures the market’s response (y-axis) to a certain indicator (stock returns, consumer confidence, job creation, etc.) over the horizon of the forecast (x-axis). The response is, of course, a qualitative feature which may be rather easily quantified using a variety of indicators, such as the immediate change in stock prices or the purchasing managers’ index.

The response will be very high if the forecast is for one year, and will diminish – in some form, I do not know which – over time. The long run is then defined as the point at which the response becomes insignificant. This is not easy to measure, especially in a methodological way. However it is, theoretically, possible to measure, unlike the rigidity of various “factors of production”, which is an entirely epicyclical (that is, tautological) phenomenon: it makes great sense in explaining and capturing the idea of market dynamics, but is less useful when married with real numbers.
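Nothing above pins down a functional form, but as a purely illustrative sketch: suppose the market’s response decayed exponentially with the forecast horizon (the decay rate and significance threshold below are invented). The “long run” then falls out as the first horizon where the response is negligible:

```python
import math

def response(horizon_years, initial=1.0, decay=0.4):
    # Hypothetical: market response to a forecast decays exponentially
    # with the forecast horizon. Neither parameter is estimated.
    return initial * math.exp(-decay * horizon_years)

def long_run_start(threshold=0.05, decay=0.4):
    # The "long run" begins at the first horizon whose forecast no
    # longer moves current investment appreciably.
    h = 1
    while response(h, decay=decay) >= threshold:
        h += 1
    return h

print(long_run_start())  # → 8 (years, under these made-up parameters)
```

If the “short run is getting longer”, the decay parameter would be falling, pushing that crossover point further out – which is exactly the kind of claim this definition makes testable.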

A slight modification of this definition is very in tune with some currently used interpretations of “short” and “long”, but removes the ad hoc nature thereof. Let’s say the forecast purports to guess the demand side constraints only – say, by proxy, nominal consumer expenditure. (We assume that all long run price adjustments are structural in nature). The time at which the current forecast becomes irrelevant, by nature, then is the point when the market believes the supply side will dominate the demand side. Therefore, the demand forecast sensitivity curve would provide a good idea of when the market expects supply to dominate demand – which many economists would ascribe to long run superneutrality. 

The intellectual benefit from this sort of definition is its ability to be measured, to say nothing of the many statistical and logistical challenges that will follow! I see several rough approximations:

  • Ask purchasing managers what they would purchase given a future indicator, and vary that by time. This is a direct approximation, and perhaps the most logistically sound. It is vulnerable, however, to investors’ inability to know what they would do. (Which violates rationality, but that’s another story).
  • Note market reactions to various government reports and observe sensitivity to time (like the need to invest in more green jobs in ten years, etc.) This suffers quite a bit from: a) the inability to control the “indicator” (specific government policy) and b) extremely small sample size.

The flaws associated with the first method – i.e. the chance that investors do not know their own true beliefs – are more fixable, and are not at all a theoretical challenge. Further, there is reason to believe such errors are systematic and hence would affect only the level, and not the slope, of the sensitivity curve. Ultimately, in calculating the end of the short run, it is the slope that matters. Moreover, many such biases will cancel out over a large sample size, and hence the aggregated curve – weighted by investment value – should be an important indicator.

If nothing else, it is more specific than what we have today. Conducting this type of survey would also provide extremely useful insight into dynamics of economic structure. We’ve heard some say that the “short run is getting longer”. If the magnitude of the downward slope is falling, it would lend evidence to this argument. On the other hand, as of now, we have little reason to believe one story or the other. It’s about time we get a more precise and observable idea of these definitions crucial to economics. The long and short of it all is that practitioners today love manipulating these definitions to serve their pet theory. 

I started this blog late last November, and I’ve been seriously involved since early March. It’s been four months and a great experience. I wanted to share my thoughts, but also thank the people that have got me where I am today: if not where I want to be, well ahead of where I started.

I’ll start with where I am. A very small number of my posts command almost all my visitors: the most-read 10% of posts hold over 70% of all page views. This is not surprising considering the power-law distribution of views by rank (logarithmic scale):


My most popular post (perhaps unfortunately) is a recent review of Niall Ferguson’s The Great Degeneration with over 22% of the pie. More encouragingly, one of my favorite posts, an outline for a Ricardian tax reform, comes in second at 13%.

Network effects are real, and at this point I should thank the people that helped a few of my posts go as far as they did. Other than Twitter, which dominates in links to my blog but unfortunately would be a nightmare to untangle, Tyler Cowen and especially Brad DeLong really helped lift my blog off the ground. I really am a better blogger, and perhaps quasi-economist, with their help: because of their attention (together bringing in over a fifth of my total views!), but also certainly because of the content on their respective blogs. As far as journalists go, Slate’s Matt Yglesias and the Washington Post’s Dylan Matthews helped push some of my favorite writing forward as well. And while one blogger at Slate can bring me thousands of hits, nothing has helped me more than Twitter. While tweets from popular journalists like Matt O’Brien help tons, the consistent support from loyal followers with not many followers of their own goes even farther. I can’t go into further depth without an expensive Twitter analytics package, but know this isn’t empty gratitude.

The most useful help doesn’t come from links at all. When I started blogging, Evan Soltas was kind enough to give me a rather detailed guide to the blogosphere. In real life I occasionally impress some people with my age, but Evan seems to be the real wunderkind, and his advice brought me a long way. He was happy to let me post it on here, and here’s a part:

The best advice I can give you is that doing what other people are doing (especially when your sample is of the top bloggers) is a good way to never get to the top. By this I mean short posts with long excerpts and your brief commentary, or other things like that. This sounds like counterintuitive advice, because it’s natural to imitate succes[s]. But the correct reasoning, at least so far as I’ve been able to tell, runs in the other direction. If other successful people are doing it, then it’s probably in their comparative advantage but not yours, and that they already satiate the market for readers in those areas. Most importantly, their advantage comes from prior traffic. You don’t have that, nor did I.

That’s the negative reasoning side. Here are some positive implications.

(1) You need to do original work. Dig around FRED, OMB, CBO, Eurostat, NIPA, Google Finance, etc. Make graphs. Analyze.

(2) You are better off writing specific/detailed pieces than general overviews. This will require you to be better at and highly knowledgeable in some area of the field than your competition: for me, that’s monetary policy, fiscal policy, and macroeconomic theory. You should pick what interests you.

(3) You are better off disagreeing constructively or reconciling contrary views than echoing a consensus. You cannot afford to say what everyone else could say.

(4) You need to write frequently. This is because you don’t have a natural traffic base and presumably want to build one rather than single-time visitors. I wrote basically every single day from January to July of 2012 before I was hired by The Washington Post and Bloomberg. I still average 7 pieces a week (two in Bloomberg, five in The Washington Post).

(5) You shouldn’t be partisan. Partisan is predictable, and new voices are most interesting when they are objective or balanced. By balanced, I don’t mean shallowly centrist, but rather thoughtful to both sides, even if one takes a side. […] This thoughtful market is not satiated, trust me.

Not surprisingly, I think this is one of the most valuable things I’ve done. Perhaps to your surprise, I didn’t think I would be writing about economics much, and I certainly didn’t think I had the competency for it. I was clearly, brutally wrong about the first point, and definitely hope I am about the second one, too. While I’m not taking any economics classes first semester in college (engineering schedules have [un?]fortunately strict requirements), the amount of math, modeling, and reading I’ve done to keep up has made me a sharper thinker – and hopefully writer! – on many fronts.

There’s an excitement to knowing your work may be read by your favorite thinkers, and hence an associated pressure. I’ve burned many more articles than I’ve posted, forced to rethink just to make sure I don’t submit anything shoddy. That filter slips, sometimes, and this rather confused reaction to the trade deficit is a good example. All in all, I’ve never been compelled to think as quickly but thoroughly as I do now – I look forward to joining a debate club later on!

I do want to increase the scope of my blog, potentially to computer science, which has been a pet interest of mine for a while (and key to my declared major). This is because I think economics has a naturally lower barrier to discussion – which leads to a lot of bullshit commentary. I hope that hasn’t been the case as far as my writing goes, but that isn’t for me to judge. I’ve found blogging about both computer science and economics to be natural, as in this post about the EMH, and only hope I can get even better as I learn more formal logic.

However, my blog has been mostly about economics, which leads me to my final note of thanks: to Ms. Indrani Verma, who was my absolutely fantastic high school economics teacher. You would never guess that she is battling cancer today, given her conviction to make it to school and be the best damn teacher possible – every day. While my blog may be more frequented by college, rather than high school, educators, I know every Economics 101 professor wishes their students had taken her class.

I’m not going to thank my parents because, well, that would just be predictable! (And they know it too).

I’m currently reading, and almost through, The Signal and the Noise by Nate Silver. I’ll probably write more about my personal takeaway after I’ve had a chance to finish and think, but one particular phrase in the third chapter struck me:

Imagine you walked into an average high school classroom, got to observe the students for a few days, and were asked to predict which of them would become doctors, lawyers, and entrepreneurs, and which ones would struggle to make ends meet. I suppose you could look at their grades and SAT scores and who seemed to have more friends, but you’d have to make some pretty wild guesses.

Now Silver was trying to explain the difficulty of prediction, though this is unfortunately not a great example. A question popped into my mind: how confident am I that I would be where I am today, if not for inequality? That is, how lucky am I that life sucks for everyone else? (Part of this post is simply chronicling my thoughts for future reference – feel free to skip if uninterested.)

It’s a difficult question to define quantitatively but, I think, the answer is an overwhelming “yes”. You can define “inequality” quantitatively with a whole host of indicators. Gini is obviously the most common one, and, as I’ve shown (to little surprise), it tracks extremely well with a (slightly modified) difference between mean and median incomes. The most sophisticated indicator is Theil, and I’ll get back to that later in this post – its derivation actually flows well from our intuition of inequality: better than Gini, at least.

I’m too young for “where I am today” to be defined well in any objective context like income. I’ll use my admission to a selective university as proxy. The admissions committee has certainly aggregated a bunch of information, and their “stamp” signals a fair amount of information to a clueless onlooker (and, honestly, that’s probably what I’m paying for). You don’t have to believe in the importance of prestigious universities, or whatever, for this to be a fair definition. You only have to accept that there is at least correlation between students at a good university and what society might call “success”.

Now, read this damning document from the Brookings Institution, titled “The Missing ‘One-Offs’: The Hidden Supply of High-Achieving, Low-Income Students”, which says:

We show that the vast majority of very high-achieving students who are low-income do not apply to any selective college or university. This is despite the fact that selective institutions would often cost them less, owing to generous financial aid, than the resource-poor two-year and non-selective four-year institutions to which they actually apply. Moreover, high-achieving, low-income students who do apply to selective institutions are admitted and graduate at high rates. We demonstrate that these low-income students’ application behavior differs greatly from that of their high-income counterparts who have similar achievement.

Also, let’s note that, other things equal, a low-income student has a better chance of acceptance than I do. They are more likely to be black or Latino, and colleges favor socioeconomic diversity: both shift the admission result fairly in their favor.

Finally, know that aside from exceptional circumstances, a low-income student – by official university policy – will be given enough money to ensure academic success, along with preference for more comfortable on-campus jobs and the guarantee that this will not affect chances of admission. Financial aid is offered to families who would in no other context be considered “needy”; for example, some well-endowed colleges help families earning well into the six-figure range.

You will read many admissions counselors say something along the lines of “we expect that nearly 60% of the students admitted to the class of 2017 will need financial aid”. Usually, at the information sessions where such statements are made, there will be a knowing nod admiring such generous policy. And it is generous. But what should strike you – as it does me – is that the 40% of people who clearly do not need financial aid come from maybe 2% of the country.

That means the total number of applicants, in a country of approximately equal opportunity, would be much larger. By simple calculation you can see that if the whole country applied to top colleges at the same rate as the top 2% do, the admitted class – at the same admission rate – would be 20x larger. Clearly impossible. At a fixed class size, that means my slightly-below-10% acceptance rate would translate to about 0.5%.
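The back-of-envelope above can be made explicit (the numbers are the stylized ones from this post, not actual admissions data):

```python
# If 40% of admits need no aid and come from the top ~2% of households,
# that 2% applies at roughly 20x the per-capita rate of everyone else.
top_share_of_admits = 0.40
top_share_of_pop = 0.02
scale = top_share_of_admits / top_share_of_pop

print(round(scale))   # → 20: pool grows ~20x under equal opportunity
print(0.10 / scale)   # a ~10% acceptance rate shrinks to ~0.5%
```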

Now go back to my question: “What are the chances that, if not for inequality, I would be where I am today?” To answer this, all you need to do is answer the question: “Do you think that if the admissions officers took their pool of admitted students, cut it in half, then cut it in half again, and then again, and lopped off 25% of that for fun, your application would remain in the pool?”

I would have to be supremely arrogant to think this to be the case. The chance of surviving those cuts – three halvings and then a further 25% lopped off – is under 10%, on top of an admission that was unlikely in the first place. Indeed, my chance at this is worse than a Gallup poll over a year in advance of a primary election. None of this even accounts for the fact that each progressive halving puts me against increasingly competitive students, or the preference for minorities.

And so my university admission is overwhelmingly a consequence of inequality. It is not because I have been endowed with books, enrichment “programs” and, above all, educated parents – which I have – but the chance of it all. There is nothing distinctive about my application, and any objective respondent to my initial question would be lying if they answered otherwise. The stagnation of life for 98% of America works so deeply in my favor that educational programs alone, or whatever, can’t explain it.

Now let me answer Nate Silver’s question. There is literally no chance that I will have problems making ends meet. I, first, have an entitlement well beyond what the government provides the poor, which is the promise that if luck and God conspire against me my parents will help out. But more than that, I know for a damned fact that people in the top quintile of America rarely ever fall out. And what of the quintile within the quintile?

And I am willing to bet – at good odds – that if Nate Silver took me to a random high school and, instead of giving me a bunch of SAT scores, gave me parental occupation (as a proxy for income), I would make money – in a market where everyone else was flipping a coin. A lot of it. I am willing to bet, for example, at 300:1 odds that the son of a Yale-educated lawyer and a Berkeley-educated computer scientist (not me, by the way, but who cares, same point) will never, except by his own choice, earn less than $50,000 a year. I’ll take 150:1 odds that he earns more than $100,000. And if I were betting against a random market, I’d make ten times as much in a tenth of the time. [In retrospect: I would not place 300:1 odds on the second bet, that is just me lying. I might place 10:1 odds, on the bet that inequality is getting worse. Regardless, this isn’t the point as much as the chance that someone in the top quintile will fall to the median, especially near the top 5%. There I am willing to place very high odds.]

This is bad for all the reasons evident from moral philosophy and basic humanity. But it has a deeper consequence. It engenders deep rot and complacence among those who benefit. My dad’s professor once told him, “if your GPA falls below 3.8 [voiding your fellowship], don’t ask me for any leeway; the only thing I can give you is a free ride to the airport”. That is the stress under which people without any money study and work. Do you have any doubt that I, in that position today, would work a tenth as hard? (Not to mention that both my parents worked jobs at odd hours, while I am more likely to be sleeping or studying in comfort.)

I would never will it upon myself, but I wish the politicians would. Say I believed there’s a 50% chance that I would earn less than $40,000 a year. That’s a high chance and would be a big drop from [the latter part of] my upbringing. I would work hard as hell to avoid that. People talk too much about inequality without the economic rot emerging from predictability: too much signal, and too little noise.

If you think that complacence doesn’t affect productivity, riddle me this. Each person has a normal distribution of their earning potential. The mean is the fundamentals – IQ, attractiveness, height, charm, parental income – and the variance the uncertainty thereof. Now, be careful in interpreting this: it is not the aggregate normal distribution. That means richer people may have just as high, or even higher, variance – just from a much higher mean.

There is also reason to believe that the distribution is log-normal at the tail, implying a right skew. This is because income has a zero lower bound, but a more or less infinite upper bound (Bill Gates is effectively, if not technically, “arbitrarily” rich). Even if you believe, too optimistically, that the median American has a mean chance of success – that is, a 1% chance of earning more than $350,000 – you would still see rot in the system. Because the log-normal distribution has a right skew, the 1% quantile for someone in the 1% should be higher than the .01% quantile of the country as a whole. How much higher depends on your variance. But my odds would be lower than this statistical sample. By the way, if this weren’t the case, inequality would be higher. I call that good inequality. You could measure this, if you had a way of forming a good prior, as the difference between potential variance and observed variance. Of course potential is too hard to define!
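The skew claim is easy to see by simulation. A minimal sketch (the location and scale parameters are invented, not calibrated to any income data):

```python
import math
import random

random.seed(0)
# Log-normal "earning potential": exp of a normal draw, so incomes are
# bounded below by zero but stretch far out to the right.
draws = sorted(math.exp(random.gauss(11.0, 0.8)) for _ in range(100_000))

median = draws[len(draws) // 2]
mean = sum(draws) / len(draws)
p99 = draws[int(0.99 * len(draws))]

assert mean > median     # right skew: the mean sits above the median
assert p99 > 3 * median  # and the upper tail is long
```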

Clearly the more accurately I can predict your income, the sadder meritocratic conditions are. You might put it another way: we want entropy in our socioeconomic system. We want noise. Something called the Theil Index of inequality does just this. As I explained earlier:

The math behind the measure (between 0 and 1) requires a fair understanding of information theory but the idea is lower index implies a higher economic “entropy”.

Your physics teacher might tell you that this is a bad thing (heat death and all) but, economically, it’s a little more complex. As Boltzmann showed, entropy increases as the predictability of an event decreases. This means the entropy of a fair coin is higher than that of a biased one. Similarly, in a very equal economy it is very difficult to distinguish between two earners based only on their income; indeed, in a perfectly equal society this is impossible. However, as society stratifies itself, knowledge of one’s income conveys far more information (redundancy), thereby decreasing entropy.

Within a system, Theil makes it easy for econometricians to understand the amount of total inequality due to within-group inequality and across-group inequality. If this is a little hard to grasp, think about it this way. If the total differences in economic output remained constant between countries (that is, India is still poor and Norway rich) but income was equally distributed within each country the residual inequality would be the “across-country” inequality. The residual from the converse, where all countries remain as unequal as they were, but world economic output is distributed equally to countries (not people), represents the “within-country” inequality.
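The decomposition described above is mechanical enough to sketch in a few lines of Python (the incomes below are invented; the point is only the within-plus-across identity):

```python
import math

def theil(incomes):
    # Theil T index: average of (x/mu) * ln(x/mu).
    mu = sum(incomes) / len(incomes)
    return sum((x / mu) * math.log(x / mu) for x in incomes) / len(incomes)

def decompose(groups):
    """Split total Theil inequality into (within, across) components
    for a list of groups (e.g. countries) of individual incomes."""
    everyone = [x for g in groups for x in g]
    n, mu = len(everyone), sum(everyone) / len(everyone)
    within = across = 0.0
    for g in groups:
        mu_g = sum(g) / len(g)
        weight = (len(g) / n) * (mu_g / mu)  # group's share of total income
        within += weight * theil(g)
        across += weight * math.log(mu_g / mu)
    return within, across

# Hypothetical poor and rich "countries":
groups = [[20, 30, 25], [100, 150, 120]]
w, a = decompose(groups)
assert abs((w + a) - theil([x for g in groups for x in g])) < 1e-9
```

Equalizing incomes within each group zeroes the first term, and equalizing the group means zeroes the second – matching the two residuals described in the paragraph above.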

It’s not the same as what I describe, but the intuitive similarities are clear. My bluster in making such strong bets on future income – proved through college admission – is a sign of a rotting economic system.

I’ll end with a story of post-Revolution America. Divorced from British classism of lords and whatever the hell else you see on Downton Abbey, suddenly candlemakers and peasants felt good about working hard. Because class was determined by income, and gentrified Americans were suddenly at risk of losing it all. As it ought to have been. Entropy was high in America, and low across the pond.

That vibrancy crumbles every time someone like me is unfairly admitted into an institution of choice. It crumbles as the future distribution becomes so certain that no one would believe I may end up a fry cook at McDonald’s. Part of the evil in this is that someone has to be the fry cook at McDonald’s (or do some other shitty job, if robots take over). And because it’s not me – or anyone in the top 30% of American society – the risk falls disproportionately on the bottom 30%. The rot does not come from lack of upward mobility. It comes from lack of downward mobility. I want John Boehner to make a big bet on my future potential. And I dare him not to change it once I give him my background. Because that’s the kind of society he thinks we live in.

By the way, it should come as no surprise that after 1776 American productivity shot well above Britain’s and the rest is history.