Archive

Monthly Archives: July 2013

Update: I just Tweeted: “Ok seriously “bets reveal beliefs” is a belief. And a pretty strong one too. I don’t believe you believe that. Reveal it. Now!” The impossibility of proving you believe that bets reveal beliefs sounds like a pretty good argument against expecting people to bet to reveal their beliefs: in such a world, you can’t prove your own meta-belief that bets reveal beliefs. Or you make an exception for that one belief, and it’s turtles all the way down.

Since there are about fifty recent posts on wagers – each some permutation of “bet”, “belief”, and “portfolio” – I have no creative title for this post! The debate is mostly an outgrowth of a conversation among GMU economists (Tyler Cowen seems to be outnumbered) on whether bets reveal an individual’s beliefs and whether marking beliefs to market makes for higher-quality debate. Cowen doesn’t want to be locked into any particular viewpoint and sees bets working against a Bayesian updating of one’s beliefs. Alex Tabarrok, quite simply, thinks “bets are a tax on bullshit”. I have a few points to add.

Tabarrok, a libertarian economist, should perhaps consider the deadweight loss from such a tax. If the academic environment respected only predictions and models on which the designer is willing to wager a not-insignificant sum, we would naturally see a fall in the supply of predictions. (Note, I’m not against bets in general, and I think they can play a huge theoretical role in economics; I object only to requiring them as the price of respect.) This might be a good thing, but I wonder: are he and other academics so bad at weeding out the crappy predictions? And what if the tax weeds out something truly cool and fascinating?

For one, it would put poorer academics at an unfair competitive disadvantage. A rich economist like Paul Krugman can easily place bets at rather high odds without meaningful risk. As I commented earlier:

Here’s the problem with thinking that bets reveal one’s true beliefs. Let’s say that I’m a prominent economist, and my career and theory have been premised on predicting significant hyperinflation for years now. Let’s say I take a 3:1 bet that inflation will remain below 5%. (I’m being generous in allowing even such people to bet against hyperinflation at all, just at far lower odds than rational people like Noah Smith, who take 75:1.)

Is it 3 cents to 1? Or 3 thousand dollars to 1? I imagine the prestige and pride in making such a bet outweigh – for such top economists – the cost of losing it. That is, I think people like Niall Ferguson or Paul Krugman would be indifferent within an order of magnitude, except at very high values. (For example, if PK loses a 50:1 bet on a thousand dollars, what’s the worst that happens… he has to give another speech?)

Therefore, only at extreme levels of confidence *and* high bet values (with a bit of leniency between the two) can a bet actually reveal true preferences. Otherwise, if I were NF, for example, I might take a much bigger bet just to make a statement, because what’s the difference anyway?

Therefore, to the extent there is prestige in making the bet itself, we will experience “prediction market failure”. More importantly, the Krugmans and Cowens will find it a lot easier to advance their theories among fellow academics than a newcomer will. Alex Tabarrok’s “tax” – so much for his own market-oriented beliefs! – now deters competition and supports a monopoly of academic thought.

Of course, this is very unlikely to happen, because academics (Tabarrok included) will pay attention to new theories. But that calls into serious question the claim that bets are a tax on bullshit. Tabarrok, Caplan, and company naturally want us to place higher respect on theories in which the theoretician has a financial stake. Further, if academic respect derives from bets, that will further tangle market signals, as the value of a bet includes not only its discounted future return but also the social and professional value of having made such a bet. If this were the case, academics would consistently bet at irrational odds in favor of their theories, and the market, knowing this, would just constantly short everything and earn a steady profit. (I really want to know whether, if Paul Krugman bet $50,000 on a Keynesian model, they would take him more seriously – especially knowing that costs all of one speech for the man!)
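To make the short-everything intuition concrete, here is a minimal sketch – the numbers and the prediction-market framing are my own assumptions, not anyone’s actual bet. If prestige inflates the odds an academic is willing to offer beyond what his true beliefs warrant, whoever takes the other side earns a positive expected profit on every wager:

```python
# A prediction-market framing of prestige-driven betting. The academic buys
# "my theory is right" shares at price q (his stated confidence); each share
# pays 1 if the event occurs. If prestige pushes q above the true probability
# p, the counterparty pockets q - p per share in expectation.

def counterparty_ev(p_true: float, q_stated: float, shares: float = 1.0) -> float:
    """Expected profit for whoever takes the other side of the bet."""
    return shares * (q_stated - p_true)

# Hypothetical numbers: a model implies a 20% chance of hyperinflation, but
# the prestige of the wager inflates the stated confidence to 50%.
print(counterparty_ev(p_true=0.2, q_stated=0.5))  # 0.3 per share, on average
```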

There is, however, an important and under-mentioned value to a bet. There are two layers on top of belief. The pro-bet economists argue that a bet aligns what I say (the topmost layer) not with what I believe, necessarily, but with what I think I believe. That is, they think that hyperinflationistas may be overselling their own confidence in their models, and that a bet would decrease the wedge. (This requires that the only value from a bet be the wager itself, but that’s another story.) But there’s another gap, between what I think I believe and what I actually believe.

Here’s a bet I’m willing to make. If you asked me what I would bet on a series of various events and then, at an independent time, asked me to actually bet with my own money, my answers would be different. It’s not that I’m lying the first time; it’s just that it is impossible to know one’s actions until they are revealed – as much to the market as to the self. For example, in theory I would buy a $20,000 watch on sale for $1,000 – it’s entirely rational. I am less sure that this would actually happen given the choice. (There’s an epistemological impossibility in explaining this about myself, because self-doubt invalidates the initial prior. It’s like saying “I’m not as smart as I think I am”, which, by definition, means I don’t think I’m that smart. But, just like this oxymoronic phrase, the example conveys my point.)

A bet forces me to reveal my belief to myself. Therefore, within small groups of academic friends, there might be value to making bets, as it compels a greater degree of introspection. A final note: I also think it’s important to demarcate the current advocacy for betting as a way to reveal actual beliefs from support for prediction markets. Like Cowen, I’m skeptical of the former; but while prediction markets really are just “bets”, they serve the far more useful purpose of aggregating information and hence are not really comparable.

My point? I’m somewhat skeptical that Bryan Caplan and Alex Tabarrok know their own hidden priors on betting. Because if they really respected the gambler more than the prude, they would be placing a tax not just on bullshit, but on intellectual discovery and thought itself. In other words, their position on betting seems at odds with their otherwise libertarian ideology.

If you follow basic news you have, no doubt, heard prognosticators speak of the “long”, “medium”, and “short” run possibilities for our economy. Theoretically, the “long run” is an abstract time period in which there are no fixed factors of production. Or, in other words, the time period over which a theoretically competitive firm will not earn supernormal profit. The short run, then, is such a time period over which at least one factor of production is fixed. Macroeconomists might say the long run is that over which expectations have adjusted to the fundamental state of the economy. I suggest a better heuristic: sensitivity to time-dependent forecasting. But first, background.

In practice, no one can actually observe such conceptual definitions. So when an “economist” enlightens you about his opinion of the future, “short run” is now, and “long run” is later. Some economists might further specify their belief by constraining the short run to factors governed by demand, and the long run to those governed by supply. While this is a conceptually more useful approximation, economists will dispute ad infinitum the existence of a shortfall. Indeed, the question and debate become largely tautological, as measures of demand shortfall – like the output gap – are measured against a historical “trend” growth rate.

As an intellectual framework, statistical estimates are unsatisfying, as they fail to capture the tectonics that moderate a dynamic economy. For example, an econometrician may note that a significant number of peak-to-trough episodes lasted fewer than n years. But do we believe there is something “real” about n? Not really.

On the one hand, there is the purely theoretical, and hence rather useless. On the other, we have either ad hoc definitions designed more to suit a particular economist’s own pet beliefs than to capture the business cycle, or impure statistical estimates.

However, what if we – for all intents and purposes – define the long run as “the point at which future forecasts will not affect current consumption and investment decisions”? Let’s conduct a thought experiment. Let’s say I’m an economic forecaster and, by some stroke of dumb luck, the market actually trusts what I have to say. If I produce a report that promises, with high confidence, booming growth next year, the result is obvious. Purchasing managers, entrepreneurs, and investors will suddenly update their confidence about demand tomorrow and hence increase their investment today. Au contraire, if I suggest a high likelihood of recession next year, the market will update its confidence negatively, decreasing investment today.

While I’m using this example as a thought experiment for another point altogether, it’s important to note that this shows precisely why “recession predicting” is an idiot’s game (to the extent you want people to believe what you have to say). A forecast of the future is self-fulfilling today. It is epistemologically impossible to have a good forecast that is also credible insofar as recessions are concerned.

Back to the experiment. What if I, magically, produced a forecast at the same confidence level for the economy fifteen years from today? The reaction to my report – whichever way – would be dampened. I’m not sure to what extent, but few of us would expect this sort of forecast to have much effect. This is not trivial, especially if you manage to convince yourself (it’s hard) that the market places the same Bayesian likelihood (“trust”) in this report as in my short-term prediction. You find it hard to convince yourself of this, of course, because uncertainty is ipso facto correlated with the length of the forecast.

A longer forecast fails to elicit the same energy from the market for the following theoretically-sound reasons:

  • If my forecast tells you little about the interim between the periods (that, after all, is the point of the thought experiment), then the longer the window, the greater the chance of an intervening recession.
  • Without knowledge of the path to the future, capital depreciation makes immediate investments unprofitable.
  • If my forecast is ten years out, there is every reason to wait nine years, on the expectation that my forecast will improve with better information.

Of course, there will be some activity which derives from something economic theory has a harder time explaining: investments take time to build. If I expect a huge increase in energy demand in ten years, I might invest in a nuclear energy plant now, as it would take about as long to build. This voids the above set of uncertainties by evaporating the relevance of the interim.

Now consider something we might call a “forecast yield curve”. This measures the market’s response (y-axis) to a given indicator (stock returns, consumer confidence, job creation, etc.) across forecast horizons (x-axis). The response is of course a qualitative feature, but it may be rather easily quantified using a variety of indicators such as the immediate change in stock prices or the purchasing managers’ index.

The response will be very high if the forecast is one year out, and will diminish – in some form, I do not know which – over time. The long run is then defined as the point at which the response becomes insignificant. This is not easy to measure, especially in a methodologically sound way. However it is, theoretically, possible to measure, unlike the rigidity of various “factors of production”, which is an entirely epicyclical (that is, tautological) phenomenon. The latter makes great sense in explaining and capturing the idea of market dynamics, but is less useful when married with real numbers.
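As a rough sketch of how such a curve might be estimated – every number and the exponential functional form below are my own assumptions, not measured facts – fit a decay curve to the observed responses at each horizon and read off where the response falls below some insignificance threshold:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: market response (say, % move in an equity index on the
# day a credible forecast is released) by forecast horizon in years.
horizons = np.array([1, 2, 3, 5, 8, 10, 15], dtype=float)
responses = np.array([2.1, 1.6, 1.1, 0.6, 0.3, 0.2, 0.05])

def decay(t, a, k):
    """Assumed exponential form of the forecast-sensitivity curve."""
    return a * np.exp(-k * t)

(a, k), _ = curve_fit(decay, horizons, responses, p0=(2.0, 0.3))

# Define the "long run" as the horizon where the fitted response drops
# below an (arbitrary) insignificance threshold.
threshold = 0.1
long_run = np.log(a / threshold) / k
print(f"Estimated long run: {long_run:.1f} years")
```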

A slight modification of this definition is very much in tune with some currently used interpretations of “short” and “long”, but removes the ad hoc nature thereof. Let’s say the forecast purports to guess the demand-side constraints only – say, by proxy, nominal consumer expenditure. (We assume that all long-run price adjustments are structural in nature.) The time at which the current forecast becomes irrelevant is then, by nature, the point at which the market believes the supply side will dominate the demand side. Therefore, the demand-forecast sensitivity curve would provide a good idea of when the market expects supply to dominate demand – which many economists would ascribe to long-run superneutrality.

The intellectual benefit from this sort of definition is its ability to be measured, to say nothing of the many statistical and logistical challenges that will follow! I see several rough approximations:

  • Ask purchasing managers what they would purchase given a future indicator, and vary that by time. This is a direct approximation, and perhaps the most logistically sound. It is vulnerable, however, to investors’ inability to know what they would do. (Which violates rationality, but that’s another story).
  • Note market reactions to various government reports and observe sensitivity to time (like the need to invest in more green jobs in ten years, etc.). This suffers quite a bit from: a) the inability to control the “indicator” (specific government policy) and b) an extremely small sample size.

The flaws associated with the first method – i.e. the chance that investors do not know their own true beliefs – are more fixable, and are not at all a theoretical challenge. Further, there is reason to believe such errors are systematic and hence would affect only the level, and not the slope, of the sensitivity curve. Ultimately, in calculating the end of the short run, it is the slope that matters. Moreover, many such biases will cancel out over a large sample size, and hence the aggregated curve – weighted by investment value – should be an important indicator.
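A toy simulation of why a systematic bias is tolerable – assuming, as I do here, that the bias is additive and roughly constant across horizons – shows that it shifts the curve’s level while leaving its slope essentially untouched:

```python
import numpy as np

rng = np.random.default_rng(0)
horizons = np.arange(1, 11, dtype=float)
true_response = 3.0 - 0.25 * horizons      # assumed (linear) sensitivity curve
bias = 0.8                                 # constant overstatement by respondents
reported = true_response + bias + rng.normal(0, 0.05, horizons.size)

true_slope = np.polyfit(horizons, true_response, 1)[0]
reported_slope = np.polyfit(horizons, reported, 1)[0]
print(true_slope, reported_slope)  # nearly identical: bias moves level, not slope
```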

If nothing else, it is more specific than what we have today. Conducting this type of survey would also provide extremely useful insight into the dynamics of economic structure. We’ve heard some say that the “short run is getting longer”. If the magnitude of the downward slope is falling, that would lend evidence to this argument. As of now, we have little reason to believe one story or the other. It’s about time we got a more precise and observable handle on definitions crucial to economics. The long and short of it all is that practitioners today love manipulating these definitions to serve their pet theories.

I started this blog late last November, and I’ve been seriously involved since early March. It’s been four months and a great experience. I wanted to share my thoughts, but also thank the people that have got me where I am today: if not where I want to be, well ahead of where I started.

I’ll start with where I am. A very small number of my posts command almost all my visitors: the most-read 10% of posts hold over 70% of all the page views. This is not surprising considering the power-law distribution of views by rank (logarithmic scale):

[Figure: page views by post rank, logarithmic scale]

My most popular post (perhaps unfortunately) is a recent review of Niall Ferguson’s The Great Degeneration with over 22% of the pie. More encouragingly, one of my favorite posts, an outline for a Ricardian tax reform, comes in second at 13%.

Network effects are real, and at this point I should thank the people that helped a few of my posts go as far as they did. Other than Twitter, which dominates in links to my blog but unfortunately would be a nightmare to untangle, Tyler Cowen and especially Brad DeLong really helped lift my blog off the ground. I really am a better blogger, and perhaps quasi-economist, with their help: because of their attention (together bringing in over a fifth of my total views!), but certainly also because of the content on their respective blogs. As far as journalists go, Slate’s Matt Yglesias and the Washington Post’s Dylan Matthews helped push some of my favorite writing forward as well. And while one blogger at Slate can bring me thousands of hits, nothing has helped me more than Twitter. While tweets from popular journalists like Matt O’Brien help tons, the consistent support from loyal followers without many followers of their own goes even farther. I can’t go into further depth without an expensive Twitter analytics package, but know this isn’t empty gratitude.

The most useful help doesn’t come from links at all. When I started blogging, Evan Soltas was kind enough to give me a rather detailed guide to the blogosphere. In real life I occasionally impress some people with my age, but Evan seems to be the real wunderkind, and his advice brought me a long way. He was happy to let me post it on here, and here’s a part:

The best advice I can give you is that doing what other people are doing (especially when your sample is of the top bloggers) is a good way to never get to the top. By this I mean short posts with long excerpts and your brief commentary, or other things like that. This sounds like counterintuitive advice, because it’s natural to imitate succes[s]. But the correct reasoning, at least so far as I’ve been able to tell, runs in the other direction. If other successful people are doing it, then it’s probably in their comparative advantage but not yours, and that they already satiate the market for readers in those areas. Most importantly, their advantage comes from prior traffic. You don’t have that, nor did I.

That’s the negative reasoning side. Here are some positive implications.

(1) You need to do original work. Dig around FRED, OMB, CBO, Eurostat, NIPA, Google Finance, etc. Make graphs. Analyze.

(2) You are better off writing specific/detailed pieces than general overviews. This will require you to be better at and highly knowledgeable in some area of the field than your competition: for me, that’s monetary policy, fiscal policy, and macroeconomic theory. You should pick what interests you.

(3) You are better off disagreeing constructively or reconciling contrary views than echoing a consensus. You cannot afford to say what everyone else could say.

(4) You need to write frequently. This is because you don’t have a natural traffic base and presumably want to build one rather than rely on single-time visitors. I wrote basically every single day from January to July of 2012 before I was hired by The Washington Post and Bloomberg. I still average 7 pieces a week (two in Bloomberg, five in The Washington Post).

(5) You shouldn’t be partisan. Partisan is predictable, and new voices are most interesting when they are objective or balanced. By balanced, I don’t mean shallowly centrist, but rather thoughtful to both sides, even if one takes a side. […] This thoughtful market is not satiated, trust me.

Not surprisingly, I think this is one of the most valuable things I’ve done. Perhaps to your surprise, I didn’t think I would be writing about economics much, and I certainly didn’t think I had the competency for it. I was clearly brutally wrong about the first point, and definitely hope I am about the second one, too. While I’m not taking any economics classes first semester in college (engineering schedules have [un?]fortunately strict requirements), the amount of math, modeling, and reading I’ve done to keep up has made me a sharper thinker – and hopefully writer! – on many fronts.

There’s an excitement to knowing your work may be read by your favorite thinkers, and hence an associated pressure. I’ve burned many more articles than I’ve posted, forced to rethink just to make sure I don’t submit anything shoddy. That filter slips, sometimes, and this rather confused reaction to the trade deficit is a good example. All in all, I’ve never been compelled to think as quickly but thoroughly as I do now – I look forward to joining a debate club later on!

I do want to increase the scope of my blog, potentially to computer science, which has been a pet interest of mine for a while (and is key to my declared major). I think economics has a naturally lower barrier to discussion – which leads to a lot of bullshit commentary – and I hope that hasn’t been the case as far as my writing goes, though that isn’t for me to judge. I’ve found blogging about both computer science and economics to be natural, as in this post about the EMH, and I only hope I can get even better as I learn more formal logic.

However, my blog has been mostly about economics, which leads me to my final note of thanks: to Ms. Indrani Verma, who was my absolutely fantastic high school economics teacher. From her conviction to make it to school and be the best damn teacher possible – every day – you would never guess that she is battling cancer. While my blog may be more frequented by college, rather than high school, educators, I know every Economics 101 professor wishes their students had taken her class.

I’m not going to thank my parents because, well, that would just be predictable! (And they know it too).

I’m currently reading, and almost through, The Signal and the Noise by Nate Silver. I’ll probably write more about my personal takeaway after I’ve had a chance to finish and think, but one particular phrase in the third chapter struck me:

Imagine you walked into an average high school classroom, got to observe the students for a few days, and were asked to predict which of them would become doctors, lawyers, and entrepreneurs, and which ones would struggle to make ends meet. I suppose you could look at their grades and SAT scores and who seemed to have more friends, but you’d have to make some pretty wild guesses.

Now, Silver was trying to explain the difficulty of prediction, though this is unfortunately not a great example. A question popped up in my mind: am I where I am today only because of inequality? That is, how lucky am I that life sucks for everyone else? (Part of this post is simply chronicling my thoughts for future reference – feel free to skip if uninterested.)

It’s a difficult question to define quantitatively but, I think, the answer is an overwhelming “yes”. You can define “inequality” quantitatively with a whole host of indicators. Gini is obviously the most common one. And, as I’ve shown (to little surprise), it tracks a (slightly modified) difference between mean and median incomes extremely well. The most sophisticated indicator is the Theil index, and I’ll get back to that later in this post – its derivation actually flows well from our intuition of inequality: better than Gini, at least.

I’m too young for “where I am today” to be defined well in any objective context like income, so I’ll use my admission to a selective university as a proxy. The admissions committee has certainly aggregated a bunch of information, and their “stamp” signals a fair amount to a clueless onlooker (and, honestly, that’s probably what I’m paying for). You don’t have to believe in the importance of prestigious universities, or whatever, for this to be a fair definition. You only have to accept that there is at least a correlation between attending a good university and what society might call “success”.

Now, read this damning document from the Brookings Institution, titled “The Missing ‘One-Offs’: The Hidden Supply of High-Achieving, Low-Income Students”, which says:

We show that the vast majority of very high-achieving students who are low-income do not apply to any selective college or university. This is despite the fact that selective institutions would often cost them less, owing to generous financial aid, than the resource-poor two-year and non-selective four-year institutions to which they actually apply. Moreover, high-achieving, low-income students who do apply to selective institutions are admitted and graduate at high rates. We demonstrate that these low-income students’ application behavior differs greatly from that of their high-income counterparts who have similar achievement.

Also, let’s note that, other things equal, a low-income student has a better chance of acceptance than I do. They are more likely to be black or Latino, and colleges favor socioeconomic diversity: both shift the admissions result fairly in their favor.

Finally, know that aside from exceptional circumstances, a low-income student – by official university policy – will be given enough money to ensure academic success, along with preference for more comfortable on-campus jobs and the guarantee that this will not affect chances of admission. Financial aid is offered to families who would in no other context be considered “needy”; for example, some well-endowed colleges help families earning well into the six-figure range.

You will read many admissions counselors say something along the lines of “we expect that nearly 60% of the students admitted to the class of 2017 will need financial aid”. Usually at the information sessions where such statements are made there will be a knowing nod admiring such generous policy. And it is generous. But what should strike you – as it does me – is that the 40% of people who clearly do not need financial aid come from maybe 2% of the country.

That means that in a country of approximately equal opportunity, the total number of applicants would be much larger. By simple calculation you can see that if the whole country applied to top colleges at the rate the top 2% do, the admitted class – at the same admission rate – would be 20x larger. Clearly impossible. That means my little-lower-than-10% acceptance rate would translate to about 0.5%.
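Spelled out (the 40% and 2% figures come from the paragraphs above; the rest is arithmetic):

```latex
% 40% of the admitted class comes from 2% of the country, so that slice is
% over-represented per capita by a factor of
\frac{0.40}{0.02} = 20.
% If the other 98% applied at the same per-capita rate, the applicant pool
% (and the class, at a fixed admission rate) would scale by that factor.
% Holding the class fixed instead, the acceptance rate falls twenty-fold:
\frac{10\%}{20} \approx 0.5\%.
```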

Now go back to my question: “What are the chances that, if not for inequality, I would be where I am today?” To answer this, all you need to do is answer the question “Do you think that if the admissions officers took their pool of admitted students, cut it in half, then cut it in half again, and then again, and lopped off 25% of that for fun, your application would remain in the pool?”

I would have to be supremely arrogant to think this would be the case. The chance of surviving those cuts is, of course, of the same slim order as my admission in the first place – and I would need both. Indeed, my chance at this is worse than a Gallup poll over a year in advance of a primary election. None of this even accounts for the fact that each successive halving puts me against increasingly competitive students, or for the preference for minorities.

And so my university admission is overwhelmingly a consequence of inequality. It is not because I have been endowed with books, enrichment “programs” and, above all, educated parents – which I have been – but because of the chance of it all. There is nothing distinctive about my application, and any objective respondent to my initial question would be lying if they answered otherwise. The stagnation of life for 98% of America works so deeply in my favor that educational programs alone, or whatever, can’t explain it.

Now let me answer Nate Silver’s question. There is literally no chance that I will have problems making ends meet. I, first, have an entitlement well beyond what the government provides the poor, which is the promise that if luck and God conspire against me, my parents will help out. But more than that, I know for a damned fact that people in the top quintile of America rarely ever fall out of it. And what of that quintile within the quintile?

And I am willing to bet – at good odds – that if Nate Silver took me to a random high school and, instead of giving me a bunch of SAT scores, gave me parental occupation (as a proxy for income), I would make money – in a market where everyone else was flipping a coin. A lot of it. I am willing to bet, for example, at 300:1 odds that the son of a Yale-educated lawyer and a Berkeley-educated computer scientist (not me, by the way, but who cares, same point) will never, except by his own choice, earn less than $50,000 a year. I’ll take 150:1 odds that he earns more than $100,000. And if I were betting against a random market, I’d make ten times as much in a tenth of the time. [In retrospect: I would not place 300:1 odds on the second bet; that is just me lying. I might place 10:1 odds, on the bet that inequality is getting worse. Regardless, this isn’t the point so much as the chance that someone in the top quintile will fall to the median, especially near the top 5%. There I am willing to place very high odds.]

This is bad for all the reasons evident from moral philosophy and basic humanity. But it has a deeper consequence: it engenders deep rot and complacence among those who benefit. My dad’s professor once told him, “if your GPA falls below 3.8 [voiding your fellowship], don’t ask me for any leeway; the only thing I can give you is a free ride to the airport”. That is the stress under which people without any money study and work. Do you have any doubt that I, in that position today, would work a tenth as hard? (Not to mention the fact that both my parents worked jobs at odd hours when I am more likely to be sleeping or studying in comfort.)

I would never will it upon myself, but I wish the politicians would. Say I believed there was a 50% chance that I would earn less than $40,000 a year. That’s a high chance, and it would be a big drop from [the latter part of] my upbringing. I would work hard as hell to avoid it. People talk too much about inequality without the economic rot that emerges from predictability: too much signal, and too little noise.

If you think that complacence doesn’t affect productivity, riddle me this. Each person has a normal distribution of their earning potential. The mean will be the fundamentals – IQ, attractiveness, height, charm, parental income – and the variance the uncertainty thereof. Now, be careful in interpreting this: it is not the aggregate normal distribution. That means richer people may have just as high, or even higher, variance – just around a much higher mean.

There is also reason to believe that the distribution is log-normal at the tail, implying a right skew. This is because income has a zero lower bound, but a more or less infinite upper bound (Bill Gates is effectively, if not technically, “arbitrarily” rich). Even if you believe, too optimistically, that the median American has a mean chance of success – that is, he has a 1% chance of earning more than $350,000 – you would still see rot in the system. Because the log-normal distribution has a right skew, the top-1% outcome for someone in the 1% should be higher than the top-.01% cutoff for the country as a whole. How much higher depends on your variance. But my odds would be lower than this statistical sample suggests. By the way, if this weren’t the case, inequality would be higher. I call that good inequality. You could measure this, if you had a way of forming a good prior, as the difference between potential variance and observed variance. Of course, potential is too hard to define!
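Here’s a toy numerical version of that claim – all the parameters are invented for illustration, and I’m assuming the personal distribution has the same relative spread as the population’s:

```python
import numpy as np

rng = np.random.default_rng(1)

# Population incomes: log-normal, parameters invented for illustration.
population = rng.lognormal(mean=10.8, sigma=0.7, size=1_000_000)

# Someone "in the 1%": a personal log-normal distribution centered at the
# population's 99th percentile, with (assumed) the same relative variance.
p99 = np.percentile(population, 99)
personal = rng.lognormal(mean=np.log(p99), sigma=0.7, size=1_000_000)

top1_of_rich = np.percentile(personal, 99)        # his personal top-1% outcome
top001_of_all = np.percentile(population, 99.99)  # population's top-.01% cutoff
print(top1_of_rich > top001_of_all)               # True under these assumptions
```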

Clearly the more accurately I can predict your income, the sadder meritocratic conditions are. You might put it another way: we want entropy in our socioeconomic system. We want noise. Something called the Theil Index of inequality does just this. As I explained earlier:

The math behind the measure (bounded between 0 and ln N, for N earners) requires a fair understanding of information theory, but the idea is that a lower index implies a higher economic “entropy”.

Your physics teacher might tell you that this is a bad thing (heat death and all) but, economically, it’s a little more complex. As Shannon showed, entropy increases as the predictability of an event decreases. This means the entropy of a fair coin is higher than that of a biased one. Similarly, in a very equal economy it is very difficult to distinguish between two earners based only on their income. Indeed, in a perfectly equal society this is impossible. However, as society stratifies itself, knowledge of one’s income conveys far more information (redundancy), thereby decreasing entropy.

Within a system, Theil makes it easy for econometricians to decompose total inequality into within-group and across-group inequality. If this is a little hard to grasp, think about it this way. If the total differences in economic output remained constant between countries (that is, India is still poor and Norway rich) but income were equally distributed within each country, the residual inequality would be the “across-country” inequality. The residual from the converse, where all countries remain as unequal as they were, but world economic output is distributed equally across countries (not people), represents the “within-country” inequality.
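A minimal sketch of that decomposition, using the standard Theil T formula (the two “countries” and their incomes are made up):

```python
import numpy as np

def theil(x):
    """Theil T index: the mean of (x/mu) * ln(x/mu) across earners."""
    x = np.asarray(x, dtype=float)
    r = x / x.mean()
    return float(np.mean(r * np.log(r)))

# Made-up incomes for two "countries" (illustration only).
norway = np.array([48_000, 52_000, 55_000, 60_000.0])
india = np.array([2_000, 3_000, 9_000, 40_000.0])
world = np.concatenate([norway, india])

within = between = 0.0
for group in (norway, india):
    share = group.sum() / world.sum()        # group's share of world income
    within += share * theil(group)           # within-country component
    between += share * np.log(group.mean() / world.mean())  # across-country

assert np.isclose(theil(world), within + between)  # the decomposition is exact
print(f"total={theil(world):.3f}, within={within:.3f}, between={between:.3f}")
```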

It’s not the same as what I describe, but the intuitive similarities are clear. My bluster in making such strong bets on future income – proved through college admission – is a sign of a rotting economic system.

I’ll end with a story of post-Revolution America. Divorced from the British classism of lords and whatever the hell else you see on Downton Abbey, suddenly candlemakers and peasants felt good about working hard. Because class was determined by income, and gentrified Americans were suddenly at risk of losing it all. As it ought to have been. Entropy was high in America, and low across the pond.

That vibrancy crumbles every time someone like me is unfairly admitted into an institution of choice. It crumbles as the future distribution becomes so certain that no one would believe I may have to fry cook at McDonald’s. Part of the evil in this is that someone has to fry cook at McDonald’s (or do some other shitty job if robots take over). And because it’s not me – or anyone in the top 30% of American society – the risk falls disproportionately on the bottom 30%. The rot does not come from lack of upward mobility. It comes from lack of downward mobility. I want John Boehner to make a big bet on my future potential. And I dare him not to change it once I give him my background. Because that’s the kind of society he thinks we live in.

By the way, it should come as no surprise that after 1776 American productivity shot well above Britain’s, and the rest is history.