

the Economy: Home and the World

David Reich’s new book about the ancestral origins of human populations, and separately an MIT Technology Review article naming a website that essentially turns 23andme data into what looks like an IQ score, have rekindled perennial controversies, like the existence of a genetic basis for cognitive differences observed across populations.  The actual verisimilitude of that claim is beside the point and beyond my expertise; rather, this is a comment on what looks like the intractability of further progress in the debate, relevant to the people of divergent persuasions eagerly awaiting the flow of fresh genetic data.  This controversy more or less hinges on the propriety of psychometric definitions of intelligence.  Critics contend that these measure some joint function of both intelligence and environment.  That would preclude agreement about the phenotypic definition of intelligence, thereby voiding common inference from future genetic data before we even start (just as genetic bases for anaerobic performance cannot be identified from the speed of a paraplegic’s sprint).

One might hypothesize that variations associated with things like intelligence also hold within populations, which would be enough to make the point if population is a hidden factor governing the aforementioned “environment”.  Obviously I’m no expert, but it seems this would curtail the explanatory power of an “attenuated” polygenic score of intelligence, since the genetic drivers of intelligence might vary between groups.  For example, a single SNP that accounts for approximately 30 percent of the phenotypic variation in skin tone between Europeans and West Africans explains a similar share of that variation across South India, but not among East Asians.  In other words, East Asians and Europeans are light-skinned for different reasons.  Therefore, anything short of parity in the measured amount of intelligence for people of two different populations that exhibit the same subset of variations flagged in an association study may well fail to reject other factors to the satisfaction of the skeptics who suggest them.  I don’t see why the controversy won’t retain the same character it has had for the past few decades: disagreement about the extent of bias that inheres within psychometrically-derived definitions of intelligence.  (A compact and accessible review of the genomic details was recently published.)
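The attenuation argument can be sketched numerically.  Below is a toy simulation (every number and effect size invented for illustration) in which a polygenic score estimated in one population loses essentially all predictive power in a second population whose causal variants differ:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 50  # individuals, SNPs

# Population A: phenotype driven by the first 25 SNPs (hypothetical effects)
geno_a = rng.integers(0, 3, size=(n, m))
true_beta_a = np.concatenate([rng.normal(0, 1, 25), np.zeros(25)])
pheno_a = geno_a @ true_beta_a + rng.normal(0, 5, n)

# Population B: same phenotype, but driven by the last 25 SNPs instead
geno_b = rng.integers(0, 3, size=(n, m))
true_beta_b = np.concatenate([np.zeros(25), rng.normal(0, 1, 25)])
pheno_b = geno_b @ true_beta_b + rng.normal(0, 5, n)

# "Association study" run only in A: marginal regression on each SNP
beta_hat = np.array([np.polyfit(geno_a[:, j], pheno_a, 1)[0] for j in range(m)])

# Polygenic score = allele counts weighted by A-derived estimates
score_a = geno_a @ beta_hat
score_b = geno_b @ beta_hat

print(np.corrcoef(score_a, pheno_a)[0, 1])  # substantial in A
print(np.corrcoef(score_b, pheno_b)[0, 1])  # near zero in B
```

The score transfers poorly not because B is "less genetic" but because the weights were fit to A's causal variants, which is the sense in which divergent genetic architecture attenuates cross-population inference.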

This is not to favor either position.  Economists have strong and divergent opinions about the efficiency of markets, even though the precise formulation of the problem boils down to a “joint hypothesis problem”, which according to Wikipedia “refers to the fact that testing for market efficiency [is difficult because] anomalous market returns may reflect market inefficiency, an inaccurate asset pricing model, or both.”  Likewise, this and other tangential impasses seem related to the fact that it can be effectively impossible to reject hypotheses that derive from a very rich and complex model of the world using tractable empirical methods, since doing so would involve controlling for the entirety of a subject’s upbringing and socialization.  Yet it is precisely such immeasurable factors, like contemporary defense of Confederate monuments, interactions current members of Congress have had with police officers, and “oppression, discrimination, and social resentment”, that feature prominently in the divergence between Ezra Klein’s and Sam Harris’s interpretations of the data.  It is all but impossible to find both a falsifiable hypothesis and an experiment you could conceivably execute given such a powerful hypothesis class.

This is not to disparage either position.  An alleged victory of 20th-century psychometrics was the belief that a human behavior as complex and versatile as intelligence – used to describe a range of pioneers from Shakespeare to Ramanujan to Einstein – could be parsimoniously characterized by aggregating population-normed performance on tests that were surprisingly simple to design and administer.  By contrast, there are others who believe intelligence is exactly as nuanced as the layman would imagine.  It seems plausible, in fact, that one could conclusively demonstrate that further inference on this particular question is intractable, though that would essentially assume the conclusion along the way.  This would in turn make it possible for those like Professor Reich to write about various genomic associations without causing controversy, so long as the work is restricted to genomic inquiry rather than claims about the sociological character of things.


A number of economic analyses focus on changes in ownership between real and financial assets.  By way of example, BlackRock wrote a report last year titled “The Ascent of Real Assets”, suggesting that investors are increasing their ownership of “real” things like land, infrastructure, and natural resources over “financial” things like stocks or derivatives.  The simplest dichotomy between real and financial assets I can find comes from Lasse Pedersen, who describes the former as things that “produce goods and services” and the latter as things that offer a claim on the former.  I think a conceptually meaningful difference between the two may be slightly more complicated.

The first point is that various liabilities are not measured, and the classification of what is measured as real or financial depends on the legal allocation of liability.  That allocation is less transparent if, for example, two partners pledge their personal wealth to the banker than if they incorporate, allowing that put to be realized in the traded price.  Even without debt, those who buy the goods and services sold may have some future claim on damages incurred, depending on the judicial system, and the value of these claims on the real good of “factory + management” isn’t included in measures of real versus financial assets.

Moreover, if the company is incorporated and traded, expectations of future growth are included in the price.  In this sense, the stock market is just an extension of the fractional-reserve nature of the banking system.  If there is a central bank that holds gold reserves as capital against all bank reserves, private banks may fund various partnerships through bank loans.  Assuming that the aggregate banking system does not decide to increase its reserve ratio, all real assets that have been bought may be sold if prices have not changed and new buyers can expect to operationalize the assets approximately as well as the previous owners.  But if most partnerships incorporate, the aggregate market capitalization will eventually reach a point where it cannot be materially liquidated at once without sparking something akin to a bank run.

Seeing this is simple.  Everything can only be sold for as much money as there is.  There is enough money to buy the original real assets, like factories and the like, so there must be enough money to resell what was bought, provided the banking system does not increase reserves held at the central bank.  If the market capitalization after incorporation increases by more than the amount of money created by the banking system, the difference is essentially funded by some entity acting like a zero-reserve bank.  Ultimately, therefore, it seems the entire dichotomy is a question of the unit in which central bank liabilities are denominated.  If central bank liabilities are only denominated in something the government can create, the people who hold the currency become the entity acting like a “zero-reserve bank”.
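A toy calculation (all figures invented for illustration) makes the liquidation argument concrete: with a fixed money multiplier, incorporation can push aggregate market capitalization past the total money in existence.

```python
# Hypothetical central bank and banking system
gold_reserves = 100          # central bank capital
reserve_ratio = 0.10         # fraction of deposits banks hold as reserves
money_supply = gold_reserves / reserve_ratio  # classic multiplier: 1,000

# Partnerships were funded with bank loans, so their purchase prices
# sum to less than the money supply; at cost, they could all be resold.
real_assets_at_cost = 900

# After incorporation, traded prices capitalize expected future growth.
growth_multiple = 3          # invented multiple on cost
market_cap = real_assets_at_cost * growth_multiple  # 2,700

print(money_supply, market_cap)
# Market cap now exceeds all money in existence, so a simultaneous sale
# could not clear at current prices; the excess is implicitly funded by
# holders collectively acting like a zero-reserve bank.
```

The point is not the particular numbers but that nothing in the banking system's balance sheet constrains the growth multiple, which is where the bank-run analogy comes from.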

Given this, I’m not sure why people occasionally try to classify changes in financial versus real asset ownership between countries or over time.  Much of it is just a reaction to unmeasured changes in the allocation of liabilities, like tort or bankruptcy laws.  If the purpose of a real asset is to somehow assert a claim on current production that is in some way guaranteed, its format depends on the liability structure of the central bank.  And it must in some way be guaranteed for it to be interesting, because otherwise it is no different from a financial asset: there is nothing curious about the fact that relative prices change.  Someone might buy from the government a claim on some portion of the fixed currency it creates, which would be analogous, in a previous era, to the ownership of gold or silver.  This truly “real asset” would be a “real liability” on the part of the government, and therefore everyone


This title is borrowed from Yanis Varoufakis’ polemic against European austerity.  I haven’t read very much of his book, and this post isn’t about European macroeconomic policy at all.  Rather, I want to consider public responsibility towards the poor more generally.  Just under 2 million children are homeless in America.  A good number more don’t eat well.  And yet even more lack access to quality healthcare or appropriate education.  Why aren’t we doing more?

One answer may be that the long quest to solve each of these problems through government intervention has desensitized individuals to their responsibility toward their fellow citizens.  The many words written each year about various healthcare, welfare, and education policies, whether liberal or conservative in nature, presume that it is not reasonable to expect that private individuals can or will collectively aid the weak and helpless.  Of course, there’s an inexorable logic to this belief: it’s hard to feel much responsibility towards the poor when the government swaps our moral obligation in exchange for much of our income.

And so it happens that when people see pictures of Syrian refugees or walk by homeless kids they will resolve to end this madness by voting for Bernie Sanders.  It must, after all, be true that you would have felt like an altruist after voting for Barack Obama if you felt that the rich people voting for Mitt Romney’s tax cuts were greedy.  It must also be true that many of the people who take to saying things like “People like me can afford to pay more in taxes” have an attenuated sense of obligation towards the poor and helpless.  They did pay their taxes, after all.

In early post-revolutionary America many roads that should not have been built were built.  They should not have been built because their investors rarely expected to recoup the cost of their investment.  They were built anyway because their many owners, not especially rich or established, felt that public improvement was a social and moral obligation.  This wasn’t an anomaly; a great many roads were successfully built and maintained on this model of private investment.

Wherever individual responsibility is abdicated to the government, the outcome will be judged not through the lens of individual morality but one of majoritarian politics.  So it will tautologically become true that the weak will suffer what they must because the primary channel of altruism becomes one of “vote for less suffering” rather than “reduce suffering yourself”, and the question of how much people must suffer is dictated by political currents more than individual virtue.

The antiwar left may have succeeded where the current one fails because individual resolve, by itself, was not enough to stop a violent superpower without political influence over the civil government that controlled it.  The same cannot be said about contemporary concerns of the left.

I recently explained why I thought price level estimates may not be as useful as the ideal price index of economic theory.  Although theoretical treatments of the price level do consider the quality adjustments that are necessary, a lot of technological innovation may be better thought of as a reduction in the price of some new good from infinity, rather than as an improvement in the general quality of an already-existing item.  Three problems immediately strike me as relevant:

(1) As an example of how revolutionary innovation presents a conceptual problem, consider that the leap from horse-carriages to automobiles could be included in the quality-adjusted price for transportation, albeit imprecisely.  On the other hand, the Internet allowed us to do things that were previously impossible; by the mid-1990s the price of consumer email had fallen from approximately infinity a decade earlier to the cost of a computer and dial-up service.  Similarly, the advent of modern home appliances provided a higher-quality substitute for full-time homemakers at a lower price than the unmeasured opportunity cost of having women work at home.

(2) In addition to conceptual problems such as the above, quality adjustments are too crude and may give a false sense of empirical precision.  A so-called hedonic adjustment for laptops might regress price against memory and display resolution to recognize innovation in computer technology that may not be evident in the price alone.  But in practice these adjustments can’t retain relevance for more than five or six years; for example, the relationship people have with their personal laptop has changed immeasurably over the past 20 years.
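For concreteness, here is a minimal sketch of the kind of hedonic adjustment described above, using made-up laptop data and ordinary least squares; the characteristics and prices are invented, not drawn from any official methodology:

```python
import numpy as np

# Hypothetical laptop data: price ($), memory (GB), display pixels (millions)
price  = np.array([800, 950, 1200, 1400, 1800, 2200], dtype=float)
memory = np.array([4, 8, 8, 16, 16, 32], dtype=float)
pixels = np.array([1.0, 1.0, 2.1, 2.1, 4.1, 4.1])

# Regress price on characteristics: price ~ b0 + b_mem*memory + b_pix*pixels
X = np.column_stack([np.ones_like(memory), memory, pixels])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
b0, b_mem, b_pix = coef

# Quality-adjusted comparison: a new model's price is judged against what
# its characteristics "should" cost, not against last year's sticker price.
new_price, new_memory, new_pixels = 1500, 32, 4.1
predicted = b0 + b_mem * new_memory + b_pix * new_pixels
print(f"implicit $/GB: {b_mem:.1f}, residual vs. predicted: {new_price - predicted:.1f}")
```

The fragility the text describes shows up here directly: the regression only makes sense while "memory" and "resolution" remain the characteristics people actually pay for.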

(3) There’s no good way to properly adjust for durable goods.  The Bureau of Labor Statistics presents this as a dilemma because economists think the price of durables increases more slowly than that of frequently-consumed goods.  A more conceptually intractable issue is that as the economic life of goods increases, the durable purchase is depreciated over a longer time: a special kind of quality improvement that isn’t measurable.

Each of these poses a big problem because they affect some goods and not others.  The most durable purchases, like houses and cars, have sufficiently developed rental markets that allow economists to discover imputed prices.  Laptops and phones may last longer now, though that would be hard to observe.  The result is an inconsistency that eventually washes out of the measured level but leaves a directional bias.  Let’s say laptop prices increased by 20 percent over 5 years, quality remained similar, but usable life increased from 5 to 15 years.  A personal-expenditures-based index would initially register an increase in the price level as people purchased the more expensive laptop.  However, as they replace laptops at a much lower rate than before, the relevant measured expenditure would indicate a decline in the price level.  After some time, the measured price level of the two laptops may even out, but so long as there is rapid innovation within an industry, the presented price level estimate carries a directional bias.
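The laptop example works out as follows; this is just the arithmetic from the text, assuming a hypothetical $1,000 starting price:

```python
# Figures from the example in the text; the $1,000 base price is assumed.
old_price, old_life = 1000, 5                # a laptop lasting 5 years
new_price, new_life = old_price * 1.20, 15   # 20% pricier, lasts 15 years

# An expenditure-based index first registers the sticker increase...
sticker_change = new_price / old_price - 1   # +20%

# ...but steady-state spending per year falls as replacement slows.
old_annual = old_price / old_life            # 200 per year
new_annual = new_price / new_life            # 80 per year
annual_change = new_annual / old_annual - 1  # -60%

print(f"sticker: {sticker_change:+.0%}, annualized expenditure: {annual_change:+.0%}")
# -> sticker: +20%, annualized expenditure: -60%
```

The same good thus reads as inflation on purchase and deflation thereafter, which is the self-correcting inconsistency, and directional bias, described above.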

Year-to-year changes in the price level may not have some of these problems, but they are also irrelevant when they are as low as they are.  It may be true that things that already exist tend to slowly increase in price over time; but it’s not clear to me why that’s relevant for long-term monetary policy decisions, since the measure doesn’t seem to tell you much that walking around a few retail stores would not.

Measures of inflation seem more useful to me as a benchmark price for common consumer items than as some estimate of a theoretically-relevant price level used to inform monetary policy choices or other important economic policy.

Economists suggest that increases in nominal income partially reflect higher prices for goods and services, a phenomenon more commonly known as inflation.  Wikipedia defines inflation as,

a sustained increase in the general price level of goods and services in an economy over a period of time resulting in a loss of value of currency.

You can pretty easily find estimates of common purchasing power indices online to examine the portion of your income growth that is actually fake.  According to the CPI-U, for example, reasonable people would be indifferent between purchasing $60,000 of 2015 goods and services at 2015 prices, $34,000 of 1990 goods and services at 1990 prices, $20,000 of 1980 goods at 1980 prices, and $11,000 of 1970 goods at 1970 prices.  Most reasonable people, of course, would not be anywhere close to indifferent between these choices.

In 1970, the median income was about $4,000 – so $11,000 sounds pretty good.  In a “competing for fixed resources compared to the median” sense, $11,000 in 1970 would be like $165,000 today.  The difference between earning $165,000 and $60,000 today is nothing close to the difference between earning $60,000 today and $11,000 in 1970.  Access to bigger houses in better cities, slightly better healthcare, fine dining, and cashmere sweaters is many orders of magnitude less valuable than improvements in health technology that have dramatically increased the quality of the last quarter of life, the ubiquity of high-quality entertainment, explosive growth in consumer choice, and safer working conditions for the people who make all of the above possible.
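The arithmetic behind the $165,000 figure is simple; the only assumption added here is treating the $60,000 income from the text as today's benchmark for the median comparison:

```python
income_1970 = 11_000
median_1970 = 4_000          # approximate 1970 median income, per the text
benchmark_today = 60_000     # the present-day income used in the text

# "Competing for fixed resources" sense: scale by position relative to median
relative_position = income_1970 / median_1970   # 2.75x the median
today_equivalent = relative_position * benchmark_today

print(today_equivalent)  # -> 165000.0
```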

This effect isn’t just a result of the long time that has passed in my example above.  According to the same measure, $45,000 in 2002 is similar to $60,000 in 2015.  The median annual rent increased by about $3,000 over this time.  Food prices probably didn’t increase much, and even if they did, most people would probably rather eat at home more frequently if it meant access to 2015 stuff instead of 2002 stuff.  In fact, I would find it totally reasonable if people would rather earn $45,000 today than $45,000 in 2002, suggesting that it is at least reasonably possible that the Federal Reserve’s most important measure has a margin of error greater than 100 percent.

Even if this insane effect were a result of inflation being inconsistent over time, it wouldn’t matter.  Small increases in the price level from year to year may be both correct and totally irrelevant – the only reason people care about inflation is the long-term compounding that results in meaningful differences.  There is obviously a distributional component.  Inflation is probably quite a bit higher for very poor people, who spend most of their money on fixed things like rent, and for very rich people, who can only direct income towards buying more beach houses or private jets, than it is for the rest of us.  In particular, earning $4,000 in 1970 was probably better than earning $4,000 in 2015.

For this reason I find it amusing that there are economists at the Federal Reserve studying whether to raise the cost of borrowing by 25 basis points this year or next based on changes in a measure which, in aggregate, is probably off by several orders of magnitude over any length of time that matters.  The answer to the secular stagnation paradox might be that we’ve had persistent deflation of over 1 percent since 1990.  The answer to Robert Solow’s famous and oft-repeated quip that “You can see the computer age everywhere but in the productivity statistics” might be “the productivity statistics you’re using are clearly wrong.”

The United States is party to a number of international tax treaties adopted to prevent double taxation of the same income.  At first pass, it seems reasonable to offset a citizen’s income tax liability by taxes paid to foreign governments.  Conceptually, however, these arrangements don’t make all that much sense.

Proponents of the income tax offer myriad justifications for asserting the government’s right over some portion of private economic creation (these justifications come in the form of “rich people use public institutions and must therefore pay” or “the rich can afford to pay for the poor” and so forth); and tax treaties come about because two governments assert their right over the same thing.

But these treaties are just a lazy way to collect a portion of the justified revenue.  If some set of countries has a global tax regime where rich countries set higher tax rates than poor countries, the poor countries are deprived of tax revenue to which they are entitled: revenue that would have derived from poor-country tax residents who earn some income in a rich country.  Despite this incongruity, most people would agree that tax treaties make a lot of sense given domestic tax laws.
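A sketch of the offset mechanism makes the deprivation concrete.  This assumes a credit-method treaty (source country taxes first, residence country credits that tax against its own liability); the rates and figures are hypothetical:

```python
def treaty_split(income, residence_rate, source_rate):
    """Hypothetical credit-method treaty: the source country taxes the
    income first, and the residence country credits that tax against
    its own liability, collecting only any positive remainder."""
    source_tax = income * source_rate
    residence_tax = max(income * residence_rate - source_tax, 0.0)
    return source_tax, residence_tax

# A poor-country (20% rate) resident earns 100 in a rich country (40% rate):
rich_take, poor_take = treaty_split(100, residence_rate=0.20, source_rate=0.40)
print(rich_take, poor_take)  # -> 40.0 0.0
```

Because the rich country's rate exceeds the poor country's, the credit wipes out the residence-country liability entirely, which is the sense in which the poor country is deprived of revenue it would otherwise claim.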

Therefore, a sane tax regime might be one where tax treaties make no sense.  (For example, it would make no sense to offset the land tax liability I incur by owning a Manhattan penthouse by similar taxes owed on properties in south Bombay.)

Nor is the answer “end global taxation”.  Within an income-tax-oriented system, a global tax might be necessary in some fashion.  For one, because income is an ethereal accounting notion, capital structure choices might result in present-value differences between tax liabilities assessed on economically-identical transactions.  More concretely, a corporation – or an individual with complicated finances – can’t divide income earned by territory in a sensible way.  Consider Apple, which designs high-end technology in Cupertino but produces it cheaply in China, from where the final goods are exported.

So if we didn’t have a global tax system, a lot of income that was earned in the United States would never be taxed.  Capital gains taxes are one way of getting around that in part, but this assumes the contrary, as it would imply that the owner of Apple is paying taxes on China income.  In practice it’s even more complicated, since Apple isn’t owned only by Americans.

This is not to mention the fact that the above scenario assumes, contrary to most of reality, that capital income is taxed on a mark-to-market basis rather than as gains are realized.  Though this might make economic sense, it would just create a big incentive for US-domiciled corporations to be owned by foreigners who can afford to hold their appreciating stock.

Perhaps we should have a tax regime where bilateral treaties like the ones discussed above are nonsensical.  Carbon, land, and congestion taxes come to mind.

I’m writing this in the context of numerous takes on the implications of Hillary Clinton’s clear victory in the national popular vote and, more generally, other “data-driven” analyses of electoral results that try to explain voter behavior, like the articles expressing conviction, in one direction or the other, that Bernie Sanders would have won against Donald Trump.

The most important point is that the fact that Hillary Clinton won the popular vote is, by itself, approximately meaningless.  If we had a popular-vote-based system, maybe more people on either side who took California for granted would have voted.  Likewise with Texas, and every other state.  No one has any idea.  (Obviously both campaigns would have operated differently in a popular vote world, as has been commonly noted, but it’s entirely possible Trump would have won even if neither campaign changed its strategy.)

The other point is that it’s probably hard to make sense of voting data without modeling voter regret in some way.  Republicans in early primary states who didn’t like Trump but stayed home on the assumption that he wouldn’t win may have regretted their choice later on.  Similarly, Democrats who stayed home in Michigan because the polls predicted a clear Clinton victory may have regretted that choice on November 9th.  This is the more detailed way of saying that Hillary Clinton’s popular vote victory might say more about Joe Biden’s popularity than her own, depending on how you model each vote.
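A toy turnout model (every parameter invented) shows why raw vote totals are hard to interpret without modeling the feedback from forecasts: the same underlying preferences can produce very different recorded votes.

```python
import random

random.seed(1)

def turnout_share(preference_share, forecast_margin, n=100_000):
    """Toy model: supporters of the forecast favorite abstain more often
    because they assume the result is safe.  All parameters invented."""
    complacency = 0.30 * max(forecast_margin, 0.0)  # extra abstention for A
    votes_a = votes_b = 0
    for _ in range(n):
        prefers_a = random.random() < preference_share
        stay_home = 0.40 + (complacency if prefers_a else 0.0)
        if random.random() < stay_home:
            continue
        if prefers_a:
            votes_a += 1
        else:
            votes_b += 1
    return votes_a / (votes_a + votes_b)

# 55% of the electorate prefers A in both runs; only the forecast differs.
print(turnout_share(0.55, forecast_margin=0.0))  # tracks true preference
print(turnout_share(0.55, forecast_margin=0.9))  # understates A's support
```

In the lopsided-forecast run, A's recorded vote share falls below 50 percent despite a 55 percent underlying preference, which is the regret dynamic described above in miniature.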

As far as I can tell there’s no obvious way to get around this problem with data alone.  Many people thought Donald Trump would lose Florida based on early voting data and historical trends of right-wing, election-day turnout.  But the contest between Hillary Clinton and Donald Trump was obviously a historical anomaly, like pretty much every other presidential election, and therefore informing predictions with history is nothing more than a slightly educated guess.

I suspect it means something that Hillary Clinton won the popular vote.  But any explanation of what that might be requires some incorporation of the fact that voting is a funny behavior, inconsistent over time, and sensitive to the conditions under which the ballot is being cast, which itself recursively relies on expectations of these parameters for the population held by individual voters.

This amounts to a rather complicated model of voting that is probably impracticable.  Instead we might try disciplined verbal reasoning about knowable facts.