
In almost every “run”, there is a link between population and land prices. I’m making this post (a) because of its relevance to my recent calls for steep land value taxation, which were of (relatively) high interest, and (b) because Paul Krugman and Noah Smith beg to differ. (How often does a not Very Serious Person get to disagree with Krugtron, after all?)

There are obviously strong theoretical reasons to believe that land prices correlate well with population. Probably the simplest argument is rising demand on a perfectly inelastic supply. More intricately, David Ricardo argued that a growing working class would steadily increase the demand for grain, raising the rents earned on fertile land, and with them the net present value of all future returns, and hence the price of land.
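To make that last step concrete, here is the capitalization logic in stylized form (my notation, not Ricardo's): if a plot earns rent $R_t$ in year $t$ and the market discount rate is $r$, competition prices the plot at the present value of all future rents,

$$P = \sum_{t=1}^{\infty} \frac{R_t}{(1+r)^t},$$

which for a constant rent $R$ collapses to the familiar perpetuity $P = R/r$. Anything that raises expected rents (a growing working class bidding up grain prices, say) raises the price of land today.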

In America, Henry George – perhaps not coincidentally after a failed attempt at finding gold – angrily argued for land taxation in his Progress and Poverty, with arguments not far from those the classical economists made, or from what you might hear today.

Smith thinks it’s all about agglomeration:

In other words, New York City real estate is high-priced because New York City is an agglomeration of economic activity. It is not high-priced because an increasing number of people are being forced to live in New York City. That isn’t even the case! No law makes people cram themselves into NYC (except in that Kurt Russell movie!); you are legally free to move out to North Texas and get a nice ranch. People choose to live in the heart of New York City because of the economic (and social) opportunities offered by proximity to all the other people living there. So they’re willing to pay lots for land.

Krugman piggybacks on Smith’s point and also notes that the city can always spread out into unused land:

Even if people want to stay in existing metro areas, they can hive off “edge cities” at the, um, edges of these metro areas, so that the relevant population density — the density that makes land in or near urban hubs expensive — might not rise even if the overall population of the metro area goes up.

And we have data! Via Richard Florida, new work by the Census (pdf) calculates “population-weighted density” — a weighted average of density across census tracts, where the tracts are weighted not by land area but by population; this gives a much better idea of how the average person lives.

Together, they make a strong argument that only in the longest of runs (and perhaps not even then), when a city can grow no further, is land price truly a reflection of population constraints. The logical conclusion of this eternally long run wouldn’t be far from America – the whole thing! – becoming a city-state.

But there are a few important problems with the argument, mostly with Krugman’s suggestion that the city can just “hive off”. The city is not a discrete blot so much as a diffused core, and its agglomeration economies derive from the ability to commute efficiently to a well-recognized center. (Not all cities are monocentric, and polycentric models exist, but most business activity, and hence economic output, happens in the immediate vicinity of these centers.) Indeed, density can vary by over an order of magnitude within a metro, and land prices in the core may be over 30 times those in the “hived” peripheries.
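One way to see why the periphery is a poor substitute for the core is the textbook monocentric bid-rent condition (a standard sketch I am adding here, not something from the posts above): in spatial equilibrium a household at distance $d$ from the center must be indifferent across locations, so land rent falls with commuting cost,

$$r(d) = r(0) - \tau d,$$

where $\tau$ is the money and time cost of commuting an extra mile. Cheap, fast transport flattens the gradient and lets the city spread out; expensive commutes keep central land scarce and dear, which is why “hiving off” is not a free lunch.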

But cities are limited in the extent to which they can expand: it takes a highly developed rapid transit system, with commuter rail, to make commuting from the periphery cheap and quick. As a new working paper notes:

This raises the issue of population density. When we compare cities cross- sectionally, at the same time but across different sizes, we tend to find that larger cities are denser. Nevertheless, in the United States and increasingly all over the world, we also find many modern urban forms, and especially many low-density large cities, such as Atlanta or Dallas. Are these lesser cities than the West Village that Jane Jacobs knew, or the walkable towns that Smart Growth planers advocate? The perspective of cities as interaction networks tells us how all these urban forms can co-exist: the spatial extent of the city is determined by the interplay between interactivity and the relative cost of mobility. When it is possible to move fast across space, cities become much more diaphanous and are able to spread out while preserving their connectivity. It is in fact the diffusion of fast transportation technologies, especially now in developing world cities, that is allowing them to spread out spatially, sometimes faster than they grow in terms of population (32). This, of course, creates possible vulnerabilities. For example, if the cost of transportation relative to incomes suddenly rises (e.g. because it is tied to oil prices) then cities may not be able to stay connected, leading potentially to a decrease in their socioeconomic production rates. Ideas for shrinking cities that have lost population apply the same ideas in reverse.

From the same paper (Bettencourt, 2013) we get confirmation of something I suspected earlier. There is a deeper flaw in assuming that, in the short run, land prices are driven not by population but by agglomeration economies: population causes agglomeration. (This is both theoretically and empirically founded.) On a log-log scale of population against total income, the slope is about 1.13, so a doubling of population multiplies total income by roughly 2.2, an increase of about 120 percent. People get disproportionately richer. More money is chasing the same land because more people are chasing the same land.

This means land prices do associate well with population; in fact, rents rise by about 50% with every doubling of population:

There are two important consequences for general land use considerations. First, the price of land rises faster with population size than incomes. This is the result of per capita increases in both density and economic productivity, so that money spent per unit area and unit time, i.e. land rents, is expected to increase by 50% with every doubling of city population size! It is this rise in the price of land that mediates, indirectly, many of the spontaneous solutions that reduce per capita energy use in larger cities. Cars become expensive to park, and taller buildings become necessary to keep the price of floor space in pace with incomes, thus leading to smaller surface area to volume, reducing heating and cooling costs per person. These effects may also create the conditions for public transportation to be a viable alternative to automobiles, even when the price of time is high. Thus, larger cities may be greener as an unintended consequence of their more intensive land use. Policies that increase the supply of land per capita or reduce transportation costs (such as urban renewal), while addressing other problems, will tend to create cities that are less dense and that require higher rates of energy consumption in buildings and transportation.
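To see what those exponents mean in plain numbers, here is the back-of-the-envelope arithmetic behind the figures quoted above (nothing beyond the numbers already in the text):

```python
import math

# Superlinear scaling of total urban income: Y ~ N**beta, with beta ≈ 1.13
beta_income = 1.13
income_factor = 2 ** beta_income                   # effect of doubling population
print(f"Doubling population multiplies total income by {income_factor:.2f}x "
      f"(about {(income_factor - 1) * 100:.0f}% more)")

# Per-capita income rises by the residual exponent beta - 1
per_capita_factor = 2 ** (beta_income - 1)
print(f"Per-capita income rises by about {(per_capita_factor - 1) * 100:.0f}% per doubling")

# Bettencourt's land-rent claim: rents rise ~50% with every doubling of
# population, i.e. an implied scaling exponent of log2(1.5) for rents.
rent_exponent = math.log2(1.5)
print(f"Implied rent scaling exponent: {rent_exponent:.2f}")
```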

Smith argues that agglomeration effects are path dependent, and that, in theory, land use restrictions could reduce agglomeration and hence cause prices to fall.

He’s right about most of it, but I take a much simpler view of agglomeration:

[Figure: economic output plotted against city population]

In other words, it’s all population. And by the way, the figures are pretty similar across the world (that’s actually what I’m working on for India). Read the paper if you want to convince yourself this isn’t just coincidence.

Smith talks about technology, but oversimplifies by assuming there’s always usable unoccupied land nearby. The farther I have to drive to get to the center, the more I pay for gas and the more wages and leisure I sacrifice to the commute. Transportation technology can mitigate this only to an extent, beyond which a city is largely stationary. Krugman’s point that the average American lives in a sparser city today is well-taken, but also simplistic. We have more medium-sized cities than ten years ago. We don’t have more Chicagos than ten years ago.

It’s kind of like saying “the average person in medium-income or rich countries today is poorer than he was ten years ago”. That’s because a bunch of people just entered the “medium” category, not because we have a new America or Norway. Likewise, early urbanization looks like what happened to Delhi or Mumbai a few decades ago, but after a point the growth comes from new, smaller places that are only now classified as “cities”.

In any case, the agglomeration Smith points to is itself causally linked with population. It always has been, and always will be.

P.S. Also read Bill McBride on why land prices will rise as population does.

Noah Smith has a post about why macroeconomics doesn’t work (well):

1.  There are a number […] “heterodox” schools of thought, [which] claim that macro’s relative uselessness is based on an obviously faulty theoretical framework, and that all we have to do to get better macro is to use different kinds of theories – philosophical “praxeology”, or chaotic systems of nonlinear ODEs, etc. I’m not saying those theories are wrong, but you should realize that they are all just alternative theories, not alternative empirics. The weakness of macro empirics means that we’re going to be just as unable to pick between these funky alternatives as we are now unable to pick between various neoclassical DSGE models.



2. Macroeconomists should try to stop overselling their results. Just matching some of the moments of aggregate time series is way too low of a bar. [It is important] when models are rejected by statistical tests […] When models have low out-of-sample forecasting power, that is important. These things should be noted and reported. Plausibility is not good enough. We need to fight against the urge to pretend we understand things that we don’t understand.



3. To get better macro we need better micro. The fact that we haven’t found any “laws of macroeconomics” need not deter us; as many others have noted, with good understanding of the behavior of individual agents, we can simulate hypothetical macroeconomies and try to do economic “weather forecasting”. We can also discard a whole slew of macro theories and models whose assumptions don’t fit the facts of microeconomics. This itself is a very difficult project, but there are a lot of smart decision theorists, game theorists, and experimentalists working on this, so I’m hopeful that we can make some real progress there. (But again, beware of people saying “All we need to do is agent-based modeling.” Without microfoundations we can believe in, any aggregation mechanism will just be garbage-in, garbage-out.)

This led to a very interesting Twitter discussion:

Ashok Rao: Personally, I’d frame it that modern theory is fundamentally deductive in nature whereas the macroeconomy is inductive/Bayesian.

Noah Smith: I think that’s a wrong way of seeing things. Real science involves an iterative process of induction and deduction.

Ashok Rao: But your claim also assumes there’s something “fundamental” about the economy in the sense of a real science. Is there?

Noah Smith: Maybe. There’s real science in earthquakes but we can’t predict them at all.

Ashok Rao: Hm. So there are systemic laws. But can these not be “understood” only through induction? As in the economy as machine learning.

Noah Smith: Maybe!

Ashok Rao: As long as we agree that there is a lot of doubt! 🙂

This conversation is at the very heart of my discomfort with much of modern economics, and I’ve been wanting to blog about it for a while, so now is as good a time as any to dive right in. Before I go on, I want to note that Noah and I seem to have very different understandings of what inductive means (or at least should mean):

Ashok Rao: Yes but the 3 ‘main’ equilibria frameworks (general, classical game theory, and rational expectations) are all deductive. Right?

Noah Smith: No, you can easily make a Walrasian equilibrium happen in a lab, it’s very robust under certain conditions!

Of course, to the extent that empirical creations in the lab, such as double auctions, are inductive, Noah is right. But the macroeconomics behind this is principally deductive. By this I mean that mathematical economists have applied mathematics (the major premise) to a set of assumptions (the minor premises) to infer a conclusion. Ultimately, the theory is a grand syllogism, highly deductive in nature. Further, the comparison to earthquakes doesn’t sit well with me. Physicists have very good microfoundations for how the earth works, and those foundations are not themselves in perpetual motion. Scientists might fail at aggregating these bits of knowledge, but economics has a much more inherent flaw.

This is precisely why classical game-theoretic approaches work only in “small lab settings” and why the Walrasian equilibrium holds only under “certain conditions”. That they hold, granted the right assumptions, is tautological. Mathematics is internally consistent, and hence in a concocted economy (the double auction) specific deductive models have to hold.

But by induction, I don’t mean experimental confirmation tempered by statistical reasoning. W. Brian Arthur at PARC puts it better than anyone else:

This ongoing materialization of exploratory actions causes an always-present Brownian motion within the economy. The economy is permanently in disruptive motion as agents explore, learn, and adapt. These disruptions, as we will see, can get magnified into larger phenomena. 

If economists want to import one idea from physics, it should be Brownian motion:

One way to model this is to suppose economic agents form individual beliefs (possibly several) or hypotheses—internal models—about the situation they are in and continually update these, which means they constantly adapt or discard and replace the actions or strategies based on these as they explore. They proceed in other words by induction.

The best way I can describe this idea is as a “Bayesian machine”, if you will. While classical game theory, rational expectations, and competitive (Walrasian) theory might have inductive verification, Arthur is suggesting that the economy itself is inherently inductive.

The catch is that for something that is at heart inductive, there is no deductive verification. This is why many, myself included, are skeptical of the mathematical models that dominate economics: they can neither explain nor verify anything. The unrealistic nature of rational expectations is often criticized. But in a real economy, I not only know that I’m not rational; I also know that my fellow agents are irrational too. This means I have subjective preferences, and also subjective preferences about other people’s subjective preferences. These two degrees of subjectivity make many economic assumptions not just wrong, but impossible. (Think of the epistemological difference between “is not” and “cannot be”.)

This is why I disagree with Noah. While deductive engines work in specific circumstances – equilibrium is a sub-class of non-equilibrium, after all – macroeconomics has failed because the economy is inductive. At every moment there is a constant ferment, a change in attitude and belief. Standard economics holds that we all share one, perfectly rational prior. Induction holds that we all have pretty crappy priors that are constantly updated not only by economic outcomes, but also by political and institutional motion.
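As a cartoon of what I mean by a “Bayesian machine” (purely illustrative; the parameters and updating rule below are mine, not any particular model’s): give every agent a different, mostly wrong prior about some fundamental, let each one update on observed outcomes at its own pace, and the cross-section of beliefs drifts toward the truth without ever collapsing into a single rational-expectations prior.

```python
import random

random.seed(0)

TRUE_GROWTH = 0.02            # the "fundamental" agents are trying to learn
N_AGENTS, N_PERIODS = 1000, 50
NOISE = 0.05                  # volatility of the outcomes agents observe

# Heterogeneous, mostly-wrong priors instead of one perfectly rational prior,
# and heterogeneous responsiveness to new information.
beliefs = [random.uniform(-0.10, 0.10) for _ in range(N_AGENTS)]
responsiveness = [random.uniform(0.05, 0.30) for _ in range(N_AGENTS)]

for _ in range(N_PERIODS):
    outcome = TRUE_GROWTH + random.gauss(0, NOISE)   # what everyone observes
    # Each agent nudges its belief toward the outcome: a crude stand-in for
    # Bayesian updating with heterogeneous priors and confidence.
    beliefs = [b + k * (outcome - b) for b, k in zip(beliefs, responsiveness)]

mean_belief = sum(beliefs) / N_AGENTS
disagreement = max(beliefs) - min(beliefs)
print(f"Mean belief after {N_PERIODS} periods: {mean_belief:.3f} "
      f"(truth = {TRUE_GROWTH}); remaining disagreement: {disagreement:.3f}")
```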

Talk to goldbugs (actually, avoid it if you can). They’ll tell you about how they fear a government-Jewish orchestrated New World Order meant to line the pockets of rich bankers at the cost of the worker, by debasing our currency. Every economic indicator tells you they are wrong.

In a deductive model, it is impossible to accommodate such people; if we modify a standard DSGE to tolerate that kind of granularity, it becomes intractable. A computer scientist would treat this as a machine learning problem. While there is a handful of problems for which analytical solutions work, the driving theme behind modern data mining and machine learning, even for something as simple as classification, is the flexibility of statistical computer science.
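A toy illustration of that flexibility (synthetic data and a hypothetical label of my own, not any real dataset): rather than deriving behavior from rationality axioms, a statistical model simply learns whatever heterogeneous behavior the data contain, goldbugs included.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for household data: imagine features like age, income,
# media diet, and past portfolio choices; the label, whether the household
# hoards gold when inflation expectations tick up.
X, y = make_classification(n_samples=5000, n_features=8, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# No assumptions about rationality: the model fits whatever pattern exists
# and is judged purely on out-of-sample performance.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Out-of-sample accuracy: {model.score(X_test, y_test):.2f}")
```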

But the problem with induction is that, well, it’s not deduction. A well-formed syllogism guarantees its inference, much as the sum of two and two has to be four. Induction, on the other hand, is fuzzy and unclear. You can’t prove sweeping laws with inductive reasoning, as Karl Popper brilliantly argued, and inductive thinking is fragile against “black swan” events.

These aren’t the real obstacles, though. In the economists’ imagination, theory trumps empirics, and for the same reason, running large simulations on supercomputers holds little appeal for theorizing Walrasian economists. Proving things is really fun (if you’re smart enough).

But just as natural evolution doesn’t lend itself to equilibrium analysis, economists cannot go on believing that the fundamental structure of the economy is static. It is constantly reborn in updated preferences, political upheaval, and institutional ferment. Human minds and pen-and-paper mathematics can never model this. But a supercomputer might help.

The dynamic stochastic general equilibrium (DSGE) model has received quite a bit of criticism since the onset of the financial crisis, from prominent economists such as Robert Solow and Greg Mankiw (indeed, criticism of mainstream economic models is hardly unique to the left):

New classical and new Keynesian research has had little impact on practical macroeconomists who are charged with the messy task of conducting actual monetary and fiscal policy. It has also had little impact on what teachers tell future voters about macroeconomic policy when they enter the undergraduate classroom. From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.

Economists were reminded of this flaw recently by Larry Summers (via Brad DeLong):

I was tempted to blast off at DSGE. But what is it that wouldn’t be a DSGE? A SCPE model. It is hard to see how that would be an improvement. Is macro about–as it was thought before Keynes, and came to be thought of again–cyclical fluctuations about a trend determined somewhere else, or about tragic accidents with millions of people unemployed for years in ways avoidable by better policies. If we don’t think in the second way, we are missing our major opportunity to engage in human betterment. And inserting another friction in a DSGE model isn’t going to get us there. Now it is easier to criticize than to do. But multiple equilibria, fragile equilibria, and so forth have promise. A little bit of avoiding what’s happened over the past six years would have paid enormous dividends…

And Noah Smith has another fantastic takedown of the DSGE:

Imagine a huge supermarket aisle a kilometer long, packed with a million different kinds of peanut butter. And imagine that all the peanut butter brands look very similar, with the differences relegated to the ingredients lists on the back, which are all things like “potassium benzoate”. Now imagine that 85% of the peanut butter brands are actually poisonous, and that only a sophisticated understanding of the chemistry of things like potassium benzoate will allow you to tell which are good and which are poisonous.

This scenario, I think, gives a good general description of the problem facing any policymaker who wants to take DSGE models at face value and use them to inform government policy.

He goes on to suggest:

Experiments, detailed studies of consumer behavior, detailed studies of firm behavior, etc. – basically, huge amounts of serious careful empirical work – to find out which set of microfoundations are approximately true, so that we can focus only on a very narrow class of models, instead of just building dozens and dozens of highly different DSGE models and saying “Well, maybe things work this way!” Second, I’d suggest incorporating these reliable microeconomic insights into large-scale simulations.

Now this is all fine and dandy, but let’s take a step back to see why the DSGE has such a grip on modern economics. Part of it has to do with its use of “microfoundations” – the idea that sound macroeconomic models should be built up from the aggregation of heterogeneous, utility-maximizing agents rather than from assumed macroscopic relationships between, say, national output and employment. This push for microfoundations is largely a response to the Lucas Critique.

But here’s the problem with the DSGE: the “dynamic stochastic” part of the model is pretty weak. By this I mean that while, yes, you can keep adding “frictions” to make the model a more realistic map of the macroeconomy, the system gets awkward very, very quickly. This is because understanding human behavior through optimization and equilibrium (as all mainstream economics does) becomes ridiculously difficult to compute. As an example, computing a Walrasian equilibrium (don’t even get me started on how unrealistic the concept is) is intractable in general; variants of the problem are NP-hard. (Find an efficient algorithm for an NP-hard problem and you would (a) collect a Millennium Prize and (b) be able to break the RSA cryptosystem.)
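For contrast, here is what the tractable, “concocted economy” case looks like in code: a two-agent, two-good exchange economy with assumed Cobb-Douglas preferences (my toy numbers), solved by simple tatonnement price adjustment. The point is that this works precisely because everything is convex and tiny; the general problem offers no such luck.

```python
# Tatonnement in a tiny exchange economy: two agents, two goods,
# Cobb-Douglas utilities. Trivial here; computing Walrasian equilibria
# for general preferences is provably hard.

agents = [
    {"alpha": 0.3, "endowment": (10.0, 2.0)},   # alpha = budget share on good 1
    {"alpha": 0.7, "endowment": (4.0, 10.0)},
]

def excess_demand_good1(p1, p2=1.0):
    """Total Cobb-Douglas demand for good 1 minus its total endowment."""
    demand = sum(a["alpha"] * (p1 * a["endowment"][0] + p2 * a["endowment"][1]) / p1
                 for a in agents)
    supply = sum(a["endowment"][0] for a in agents)
    return demand - supply

p1, step = 1.0, 0.05
for _ in range(10_000):
    z = excess_demand_good1(p1)
    if abs(z) < 1e-8:
        break
    p1 += step * z          # raise the price of good 1 if it is over-demanded

print(f"Equilibrium relative price of good 1: {p1:.4f} (good 2 is the numeraire)")
```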

When programmers run into a problem they know cannot be solved efficiently, they either tolerate approximations or find robust heuristics. Economists, on the other hand, seem to think they can ignore the laws of computation altogether.

I was disappointed that neither Summers nor Smith really encouraged the study of agent-based modeling (ABM). (To be fair, Noah, on Twitter, seems fairly enthused.)

The advent of ABM comes with the spread of cheap and powerful computing, modern algorithms, and the study of complexity and emergence. Some of the most fascinating work in this field is Epstein and Axtell’s Sugarscape. (See more here.) Epstein and Axtell imagined a world with two goods – sugar and spice – piled in “mounds” scattered across the landscape, and populated it with agents that have heterogeneous preferences and skills.

From this remarkably simple set of frictions (indeed, so simple that even a DSGE could have handled it), Epstein and Axtell simulate an economy with basic but realistic macroeconomic emergence. Most remarkably, the finding that trade creates wealth, but also inequality.

With modern computers, it’s very easy to program agents to act in a local environment based on simple conditional predicates. The brilliance of ABM, however, is that a realistic set of preferences and frictions – both local and global – can be built into the program without hitting exponential time. Indeed, with sufficient research from physicists, computer scientists, economists, and behavioral scientists, it might just be possible to model our economy this way.
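To give a flavor of how little code such a model needs, here is a deliberately minimal, single-good sketch in the spirit of Sugarscape (my own toy rules and parameters, not Epstein and Axtell’s actual specification): agents with heterogeneous vision and metabolism wander a sugar landscape, and a skewed wealth distribution emerges from nothing but local rules.

```python
import random

random.seed(1)
SIZE, N_AGENTS, STEPS = 30, 150, 200

def capacity(x, y):
    """Sugar-carrying capacity: two 'mounds', richer near their peaks."""
    dist = min(abs(x - 8) + abs(y - 8), abs(x - 22) + abs(y - 22))
    return max(0, 4 - dist // 4)

sugar = {(x, y): capacity(x, y) for x in range(SIZE) for y in range(SIZE)}

# Heterogeneous agents: random location, vision, metabolism and starting wealth.
agents = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
           "vision": random.randint(1, 4),
           "metabolism": random.randint(1, 3),
           "wealth": random.randint(5, 10)} for _ in range(N_AGENTS)]

def tick():
    occupied = {a["pos"] for a in agents}
    random.shuffle(agents)
    for a in agents:
        x, y = a["pos"]
        # Candidate cells: stay put, or move to an unoccupied cell within
        # vision along the four cardinal directions (the grid wraps around).
        options = [(x, y)]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for k in range(1, a["vision"] + 1):
                cell = ((x + dx * k) % SIZE, (y + dy * k) % SIZE)
                if cell not in occupied:
                    options.append(cell)
        best = max(options, key=lambda c: sugar[c])    # simple conditional rule
        occupied.discard((x, y))
        occupied.add(best)
        a["pos"] = best
        a["wealth"] += sugar[best] - a["metabolism"]   # harvest, then eat
        sugar[best] = 0
    agents[:] = [a for a in agents if a["wealth"] > 0]           # starvation
    for cell in sugar:                                           # regrowth
        sugar[cell] = min(capacity(*cell), sugar[cell] + 1)

def gini(values):
    """Gini coefficient of a list of positive wealth levels."""
    values = sorted(values)
    n, total = len(values), sum(values)
    weighted = sum((i + 1) * v for i, v in enumerate(values))
    return 2 * weighted / (n * total) - (n + 1) / n

for _ in range(STEPS):
    tick()

wealth = [a["wealth"] for a in agents]
print(f"{len(agents)} survivors out of {N_AGENTS}; wealth Gini = {gini(wealth):.2f}")
```

A serious effort would reach for a proper ABM toolkit (Mesa in Python, or NetLogo), but the point stands: local conditional rules, global emergent distributions.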

This is actually exactly what the European Central Bank is doing, and it is precisely the kind of thing America should be leading. We have, by far, the best immediate access to the skilled researchers ABM needs – the best finance quants, physicists, and economists.

Economists are (usually) very eager when the government funds large-scale scientific research or training; such progress is very much in the American spirit. Yet economists’ approach to their own discipline is a little more confusing. On the mainstream left, Summers understands (much better than I do) the flaws of the DSGE, yet he still pins his hopes on “multiple” and “fragile” equilibria. His mindset rests, rather firmly, on the idea of economics as a game of optimization.

It’s time for the Fed to sponsor a contest, much like the DARPA driverless-car challenges, rewarding the team of scholars whose ABM best captures the macroeconomy. This would sit under a larger imperative to study economics through computational techniques that run in polynomial time (i.e. are tractable) yet remain realistic representations of the world.

The idea of ABMs was floated in congressional Science and Technology hearings back in 2010, but seems to have made little headway outside of imaginative computer science departments since.

It’s time to, dare I say it, spend a lot more on economic research. Randomized control trials are great, but here’s to the day when the Fed buys a supercomputer… or ten.