Dynamic Stochastic What? (Or why we should spend a lot more on economics research)
The Dynamic Stochastic General Equilibrium (DSGE) model has received quite a bit of criticism since the onset of the financial crisis, from prominent economists such as Robert Solow and Greg Mankiw (indeed, criticism of mainstream economic models is hardly unique to the left):
New classical and new Keynesian research has had little impact on practical macroeconomists who are charged with the messy task of conducting actual monetary and fiscal policy. It has also had little impact on what teachers tell future voters about macroeconomic policy when they enter the undergraduate classroom. From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.
Economists were reminded of this flaw recently by Larry Summers (via Brad DeLong):
I was tempted to blast off at DSGE. But what is it that wouldn’t be a DSGE? A SCPE model. It is hard to see how that would be an improvement. Is macro about–as it was thought before Keynes, and came to be thought of again–cyclical fluctuations about a trend determined somewhere else, or about tragic accidents with millions of people unemployed for years in ways avoidable by better policies. If we don’t think in the second way, we are missing our major opportunity to engage in human betterment. And inserting another friction in a DSGE model isn’t going to get us there. Now it is easier to criticize than to do. But multiple equilibria, fragile equilibria, and so forth have promise. A little bit of avoiding what’s happened over the past six years would have paid enormous dividends…
And Noah Smith has another fantastic takedown of the DSGE:
Imagine a huge supermarket aisle a kilometer long, packed with a million different kinds of peanut butter. And imagine that all the peanut butter brands look very similar, with the differences relegated to the ingredients lists on the back, which are all things like “potassium benzoate”. Now imagine that 85% of the peanut butter brands are actually poisonous, and that only a sophisticated understanding of the chemistry of things like potassium benzoate will allow you to tell which are good and which are poisonous.
This scenario, I think, gives a good general description of the problem facing any policymaker who wants to take DSGE models at face value and use them to inform government policy.
He goes on to suggest:
Experiments, detailed studies of consumer behavior, detailed studies of firm behavior, etc. – basically, huge amounts of serious careful empirical work – to find out which set of microfoundations are approximately true, so that we can focus only on a very narrow class of models, instead of just building dozens and dozens of highly different DSGE models and saying “Well, maybe things work this way!” Second, I’d suggest incorporating these reliable microeconomic insights into large-scale simulations.
Now this is all fine and dandy, but let’s take a step back to see why the DSGE has such a grasp on modern economics. Part of it has to do with its “microfoundations” – the idea that sound macroeconomic models should be built by aggregating the behavior of many heterogeneous, utility-maximizing agents, rather than by assuming macroscopic relationships between national output and employment. This insistence on microfoundations grew out of the Lucas Critique.
But here’s the problem with the DSGE: the “dynamic stochastic” part of the model is pretty weak. By this I mean that while, yes, you can keep adding “frictions” to make the model a more realistic map of the macroeconomy, the system gets unwieldy very, very quickly. This is because understanding human behavior through optimization and equilibrium (as all mainstream economics does) becomes ridiculously difficult to compute. As an example, determining a Walrasian equilibrium (don’t even get me started on how unrealistic this even is) is, in general, computationally intractable – NP-hard. (This means that if you found an efficient solution, you would (a) win a Millennium Prize and (b) be able to break the RSA cryptosystem.)
When programmers hit a problem they know cannot be solved efficiently, they either tolerate approximations or reach for robust heuristics. Economists, on the other hand, seem to think they can ignore the laws of computation altogether.
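To make the scaling problem concrete, here is a toy sketch (the economy, its numbers, and the grid-search approach are all mine, invented for illustration – not from any model discussed above): a tiny Cobb-Douglas exchange economy where we brute-force a price grid looking for approximate market clearing. Even this “tolerated approximation” enumerates STEPS**(G-1) price vectors – exponential in the number of goods – and the formal hardness results concern far less well-behaved preferences than these.

```python
from itertools import product

# Hypothetical exchange economy: A agents, G goods, Cobb-Douglas utilities.
# Agent i has endowment e[i] and preference weights alpha[i] (summing to 1).
# Cobb-Douglas demand at prices p is x[i][g] = alpha[i][g] * (p . e[i]) / p[g].
A, G = 3, 3
e = [[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]]
alpha = [[0.5, 0.3, 0.2], [0.2, 0.5, 0.3], [0.3, 0.2, 0.5]]

def excess_demand(p):
    """Aggregate demand minus aggregate supply for each good at prices p."""
    z = []
    for g in range(G):
        demand = sum(alpha[i][g] * sum(p[k] * e[i][k] for k in range(G)) / p[g]
                     for i in range(A))
        supply = sum(e[i][g] for i in range(A))
        z.append(demand - supply)
    return z

# Brute force: normalize p[0] = 1 and grid-search the other G-1 prices.
# The grid has STEPS**(G-1) points -- the exponential blow-up in question.
STEPS = 200
grid = [0.1 + 0.02 * k for k in range(STEPS)]
best_p, best_err = None, float("inf")
for rest in product(grid, repeat=G - 1):
    p = (1.0,) + rest
    err = max(abs(z) for z in excess_demand(p))
    if err < best_err:
        best_p, best_err = p, err

print("approx. equilibrium prices:", [round(x, 2) for x in best_p])
print("max excess demand:", round(best_err, 3))
```

Because the example is deliberately symmetric, the search recovers prices near (1, 1, 1); add a fourth good and the same grid takes 200× longer.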
I was disappointed that neither Summers nor Smith really encouraged the study of agent-based modeling (ABM). (To be fair, Noah, on Twitter, seems fairly enthused.)
The advent of ABM comes with the spread of cheap and powerful computing, modern algorithms, and the study of complexity and emergence. Some of the most fascinating work in this field is Epstein and Axtell’s Sugarscape. (See more here.) Epstein and Axtell imagined a world with two goods – sugar and spice – distributed in random “mounds” across the landscape, populated by agents with heterogeneous preferences and skills.
From this remarkably simple setup (indeed, so simple that even a DSGE could have handled it), Epstein and Axtell simulate an economy that exhibits basic but realistic macroeconomic emergence – most remarkably, the finding that trade creates wealth, but also inequality.
With modern computers, it’s very easy to program agents to act on their local environment based on simple conditional rules. The brilliance of ABM, however, is that a realistic set of preferences and frictions – both local and global – can be built into the program without hitting exponential time. Indeed, with sufficient research from physicists, computer scientists, economists, and behaviorists, it might just be possible to model our economy this way.
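To show how little machinery this takes, here is a minimal Sugarscape-flavored sketch – my own simplified, single-good variant with invented parameters, not Epstein and Axtell’s actual model. Agents with heterogeneous vision and metabolism forage sugar on a grid; each step is a handful of local rules, so the whole simulation runs in time linear in agents × steps.

```python
import random

random.seed(0)

SIZE, N_AGENTS, STEPS = 20, 100, 200

# Sugar landscape: two "mounds" of capacity, regrowing by 1 per step
# (mound positions and sizes are arbitrary choices for this sketch).
def capacity(x, y):
    d1 = ((x - 4) ** 2 + (y - 4) ** 2) ** 0.5
    d2 = ((x - 15) ** 2 + (y - 15) ** 2) ** 0.5
    return max(0, 4 - int(min(d1, d2) // 3))

cap = {(x, y): capacity(x, y) for x in range(SIZE) for y in range(SIZE)}
sugar = dict(cap)

class Agent:
    def __init__(self):
        self.x, self.y = random.randrange(SIZE), random.randrange(SIZE)
        self.vision = random.randint(1, 4)      # heterogeneous skill
        self.metabolism = random.randint(1, 3)  # heterogeneous need
        self.wealth = random.randint(5, 25)

agents = [Agent() for _ in range(N_AGENTS)]

def step():
    occupied = {(a.x, a.y) for a in agents}
    for a in agents:
        if a.wealth <= 0:          # starved agents stop acting
            continue
        # Look along the four axes up to `vision` cells; move to the
        # visible unoccupied cell with the most sugar (wrapping edges).
        best = (a.x, a.y)
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            for r in range(1, a.vision + 1):
                nx, ny = (a.x + dx * r) % SIZE, (a.y + dy * r) % SIZE
                if (nx, ny) not in occupied and sugar[(nx, ny)] > sugar[best]:
                    best = (nx, ny)
        occupied.discard((a.x, a.y))
        a.x, a.y = best
        occupied.add(best)
        a.wealth += sugar[best] - a.metabolism  # harvest, then eat
        sugar[best] = 0
    for cell in sugar:              # sugar grows back toward capacity
        sugar[cell] = min(cap[cell], sugar[cell] + 1)

def gini(values):
    """Gini coefficient of a wealth distribution (0 = equal)."""
    vals = sorted(max(v, 0) for v in values)
    n, total = len(vals), sum(vals)
    if total == 0:
        return 0.0
    cum = sum((i + 1) * v for i, v in enumerate(vals))
    return (2 * cum) / (n * total) - (n + 1) / n

g0 = gini([a.wealth for a in agents])
for _ in range(STEPS):
    step()
g1 = gini([a.wealth for a in agents])
print(f"Gini before: {g0:.2f}, after: {g1:.2f}")
```

Tracking the Gini coefficient over the run is how one would check whether inequality emerges from these local rules, the way it does in the full Sugarscape model.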
This is exactly what the European Central Bank is doing, and it is precisely the kind of thing America should be leading. We have, by far, the best immediate access to the skilled researchers ABM needs – the best finance quants, physicists, and economists.
Economists are (usually) very eager for the government to fund large-scale scientific research and training; such progress is very much in the American spirit. Their approach to their own discipline, however, is a little more confusing. On the mainstream left, Summers understands (much better than I do) the flaws of the DSGE. Yet he still pins his hopes on “multiple” and “fragile” equilibria. His mindset rests, rather firmly, on the idea of economics as a game of optimization.
It’s time for the Fed to sponsor a contest, much like the DARPA Grand Challenge for self-driving cars, that rewards the team of scholars whose ABM best captures the macroeconomy. This would fall under a larger imperative to study economics through computational techniques that run in polynomial time (i.e. are actually doable) yet remain realistic representations of the world.
The idea of ABMs was floated in Congressional Science and Technology hearings back in 2010, but it seems to have made little headway outside of imaginative computer science departments since.
It’s time to, dare I say it, spend a lot more on economic research. Randomized controlled trials are great, but here’s to the day when the Fed buys a supercomputer… or ten.