Evolve a Model: On Endogenizing the Future

As evidenced by the huge interest in – and, indeed, rebuke of – the so-called “Dynamic Stochastic General Equilibrium” models (DSGEs), we know that economists desperately want a predictive framework with which to guide policy. Setting aside the many epistemological problems associated with prediction, this is a laudable goal. However, unlike in the stock market, there seems to be little financial reward for “getting it right”. To the extent that many of these models are developed by no-name technocrats within a central bank, the intellectual glory also seems small. Maybe I’m wrong about this, but intellectual glory in economics seems well-divorced from prediction per se* (especially when our prior on the ability to do so is small). Without well-aligned incentives, as those in the profession certainly know, the results are poor. This must be changed; yes, we must apply economics to itself.

I initially believed that Federal Reserve-sponsored “contests” for the best model would create a fertile environment for development. Models would not be restricted to (what I think are) dubious DSGEs, but could include the more promising agent-based models (ABMs), which rely not on analytics but on computation. There seems to be hope in DSGEs with financial frictions (see this), but that just strikes me as a flavor-of-the-month type of deal. I think, now, there is a better way: the prediction market.

A DSGE has, broadly, two components: the model specification (incorporation of wage rigidities, the financial system, etc.) and parametric estimation (invariant discount rates, the cost of adjusting a firm’s capital stock, etc.). The former is endogenous, the latter exogenous. A little digression here. While the emergence of DSGEs is linked with the Lucas Critique, which warns against the use of aggregate economic variables in policy action, it is questionable to suggest that DSGEs are in any meaningful sense immune to it, as almost every proponent claims. It is more accurate to say that DSGEs assume the Lucas Critique away, by believing in policy-invariant parametric estimations. Indeed, many are estimated by “historical averages”, which would earn Robert Lucas’ ire. More succinctly: microfoundations mutate.

Back to the idea. A full model can be captured by its internal specification and parametric estimations. The former are usually various iterations of neoclassical theory (rational expectations, sticky prices, and such). The latter are generally taken as Bayesian priors formed from quite sophisticated statistical methods.

Imagine now a marketplace wherein participants may “bet” on various models. The bet would be specified in the form of an investor’s odds that a given model’s prediction will deviate from the realized path over n years by less than k% (that is, that the total area between the projected and actual paths stays under that threshold). This ensures a model succeeds on out-of-sample projection. Model designers may “upload” their product onto the market, with acceptance from regulators. Tweakers may take an existing model and create another iteration of it by adjusting its parameters. The site will show each model with every associated mutation. (Important to note, a prediction market for models is quite different from one on outcomes: for example, long-term Treasuries are a good bet on growth expectations but tell us nothing about economic structure.)
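The settlement rule above can be made concrete. Here is a minimal sketch, assuming (my assumption, not a worked-out contract design) that “deviation” means the total area between the projected and realized paths at yearly resolution, scaled by the realized path:

```python
# Hypothetical settlement rule for a model bet: the bet pays out if the
# total area between the model's projected path and the realized path,
# as a share of the realized path's total, stays under k percent.

def bet_pays_out(projected, actual, k_percent):
    """projected, actual: yearly values over the n-year horizon."""
    assert len(projected) == len(actual)
    # Total area between the two paths, at yearly resolution.
    deviation = sum(abs(p - a) for p, a in zip(projected, actual))
    scale = sum(abs(a) for a in actual)
    return 100.0 * deviation / scale < k_percent

# A model that tracks the realized path within ~1% a year wins a k = 5% bet.
print(bet_pays_out([100, 102, 104], [100, 103, 105], k_percent=5))  # True
```

A real contract would need a finer grid and a convention for revisions to the data, but the principle – score the whole path, not a point forecast – is what matters.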

If such a market existed, and were thick, it would have policy relevance. For example, central banks currently employ statistical techniques to measure discount rates and capital adjustment costs. That’s great, but these quantities are hard to measure, leaving us with weak priors on their veracity. By allowing investors to tweak the model framework – a space which will grow large incredibly fast – the market will produce a best guess at the expected future parameters. This would better overcome the Lucas Critique, as predictions from this market are not simple statistical aggregations, but rational expectations.

(Quick sidebar: the market might be illiquid, so it’s clearly not something on which central banks can form definitive rules, but it still aggregates information more efficiently and creates more lasting incentives – it is, as I will explain, best seen as a source for metamodeling).

The market would also tell us the overall confidence investors have in one model over another. While in-sample (past) accuracy would obviously be incorporated into part of the expectation, this would also include more important intangibles that only a whole market can know. Therefore, this prediction market would tell a central banker a) the model expected to perform best and b) the expected parametric estimates for said model.
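To see what (a) and (b) might look like in practice, here is a toy read-out, assuming (my simplification) that each model variant carries the market’s probability of winning its bet, and that the market-implied parameter estimate is just the odds-weighted average across variants; all names and numbers are illustrative:

```python
# Hypothetical market read-out: each entry is a model variant with the
# market's odds (probability it wins its bet) and its parameter set.
market = [
    {"odds": 0.20, "params": {"discount_rate": 0.97, "wage_rigidity": 0.6}},
    {"odds": 0.50, "params": {"discount_rate": 0.99, "wage_rigidity": 0.8}},
    {"odds": 0.30, "params": {"discount_rate": 0.98, "wage_rigidity": 0.7}},
]

# (a) the variant the market expects to perform best...
best = max(market, key=lambda m: m["odds"])

# (b) ...and the market-implied parameter estimates, weighted by odds.
total = sum(m["odds"] for m in market)
implied = {
    name: sum(m["odds"] * m["params"][name] for m in market) / total
    for name in market[0]["params"]
}
print(best["params"]["wage_rigidity"])        # 0.8
print(round(implied["wage_rigidity"], 3))     # 0.73
```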

This has incredible policy relevance, but it also tells economists what the market estimates wage rigidities to be. There is no prediction market to measure this today, as the answer to the question is itself tethered to a host of theoretical and statistical estimates. A prediction market for a complete DSGE, on the other hand, would yield a more fruitful answer. I am not being fully honest. If the market predicts that the parameters for the best model are {a, b, c, d…}, we can only know that it believes a is the closest estimate given the other parameters. Therefore, we can only form a prior on wage rigidity by assuming many other coefficients as well. This is still hugely useful; I can’t think of an economist who wouldn’t want to know these numbers.

There is also the obvious reality that the parameter set for one model will not be the same as another’s. This is tautological, as models have different parameters to begin with (like financial-market frictions in post-crisis DSGEs). More importantly, if investors believe a certain model places incorrect emphasis on a certain parameter (like wage rigidities), it will skew the bet towards said parameter without necessarily revealing anything nontrivial. Therefore the best parameters for the best model are the key. Compare polls versus prediction markets!

The market would then create for us a better model than ever existed, which itself would then enter the marketplace and accept bets. Just like a stock market, it would be in constant ferment, but hopefully show broad underlying trends which themselves may be incorporated into future designs. Modelers might learn that the market doesn’t think price rigidities are important and hence focus on other frictions, allowing a more efficient use of dynamism – analytical methods are, after all, inherently intractable past a point.
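The “tweaker” role described above is the mutation step of this evolution. A minimal sketch of what tweaking an existing model’s parameters might look like – the function name and step size are my own illustrative choices, not a proposed standard:

```python
import random

# A "tweaker" takes an existing model's parameter set and submits a
# perturbed variant to the market. The market's odds then decide which
# mutations survive. Step size is illustrative.

def tweak(params, scale=0.05, rng=random):
    """Return a new parameter set with small random perturbations."""
    return {k: v * (1 + rng.uniform(-scale, scale)) for k, v in params.items()}

parent = {"discount_rate": 0.98, "wage_rigidity": 0.7}
variant = tweak(parent)
# The variant enters the market alongside the parent model; both now
# attract bets, and the better out-of-sample performer draws the odds.
```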

The prediction market would also be open to ABMs and pure statistical estimates. The former are still quite new to economics, and hence underdeveloped, and the latter have been shown to be less accurate than DSGEs in the long run. And to the extent all models endogenize central bank behavior, as they should, the market will be immune to government failure from a surprise change in monetary regime.

I have explained how we can use the market to tell us the right DSGE. But we can also use it to incentivize the creation of better models, microfounded or not. Think about how Nate Silver made election forecast history in 2008 and again in 2012. He relied a little bit on polls (statistical estimates or individual DSGEs) and a bit on Intrade (the prediction market). He decided he knew better than the market as a whole – and way, way better than some idiotic estimates from Rasmussen or Gallup. He profited. Of course, if he did bet on a prediction market, his winnings were limited by the fact that everyone else thought Obama would win as well.

An election forecast is discrete (though it may be analogized in the form of distributions, etc.). The growth rates of productivity, wages, capital, and GDP are not. If I’m a Nate Silver, and I feel the current favorite of the prediction market can be improved, I go home and create a nice little DSGE. I’m also the first one to know about this DSGE, by definition. I form a subjective prior on the veracity of my product, which ipso facto should be higher than the market’s odds on its own pick, upload it to the market, and place a nice, juicy bet at my (relatively) high odds.
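The logic of that bet is just a comparison between my subjective prior and the market’s implied probability. A minimal sketch, assuming (my assumption) a standard binary contract that pays the inverse of the market probability:

```python
# Should a would-be Nate Silver bet on his own model? Compare his
# subjective probability to the market's implied probability. Payout is
# the usual fair binary-contract formula; all numbers are illustrative.

def expected_profit(stake, my_prob, market_prob):
    """Expected profit of backing a model at the market's implied odds."""
    payout = stake / market_prob  # gross payout if the model wins its bet
    return my_prob * (payout - stake) - (1 - my_prob) * stake

# The market gives my new DSGE a 10% chance of winning its bet;
# I, its designer, believe 30%.
print(round(expected_profit(100, my_prob=0.30, market_prob=0.10), 2))  # 200.0
```

When my prior matches the market’s, expected profit is zero; the edge comes entirely from knowing something the market doesn’t – here, the model I just built.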

If it turns out my model does “beat the market”, I’ll earn a healthy sum until other models and the market pick tweak it into the average (weak EMH-ish). Suddenly, model designers across the world are incentivized not just to create DSGEs that will most beautifully grace the pages of the AER or impress friends, but those that will best predict out-of-sample trends. So not only do we have a better aggregation of existing models and parameters, which is itself hugely useful, but we have, without any cost, incentivized people to create a better model. Furthermore, you may not be a model designer but an econometrician. You now have the incentive to gather the best estimates of future parameters and tweak your DSGE of choice accordingly. Clearly such a market will also encourage cleaner and better statistics, as the goal is now accuracy, not publication.

Of course, there is one flaw. Say I don’t have much money of my own to begin with, and it’s certainly not disposable. I might have a beautiful DSGE, but I wouldn’t bet much on it due to risk aversion. There are three reasons why this doesn’t matter much:

  • The variance of wealth and attitudes towards risk will be small among the sample of people who can actually create a better DSGE (largely academics, but almost certainly all upper-middle-class folks).
  • To the extent the work is sponsored by a university, the risk (and reward) will be shared institutionally.
  • At worst I will not be incentivized, and chances are I will be. If I don’t have the confidence in myself to bet on the market, I will just publish my DSGE as I do today, and receive the same credit that I do today. So even at its worst (and this is highly unlikely), this market only better aligns incentives.

I believe if we were to create such a market, with the usual expectations of completeness and competitiveness, the economic prediction landscape will mirror Kuhnian paradigms, just like a real science. This will be so because, shortly after the emergence of such a market, some academic will create a highly successful metamodel (vis-à-vis the market, if not reality). This would run some, presumably complex, algorithm on variables from the world and the prediction market itself, and create a DSGE therefrom. This is a Nate Silver.

Suddenly, someone will discover an even better metamodel, either because it was hitherto undiscovered or because it reflects a changing economic landscape that would have been too complicated to endogenize. This is the paradigm shift, the revolution. Over a long period of time, this market will develop and more closely approximate the efficient market hypothesis. The emergence of complex financial instruments on this market (such as, but not limited to, options on parameter values) will yield estimates of future trends more sophisticated than any individual person could design. Within a paradigm, economic theorists can learn general trends among successful DSGEs (maybe the top 100 DSGEs all have wage stickiness).

We would, quite literally, be evolving models. Economists nod to the difficulty of creating a model whose specification is limited to in-sample data. Prediction markets are the mecca of everything future; everything out-of-sample. Excitingly, this is not limited to economics, but extends to modeling of any kind. While these markets would be most at home in the social sciences, where microfoundations are not well understood, they could play a crucial role in geophysical models of earthquake prediction and climate change. Indeed, there is no reason this model marketplace (“model” the adjective and “model” the noun) be restricted to economics at all.

Brad DeLong calls big finance a big parasite, which it totally is. But there’s no reason it has to be, and this is an example to that effect. If I could, I’d be willing to bet on that. Shame that I can’t.


*At least when predictions are right (see: Austrians)
