Economics by Induction

Noah Smith has a post about why macroeconomics doesn’t work (well):

1.  There are a number […] “heterodox” schools of thought, [which] claim that macro’s relative uselessness is based on an obviously faulty theoretical framework, and that all we have to do to get better macro is to use different kinds of theories – philosophical “praxeology”, or chaotic systems of nonlinear ODEs, etc. I’m not saying those theories are wrong, but you should realize that they are all just alternative theories, not alternative empirics. The weakness of macro empirics means that we’re going to be just as unable to pick between these funky alternatives as we are now unable to pick between various neoclassical DSGE models.

2. Macroeconomists should try to stop overselling their results. Just matching some of the moments of aggregate time series is way too low of a bar. [It is important] when models are rejected by statistical tests […] When models have low out-of-sample forecasting power, that is important. These things should be noted and reported. Plausibility is not good enough. We need to fight against the urge to pretend we understand things that we don’t understand.

3. To get better macro we need better micro. The fact that we haven’t found any “laws of macroeconomics” need not deter us; as many others have noted, with good understanding of the behavior of individual agents, we can simulate hypothetical macroeconomies and try to do economic “weather forecasting”. We can also discard a whole slew of macro theories and models whose assumptions don’t fit the facts of microeconomics. This itself is a very difficult project, but there are a lot of smart decision theorists, game theorists, and experimentalists working on this, so I’m hopeful that we can make some real progress there. (But again, beware of people saying “All we need to do is agent-based modeling.” Without microfoundations we can believe in, any aggregation mechanism will just be garbage-in, garbage-out.)

This led to a very interesting Twitter discussion:

Ashok Rao ‏Personally, I’d frame it that modern theory is fundamentally deductive in nature whereas the macroeconomy is inductive/Bayesian.

Noah Smith ‏I think that’s a wrong way of seeing things. Real science involves an iterative process of induction and deduction.

Ashok Rao ‏But your claim also assumes there’s something “fundamental” about the economy in the sense of a real science. Is there?

Noah Smith Maybe. There’s real science in earthquakes but we can’t predict them at all. 

Ashok Rao ‏Hm. So there are systemic laws. But can these not be “understood” only through induction? As in the economy as machine learning.

Noah Smith Maybe!

Ashok Rao As long as we agree that there is a lot of doubt! 🙂

This conversation is at the very heart of my discomfort with much of modern economics, and I’ve been wanting to blog about this for a while, so now is as good a time as any to dive right into it. Before I go on, I want to clarify that Noah and I seem to have very different understandings of what “inductive” means (or at least should mean):

Ashok Rao ‏Yes but the 3 ‘main’ equilibria frameworks (general, classical game theory, and rational expectations) are all deductive. Right?

Noah Smith ‏No, you can easily make a Walrasian equilibrium happen in a lab, it’s very robust under certain conditions!

Of course, to the extent that empirical creations in the lab or double auctions are inductive, Noah is right. But the macroeconomics behind this is principally deductive. By this I mean that economists have applied mathematics (the major premise) to a set of assumptions (the minor premises) to infer conclusions. Ultimately, the theory is a grand syllogism, and highly deductive in nature. Further, the comparison to earthquakes doesn’t sit well with me. Physicists have very good microfoundations for how the earth works, and those foundations are not in perpetual motion. Scientists might fail at aggregating these bits of knowledge, but economics has a much more inherent flaw.

This is precisely the reason classical game-theoretic approaches work only in “small lab settings” and the Walrasian equilibrium holds only under “certain conditions”. That they hold, granted the right assumptions, is tautological. Indeed, mathematics is internally consistent, and hence in a concocted economy (the double auction) specific deductive models have to hold.

But by induction, I don’t mean experimental confirmation tempered by statistical reasoning. W. Brian Arthur at PARC puts it better than anyone else:

This ongoing materialization of exploratory actions causes an always-present Brownian motion within the economy. The economy is permanently in disruptive motion as agents explore, learn, and adapt. These disruptions, as we will see, can get magnified into larger phenomena. 

If economists want to import one idea from physics, it should be Brownian motion:

One way to model this is to suppose economic agents form individual beliefs (possibly several) or hypotheses—internal models—about the situation they are in and continually update these, which means they constantly adapt or discard and replace the actions or strategies based on these as they explore. They proceed, in other words, by induction.
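Arthur’s mechanism can be sketched in a few lines of Python (my own toy, not his actual model; every name and parameter here is invented): each agent carries a few rival forecasting rules, acts on whichever rule has erred least recently, and continually re-scores its rules as outcomes arrive. The rule-switching alone keeps the price in perpetual, Brownian-looking motion.

```python
import random

random.seed(0)

N_AGENTS, N_RULES, STEPS = 50, 3, 200

# A "rule" is just a pair (bias, weight) predicting the next price from the last.
agents = [[(random.uniform(-1, 1), random.uniform(0, 2)) for _ in range(N_RULES)]
          for _ in range(N_AGENTS)]
errors = [[1.0] * N_RULES for _ in range(N_AGENTS)]  # running forecast errors

price, history = 10.0, []
for t in range(STEPS):
    demands = []
    for a in range(N_AGENTS):
        # Act inductively: use whichever hypothesis has worked best so far.
        best = min(range(N_RULES), key=lambda r: errors[a][r])
        bias, weight = agents[a][best]
        forecast = bias + weight * price
        demands.append(1 if forecast > price else -1)  # buy if a rise is expected
    new_price = max(0.1, price + 0.05 * sum(demands))
    # Re-score every rule against the realized price; agents adapt, never settle.
    for a in range(N_AGENTS):
        for r in range(N_RULES):
            bias, weight = agents[a][r]
            err = abs((bias + weight * price) - new_price)
            errors[a][r] = 0.9 * errors[a][r] + 0.1 * err
    price = new_price
    history.append(price)
```

Notice that no shock is ever injected from outside; the jitter is endogenous, which is exactly Arthur’s point.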

The best way I can describe this idea is as a “Bayesian machine”, if you will. While classical game theory, rational expectations, and competitive (Walrasian) theory might have inductive verification, Arthur is suggesting that the economy is inherently inductive.
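To make the “Bayesian machine” concrete, here is a minimal sketch (mine, not Arthur’s, and the numbers are made up): an agent starts with a badly miscalibrated Beta prior over the probability of a boom and mechanically updates it each quarter as outcomes arrive.

```python
# A toy "Bayesian machine": the agent holds a Beta prior over the probability
# of a boom and applies the conjugate Beta-Binomial update to each quarter.

def update(alpha, beta, boom):
    """One observed quarter: conjugate Beta-Binomial update."""
    return (alpha + 1, beta) if boom else (alpha, beta + 1)

# A badly miscalibrated prior: the agent is near-certain of perpetual boom.
alpha, beta = 9.0, 1.0            # prior mean = 0.9
data = [False] * 8 + [True] * 2   # reality: booms only 20% of the time

for boom in data:
    alpha, beta = update(alpha, beta, boom)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 2))  # prints 0.55
```

The prior is crappy, and ten quarters of data drag it only halfway from 0.9 toward the truth of 0.2; induction is slow and never final.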

The catch here is that for something that is at its heart inductive, there is no deductive verification. This is why many, myself included, are skeptical of the mathematical models that dominate economics: they can neither explain nor verify anything. Often criticized is the unrealistic nature of rational expectations. But in a real economy, I not only know that I’m not rational, I also know that my fellow agents are irrational. This means I have subjective preferences, but also subjective preferences about other people’s subjective preferences. These two degrees of subjectivity make many economic assumptions not just wrong, but impossible. (Think of the epistemological difference between “is not” and “cannot be”.)

This is why I disagree with Noah. While deductive engines work in specific circumstances – equilibrium is a sub-class of non-equilibrium, after all – macroeconomics has failed because the economy is inductive. At every moment there is a constant ferment, a change in attitude and belief. Standard economics holds that we all share one, perfectly rational prior. Induction holds that we all have pretty crappy priors that are constantly updated not only by economic outcomes, but also by political and institutional motion.

Talk to goldbugs (actually, avoid it if you can). They’ll tell you about how they fear a government-Jewish orchestrated New World Order meant to line the pockets of rich bankers at the cost of the worker, by debasing our currency. Every economic indicator tells you they are wrong.

In a deductive model, it is impossible to accommodate such people. If we modify a standard DSGE to tolerate such granularity, it becomes intractable. A computer scientist would think of this as a machine learning problem. While there are a handful of problems for which analytical solutions might work, the driving theme behind modern data mining and machine learning projects, even ones as simple as classification, is the flexibility of statistical computer science.
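To illustrate the machine-learning framing (a generic sketch, not tied to any actual DSGE or dataset; the agent features and names are invented): simulate a population that includes dogmatic goldbugs who never update their beliefs, and let a plain online perceptron learn to spot them from behavior alone, with no closed-form solution anywhere.

```python
import random

random.seed(1)

def simulate_agent(goldbug):
    """Return (features, label): [bias term, belief flips, share of wealth in gold]."""
    flips = 0.0 if goldbug else random.randint(3, 10) / 10  # goldbugs never update
    gold = random.uniform(0.6, 1.0) if goldbug else random.uniform(0.0, 0.3)
    return [1.0, flips, gold], 1 if goldbug else -1

data = [simulate_agent(i % 4 == 0) for i in range(200)]  # 1 in 4 is a goldbug

# A plain online perceptron: mistake-driven, no closed form anywhere.
w = [0.0, 0.0, 0.0]
for _ in range(120):                             # online passes over the data
    for x, y in data:
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y * score <= 0:                       # update only on mistakes
            w = [wi + y * xi for wi, xi in zip(w, x)]

accuracy = sum(
    y * sum(wi * xi for wi, xi in zip(w, x)) > 0 for x, y in data
) / len(data)
```

The learner doesn’t need the heterogeneity to be tractable, only observable; that flexibility is exactly what analytical models give up.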

But the problem with induction is that, well, it’s not deduction. A well-formed syllogism guarantees its inference, much as the sum of two and two has to be four. Induction, on the other hand, is fuzzy and unclear. You can’t prove sweeping laws and ideas with inductive reasoning, as Karl Popper brilliantly argued. Indeed, inductive thinking is fragile against “black swan” events.

These aren’t the real limitations, though. In the economist’s imagination, theory trumps empirics. For the same reason, running large simulations on supercomputers is hardly as appealing to Walrasian economists as theorizing. Proving things is really fun (if you’re smart enough).

But just as natural evolution doesn’t lend itself to equilibrium analysis, economists should not believe that the fundamental structure of the economy is static. It is constantly reborn in updated preferences, political upheaval, and institutional ferment. Human minds and mathematics can never model this. But a supercomputer might help.

  1. Mahesh Sreekandath said:

    Something which forms the crux of what you are discussing here.
    I feel it’s irrelevant whether the agents within a complex system are deductive or inductive or both; an open system poses insurmountable roadblocks in defining a premise that accommodates all the variables, their values, and their impacts.

    • arra95 said:

      Very interesting. Some of the older economists (notably Adam Smith) thought about this a lot more than we do today. But I think your point is precisely the reason we need inductive (read: computational) approaches.

  2. BSEconomist said:

    I think that this is just an extremely confused post. Noah’s point is that, at the end of the day, economics may just never have the data it needs. There is no way for me to read this post (while giving you the benefit of the doubt, that you aren’t just spewing nonsense) without coming away with the conclusion that you are saying the same thing.

    First, let’s start with something important. From a philosophy of science perspective (from which Noah is coming from), economics is neither inductive nor deductive. Deduction–the derivation of knowledge from a set of axioms–doesn’t supply new insights, rather it can only explore, umm… deductively, propositions which are consistent with those axioms. To derive new knowledge–propositions which are not simply logical implications of a known set of axioms–requires inductive or abductive logic. If you are doing mathematics and you make an induction argument as part of a proof, then you are doing deductive logic. In science, if you decide to call a relationship/theory a law–because it has never failed an empirical test, for example–then you are doing induction. Or if you reject a theory (based on empirical tests, for example) and replace that theory with a new one, you are doing deduction (eliminating a theory) then abduction (picking a new theory out of a hat, hopefully one consistent with known facts).

    Economists do all these things. From this perspective, Econ is all of these things (inductive, deductive and abductive). To the extent that some people (in my opinion wrongly) argue that Econ is deductive, what they usually mean is something very close to what you (seem) to be/are arguing: human actors have minds of their own, they learn and they have, for example, pro-social preferences which are unknowable, or at least unmeasurable. That many people think this IS the case is specifically what the Lucas critique addresses: if we were to rely only on inductive knowledge of the economy, people will adjust their behavior so that we can’t exploit the relationship and likely in ways that we couldn’t predict, since we can’t get in their heads.

    As an aside, this is specifically where I am being generous interpreting your post, because it is very much not true either that this dynamic cannot be described “deductively” and specifically as an equilibrium process or that economists haven’t tried already–if you’re curious you should familiarize yourself with “self-confirming” or “correlated” equilibria.

    So, putting that aside, I read you (generously) as saying that your preferred reading of the economy should be called “inductive”, even though most economists, reasoning roughly from the same place, would say that we must rely more on deductive logic in our exploration of economic theory. To put this another way, you are using the word “inductive” to describe a class of theories (perhaps better described as Bayesian)–which is to say, a set of axioms–and suggesting that there is no “deduction” possible from these. Huh? Even chaos/turbulence/tipping points can only be understood in the context of equilibrium theories (deductive, since all theories are deductive, that is, derived from a set of axioms)–and as an aside, there are many, many, many more than three kinds of equilibrium in economics, it depends on the axioms! Deduction is just a matter of logic–you have made a set of claims, and there exist certain propositions which are consistent with those. I’m left to conclude that you are using the words “deduction” and “induction” differently than Noah–i.e. differently than the philosophy of science meaning.

    Again, being as generous as possible with what (I can only guess) you are trying to say, the best I can come up with is that you mean that your preferred theory is weak on predictable observables. And maybe that’s even right. The problem, as I said at the beginning, is that this is exactly Noah’s point. The data sucks and at best you are arguing with Noah about the degree to which this is true, with you taking the position, if I read you right, that perhaps some of the data which would be needed for a predictive theory is unknowable. The rest of the argument here seems to me to be one kind of confusion or another.

    Now that I’ve narrowed down the disagreement to something substantive, rather than philosophical, let me add my two cents, briefly, before I go. I suspect that (my reading of) your position is at least partly correct, but I think it doesn’t prove what you want to say it does. A good example being Krugman’s point that the (very simple) IS-LM theory actually did make some substantive predictions that have largely been borne out (for example, that interest rates would fall during the great recession, Europe’s economy would continue to shrink because of austerity)–but simple theories like IS-LM don’t make very many predictions! The complexity of the economy could mean that we are never able to make predictions, except under very broad conditions.

    • Mahesh Sreekandath said:

      This is quite in sync with what Hayek discusses in that paper, and exactly what I wanted to convey!

    • arra95 said:

      “There is no way for me to read this post (while giving you the benefit of the doubt, that you aren’t just spewing nonsense) without coming away with the conclusion that you are saying the same thing.” That’s because we are 🙂 There was a little argument on Twitter but, largely, I agree with what he says.

      “From a philosophy of science perspective”. This is precisely the perspective from which I’m not writing this. I use induction in the sense that I do here because a) it’s established within the complexity science community and b) few words capture the concept better.

      “I’m left to conclude that you are using the words “deduction” and “induction” differently that Noah–i.e. differently than the philosophy of science meaning.” Again, yes, exactly. I noted this in the post that I mean induction in a Bayesian sense. And you keep talking about how my point is similar to Noah’s and, as I’ve noted, this is broadly true.

      This post was meant to expand on the idea that Walrasian equilibria working in the lab are not inductive in the sense I meant (the “philosophy of science” sense). You’re just, maybe more lucidly, stating this point.

      Now, basically your whole point in the first six paragraphs was that either I don’t know what I’m talking about or that I actually agree with Noah. The latter is definitely true, and I’m willing to cede the former. Now to answer the part where you say something of substance.

      Yep, I just read Krugman’s post and was thinking of precisely this. IS-LM is handcuffed principally by its inability to make a broad set of actionable predictions. As Krugman says, its value derives from its precision during times of crisis, unlike more complex theories.

      My greater concern is with the overwhelming attitude that mathematical precision and sophistication are ipso facto better. IS-LM’s math is a breeze, but some other models’ is not, and is pretty fruitless in outcome. All I want is for us to consider agent-based modelling more seriously. The Fed, and America in general, is behind on this; the ECB has made some cool progress with its EURACE project, but even there not enough money is spent.

      Plus, ABMs are the most realistic answer to the Lucas critique. While the DSGE is built from microfoundations, unlike older Keynesian models, it is limited by analytical intractability. ABMs would be built entirely on individual reactions and preferences, aggregated in a meaningful, if fuzzy, manner.

      • BSEconomist said:

        I’m glad to know that we’re on the same page, with the exception of terminology. I think your view is much more reasonable than those people who argue that economics must be “deductive” because of the Lucas critique. To be honest, the reason I wasn’t sure whether or not you were talking nonsense was that your “inductive” framing, far from suggesting computational techniques (although other parts of your post did suggest this), sounds as if you meant it in the same way that the “deductive” people mean what they say. And most of them definitely don’t understand (or at least don’t care) about scientific methodology.

        When I hear someone say that “Economics is fundamentally deductive”, I hear “Economics is bullpucky”; so by analogy, when I hear “Economics is fundamentally inductive”, I hear “Economics is impossible”. I don’t accept either.

  3. Mahesh Sreekandath said:

    Agent-based modelling relies on rules and patterns identified from statistics, which brings us back to Hayek.

    “Statistics, however, deals with the problem of large numbers essentially by eliminating complexity and deliberately treating the individual elements which it counts as if they were not systematically connected. It avoids the problem of complexity by substituting for the information on the individual elements information on the frequency with which their different properties occur in classes of such elements, and it deliberately disregards the fact that the relative position of the different elements in a structure may matter. In other words, it proceeds on the assumption that information on the numerical frequencies of the different elements of a collective is enough to explain the phenomena and that no information is required on the manner in which the elements are related. The statistical method is therefore of use only where we either deliberately ignore, or are ignorant of, the relations between the individual elements with different attributes, i.e., where we ignore or are ignorant of any structure into which they are organized. Statistics in such situations enables us to regain simplicity and to make the task manageable by substituting a single attribute for the unascertainable individual attributes in the collective. It is, however, for this reason irrelevant to the solution of problems in which it is the relations between individual elements with different attributes which matters.” (Chapter 4, The Theory Of Complex Phenomena by FA Hayek)

    • I’m not sure if you mean this as a defense or a criticism of ABM. The Hayek quote is both a criticism of statistics as a means of knowing things, and an ode to the importance of understanding interactions. Your opening sentence highlights the role of statistics in building ABMs, and thus would seem to cast them in a negative light. But to me the essence of ABM is that it can teach you things about the effects of interactions.

      It seems to me the hardest problem in ABM is to create a world that can evolve. In biological evolution, the instructions (the genes) evolve along with the reader of the instructions (also built by genes, and governed by the underlying properties of the materials involved). In economic evolution, the individual behaviors and the society in which they play out are evolving together, and the society is made up of the individual behaviors.

      In ABM, in contrast, there seems to be an irreducible minimum, where the modeler has to set up the reader, rather than letting it evolve. That is, the modeler has to establish some limits on what kinds of things can evolve.

      But even with that limitation, it seems like a useful way to test hypotheses about the interaction between specific _sets_ of behaviors and the social structures in which they play out.

      • Mahesh Sreekandath said:

        It seems like statistics limits our ability to comprehend the evolutionary attributes of agents. When we are unable to ascertain all the variables involved in a complex phenomenon, how much practical value can a hypothesis add?

        Within a social order each individual is an open system, and the factors influencing their behavior can only be partially accounted for within the rules of the model. I am not dismissing or defending ABM; this is a genuine question I have in light of Hayek’s paper.

        I’ve been trying to get my hands on Philip Mirowski’s Machine Dreams for some time now; maybe that will explain how economics evolved along an ABM-like road map?

  4. Nathanael said:

    “In a deductive model, it is impossible to accommodate for such people. ”

    Well, actually, you *can*, *sort of*. It requires a very different sort of model than the sort of model which is used by economists, however. Some of the models which I’ve occasionally seen used by political scientists (“Assume we have five types of countries, each with the following behaviors… these are the types we’ve seen in the wild… type one is obsessed with national honor, type two is obsessed with defense, type three is aggressive and expansionist, type four is interested in trade…”) feel very deductive, but they’re based on the assumption that we have individuals of several observed-in-the-wild types.

    So you *could* have a microfounded model… if you had any microeconomics to found it on.

    I agree with the statement that you can’t have a microfounded model unless you have a decent microeconomics.

    And current microeconomics is basically garbage. It doesn’t even capture the major microeconomic effects (publicity, product differentiation, reputation). I’ve described microeconomics as we know it as “a model of coal and steel”, because it’s really terrible at describing anything else. There is some research being done which may eventually develop a more useful microeconomics, but it’s in its *infancy*. It won’t be ready to use for macroeconomic purposes for *decades*, if *ever*.

  5. ‘If economists want to import one idea from physics, it should be Brownian motion’

    They already have; the EMH.

  6. stearm74 said:

    If someone believes that DSGE models and, in general, modern economics are not ‘purely’ deductive, something went really wrong with the teaching of economics in the last 80 years.

    History of Economics: From J.S. Mill to Schumpeter, all the greatest economists answered this question, and the answer was always: deductive!
    Philosophy: ‘normal’ science is post-Popperian, that is, moderate positivism.
    Sociology: Talcott Parsons and analytical realism.

    A good economist must have a broad knowledge of these three disciplines and how they have already solved the epistemological nature of ‘normal’ science, to which economics belongs.
