What is the cost of ‘tractable’ economic models?

Edit September 2021. I have organized my thoughts on tractability into a short paper available here.

Edit: here’s a follow-up post in which I clarify my definition of ‘tractability’ and my priors on this topic.

Economists study cycles, but they also create some. Every other month, a British heterodox economist explains why economics is broken, and other British economists respond that the critic doesn’t understand what economists really do (there’s even a dedicated hashtag). The anti and pro arguments have been more or less the same for the past 10 years. This week, the accuser is Howard Reed and the defender Diane Coyle. It would be business as usual were it not for interesting comments along the same lines by Econocracy coauthor Cahal Moran at openDemocracy and by Jo Michell on Twitter. What matters, they argue, is not what economists do but how they do it. The problem is not whether some economists deal with money, financial instability, inequality or gender, but how their dominant modeling strategies allow them to take these issues into account, or rather, they argue, constrain them to leave these crucial issues out of their analysis. In other words, the social phenomena economists choose to study and the questions they choose to ask, which have come under fire since the crisis, are in fact determined by the method they choose to wield. Here lies the culprit: how economists write their models, how they validate their hypotheses empirically, and what they accept as a good proof are too monolithic.

One reason I find this an interesting angle is that I read the history of economics over the past 70 years as moving from a mainstream defined by theories to a mainstream defined by models (that is, tools aimed at fitting theories to reality, thus involving methodological choices), and eventually to a mainstream defined by methods. Some historians of economics argue that the neoclassical core has fragmented so much, because of the rise of behavioral and complexity economics among others, that we have now entered a post-mainstream state. I disagree. If “mainstream” economics is what gets published in the top-5 journals, maybe you don’t need a representative agent, DSGE or strict intertemporal maximization anymore. What the gatekeepers will scrutinize instead, these days, is how you derive your proof, and whether your research design, in particular your identification strategy, is acceptable. Whether this is a good evolution or not is not something for me to judge, but the costs and benefits of methodological orthodoxy become an important question.

Another reason why the method angle warrants further consideration is that a major fault line in current debates is how much economists should sacrifice to get ‘tractable’ models. I have long mulled over a related question, namely how much ‘tractability’ has shaped economics in the past decades. In a 2008 short paper, Xavier Gabaix and David Laibson list seven properties of good models: parsimony, tractability, conceptual insightfulness, generalizability, falsifiability, empirical consistency, and predictive precision. They don’t rank them, and their conscious and unconscious ranking has probably shifted considerably over time. But while tractability has probably never ranked highest, I believe the unconscious hunt for tractable models may have thoroughly shaped economics. I have hitherto failed to find an appropriate strategy to investigate the influence of ‘tractability,’ but I think no fruitful discussion of the current state of economics can be had without answering this question. Let me give you an example:

While the paternity of the theoretical apparatus underlying the new neoclassical synthesis in macro is contested, there is wide agreement that the methodological framework was largely architected by Robert Lucas. What is debated is to what extent Lucas’s choices were intellectual or ideological. Alan Blinder hinted at a mix of both when he commented in 1988 that “the ascendancy of new classicism in academia was… the triumph of a priori theorizing over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism.” Recent commentators like Paul Romer or Brad DeLong are not just trying to assess Lucas’s results and legacy, but also his intentions. Yet the true reasons behind modeling choices are hard to pin down. Bringing in a representative agent meant forgoing the possibility of tackling inequality, redistribution and justice concerns. Was it deliberate? How much does this choice owe to tractability? What macroeconomists were chasing, in those years, was a renewed explanation of the business cycle. They were trying to write microfounded and dynamic models, and building on intertemporal maximization therefore seemed a good path to travel, Michel de Vroey explains.

The origins of some of the bolts and pipes Lucas put together are now well known. Judy Klein has explained how Bellman’s dynamic programming was quickly implemented at Carnegie’s GSIA. Antonella Rancan has shown that Carnegie was also where Simon, Modigliani, Holt and Muth were debating how agents’ expectations should be modeled. Expectations were also the topic of a conference organized by Phelps, who came up with the idea of capturing imperfect information by modeling agents on islands. But Lucas, like Sargent and others, also insisted that the ability of these models to imitate real-world fluctuations be tested, as their purpose was to formulate policy recommendations: “our task…is to write a FORTRAN program that will accept specific economic policy rules as “input” and will generate as “output” statistics describing the operating characteristics of time series we care about,” he wrote in 1980. Rational expectations imposed cross-equation restrictions, yet estimating these new models substantially raised the computing burden. Assuming a representative agent mitigated computational demands and allowed macroeconomists to sidestep general equilibrium aggregation issues: it made new-classical models analytically and computationally tractable. So did quadratic linear decision rules: “only a few other functional forms for agents’ objective functions in dynamic stochastic optimum problems have this same necessary analytical tractability. Computer technology in the foreseeable future seems to require working with such a class of functions,” Lucas and Sargent conceded in 1978.
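To see why the quadratic-linear setup was so prized, here is a minimal textbook-style sketch of a discounted stochastic linear-quadratic problem (my own illustration of the standard formulation, not a reproduction of Lucas and Sargent’s models). The agent minimizes a discounted quadratic loss subject to a linear law of motion with mean-zero shocks \( \varepsilon \), so the Bellman equation reads

\[
V(x) \;=\; \min_{u}\;\Big\{\, x' Q x \;+\; u' R u \;+\; \beta\,\mathbb{E}\big[\,V(Ax + Bu + \varepsilon)\,\big] \Big\}.
\]

The value function is itself quadratic, \( V(x) = x'Px + c \), with \( P \) solving a matrix Riccati equation, so the optimal decision rule comes out in closed form and is linear in the state:

\[
u \;=\; -Fx, \qquad F \;=\; \beta\,\big(R + \beta B'PB\big)^{-1} B'PA .
\]

By certainty equivalence, \( F \) does not depend on the distribution of the shocks: the stochastic problem is no harder to solve than the deterministic one, which is the ‘necessary analytical tractability’ the quote points to.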

Was tractability the main reason why Lucas embraced the representative agent (and market clearing)? Or could he have improved tractability through alternative hypotheses, leading to opposed policy conclusions? I have no idea. More important, and more difficult to track, is the role played by tractability in the spread of Lucas’s modeling strategies. Some macroeconomists may have endorsed the new class of Lucas-critique-proof models because they liked its policy conclusions. Others may have retained some hypotheses, then some simplifications, “because it makes the model tractable.” And while the limits of simplifying assumptions are often emphasized by those who propose them, as these assumptions spread, the caveats are forgotten. Tractability restricts the range of accepted models and prevents economists from discussing some social issues, and with time, from even “seeing” them. Tractability ‘filters’ economists’ reality. My question is not restricted to macro. Equally important is to understand why James Mirrlees and Peter Diamond chose to reinvestigate optimal taxation in a general equilibrium setting with a representative agent (here, the genealogy harks back to Ramsey), whether this modeling strategy spread because it was tractable, and what the consequences for public economics were. The aggregate effect of “looking for tractable models” is unknown, and yet it is crucial for understanding the current state of economics.

Comments

  1. Intertemporal maximization and rational expectations equilibrium each make models less tractable. If an economist has committed – for reasons other than tractability – to write REE models with dynamic optimizing agents, these models may be so complicated that it is essential to make other assumptions, such as the existence of a representative agent, on the grounds of tractability. But if tractability were the primary concern, the economist would probably prefer to write models in which agents have adaptive expectations and/or solve static problems (with assets in the utility function, etc.). Economists don’t generally do this, and in fact spent a long time developing technical methods to solve dynamic optimization problems/find complicated fixed points, which would not have been necessary if they had worked with more tractable static-optimization/non-optimizing/non-equilibrium models. (Even now that macro has largely moved to computational methods rather than pen and paper analytics, purely on grounds of tractability it would be much easier to solve agent-based models which have no fixed points, but the literature prefers to solve more computationally demanding REE models.) This suggests to me that tractability is a secondary concern compared to economists’ a priori commitment to general equilibrium, optimizing behavior, environments in which the welfare theorems hold, etc.

    It’s logically possible to endorse the rest of the Lucas project (rational expectations, general equilibrium) without endorsing the simplifying assumptions that facilitate tractability (time-separable utility, representative agent, etc.). An interesting and rare example is Fischer Black. In the introduction to ‘Exploring General Equilibrium’ he says that most models in the literature “are narrower, not for carefully-spelled-out economic reasons, but for reasons of convenience. I don’t know what to do with models like that, especially when the designer says he imposed restrictions to simplify the model or to make it more likely that conventional data will lead us to reject it…The restrictions usually strike me as extreme. When we reject a restricted version of the general equilibrium model, we are not rejecting the general equilibrium model itself.” It’s interesting to me that this view is so rarely expressed (I’m certainly not endorsing it myself). This suggests that another role of ‘simplifying assumptions’ is not just that they make models tractable, but that the combination of equilibrium + simplifying assumptions is much easier to falsify than the assumption of equilibrium. So the simplifying assumptions facilitate research programs/cottage industries in which a simple model failing to match the data is a ‘puzzle’, people write slightly less simple models, etc.

    1. But I never said that tractability was the primary driver of economists’ modeling choices. I agree with you that commitments to modeling strategies that either reflect key features of the world or allow prediction or policy evaluation matter most (as I say more explicitly in this follow-up: https://beatricecherrier.wordpress.com/2018/04/20/how-tractability-has-shaped-economic-knowledge-a-follow-up/). My point was that, precisely because tractability assumptions are secondary, they are never examined, and yet their aggregation might inadvertently play a role in shaping economic knowledge.

      The point about using simplifying assumptions for falsifiability is well-taken, and very useful. I hadn’t thought about that. Thank you.

  2. I completely agree with this, and especially with your description of “tractability standards” in the follow-up post. I guess I was responding to the narrower question of how far the choice of a representative agent, market clearing, etc. was driven by tractability rather than other concerns. My take would be that other concerns (esp. a commitment to optimizing models and equilibrium) are the fundamental drivers, as evidenced by the fact that dynamic optimizing and/or equilibrium models have been adopted even when they’re less tractable than the alternatives – with the result that the models have to be made simpler along other dimensions, e.g. rep agent, as you point out. (Whatever the other virtues of rational expectations, equilibrium, dynamic optimization, etc. might be, they are certainly not *simplifying* assumptions.) But this doesn’t mean that the tractability assumptions that *are* made don’t have longer term effects (which, on re-reading, I see was the main point of your post).

  3. “What the gatekeepers will scrutinize instead, these days, is how you derive your proof, and whether your research design, in particular your identification strategy, is acceptable”

    Wow, that is simply not true. Well, if it is true, such gatekeepers are doing an awful job…
