Edit (September 2021): I have organized my thoughts on tractability into a short paper available here.
Edit: here’s a follow-up post in which I clarify my definition of ‘tractability’ and my priors on this topic.
Economists study cycles, but they also create some. Every other month, a British heterodox economist explains why economics is broken, and other British economists respond that the critic doesn’t understand what economists really do (there’s even a dedicated hashtag). The anti and pro arguments have been more or less the same for the past 10 years. This week, the accuser is Howard Reed and the defender, Diane Coyle. It would be business as usual were it not for interesting comments by Econocracy coauthor Cahal Moran at Opendemocracy and by Jo Michell on twitter along the same lines. What matters, they argue, is not what economists do but how they do it. The problem is not whether some economists deal with money, financial instability, inequality or gender, but whether their dominant modeling strategies allow them to take these topics into account or rather, they argue, constrain them to leave these crucial issues out of their analysis. In other words, the social phenomena economists choose to study and the questions they choose to ask, which have come under fire since the crisis, are in fact determined by the method they choose to wield. Here lies the culprit: how economists write their models, how they validate their hypotheses empirically, and what they regard as a good proof are too monolithic.
One reason I find this an interesting angle is that I read the history of economics in the past 70 years as moving from a mainstream defined by theories to a mainstream defined by models (that is, tools aimed at fitting theories to reality, thus involving methodological choices), and eventually to a mainstream defined by methods. Some historians of economics argue that the neoclassical core has fragmented so much, with the rise of behavioral and complexity economics among others, that we have now entered a post-mainstream state. I disagree. If “mainstream” economics is what gets published in the top-5 journals, maybe you don’t need a representative agent, DSGE, or strict intertemporal maximization anymore. What the gatekeepers will scrutinize instead, these days, is how you derive your proof, and whether your research design, in particular your identification strategy, is acceptable. Whether this is a good evolution is not for me to judge, but the costs and benefits of methodological orthodoxy become an important question.
Another reason why the method angle warrants further consideration is that a major fault line in current debates is how much economists should sacrifice to get ‘tractable’ models. I have long mulled over a related question, namely how much ‘tractability’ has shaped economics in the past decades. In a 2008 short paper, Xavier Gabaix and David Laibson list seven properties of good models: parsimony, tractability, conceptual insightfulness, generalizability, falsifiability, empirical consistency, and predictive precision. They don’t rank them, and economists’ conscious and unconscious rankings have probably evolved sharply over time. But while tractability has probably never ranked highest, I believe the unconscious hunt for tractable models may have thoroughly shaped economics. I have hitherto failed to find an appropriate strategy to investigate the influence of ‘tractability,’ but I think no fruitful discussion of the current state of economics can be carried on without answering this question. Let me give you an example:
While the paternity of the theoretical apparatus underlying the new neoclassical synthesis in macro is contested, there is wide agreement that the methodological framework was largely architected by Robert Lucas. What is debated is to what extent Lucas’s choices were intellectual or ideological. Alan Blinder hinted at a mix of both when he commented in 1988 that “the ascendancy of new classicism in academia was… the triumph of a priori theorizing over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism.” Recent commentators like Paul Romer or Brad DeLong are not just trying to assess Lucas’s results and legacy, but also his intentions. Yet the true reasons behind modeling choices are hard to pin down. Bringing in a representative agent meant forgoing the possibility of tackling inequality, redistribution, and justice concerns. Was it deliberate? How much does this choice owe to tractability? What macroeconomists were chasing in those years was a renewed explanation of the business cycle. They were trying to write microfounded and dynamic models, and building on intertemporal maximization therefore seemed a good path to travel, Michel de Vroey explains.
The origins of some of the bolts and pipes Lucas put together are now well known. Judy Klein has explained how Bellman’s dynamic programming was quickly implemented at Carnegie’s GSIA. Antonella Rancan has shown that Carnegie was also where Simon, Modigliani, Holt and Muth were debating how agents’ expectations should be modeled. Expectations were also the topic of a conference organized by Phelps, who came up with the idea of capturing imperfect information by modeling agents on islands. But Lucas, like Sargent and others, also insisted that the ability of these models to imitate real-world fluctuations be tested, as their purpose was to formulate policy recommendations: “our task…is to write a FORTRAN program that will accept specific economic policy rules as “input” and will generate as “output” statistics describing the operating characteristics of time series we care about,” he wrote in 1980. Rational expectations imposed cross-equation restrictions, yet estimating these new models substantially raised the computing burden. Assuming a representative agent mitigated computational demands and allowed macroeconomists to sidestep general-equilibrium aggregation issues: it made new-classical models analytically and computationally tractable. So did linear-quadratic decision rules: “only a few other functional forms for agents’ objective functions in dynamic stochastic optimum problems have this same necessary analytical tractability. Computer technology in the foreseeable future seems to require working with such a class of functions,” Lucas and Sargent conceded in 1978.
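To make the tractability point concrete, here is the standard textbook form of the linear-quadratic problem Lucas and Sargent had in mind (my own illustrative sketch of the general technique, not an equation taken from their paper). An agent chooses controls to minimize a discounted quadratic loss subject to a linear law of motion:

```latex
% Discounted linear-quadratic control problem (textbook form):
\min_{\{u_t\}} \; \mathbb{E} \sum_{t=0}^{\infty} \beta^t
  \left( x_t' R x_t + u_t' Q u_t \right)
\quad \text{s.t.} \quad
x_{t+1} = A x_t + B u_t + \varepsilon_{t+1}.

% The optimal decision rule is linear in the state,
u_t = -F x_t,
\qquad
F = \beta \left( Q + \beta B' P B \right)^{-1} B' P A,

% where P solves the algebraic matrix Riccati equation
P = R + \beta A' P A
  - \beta^2 A' P B \left( Q + \beta B' P B \right)^{-1} B' P A.
```

The appeal is easy to see: the shocks $\varepsilon_{t+1}$ drop out of $F$ entirely (certainty equivalence), and $P$ can be found by straightforward iteration on the Riccati equation, so the whole model reduces to linear difference equations that the computers of the late 1970s could handle. Outside this class, closed-form decision rules are the exception, which is presumably the “necessary analytical tractability” the quote refers to.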
Was tractability the main reason why Lucas embraced the representative agent (and market clearing)? Or could he have improved tractability through alternative hypotheses, leading to opposite policy conclusions? I have no idea. More important, and more difficult to track, is the role played by tractability in the spread of Lucas’s modeling strategies. Some macroeconomists may have endorsed the new class of Lucas-critique-proof models because they liked its policy conclusions. Others may have retained some hypotheses, then some simplifications, “because it makes the model tractable.” And while the limits of simplifying assumptions are often emphasized by those who propose them, as they spread, the caveats are forgotten. Tractability restricts the range of accepted models and prevents economists from discussing some social issues, and with time, from even “seeing” them. Tractability ‘filters’ economists’ reality. My question is not restricted to macro. Equally important is to understand why James Mirrlees and Peter Diamond chose to reinvestigate optimal taxation in a general-equilibrium-with-representative-agent setting (here, the genealogy harks back to Ramsey), whether this modeling strategy spread because it was tractable, and what the consequences for public economics were. The aggregate effect of “looking for tractable models” is unknown, and yet it is crucial to understanding the current state of economics.