How ‘tractability’ has shaped economic knowledge: a few conjectures

Edit September 2021: I have organized my thoughts on tractability into a short paper, available here.

Yesterday, I blogged about a question I have mulled over for years: what is the aggregate consequence, for the knowledge economists collectively produce, of the hundreds of thousands of sentences they routinely write along the lines of “I model the government’s objective function with a well-behaved X social welfare function because canonical paper Y does it,” “I assume a representative agent to make the model tractable,” or “I log-linearize to make the model tractable”? Today I had several interesting exchanges (see here and here) which helped me clarify and spell out my object and hypotheses. Here is an edited version of some points I made on Twitter. These are not the conclusions of some analysis, but the priors with which I approach the question, and which I’d like to test.

1. A ‘tractable’ model is one that you can solve, which means there are several types of tractability: analytical tractability (finding a solution to a theoretical model), empirical tractability (being able to estimate or calibrate your model), and computational tractability (finding numerical solutions). It is sometimes hard to discriminate between analytical and empirical, or empirical and computational, tractability.

(Note: if you want a definition of ‘model,’ read the typology in the Palgrave Dictionary article by Mary Morgan. If you want to see the typology in action, read this superb paper by Verena Haslmayer on the seven ways Robert Solow conceived economic models.)

2. Economists don’t merely make modeling choices because they believe these are key to imitating the world, to predicting correctly, or to producing useful policy recommendations, but also because they otherwise can’t manipulate their models. My previous post reflected on how Robert Lucas’s reconstruction of macroeconomics has been interpreted. While he believed microfoundations to be a fundamental condition for a model to generate good policy evaluation, he certainly didn’t think a representative-agent assumption would make macro models better. No economist does. The assumption is meant to avoid aggregation issues. Nor does postulating linearity or normality usually reflect how economists see the world. What I’d like to capture is the effect of those choices economists make “for convenience,” to be able to reach solutions, to simplify, to ease their work, in short, to make a model tractable. While those assumptions are conventional and meant to be lifted as mathematical, theoretical and empirical skills and technology (hardware and software) ‘progress,’ their underlying rationale is often lost as they are taken up by other researchers, spread, and become standard (implicit in the last sentence is the idea that what counts as a tractable model evolves as new techniques and technologies are brought in).
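To see what one such convenience choice buys and throws away, here is a minimal textbook sketch (my illustration, not a claim about any particular paper) of what “I log-linearize to make the model tractable” does to a standard consumption Euler equation:

```latex
% Nonlinear Euler equation, with \sigma the coefficient of
% relative risk aversion:
C_t^{-\sigma} = \beta \, \mathbb{E}_t\!\left[ C_{t+1}^{-\sigma} R_{t+1} \right]
% Let \hat{x}_t = \ln(X_t / X^{\ast}) denote log-deviations from a
% steady state in which \beta R^{\ast} = 1. A first-order Taylor
% expansion around that steady state yields the linear equation
\hat{c}_t = \mathbb{E}_t\,\hat{c}_{t+1} - \frac{1}{\sigma}\,\mathbb{E}_t\,\hat{r}_{t+1}
% which is easy to solve and estimate -- but every higher-order term,
% including all risk and precautionary-saving effects, has been
% discarded "for convenience."
```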

3. An interesting phenomenon, then, is “tractability standards” (Noah Smith suggested calling it “tractability + canonization,” but canonization implies a reflexive endorsement, while standardization conveys the idea that these modeling choices spread without specific attention, that they seem mundane – yes, I’m nitpicking). Tractability standards have been more or less stringent over time, and they haven’t followed a linear pattern whereby constraints are gradually relaxed, allowing economists to design and manipulate more complex (and richer) models decade after decade. My prior on the recent history of tractability in macroeconomics, for instance, is something like this (beware, wishful thinking ahead):

Between the late 1930s and the 1970s, economists started building large-scale macroeconometric models, and the number of equations and the diversity of theories combined soon swelled out of control. This meant that finding analytical solutions, and estimating and simulating those models, was hell (especially when all you had was 20 hours of IBM 360 time a week). But there was neither a preferred nor a prohibited way to make a model tractable. Whatever solution you could come up with was fine: you could just take a whole block of equations out if it was messing up your simulation (like the wage equations and labor market block in the case of the MPS model). You could devise a mix of two-stage least squares, limited-information maximum likelihood and instrumental-variable techniques and run recursive block estimation (as Frank Fisher did). You could pick up your phone and ask Bayesian econometrician Arnold Zellner to devise a new set of tests for you (as Modigliani and Ando did).

With Lucas, Sargent, Kydland and Prescott came new types of models, and thus new analytical, empirical and computational challenges. But this time, “tractability standards” spread alongside the models (I’m not sure by whom, or how coordinated or intentional it was). If you wanted to publish a macro paper in a top journal in the 1980s, you were not allowed to take whatever action you wished to make your model tractable. Representative agents and market clearing were fine; non-microfounded simple models were not. Linearizing was okay, but finding solutions through numerical approximation wasn’t generally seen as satisfactory: closed-form solutions were preferred (the sketch below contrasts the two routes on a textbook example). The same was true in some micro fields like public economics, where representative agents and well-behaved social welfare functions made general-equilibrium optimal-taxation models “tractable.” That these assumptions were meant to be lifted later was forgotten; models became standardized, and so did research questions. How inequality evolved wasn’t a question you could answer anymore, and maybe inequality wasn’t something you could even “see.”
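To make the closed-form vs. numerical divide concrete, here is a minimal sketch (mine, with purely illustrative parameter values) using the textbook Brock–Mirman growth model, log utility with full depreciation, one of the rare dynamic models where the exact analytical policy and a brute-force numerical solution can be placed side by side:

```python
# Brock-Mirman planner's problem: max sum_t beta^t ln(c_t)
# subject to k' = A * k**alpha - c. Its exact closed-form policy is
# k' = alpha * beta * A * k**alpha; value function iteration recovers
# the same answer numerically, up to grid error.
import numpy as np

alpha, beta, A = 0.3, 0.95, 1.0          # illustrative parameters
grid = np.linspace(0.05, 0.5, 200)       # capital grid
V = np.zeros(grid.size)                  # initial guess for V(k)

for _ in range(2000):                    # Bellman iteration
    # consumption implied by each (k today, k' tomorrow) pair
    c = A * grid[:, None] ** alpha - grid[None, :]
    # infeasible choices (c <= 0) get -inf utility
    util = np.where(c > 0, np.log(np.maximum(c, 1e-300)), -np.inf)
    V_new = (util + beta * V[None, :]).max(axis=1)
    converged = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if converged:
        break

# numerical policy: argmax over k' of u(c) + beta * V(k')
policy = grid[(util + beta * V[None, :]).argmax(axis=1)]
closed_form = alpha * beta * A * grid ** alpha    # exact solution
print(np.max(np.abs(policy - closed_form)))       # ~ grid spacing
```

The point of the sketch is the asymmetry: the closed form takes one line and delivers exact comparative statics, while the numerical route delivers only an approximation on a grid, which is roughly why, before fast computers and the accompanying epistemological shift, the former set the publication standard.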

What has happened in the post-crisis decade is less clear. My intuition is that tractability standards have relaxed again, allowing for more diverse ways to hunt for solutions. But it’s not clear to me why. The usual answer is that better software and faster computers are fostering the spread of numerical techniques, making tractability concerns a relic of the past. I have voiced my skepticism elsewhere. The history of the relationship between economists and computers is one in which there’s a leap in hardware, software or econometric techniques every 15 years, with economists declaring the end of history, enthusiastically flooding their models with more equations and more refinements… and finding themselves 10 years later with non-tractable models all over again. Reflecting on the installation of an IBM 650 at Iowa State University in 1956, R. Beneke, for instance, joked that once the computer accommodated the inversion of a 198-row matrix, “new families of exotic regression models came on the scene.” Agricultural economists, he remarked, “enjoyed proliferating the regression equation forms they fitted to data sets.” Furthermore, it takes more than computers to spread numerical techniques. After all, these techniques have been in limited use since the late 1980s. An epistemological shift is needed, one that involves relinquishing the closed-form-solution fetish. Was this the only option left for economists to move on after the crisis? Or has the rise of behavioral economics, of field and laboratory experiments, and of the structural-vs-reduced-form debate opened up the range of tractability choices?

4. Some heterodox economists (Lars Syll here) believe that the whole focus on tractability is a hoax: to them, it shows that restricting oneself to deductivist mathematical models is flawed. There are other ways to model, Syll argues, and ontological considerations should take precedence over formalistic tractability. Cahal Moran goes further, arguing that economists should be allowed to reason without modeling. There is a clear fault line here, for Lucas, among others, insisted in a letter to Matsuyama that “explicit modeling can give a spurious sense of omniscience that one has to guard against… but if we give up explicit modeling, what have we got left except ideology? I don’t think either Hayek or Coase have faced up to this question.” Perry Mehrling, on the other hand, believes tractability is more about teaching, communication and publication than about thinking and exploring.

5. Focusing on tractability offers new lenses through which to approach debates on the current state of the discipline (in particular macro). Absent archival or interview smoking guns, constantly ranting that the new classical revolution was undoubtedly a neoliberal or neocon turn, or that the modeling choices of Lucas, Sargent, Prescott or Plosser were of course ideological, produces more heat than light. These choices reflect a mix of political, intellectual and methodological values, a mix of beliefs and tractability constraints. The two aspects might be impossible to disentangle, but it at least makes sense to investigate their joint effects, and to make room for the possibility that the standardization of tractability strategies has shaped economic knowledge.

6. The tractability lens also helps me make sense of what is happening in economics now, and what might come next. Right now, clusters of macroeconomists are each working on relaxing one or two tractability assumptions: research agendas span heterogeneity, non-rational expectations, financial markets, non-linearities, fat-tailed distributions, etc. But if you put all these add-ons together (assuming you can design a consistent model, and that add-ons are the way forward, which many critics challenge), you’re back to non-tractable. So what is the priority? How do macroeconomists rank these model improvements? And can the profession afford to wait 30 more years, three more financial crises and two trade wars before it can finally say it has a model rich enough to anticipate crises?

2 Comments

  1. Very interesting. I think I’m most often to be seen complaining about the unthinking use of “tractability standards.” Tractability can’t be wished away for certain applications, but how might we get better at being thoughtful / transparent about the losses involved in the simplification?

    (Side note: I wonder if the format of some journal articles (e.g. length limitations, formal or informal) has been a driver of the usage of tractability standards.)
