Why the computerization of economics needs to be historicized
I’m co-organizing a conference investigating whether the transformation of economics since the 1970s is better characterized as an “applied turn” (applied meaning empirical and/or concerned with real-world and/or policy-related questions). Economists’ reactions to the topic have been ambivalent, a mix of strong interest and “no research needed, the rise of applied economics is data+computer” dismissal. This trope is becoming ubiquitous among economists, the canonical history of economics of the past 40 years. In their “Business is Booming in Empirical Economics” piece, for instance, Betsey Stevenson and Justin Wolfers emphasized that “technological change has brought opportunities to do economics in a way that our predecessors could only have dreamed about,” and Justin Fox viewed the last AEA conference as a manifestation that economics has gone “from theory to data,” a shift due to improved data-crunching.
The problem is that ascribing the rise of applied economics to “computer+data” is a bit like explaining Adele’s success by “you know, her voice.” It doesn’t explain anything. Computerization might be a necessary but not sufficient condition for the development of new applied techniques. Computerization and new data did not merely enable the implementation of pre-existing techniques and theories. They may have fostered new techniques, rendered others obsolete, and changed theoretical as much as empirical work. Neither are the timing and dynamics of the transformation easy to grasp. There are also the dynamics of the story, about which no one seems to agree. Relying on a bibliometric study by Dan Hamermesh, Fox points out that the relative share of theory papers in top journals decreased from 1983 onward, that is, when personal computers became affordable. But what Hamermesh’s top-5-journals-based study actually shows is that empirical work has gained prestige in academia in recent decades. My belief is that empirical work has always been important in economics, though pursued at research and governmental institutions rather than university departments, and therefore published in other outlets. Newly minted Nobel laureate Angus Deaton remembers the 1950s as already an effervescent era, when Cambridge economists learned FORTRAN, tried to estimate Richard Stone’s linear expenditure system, and got crazy results.
The canonical story of the applied turn is also implicitly one about the demise of theory. But then, why wasn’t economic theory transformed by computers and data the way empirical work was? Fox points to the growing disillusionment with theory in the 1980s, yet it was matched by a crisis in empirical work (Leamer’s “Let’s Take the Con Out of Econometrics,” Hendry), one resolved by the development of quasi-experimental techniques, some of which were in fact much less demanding in terms of computing power. Stevenson and Wolfers instead see the role of theory as being redefined to deal with the new empirical evidence, which is driving economists to relinquish the hypotheses of rational behavior and representative agents. But those hypotheses were not historically adopted for lack of empirical evidence (experimental research on consumer behavior had been carried out since the 1930s, not to mention the surveys conducted at Michigan by Katona and others). Rather, modeling non-rational behavior and heterogeneous agents created theoretical difficulties that economists were not equipped to deal with. Deaton likewise explains that what was holding back economics in the 1950s was not the dearth of data or the lack of computational power. There was plenty of contradictory empirical evidence on consumption. What was needed was a good theory to reconcile it (Modigliani-Brumberg). He also made it clear that the computerization of consumer theory was conditional upon theoretical developments, namely Gorman’s use of duality.
The “computer+data” trope thus leaves all kinds of questions unanswered: the technical, intellectual, and institutional conditions for data and computers to influence the development of economics; the timing; possible differences across 1) fields; 2) countries (Deaton suggests the University of Bristol lost several great minds because of its lack of equipment); 3) sciences (a meaningful narrative requires a comparative analysis of the computerization of economics, physics, psychology, biology, and so forth). While historical investigations of economic data, numbers, observations, and quantification have flourished in recent years (a survey is needed), virtually no historian has written on the computerization of economics (with the exception of Charles Renfro on econometric software). I’ve put together a chronology here, and tried to make sense of it there. Here are some additional thoughts and questions I have:
Computerization as a necessary but not sufficient condition for the rise of applied economics?
Buying a mainframe computer or a bunch of PCs wasn’t enough to change economic practices. So what did it take? The first software developed by economists neglected the data storage and retrieval aspects of econometrics, Renfro explains, until the development and commercialization of large data banks and forecasts in the 1970s pushed programmers to build these requirements into their code. The role of government is unclear, though Canada implemented a proactive data policy subsequently emulated by the US. Whether software is critical to the spread of a new approach also requires further investigation: it took software like Autobox, BRAP, or MUDA for time series analysis, Bayesian econometrics, or experimental economics to take off. Agent-based modelers say the development of their approach is held back by the lack of software. On the other hand, the rise of DSGE predates Dynare. Likewise, the failure of the lavish computer lab Austin Hoggatt set up at Berkeley in the 1960s to spearhead experiments was due to his exclusive focus on infrastructure rather than on building a community, writing and sharing software, and fostering collaborations between theorists and experimentalists (see Svorencik, ch. 3.1). The computerization of economics was itself dependent upon theoretical transformations. Conversely, some approaches that derived largely from the new possibilities offered by computers remained marginal. The fate of Guy Orcutt’s simulation-based research program comes to mind here.
Fragmented, insular, communities in computational economics
What puzzled me when navigating the recent literature on computational economics to assemble my chronology is how fragmented it is. My current map of those economists heavily building on computers to develop new approaches is one of separate, insular communities: various brands of simulation (including ACE), experimentalists, advocates of the development of numerical methods, ATP researchers, algorithmic game theorists, and market designers (and are there any CGE economists left?) hardly cite each other. Yet this state of affairs doesn’t seem to be the result of mere specialization. These various streams don’t seem to originate in the same places, communities, or research programs, though many are somewhat related to MIT and reclaim Von Neumann’s legacy. Still, one can imagine these researchers could have pooled together to change major journals’ editorial practices and to frame alternative epistemological foundations for their field.
Beyond empirical economics, a longstanding lack of visibility?
Another puzzle is the lack of visibility of all those computational economists more concerned with theory and simulation. The canonical history of the recent transformation of the discipline, as carried in survey papers, blog posts, and media coverage, is all about empirical economics, and all about a genealogy of empirical techniques (structural econometrics, VARs and time series refinements, panel data and empirical micro, data mining and machine learning), overshadowing a much larger array of computer-related endeavors. There are several possible explanations:
1) Computational economists’ work is simply too technical for the bulk of economists, who are more trained in Stata/SAS/Maple/R
2) These lines of research don’t deal with macro/social issues (the recent crises, inequality, poverty, unemployment, growth), so they are less controversial and attract less public attention
3) There is no agreement on the status of these techniques, or on the notions of “proof” and “truth” they support. They are therefore under-represented in top journals
4) It is a lot about market design, and market designers eschew public attention. Computational economics is largely funded by IT firms whose research and data are proprietary and subject to non-disclosure agreements, and therefore not often published in academic journals
5) Economists want general theories, and computational methods generate (or are perceived to generate) theories that are tied to specific contexts.