“The rise of applied economics from the 1970s onward is a consequence of computerization and better data.” This increasingly canonical narrative has the allure of truthiness, and it is therefore surprising that the computerization of economics has never been historicized. Gathering a few important occurrences of economists’ engagement with computers has made me aware of the unclear interplay of hardware, software, econometric theory, modeling, coding, data gathering, and policy making. My tentative chronology hints at the many ways in which the development of computers has affected economics, from speeding up econometric calculations to challenging entrenched conceptions of economic “proof,” from providing new objects of study to forging new relationships between theory and empirical work, even eliminating the distinction between the two spheres. More specifically, here are the 9 tentative layers of influence I read in it (comments most welcome):
1) Improving calculations: as soon as they were acquired by academic and research institutions, mainframe computers were used to calculate input-output tables and to compute the moments necessary to derive estimates with various pre-determined econometric techniques. They were then used to carry out model specification, estimation, and tests of all kinds, and to compute solutions for linear dynamic optimization problems. Although this is the most immediate and intuitive way in which computers affected economics, it was nevertheless conditional upon the acquisition of individual coding skills, then upon the development of software.
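To make this concrete, here is a minimal sketch (with simulated data and hypothetical coefficients, not any historical program) of what those moment computations amounted to: tabulating sums, sums of squares, and cross-products, from which OLS estimates follow mechanically. On a mainframe, producing these moment tables was the expensive step.

```python
import random

random.seed(0)

# Simulated data for a hypothetical bivariate regression: y = 2 + 3x + noise
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [2 + 3 * xi + random.gauss(0, 1) for xi in x]

# The "moments" early programs tabulated: sums, sums of squares, cross-products
sum_x = sum(x)
sum_y = sum(y)
sum_xx = sum(xi * xi for xi in x)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))

# OLS slope and intercept follow directly from these moments
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n
print(intercept, slope)  # close to the true values 2 and 3
```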
2) Fostering new empirical techniques: computerization not only enabled the application of pre-existing techniques, it also fostered new empirical practices. That is, what computers can do is taken into account in the development of new practices, whose theoretical properties are subsequently investigated. The latest example is Hal Varian, Susan Athey, Guido Imbens and others’ attempts to merge machine learning with more traditional causal inference techniques. Another is the application of data mining techniques to economics.
3) New data management and production: computerization did not merely allow faster and more complex calculations, but also the storage and retrieval of bigger preexisting data sets (see for instance the Billion Prices Project). It also allowed new types of data to be generated: real-time data on individual transactions and behaviors were increasingly recorded on actual markets, but also generated via experimentation in computer labs (note that computerization was, again, a necessary but by no means sufficient condition for improved data processing).
4) Removing theoretical strictures: better computation has not merely changed the way in which a model could be confronted with data. It has also affected theorizing by enabling the removal of unwanted theoretical simplifications that had previously been necessary: Leontief input-output analysis and disaggregation; CGE models and the removal of the linear restrictions of input-output; agent-based modeling removing aggregation assumptions, such as the representative agent (for instance in the study of the housing market).
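A toy illustration of why the representative-agent shortcut is an assumption rather than a harmless convenience (the demand rule and income distribution below are hypothetical): with any nonlinear individual rule, averaging incomes first and then applying the rule gives a different aggregate than simulating each heterogeneous agent.

```python
import math
import random

random.seed(1)

# Hypothetical nonlinear individual rule: demand is a concave function of income
def demand(income):
    return math.sqrt(income)

# Heterogeneous agents: a skewed (lognormal) income distribution
incomes = [random.lognormvariate(0.0, 1.0) for _ in range(100_000)]

# Aggregate demand computed agent by agent...
agent_based = sum(demand(y) for y in incomes) / len(incomes)

# ...versus a "representative agent" endowed with the mean income
mean_income = sum(incomes) / len(incomes)
representative = demand(mean_income)

# With a concave rule the two differ (Jensen's inequality): the
# representative agent overstates aggregate demand.
print(agent_based, representative)
```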
5) Helping theorem-proving: while the most visible impact of the computer is on empirical economics, its development has (or could have) altered theorizing as well. Stephanie Dick has documented how, at the turn of the 1980s, AURA (an automated reasoning assistant) was developed by mathematicians at Argonne. 30 years later, automated theorem proving is only marginally used by economists (to analyze solution concepts for games and auctions, or to prove and discover theorems). Economists’ lack of interest in such techniques requires explanation (it was suggested to me that economists are more interested in understanding why a theorem is true than in whether it is true, and that higher-order logic is difficult to mechanize…)
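For a flavor of what mechanized proof looks like today, here is a toy machine-checked statement in Lean 4 (a modern proof assistant, chosen only for illustration; it is not the AURA system Dick studied). The machine verifies that every step is licensed by the logic, which is precisely the activity economists have shown little appetite for:

```lean
-- A machine-checked proof: commutativity of addition on the naturals.
-- `Nat.add_comm` is a lemma in Lean 4's core library; the checker
-- verifies that it establishes exactly the stated goal.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```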
6) Replacing theorem-proving: computers offered new ways to approach theories, and therefore challenged traditional concepts of proof. Ken Judd has advocated new complementarities between traditional deductive proofs and the use of numerical methods, but complains that these new ways of analyzing theories are not well accepted in top journals. His grievances are matched by Vela Velupillai, who argued that computational techniques are only slowly changing economists’ practices because of the latter’s historical reliance on the Hilbertian paradigm (proofs à la Debreu). An algorithmic revolution is needed, he explained.
7) New markets requiring new modes of analysis: if economists have been reluctant to change their theorizing to accommodate the rise of algorithmic thinking, they nonetheless claim that their practices are being transformed indirectly by the new objects of study brought about by computerization – online markets using sophisticated auction types of pricing, digital networks, new types of economic transactions, etc. These new institutions have induced computer scientists and economists to tap each other’s expertise in game theory and market analysis, economists explain: after 15 years “studying the complexity of computing Nash and price equilibria, analyzing the efficiency of equilibria through the ‘price of anarchy,’ and developing a computational theory of mechanism design which has informed the design of digital auctions, there is a consensus that the field is now ready for the next generation of problems and insights” (see for instance the introduction to the 2014 JET issue on the topic).
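The “price of anarchy” mentioned in that quote can be illustrated with Pigou’s classic two-route congestion example (a standard textbook case, not drawn from the JET issue itself): it is the ratio of the cost society bears at selfish equilibrium to the cost under a centrally optimal split.

```python
# Pigou's two-link network: one unit of traffic from s to t.
# Route A has constant cost 1; route B costs x, the fraction using it.
def avg_cost(frac_on_b):
    return (1 - frac_on_b) * 1 + frac_on_b * frac_on_b

# Nash equilibrium: everyone takes route B, since its cost x <= 1
# never exceeds route A's cost of 1.
equilibrium_cost = avg_cost(1.0)

# Social optimum: minimize average cost over all splits (grid search
# suffices for this sketch; the optimum is an even split).
optimal_cost = min(avg_cost(f / 1000) for f in range(1001))

price_of_anarchy = equilibrium_cost / optimal_cost
print(price_of_anarchy)  # 4/3 in this example
```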
8) New models of behavior and market interactions inspired by computer science: so far, I’ve focused on the impact of the computer as a machine. But its development was underpinned by a science with new theoretical and epistemological foundations. The old physics emulation, which induced some economists to build economic mechanisms into machines (for instance Phillips’s MONIAC), has gradually given way to a reliance upon computer science to understand both how economic agents think and behave and how markets work (process information, etc.). Or so Phil Mirowski contends in his history of the relationships between economics, cybernetics, automata theory and computer science. He believes this shift is exemplified by the rise of “mechanism design” – in particular auction design –, “zero-intelligence agents” in experimental economics, the market microstructure literature within finance, market design à la Roth-Miller, and artificial intelligence research. The result is that markets were increasingly understood as evolving automated algorithms.
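The “zero-intelligence agents” idea is easy to sketch. In the spirit of Gode and Sunder’s experiments (this is my own toy rendering, with arbitrary parameters, not their actual protocol): traders bid and ask at random, constrained only by their budgets, and the market institution alone does the work of producing trades.

```python
import random

random.seed(42)

# Zero-intelligence-constrained traders in a stylized market:
# buyers bid uniformly below their private value, sellers ask
# uniformly above their private cost; randomly matched pairs trade
# whenever the bid crosses the ask.
buyer_values = [random.uniform(50, 100) for _ in range(50)]
seller_costs = [random.uniform(0, 50) for _ in range(50)]

trades = []
for value, cost in zip(buyer_values, seller_costs):
    bid = random.uniform(0, value)   # never bid above one's value
    ask = random.uniform(cost, 100)  # never ask below one's cost
    if bid >= ask:
        trades.append((bid + ask) / 2)  # trade at the midpoint price

# No strategy, no learning: whatever price regularities emerge come
# from the market rules and the budget constraints alone.
print(len(trades), trades[:3])
```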
9) Fading boundaries between theoretical and applied economics: scientific models have long been framed to accommodate computational limitations, in economics as in other sciences (for instance the use of quadratic functions in money models); but new practices prompted by computerization, combined with new policy challenges and business demands, have deeply changed the relationship between theory and applied work. Think of calibration, mechanism design, simulation, agent-based modeling, and so on. In this respect, computerization has changed the way models are written down and what theory is expected to yield. To some extent, it has blurred the boundaries between the two spheres. Simulation (in particular ACE) is a way of modeling that doesn’t fit the theory/empirical divide, since models are not encoded but are essentially conceived as computer programs. Fading boundaries between theory and applied work may explain why the computerization of economics has been associated with a quest for new epistemological foundations: simulation has been (re)conceived as a third way of doing science, and theories and data may be placed on the same footing if they are considered as analogies.
Further thoughts on the computerization of economics here.