It seems that economists have a serious problem with theory lately. The pushback against rational expectations is growing, Richard Thaler is prophesying the extinction of homo economicus, and even the debate stirred by the publication of Paul Romer's "mathiness" paper is ultimately more about theory than about math: the original argument was about economists' use of math to give ideological assumptions some sort of respectability, but it quickly evolved into yet another rendition of the old "as if" debate – should economists use unrealistic assumptions to explain real-world situations and suggest policy directions?
What economists are looking for is a revamped relationship between theoretical, empirical and policy analysis. The general feeling is that the longstanding hierarchy between theoretical and applied work has collapsed in the past decades. Many have declared theory dead, emphasizing the decreasing share of theory papers in top journals and the relative decline of their citation counts. The "theory" category, born in 1911, was even taken down in the 1990 overhaul of the JEL classification codes. A recent depiction of Stanford's econ department, a candidate for the "next leading econ department" title, likewise highlights its excellence in "'empirical microeconomics,' the analysis of how things work in the real world." At the same time, however, there is a growing sense that a situation in which empirical work is prioritized over theoretical modeling is no healthier than the previous theory-first order. Hence some pushback against the perceived "triumphalism" of empirical economics.
My concern is that this soul-searching is predicated upon a historically inaccurate idea: that theory has hitherto enjoyed a higher status than applied economics (empirics + policy). Historians are currently entertaining the idea that the dominance of theory was one of prestige only, not of quantity or intellectual dependence. And even the higher prestige of theory may be a short-lived phenomenon (roughly 1965–2005). Historians' attempt to rethink the relationships between theory and applied economics, and to explain the scope and causes of the late-twentieth-century "applied turn," is very much in progress, but here are a few historical clues.
(Note that a similar debate is raging in the history of science and engineering, with historian Paul Forman arguing that the primacy of science over technology is currently being turned upside down. Some disagree, and point out that the "linear model" that runs from pure science to applied research and technological innovation is a straw man anyway.)
Before World War II
The basic idea is that there was no theory-applied pecking order before World War II. The reason was that in the interwar period, economics was developing as a pluralistic science. Some economists – Taussig, Young, Knight, Clark – worked out theoretical models of the consumer and the firm (with a small minority – Frisch, Hicks, Chamberlin, etc. – using math). Others, like Wesley Mitchell at the NBER or agricultural economists, were busy collecting huge amounts of data on business cycles, crop yields, weather and the like. And in trying to make sense of their series, they felt absolutely no need to bow to any a priori theoretical framework. Institutionalism dominated economics, and economic philosophers therefore had ample opportunity to debate whether economics was an inductive or a deductive science.
1940-1960: tensions and unsettled theory-applied relationships
World War II was, by all means, a huge watershed. Both theoretical and applied economics emerged strengthened in scientific credentials, though with no stabilized relationship between the two. On the one hand, the migration of European mathematical theorists, advances in applied mathematics (linear programming, OR, activity analysis, the calculus of variations), and their immediate interdisciplinary dissemination through war research laid the basis for major theoretical advances, including game theory, general equilibrium, and rational decision under uncertainty. Samuelson drafted a book that would convince generations of students to think about economics as a set of optimizing behaviors whose aggregate effects could be analyzed through comparative statics: Foundations. On the other hand, economists came out of the war with their empirical skills and data sets equally improved: quality control techniques, the hurried development of national accounting systems, the compilation of price and cost-of-living estimates, and cross-sectional industrial statistics.
Though theoretical and applied economics both gained credibility during the war, the postwar discussions on government science policies made economists painfully aware that they needed more solid foundations. The National Science Foundation, established in 1950 amidst the McCarthyist witch-hunt, had no social science division. The natural scientists in charge felt that the social sciences were immature, riven with ideology, and unable to establish true laws.
This context resulted in growing tensions between theoretical and applied economists. In 1946, Cowles Commission vice director Tjalling Koopmans scathingly identified Burns and Mitchell's Measuring Business Cycles (in which business cycles were identified as regularities in dozens of time series) as the "Kepler stage" of economics. He claimed it was necessary to move from their "Measurement without Theory" to a "Newton stage," one embodied in the "fuller utilization of the concepts and hypotheses of economic theory" that was the hallmark of the Cowles Commission. The same year, Richard Lester expressed doubts about the increasing use of marginal analysis to explain the behavior of firms, a use he thought was unsupported by data. Machlup fired back, launching the decade-long "marginalist controversy."
The perceived need to advance theoretical analysis was best seen in the AEA survey on the reform of graduate education. In the resulting 1953 report, Howard Bowen noted that 90% of surveyed professors believed theory should be part of the "core" graduate education, against 53% for statistics and 55% for economic history. But this was because most felt that economic theory was under-represented in existing syllabi; only 3% thought theory and applied fields should be better integrated. And as a matter of fact, most applied fields in the 50s lived a separate existence, some more theory-dependent, others deriving their theoretical insights from statistical data or historical narrative (development economics and Rostow's take-off theory, for instance). In macroeconomics, huge theoretical and empirical debates coexisted (the Phillips curve, the Friedman-Tobin post hoc ergo propter hoc debate on money-income causality).
Tensions were not merely intellectual. In an effort to advance theoretical endeavors, the John Bates Clark medal was established in 1947, but after it was awarded to Samuelson, Boulding, Friedman and Tobin, young economists complained about the theoretical orientation of the prize and asked for a distinct "applied" award. Yet in discussions with the Ford Foundation, then the social sciences' largest patron, it was Marschak, Koopmans, Simon and other theorists who bitterly complained that industry, the military and foundations refused to fund basic research. And in many instances, patrons' practical demands for quality control and product optimization produced new theoretical insights. Throughout that period, AEA editors and executives were deeply divided over the revision of their classification of economic literature: whether the theory category should be abolished or expanded remained, for more than 50 years, a burning issue.
1960s and 1970s: theory becomes “core”
It was only in the mid-1960s that theory came to dominate economics. By that time, general equilibrium theory had stabilized and disseminated, and even fields which had long resisted maximizing-behavior foundations were subsumed: microfoundations in macro, Diamond-Mirrlees optimal taxation in public economics, the New Urban Economics. Old and new fields became unified around a common theoretical "core" (see the introduction to Nancy Ruggles's 1970 Economics).
At about the same time, optimization-based models were extended to problems not traditionally considered economic: the family, crime, conflict, non-market decision making.
Fragmentation and the “applied turn”: what happened to theory since the 1970s?
What happened next is not yet history (but we're working on it). Our intuition is that the unification of economics around a theoretical core happened in a context that bore the seeds of a reorientation of economists' practices: the discipline was also fragmenting; the economic, social, urban, environmental and international crises of the late 60s and early 70s triggered new social outcry, a change in policy regime, econ students' greater demand for relevance, and internal discontent within the discipline (from the creation of URPE in 1968 to Leontief's critical 1970 presidential address). Add to this the IT revolution, the emergence of new sites for applied economic research (the NBER revamped by Martin Feldstein, the IMF and central banks crowded with econ PhDs), and the redefinition of theory-data relationships in a tiny but growing subfield, experimental economics. The specific influence of these various trends on the "applied turn," and its nature, are still under discussion.
But equally questionable is whether these evolutions can be equated with a demise of theory. When the JEL editors eventually dismissed the "theory" category in 1990, they claimed it was because "good research in economics is a blend of theory and empirical work," not because theory had disappeared. Neither is a demise visible in the latest John Bates Clark medal citations: Emmanuel Saez has "brought the theory of taxation closer to practical policy making," Jonathan Levin's work "combines a sophisticated grasp of economic theory with careful empirical analysis," Amy Finkelstein's models show "how theory and empirics can be combined in creative ways," etc. And Noah Smith's depiction of another "next top department" candidate, Berkeley, again suggests that what is needed to reform economists' vision of decision making is a mix of theoretical and empirical innovation. Will this be enough to avoid yet another chicken-and-egg deductive/inductive, data/theory fight? Not sure.