Working on 1960s macroeconometrics: there’s an echo on the line

Three years ago, a group of historians of economics embarked on a reexamination of the relationships between theoretical and empirical work in macroeconomics. Our goal was inward-looking. We were not primarily looking to contribute to present discussions on the state of macro, but to correct what we perceived as a historiographical bias in our own scholarship: the tendency to paint the history of macroeconomics as a succession of theoretical battles between Keynesians, monetarists, new classicals, New Keynesians, etc. This emphasis on theory did not square well with a common thread in the interviews several of us had conducted with 1960s grad students from MIT and elsewhere: those revealed that a shared formative experience had been contributing to one of the large-scale macroeconometric models developed in those years. My own pick was the model jointly developed at the Fed, MIT and the University of Pennsylvania (hereafter the FRB model). Yet as I complete my second paper on the model (joint with Roger Backhouse, just uploaded here), I find that the dusty debates we document have found an unexpected echo in contemporary exchanges.

I learned two lessons from writing on the FRB model. The first is that I wasn’t as immune from the epistemic spell of not-yet-defunct economists as I had thought. I came to the project with no hidden opinion on the DSGE approach to macro, one born of a synthesis between the modeling rules spelled out by Sargent and Lucas and a variety of add-ons proposed by so-called New Keynesians aimed at providing a more satisfactory depiction of shocks and response mechanisms. But like most historians of macro, I had been trained as an economist. I had been raised to believe that microfounded models were the clean, rigorous way to frame a discourse on business cycles (the insistence that rational expectations was the gold standard was, I think, already gone by the mid-2000s). If I wanted to trade rigor for predictive power, then I needed to switch to an altogether different practice, VARs (which I in fact did as a central bank intern tasked with predicting short-term moves in aggregate consumption). What I discovered was that my training had biased the historiographical lenses through which I was approaching the history of macroeconometric models: what I was trying to document was the development and use of A model, one defined by a consistent set of behavioral equations and constraints and a stable set of rules whereby such a system was estimated and simulated for policy purposes. The problem, I quickly found out, was that there was no such historical object to research.

What we found in the archives was a collection of equations whose specification and estimation procedures were constantly changing across time and locations. There was no such thing as the FRB model. To begin with, the Fed team and the academic team collaborated closely but developed distinct models, which were only merged after three years. And the boundaries of each model constantly evolved as students turned in new blocks of equations and simulations blew up. The ordinary business of macroeconometric modeling looked like assembling a giant jigsaw puzzle. This December 1967 letter from Albert Ando to Franco Modigliani is representative:

[Excerpts from Albert Ando’s December 1967 letter to Franco Modigliani]

Viewed from the perspective of modern macro, it was a giant mess, and in our first drafts we thus chose to characterize macroeconometrics as a “messy” endeavor. But being “messy,” in the sense of lacking theoretical and econometric foundations and thus being unscientific, is exactly why Lucas and Sargent argued these models should be dismissed. Their famous 1979 “After Keynesian Macroeconomics” paper is an all-out attack on models of the FRB kind: they pointed to the theoretical “failure to derive behavioral relationships from any consistently posed dynamic optimization problems,” the econometric “failure of existing models to derive restrictions on expectations,” and the absence of convincing identification restrictions, concluding with the “spectacular failure of the Keynesian models in the 1970s.” In his critique paper, Lucas also condemned “intercept adjustment,” also known as “fudging” (revising the intercept of an estimated equation to improve forecast accuracy, a practice which spread as building inflationary pressures resulted in faulty predictions in the 1970s). This, he argued, was proof that those models were misconceived.
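To fix ideas, here is a minimal sketch of what an intercept adjustment amounts to; the notation is mine, not the modelers’. Suppose consumption has been estimated as a function of income, and the most recent residual is positive:

\[
\hat{C}_t = \hat{\alpha} + \hat{\beta}\, Y_t , \qquad \hat{u}_T = C_T - \hat{C}_T > 0 .
\]

The “fudged” forecast simply shifts the constant by the size of that recent error, leaving the estimated slope untouched:

\[
C^{f}_{T+1} = \bigl(\hat{\alpha} + \hat{u}_T\bigr) + \hat{\beta}\, Y_{T+1} .
\]

Only the constant is nudged; the slope coefficients, and hence the model’s multipliers, are left alone.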

The second lesson I learned from working on primary sources is that macroeconometricians were perfectly aware of the lack of theoretical consistency and the fuzziness of estimation and simulation procedures. More than that, they endorsed it. Every historian knows, for instance, that the quest for microfoundations did not begin with Lucas, having repeatedly stumbled on pre-Lucasian statements on the topic. Jacob Marschak opened his 1948 Chicago macro course with this statement: “this is a course in macro-economics. It deals with aggregates…rather than with the demand or supply of single firms or families for single commodities. The relations between aggregates have to be consistent, to be sure, with our knowledge of the behavior of single firms or households with regard to single goods.” In 1971, Terence Gorman likewise opened his lectures on aggregation with a warning: “theorists attempt to derive some macro theory from the micro theory, usually allowing it to define the aggregate in question. In practice they are reduced to asking ‘when can this be done.’ The answer is ‘hardly ever.’” Kevin Hoover has argued that there were at least three competing microfoundational programs in the postwar period, Lucas’s representative-agent approach being just one of them. But for macroeconometricians, the lack of theoretical consistency in the Lucasian sense was also the result of doing big science, and of facing a trade-off between theoretical consistency and data fit.

Building a macroeconometric model of the FRB kind involved several teams and more than 50 researchers, and it was impossible for all of them to agree on the specification of every equation: “None of us holds the view that there should be only one model. It would indeed be unhealthy if there were no honest differences among us as to what are the best specifications of some of the sectors of the model, and when such differences do exist, we should maintain alternative formulation until such time as performances of two formulations can be thoroughly compared,” Ando explained to Modigliani in 1967. By 1970, it had become clear that macroeconometricians would not agree on the adequate tests to compare alternative specifications either. Empirical practices, goals and trade-offs were too different. The Fed team wanted a model which could quickly provide good forecasts: “We get a considerable reduction in dynamic simulation errors if we change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption,” Fed project leader Frank de Leeuw wrote to Modigliani in 1968. But the latter’s motive for developing the FRB model was different: he wanted to settle a theoretical controversy with Friedman and Meiselman over whether the relation of output to money was more stable than the Keynesian multiplier. He was therefore not willing to compromise theoretical integrity for better forecasting power: “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded to de Leeuw.
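A stylized rendering of the Friedman–Meiselman test (again my notation, not theirs) makes clear what was at stake. Roughly, the two camps fitted rival one-equation models,

\[
C_t = a + k\, A_t + u_t \qquad \text{versus} \qquad C_t = a' + v\, M_t + u'_t ,
\]

where \(C\) is consumption, \(A\) autonomous expenditure and \(M\) the money stock, and then asked whose correlation held up more stably across subperiods. Settling a question of that kind requires equations whose coefficients carry theoretical meaning, which is why Modigliani resisted respecifying them merely to shrink simulation errors.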

Different goals and epistemic values resulted in different trade-offs between theoretical consistency and data fit, between model integrity and flexibility. The intercept fudging disparaged by Lucas turned out to be exactly what the clients of the new breed of firms selling forecasts based on macroeconometric models paid for. What businessmen wanted was the informed judgment of macroeconomists, one that the Federal Reserve Board also held in higher esteem than mere “mechanical forecasts.” Intercept corrections were later reframed by David Hendry as an econometric strategy to accommodate structural change. In short, the messiness of macroeconometrics was not perceived as a failure; it was, rather, messiness by design. In his response to Lucas and Sargent, Ando explained that reducing a complex system to a few equations required using different types of evidence and approximations, so that the task of improving them should be done “informally and implicitly.”

That recent discussions on the state of macroeconomics somehow echo the epistemic choices of 1960s macroeconometricians is an interesting turn. Since 2011, Simon Wren-Lewis has been calling for a more “pragmatic” approach to microfoundations. His most recent blog post describes the development of the British COMPACT model as a matter of weighing the costs and gains of writing internally inconsistent models (the model features an exogenous credit-constraint variable). He calls this approach “data-based” and “eclectic,” and he argues that macro would have been better off had it allowed this kind of approach to coexist with DSGE. Last year, Vitor Constancio, Vice-President of the European Central Bank, noted that “we constantly update our beliefs on the key economic mechanisms that are necessary to fit the data,” concluding that “the model should be reasonably flexible.” Olivier Blanchard also recently acknowledged that macroeconomic models fulfill different goals (descriptive, predictive and prescriptive). He advocated building different models for different purposes: academic DSGE models are still fit for structural analysis, he argued, but “policy modelers should accept the fact that equations that truly fit the data can have only a loose theoretical justification.” In a surprising turn, he argued that “early macroeconomic models had it right: the permanent income theory, the life-cycle theory, and the Q theory provided guidance for the specification of consumption and investment behaviour, but the data then determined the final specification.” Are we witnessing an epistemological revolution? Or a return to epistemological positions that economists thought they had abandoned?