The ordinary business of macroeconometric modeling: working on the MIT-Fed-Penn model (1964-1974)

Against monetarism?

In the early days of 1964, George Leland Bach, former dean of the Carnegie Business School and consultant to the Federal Reserve, arranged a meeting between the Board of Governors and seven economists, including Stanford’s Ed Shaw, Yale’s James Tobin, Harvard’s James Duesenberry and MIT’s Franco Modigliani. The hope was to tighten relationships between the Fed economic staff and “academic monetary economists.” The Board’s concerns were indicated by a list of questions sent to the panel: “When should credit restraint begin in an upswing?” “What role should regulation of the maximum permissible rate on time deposits play in monetary policy?” “What weight should be given to changes in the ‘quality’ of credit in the formation of monetary policy?”

Fed chairman William McChesney Martin’s tenure had opened with the negotiation of the 1951 Accord, which restored the Fed’s independence, an independence he had since constantly sought to assert and strengthen. In the preceding years, however, the constant pressure CEA chairman Walter Heller exerted to keep short-term rates low (so as not to offset the expansionary effects of his proposed tax cut) had forced Martin into playing defense. The Board was now in an awkward position. On the one hand, after years in which the emphasis had been on fiscal stimulus, inflationary pressures were building up and the voices of those economists pushing for active monetary stabilization were increasingly heard. Economists like Franco Modigliani, trained in the Marschakian tradition, were hardly satisfied with existing macroeconometric models of the Brookings kind, with their overwhelming emphasis on budget channels and atrophied money/finance blocks.

On the other hand, Milton Friedman, who was invited to talk to the Board a few weeks after the panel, was pushing a monetarist agenda which promised to kill the Fed’s hard-fought autonomy in steering the economy. The money supply, he explained, only affected output and employment in a transitory way, and through a messy process, because of lags in reacting to shifts in interest rates. Resurrecting the prewar quantity theory of money, Friedman insisted that the money supply affected output through financial and non-financial asset prices. He and David Meiselman had just published an article in which they demonstrated that the correlation between consumption and money was higher and more stable than that between consumption and autonomous expenditures. MIT’s Robert Solow and John Kareken had questioned Friedman and Meiselman’s interpretation of lags and their empirical treatment of causality, and their colleagues Modigliani and Albert Ando were working on their own critique of FM’s consumption equation. This uncertain situation was summarized in the first sentences of Duesenberry’s comments to the 1964 panel:

Decision making in the monetary field is always difficult. There are conflicts over the objectives of monetary policy and over the nature of monetary influences on income, employment, prices and the balance of payments. The size and speed of impact of the effects of central bank actions are also matters of dispute. The Board’s consultants try to approach their task in a scientific spirit but we cannot claim to speak with the authority derived from a wealth of solid experimental evidence. We must in presenting our views emphasize what we don’t know as well as what we do know. That may be disappointing but as Mark Twain said: “it ain’t things we don’t know that hurt, it’s the things we know that ain’t so.”

 

Winning the theory war implied researching the channels whereby monetary policy influenced real aggregates, but winning the policy war implied putting these ideas to work. During a seminar held under the tutelage of the SSRC’s Committee on Economic Stability, economists came to the conclusion that the previously funded Brookings model fell short of integrating the monetary and financial sphere with the real one, and Modigliani and Ando soon proposed to fashion another macroeconomic model. For the Keynesian pair, the model was explicitly intended as a workhorse against Friedman’s monetarism. At the Fed, the head of the division of research and statistics, Daniel Brill, and Frank de Leeuw, a Harvard PhD who had written the Brookings model’s monetary sector, had come to the same conclusion and started to build their own model. It was decided to merge the two projects. Funded by the Fed through the Social Science Research Council, the resulting model came to be called the MPS, for MIT-Penn (where Ando had moved in 1967)-SSRC. Intended as a large-scale quarterly model, its 1974 operational version exhibited around 60 simultaneous behavioral equations (against several hundred for some versions of the Wharton and Brookings models), and up to 130 in 1995, when it was eventually replaced. Like companion Keynesian models, its supply equations were based on a Solovian model of growth, which determined the characteristics of the steady state, while its more refined demand side comprised six major blocks: final demand, income distribution, tax and transfers, labor market, price determination, and a huge financial sector (with consumption and investment equations). Non-conventional monetary transmission mechanisms (that is, channels other than the cost of capital) were emphasized.

Model comparison, NBER 1976

To work these equations out, Modigliani and Ando tapped the MIT pool of graduate students. Larry Meyer, for instance, was in charge of the housing sector (that is, modeling how equity and housing values are affected by monetary policy), Dwight Jaffee worked on the impact of credit rationing on housing, Georges de Menil handled the wage equation with a focus on the impact of unions on wages, Charles Bischoff provided a putty-clay model of plant and equipment investment, and Gordon Sparks wrote the demand equation for mortgages. Senior economists were key contributors too: Ando concentrated on fiscal multiplier estimates, while Modigliani researched how money influenced wages and how to model expectations so as to generate a consistent theory of interest rate determination with students Richard Sutch, then Robert Shiller. Growing inflation and the oil shock later forced them to rethink the determination of prices and wages and the role inflation played in transmission mechanisms, and to add a Phillips curve to the model. The Fed also asked several recruits, including Enid Miller, Helen Popkin, Alfred Tella and Peter Tinsley, to work on the banking and financial sector and on transmission mechanisms, in particular portfolio adjustments. The latter were led by de Leeuw and Edward Gramlich, who had just graduated from Yale under Tobin and Art Okun. Responsibilities for data compilation, coding and running simulations were also split between academics and the Fed, with Penn assistant professor Robert Rasche playing a key role.


The final model was much influenced by Modigliani’s theoretical framework. The project generated streams of papers investigating various transmission mechanisms, including the effect of interest rates on housing and plant investment and on durable goods consumption, credit rationing, the impact of expectations of future changes in asset prices on the term structure and on the structure of banks’ and households’ portfolios, and Tobin’s q. The MPS model did not yield the expected results. Predictive performance was disappointing, estimated money multipliers were small, lags were long, and though its architects were not satisfied with the kind of adaptive expectations embedded in the behavioral equations, they lacked the technical apparatus to incorporate rational expectations. In short, the model didn’t really back aggressive stabilization policies.

Modigliani’s theoretical imprint on the MPS model, and his use of its empirical results in policy controversies, are currently being investigated by historian of macro Antonella Rancan. My own interest lies not with the aristocratic theoretical endeavors and big onstage debates, but with the messy daily business of crafting, estimating and maintaining the model.

From theoretical integrity to messy practices

A first question is how such a decentralized process led to a consistent result. I don’t have an exhaustive picture of the MPS project yet, but it seems that graduate students picked a topic, then worked in relative isolation for months, gathering their own data and surveying the literature on the behavior of banks, firms, unions, consumers or investors before sending back a block of equations. Because these blocks each had a different structure, characteristics and properties, disparate methods were summoned to estimate them: sometimes TSLS, sometimes LIML or IV. Finally, because the quality of the forecasts was bad, a new batch of senior researchers reworked the housing, consumption, financial and investment blocks in 1969-1973. How is this supposed to yield a closed hundred-equation model?

Bringing consistency to hundreds of equations with disparate underlying theories, data and estimation methods was a recurring concern for postwar macroeconometric modelers. At Brookings, the problem was to aggregate tens of subsectors. “When the original large scale system was first planned and constructed, there was no assurance that the separate parts would fit together in a consistent whole,” a 1969 Brookings report reads. Consistency was brought by a coordinating team and through the development of common standards, Michael McCarthy explains: large database capabilities with easy access and efficient update procedures, common packages (AUTO-ECON), efficient procedures for checking the accuracy of the code (the residual check procedure), and common simulation methods. But concerns with unification only appeared post-1969 in the Modigliani-Ando-Fed correspondence. Modigliani was traveling a lot, involved in the development of an Italian macromodel, and did not seem to care very much about the nooks and crannies of data collection and empirical research. Was a kind of consistency achieved through the common training of model builders, then? Did Modigliani’s monetary and macro courses at MIT create a common theoretical framework, so that he did not have to provide specific guidelines as to which behavioral equations were acceptable, and which were not? Or were MIT macroeconomists’ practices shaped by Ed Kuh and Richard Schmalensee’s empirical macro course, and the TROLL software?
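How exactly the residual check procedure was implemented in AUTO-ECON is not something my sources detail, but the general idea can be conveyed with a minimal sketch (all names, data structures and tolerances below are hypothetical illustrations): run each coded equation over the historical sample and compare the residuals it implies with those stored at estimation time, so that transcription and coding errors surface before any simulation is attempted.

```python
# Minimal sketch of a residual check: run each coded equation over the historical
# sample and compare the residuals it implies with those stored at estimation time.
# Equation names, data layout and tolerances are hypothetical illustrations.
def residual_check(equations, data, estimation_residuals, tol=1e-8):
    """equations: dict name -> function taking one data row and returning the fitted value.
    data: list of dict rows, each holding the historical value of every variable."""
    discrepancies = {}
    for name, fitted in equations.items():
        for t, row in enumerate(data):
            implied = row[name] - fitted(row)          # residual implied by the coded equation
            stored = estimation_residuals[name][t]     # residual saved when the block was estimated
            if abs(implied - stored) > tol:            # mismatch => likely a coding or data error
                discrepancies.setdefault(name, []).append(t)
    return discrepancies                               # empty dict: the code reproduces the estimates
```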

IBM 360

To complicate things further, Fed and academic researchers had different objectives, which translated into diverging, sometimes antagonistic practices. In his autobiography, Modigliani claimed that “the Fed wanted the model to be developed outside, the academic community to be aware of this decision, and the result not to reflect its idea of how to operate.” Archival records show otherwise. Not only were Fed economists very much involved in model construction and simulations, data collection and software management, but they further reshaped equations to fit their agenda. Intriligator, Bodkin and Hsiao list three objectives macroeconometric modeling tries to achieve: structural analysis, forecasting and policy evaluation, that is, a descriptive, a predictive and a prescriptive purpose. Any macroeconometric model thus embodies tradeoffs between these uses. This is seen in the many kinds of simulations Fed economists were running, each answering a different question. “Diagnostic simulations” were aimed at understanding the characteristics of the model: whole blocks were taken as exogenous, so as to pin down causes and effects in the rest of the system. “Dynamic simulations” required feeding forecasts from the previous period into the model for up to 38 quarters and checking whether the model blew up (it often did) or remained stable and yielded credible estimates for GDP or unemployment. “Stochastic simulations” were carried out by specifying initial conditions, then making out-of-sample forecasts. Policy simulations relied on shocking an exogenous variable after the model had been calibrated.
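To make the “dynamic simulation” exercise concrete, here is a minimal sketch on a toy two-equation model (the coefficients and the income identity are hypothetical, not drawn from the MPS): each quarter’s solved values are fed back in as the next quarter’s lags, and the resulting path is inspected for explosiveness.

```python
# Minimal sketch of a dynamic simulation on a toy two-equation model: each quarter's
# solved values are fed back in as the next quarter's lags, and the path is checked
# for stability. Coefficients and the income identity are hypothetical, not MPS equations.
def dynamic_simulation(y0, g_path, quarters=38):
    y = y0
    path = []
    for t in range(quarters):
        c = 20.0 + 0.6 * y                 # consumption depends on *lagged* income (last solved value)
        y = c + 30.0 + g_path[t]           # income identity: C + I + G, with investment fixed at 30
        path.append(round(y, 2))
    return path

# usage: hold government spending constant and inspect whether the path blows up or settles down
print(dynamic_simulation(y0=500.0, g_path=[100.0] * 38))
```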

How the equations were handled also reflected different tradeoffs between analytical consistency and forecasting performance. True, Board members needed some knowledge of how monetary policy affects prices, employment and growth, in particular of its scope, channels and lags. But they were not concerned with theoretical debates. They would indifferently consult with Modigliani, Duesenberry, Friedman or Meltzer. Fed economists avoided the terms “Keynesian” or “monetarist.” At best, they joked about “radio debates” (FM-AM stood for Friedman/Meiselman-Ando/Modigliani). More fundamentally, they were clearly willing to trade theoretical consistency for improved forecasting ability. In March 1968, for instance, de Leeuw wrote that dynamic simulations were improved if current income was dropped from the consumption equation:

We change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption. This reduction of error occurs regardless of which total consumption equation we use. These two kinds of changes taken together probably mean that when we revise the model the multipliers will build up more gradually than in our previous policy simulations, and also that the government expenditure multiplier will exceed the tax multiplier. You win!

 But Modigliani was not happy to sacrifice theoretical sanity in order to gain predictive power. “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded.

The Fed team was also more comfortable than Modigliani and Ando with fudging, that is, adding an ad hoc quantity to the intercept of an equation to improve forecasts. As explained by Arnold Kling, this was made necessary by the structural shift associated with mounting inflationary pressures of all kinds, including the oil crisis. After 1971, macroeconometric models were systematically under-predicting inflation. Ray Fair later noted that analyses of the Wharton and OBE models showed that the ex-ante forecasts of model builders (with fudge factors) were more accurate than the ex-post forecasts of the models (with actual data). “The use of actual rather than guessed values of the exogenous variables decreased the accuracy of the forecasts,” he concluded. According to Kling, the hundreds of fudge factors added to large-scale models were precisely what clients were paying for when buying forecasts from Wharton, DRI or Chase. They were “providing us with the judgment of Eckstein, Evans and Adams […] and these judgments are more important to most of their customers than are the models themselves,” he pondered.
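What fudging amounts to can be illustrated with a minimal sketch (the equation, coefficients and adjustment rule below are hypothetical, not taken from any of these models): the forecaster carries recent forecast errors forward as an add factor on the equation’s intercept.

```python
# Minimal sketch of a "fudge factor" (add factor): a judgmental constant added to an
# estimated equation's intercept to pull near-term forecasts back on track.
# The equation, coefficients and adjustment rule are hypothetical illustrations.
def forecast_inflation(lagged_inflation, output_gap, add_factor=0.0):
    a, b, c = 1.2, 0.7, 0.3                        # estimated intercept and slopes (illustrative)
    return a + b * lagged_inflation + c * output_gap + add_factor

# the equation keeps under-predicting, so the forecaster carries the recent average
# error forward as an add factor for next quarter's forecast
recent_errors = [0.8, 1.1, 0.9]                    # actual minus predicted, last three quarters
fudge = sum(recent_errors) / len(recent_errors)
print(forecast_inflation(lagged_inflation=5.0, output_gap=1.5))                    # mechanical forecast
print(forecast_inflation(lagged_inflation=5.0, output_gap=1.5, add_factor=fudge))  # judgmental forecast
```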

Material from Modigliani’s MPS folders, Rubinstein Library, Duke University

Diverging goals therefore nurtured conflicting model adjustments. Modigliani and Ando primarily wanted to settle an analytical controversy, while the Fed used the MPS as a forecasting tool. How much the MPS was intended as a policy aid is more uncertain. By the time the model was in full operation, Arthur Burns had replaced Martin as chairman. Though a highly skilled economist – he had coauthored Wesley Mitchell’s business cycle study – his diaries suggest that his decisions were largely driven by political pressures. Kling notes that “the MPS model plays no role in forecasting at the Fed.” The forecasts were included in the Greenbook, the memorandum used by the chair for FOMC meetings. “The staff is not free to come up with whatever forecast it thinks is most probable. Instead, the Greenbook must support the policy direction favored by the Chairman,” writes Kling. Other top Fed officials were openly dismissive of the whole macroeconometric endeavor. Lyle Gramley, for instance, wouldn’t trust the scenarios derived from simulations. Later dubbed the “inflation tamer,” he had a simple policy agenda: bring inflation down. As a consequence of these divergences, two models were in fact curated side by side throughout the decade: an academic one (A) and a Fed one (B). With time, they exhibited growing differences in steady states and transition properties. During the final years of the project, some unification was undertaken, but several MPS models kept circulating throughout the 1970s and 1980s.

Against the linear thesis

Archival records finally suggest that there is no such thing as a linear downstream relationship from theory to empirical work. Throughout the making of the MPS, empirical analysis and computational constraints seem to have spurred innovations in macroeconomic and econometric theory. One example is the new work carried out by Modigliani, Ando, Rasche, Cooper, Gramlich and Shiller on the effects of expectations of price increases on investment, on credit constraints in the housing sector and on saving flows, in the face of poor predictions. Economists were also found longing for econometric tests enabling the selection of one model specification over others. The MPS model was constantly compared with those developed by the Brookings, Wharton, OBE, BEA, DRI or St. Louis teams. Public comparisons were carried out through conferences and volumes sponsored by the NBER. But in 1967, St. Louis monetarists also privately challenged MPS Keynesians to a duel. In those years, you had to specify what counted as a fatal blow, and choose the location and the weapon, but also its operating mechanism. In a letter to Modigliani, Meltzer clarified their respective hypotheses on the relationship between short-term interest rates and the stock of interest-bearing government debt held by the public. He then proceeded to define precisely what data they would use to test these hypotheses, but he also negotiated the test design itself. “Following is a description of some tests that are acceptable to us. If these tests are acceptable to you, we ask only (1) that you let us know […] (2) agree that you will [send] us copies of all of the results obtained in carrying out these tests, and (3) allow us to participate in decisions about appropriate decisions of variable.”


Ando politely asked for compiled series, negotiated the definition of some variables, and agreed to three tests. This unsatisfactory armory led Ando and Modigliani to nudge econometricians: “we must develop a more systematic procedure for choosing among the alternative specifications of the model than the ones that we have at our disposal. Arnold Zellner of the University of Chicago has been working on this problem with us, and Phoebus Dhrymes and I have just obtained a National Science Foundation grant to work on this problem,” Modigliani reported in 1968 (I don’t understand why Zellner specifically).

Punchcard instructions (MPS folders)

More generally, it is unclear how the technical architecture, including computational capabilities, simulation procedures and FORTRAN coding, shaped the models, their results and their performance. 1960s reports are filled with computer breakdowns and coding nightmares: “the reason for the long delay is […] that the University of Pennsylvania computer facilities have completely broken down since the middle of October during the process of conversion to a 360 system, and until four days ago, we had to commute to Brookings in Washington to get any work done,” Ando lamented in 1967. Remaining artifacts such as FORTRAN logs, punchcard instructions and endless washed-out output reels or hand-made figures speak to the tediousness of the simulation process. All this must have been especially excruciating for those model builders who purported to settle the score with a monetarist who wielded parsimonious models with a handful of equations and loosely defined exogeneity.

 

Output reel (small fraction, MPS folders)

As is well known, these computational constraints stimulated scientists’ creativity (Gauss-Seidel solution methods implemented through the SIM package, the Erdman residual check procedure, etc.). Did they foster other creative practices or types of conversations? Has the standardization of model evaluation brought by the enlargement of the test toolbox and the development of econometric software packages improved macroeconomic debates since Ando, Modigliani, Brunner and Meltzer’s time? As Roger Backhouse and I have documented elsewhere, historians are only beginning to scratch the surface of how the computer changed economics. While tedious month-long simulations now virtually take two clicks to run, data import included, this has neither helped the spread of simulations nor prevented the marginalization of Keynesian macroeconometrics, the current crisis of DSGE modeling, and the rise of computationally light quasi-experimental techniques.
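For readers unfamiliar with it, the Gauss-Seidel method that the SIM package relied on can be illustrated with a minimal sketch on a toy three-equation simultaneous system (the equations are hypothetical, not from the MPS): each equation is solved in turn, immediately reusing the latest values, until the solution stops moving.

```python
# Minimal sketch of Gauss-Seidel iteration on a toy three-equation simultaneous system,
# the kind of solver the SIM package implemented. The equations below are hypothetical
# illustrations, not MPS equations.
def gauss_seidel(g=100.0, tol=1e-6, max_iter=500):
    c, i, y = 0.0, 0.0, 0.0                # starting guesses for consumption, investment, income
    for _ in range(max_iter):
        y_prev = y
        c = 20.0 + 0.6 * y                 # each equation is solved in turn,
        i = 10.0 + 0.2 * y                 # immediately reusing the latest values
        y = c + i + g
        if abs(y - y_prev) < tol:          # stop once the solution stops moving
            return c, i, y
    raise RuntimeError("no convergence")   # the kind of breakdown a 1960s run would report

print(gauss_seidel())
```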

MPS programme (MPS folder)

Overall, my tentative picture of the MPS is not that of a large-scale, consistent Keynesian model. Rather, it is one of multiple compromises and constant back-and-forth between theory, empirical work and computation. It is not even a model, but a collection of equations whose scope and contours could be adapted to the purpose at hand.
