Heterogeneous agent macroeconomics has a long history, and it raises many questions

A cornerstone of thoughtful and lazy criticisms of mainstream macroeconomics alike is the idea that macroeconomists have spent 40 years just writing infinitely lived representative agent models (in which all agents behave the same way and are eternal), and isn’t that a ridiculous assumption? To which mainstream macroeconomists invariably respond: “heterogeneous agent models are all over the place,” which is in turn invariably met with “yes, but this is a very recent development.” I myself long considered the development of heterogeneous agent macro a response to the 2008 crisis. Then I began working on the history of economics at Minnesota, and I realized that by the mid-1980s, heterogeneous agent models were already all over the place. Much of the current discussion on the state of macro is premised on a flawed historical narrative. And acknowledging that representative agent models have long coexisted with heterogeneous agent models of various stripes raises a host of new questions for critics and proponents of mainstream macro.

Heterogeneous agent models: the Minnesota genealogy

In 1977, Cowles economist Truman Bewley was looking to microfound the permanent income hypothesis (which basically states that agents try to smooth their consumption over time by saving and dis-saving). He came up with a model in which agents are heterogeneous in the income fluctuations they face (back then you didn’t call these “shocks” yet) and, crucially, are not allowed to borrow. Though he subsequently moved from general equilibrium theory to survey-based investigation of why wages are sticky, his earlier work gave rise to a class of models named after him by Lars Ljungqvist and Tom Sargent. Their defining feature is that market incompleteness, in particular incomplete insurance, makes otherwise identical agents react differently to the idiosyncratic shocks they face, so that their ex-post wealth follows a distribution that needs to be characterized. Major contributions to this literature included landmark works in the early 1990s by former physics student Rao Aiyagari (1981 Minnesota PhD under Wallace) and Mark Huggett (1991 Minnesota PhD under Prescott).

Huggett wanted to understand why risk-free interest rates were on average lower than what calibrated representative-agent models predict. He constructed a model in which households face idiosyncratic income endowment shocks that they can’t fully insure against because they face a borrowing constraint, and showed that this results in higher precautionary savings (so that they can later smooth consumption in spite of uncertainty), thus a lower risk-free rate. “A low risk-free rate is needed to persuade agents not to accumulate large credit balances so that the credit market can clear,” he explained. Aiyagari intended to study how an economy with a large number of agents behaved, hoping to bridge the gap between representative-agent models and observed patterns in individual behavior. He also wanted to study how individual risk affects aggregate saving (not much, he found). He wrote a production model in which agents differ in the uninsured idiosyncratic labor endowment shocks they face and trade assets among themselves. Since agents save more, the capital stock, the wage rate and labor productivity are all higher, he showed. He also used a Bewley model to show that if markets were incomplete, capital income could be taxed. This challenged the famous result independently reached by Kenneth Judd and Christophe Chamley (in a representative agent setting) that positive capital taxation is suboptimal.
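To fix ideas, here is a minimal sketch of the household problem at the heart of these Bewley-Huggett-Aiyagari economies. The notation is mine and deliberately generic, not that of any specific paper:

```latex
% A generic Bewley-Huggett-Aiyagari household problem (schematic)
\[
V(a, s) \;=\; \max_{c,\,a'} \; u(c) \;+\; \beta\, \mathbb{E}\big[\, V(a', s') \mid s \,\big]
\quad \text{s.t.} \quad
c + a' \le w\,s + (1+r)\,a, \qquad a' \ge \underline{a}.
\]
```

Here s is the uninsurable idiosyncratic income (or labor endowment) shock, a the single asset agents can trade, and the last inequality the borrowing constraint. The interest rate has to be consistent with the stationary distribution of agents over assets and shocks: zero net asset supply in Huggett’s endowment economy, assets adding up to the capital stock in Aiyagari’s production economy. The precautionary saving generated by uninsurable risk plus the borrowing constraint is what pushes the equilibrium risk-free rate down in Huggett and the capital stock up in Aiyagari.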

Minnesota macroeconomists in the 1980s were fond of another type of model in which heterogeneity bred restricted participation in markets, leading to a lack of insurance against extrinsic uncertainty (that is, shocks affecting nonfundamental variables). The heterogeneity was in age, and it was generated by the coexistence of at least two generations of otherwise identical agents in each of an infinite succession of periods. Since each agent is born and later dies, this de facto restricts participation in markets that opened before their birth. What came to be known as overlapping generations models were originally engineered by Maurice Allais in 1947 and Paul Samuelson in 1958. Samuelson’s consumption-loan model allowed him to investigate the determination of interest rates (he identified an autarkic equilibrium in which money has no value because agents consume their endowment, and another one in which money is used to delay consumption). His purpose was to “point up a fundamental and intrinsic deficiency in a free price system … no tendency to get you to positions on the [Pareto efficiency frontier] that are ethically optimal.” OLG models were subsequently wedded to the examination of the role of money in the economy: they were used in the famous 1972 paper by Lucas, provided the original model in which Cass and Shell framed their sunspots (they argued that OLG was the “only dynamic disaggregated macroeconomic model”), and were at the core of a conference on microfounding the role of money organized by John Kareken and Neil Wallace at the Federal Reserve Bank of Minneapolis in 1978 (Bewley, Cass and Shell contributed to the resulting volume). Aiyagari and many others at Minnesota (half of Wallace’s PhD students) had spent years comparing the properties of infinitely lived agent models and OLGs.
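A stripped-down two-period version of the consumption-loan logic (Samuelson’s original setup had three periods, and the notation here is mine) helps see where the two equilibria come from:

```latex
% A schematic two-period OLG endowment economy with fiat money
\[
\max_{c^y_t,\; c^o_{t+1}} \; u(c^y_t) + \beta\, u(c^o_{t+1})
\quad \text{s.t.} \quad
c^y_t + \frac{m_t}{p_t} \le e^y, \qquad
c^o_{t+1} \le e^o + \frac{m_t}{p_{t+1}}.
\]
```

The young can only transfer resources to their old age by acquiring money from the current old, since they cannot trade with generations that preceded them or with those not yet born. One equilibrium is autarkic: money is worthless and everyone consumes their endowment. If agents value old-age consumption enough, there is another equilibrium in which money is valued and passed from young to old, allowing consumption to be delayed, which is exactly the pair of equilibria Samuelson used to discuss the determination of interest rates.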

There were several reasons to bring heterogeneity into general equilibrium models. Some researchers wanted to study its consequences for the level and dynamics of prices and quantities; others were interested in understanding the effects of business cycles on the welfare of various types of consumers (something that governments might want to offset by removing some of the risks agents faced, through social security for instance). The latter was the motive behind the dissertation Ayse Imrohoroglu completed at Minnesota in 1988 under Prescott. One of her papers pushed back against Lucas’s 1987 conclusion that the welfare cost of business cycle fluctuations in aggregate consumption was negligible. She wrote a model in which variable probabilities of finding a job create idiosyncratic income uncertainty that agents cannot completely offset because they face borrowing constraints. She concluded that in specific settings and for some parameters, the welfare cost of the business cycle was significant ($128 per person, five times larger than in an economy with perfect insurance).

Per Krusell (1992 Minnesota PhD under Wallace) and Carnegie economist Tony Smith were also concerned with the consequences of heterogeneity for the business cycle and its welfare implications. Their agenda was to check whether a heterogeneous agent model fared better than a representative agent one when it came to replicating the behavior of macro aggregates. They used a production model in which households face idiosyncratic employment shocks and a borrowing restriction. These agents consequently hold different positions in the wealth distribution, with some of them ‘rich’ and some of them ‘poor.’ They also added an aggregate productivity shock and, in a spinoff of the model, differences in agents’ random discount factors (their degree of patience).

They found that when shocks are calibrated to mimic real GDP fluctuations, the resulting overall wealth distribution is close to the real-world one. They noted that the resulting level and dynamics of aggregates were not substantially different from what was obtained with a representative agent model, a result later attributed to their calibration choices. Furthermore, they explained that “the distribution of aggregate wealth was almost completely irrelevant for how the aggregates behave in the equilibrium” (because the shocks they chose were not that big, agents ended up insured enough that their marginal propensity to save was largely independent of their actual wealth and income, except for the poorest, who don’t weigh much in aggregate wealth anyway; the borrowing constraint didn’t play a big role in the end).

In a follow-up survey, Krusell and Smith made it clear that their purpose was not “to provide a detailed assessment of the extent to which inequality influences the macroeconomy … [or] how inequality is determined.” It seems to me that back then, studying inequality in wealth, income and wages was not the main motive for developing these models (Aiyagari excepted). The growing amount of micro data produced, in particular through the US Panel Study of Income Dynamics initiated by Michigan economist Jim Morgan in the wake of Johnson’s War on Poverty, provided a new set of facts calibrators were challenged to replicate. These included a more disaggregated picture of the income and wealth distribution. If Bewley models featured prominently in Ljungqvist and Sargent’s 2000 Recursive Macroeconomic Theory textbook, then, it was because they were needed to match the “ample evidence that individual households’ positions within the distribution of wealth move over time.” Macroeconomists’ motives for using heterogeneous agent models gradually shifted as they became more directly interested in the two-way relation between inequality and aggregate fluctuations. Other types of heterogeneity were introduced: in the demographic structure, in the types of shocks agents face (fiscal and productivity shocks among others) and in their innate characteristics (the liquidity of their asset portfolios, their marginal propensity to consume, their health, their preferences, etc.).

Innovations were much needed to solve these models, as aggregate equilibrium prices depended not just on exogenous variables but also on the entire wealth and income distribution of agents, which endogenously changes over time. This meant that solutions could not be derived analytically. If Krusell and Smith’s work proved so influential, it wasn’t merely because they proposed a new model, but also because they built on numerical methods to provide an iterative numerical solution algorithm (based on their insight that the only thing agents needed to make consumption decisions – and thus the only thing needed to compute the solution – was the mean of the wealth distribution, which determined future interest rates). The development of heterogeneous agent macro models from the 1990s onward therefore paired theoretical and computational investigations.
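To give a flavor of what such an iterative computation involves, here is a minimal Python sketch of a Huggett-style stationary equilibrium: the household problem is solved by value function iteration for a guessed interest rate, the implied stationary wealth distribution is computed, and the rate is adjusted until the asset market clears. All parameters and grids are illustrative choices of mine; this is the simplest fixed point in the family, not Krusell and Smith’s own algorithm, which additionally tracks the mean of the wealth distribution to forecast prices under aggregate shocks.

```python
"""Minimal Huggett-style stationary equilibrium (illustration only).

Households face a two-state idiosyncratic endowment shock, trade a single
risk-free bond subject to a borrowing limit, and the interest rate adjusts
until the bond market clears (zero net supply). Parameters and grids are
chosen for speed, not realism.
"""
import numpy as np

# --- primitives (illustrative values) ---------------------------------------
beta, sigma = 0.96, 2.0                  # discount factor, CRRA coefficient
y = np.array([0.5, 1.0])                 # endowment in the low and high state
P = np.array([[0.6, 0.4],                # Markov transition matrix for the shock
              [0.2, 0.8]])
a_min, a_max, na = -1.0, 12.0, 200       # borrowing limit and asset grid
a_grid = np.linspace(a_min, a_max, na)

def u(c):
    """CRRA utility; infeasible (non-positive) consumption gets a large penalty."""
    c_pos = np.maximum(c, 1e-10)
    util = (c_pos ** (1 - sigma) - 1) / (1 - sigma)
    return np.where(c > 1e-10, util, -1e12)

def solve_household(r, tol=1e-6, max_iter=2000):
    """Value function iteration on the asset grid for a given interest rate r."""
    # utility of every (shock, current asset, next asset) choice, computed once
    c = y[:, None, None] + (1 + r) * a_grid[None, :, None] - a_grid[None, None, :]
    U = u(c)                                           # shape (2, na, na)
    V = np.zeros((2, na))
    for _ in range(max_iter):
        EV = P @ V                                     # expected continuation value
        values = U + beta * EV[:, None, :]
        V_new = values.max(axis=2)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return values.argmax(axis=2)                       # index of optimal next asset

def stationary_distribution(policy, tol=1e-10, max_iter=20000):
    """Iterate the law of motion of the distribution over (shock, assets)."""
    dist = np.full((2, na), 1.0 / (2 * na))
    for _ in range(max_iter):
        new = np.zeros_like(dist)
        for s in range(2):
            for s_next in range(2):
                np.add.at(new[s_next], policy[s], P[s, s_next] * dist[s])
        if np.max(np.abs(new - dist)) < tol:
            break
        dist = new
    return dist

def aggregate_assets(r):
    """Aggregate bond holdings implied by individual decisions at rate r."""
    policy = solve_household(r)
    dist = stationary_distribution(policy)
    return np.sum(dist * a_grid[policy])

# --- bisection on r until the bond market clears (zero net supply) ----------
# assumes the equilibrium rate lies inside the bracket below
r_lo, r_hi = -0.2, 1 / beta - 1 - 1e-3
for _ in range(25):
    r_mid = 0.5 * (r_lo + r_hi)
    if aggregate_assets(r_mid) > 0:
        r_hi = r_mid                                   # too much saving: lower r
    else:
        r_lo = r_mid
print(f"equilibrium risk-free rate is roughly {0.5 * (r_lo + r_hi):.4f}")
```

Even in this bare-bones version, the structure is the one described above: individual decision rules and the cross-sectional distribution have to be made mutually consistent with the prices they generate, which is why the 1990s literature paired theory with algorithm design.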

 

Questions for critics, supporters and historians of mainstream macro

What I describe here is just a subset of the heterogeneous agent models written from the 1980s onward. I deal only with part of the Minnesota genealogy (excluding works by Diaz, Manuelli and others) and with household heterogeneity. The work done by other economists on precautionary savings and consumption, the catalyzing role played by Angus Deaton’s empirical work on aggregation and consumption, and all the literature on firm heterogeneity are left aside. This is a rough account which probably contains analytical errors and whose underlying narrative will likely evolve with the historical evidence I gather. But sketchy as it is, this account already raises a host of new questions, some of which I ask as a historian and others as a candid observer of current macro modeling debates.

First, I’m puzzled by how this line of research has been ignored by critics of mainstream macro and wiped out of the canonical history of macro told by its major protagonists. You might judge that this cluster of economists was anecdotal, but as I said, it is only a subset of those macroeconomists who have worked with heterogeneous agent models since the 1980s. They were located at the center of the field, working in, or trained by, the economics department that critics and eulogists alike believe was setting the agenda for macro in those years: Minnesota. A further sign of its importance is that this approach was institutionalized in curricula, surveys and textbooks well before the 2010s. OLGs featured prominently in Sargent’s Minnesota course notes (published in 1987) as the preferred vehicle to model credit, money and government finance. As I said above, his 2000 textbook made ample space for Bewley models. Some computationally oriented textbooks even devoted half their space to heterogeneous agent models by the mid-2000s. José-Víctor Ríos-Rull, who completed his 1990 Minnesota PhD under Prescott on OLG models, immediately set out to teach the theory and computation of Bewley, OLG and other heterogeneous agent models.

Heterogeneous agent modeling was therefore part of the macro playbook already at the turn of the 2000s, that is, at the moment the state of macro was considered “good.” Contrary to what I had previously imagined, then, the most recent crop of models was not a reaction to the crisis. Rough citation patterns to Aiyagari from the Web of Science economics database, for instance, illustrate that the rise of heterogeneity in macro was a long, slow but steady trend. So why this invisibility? Was it because the general sense was that these models’ conclusions were not essentially different from those reached with representative agent models? Pre-crisis assessments of the literature differ. Lucas, for instance, wrote in 2003 that “[Krusell and Smith] discovered, realistically modeled household heterogeneity just does not matter very much. For individual behavior and welfare, of course, heterogeneity is everything.” But in a survey circulated three years later, Krusell and Smith themselves were more nuanced:

“The aggregate behavior of a model where the only friction … is the absence of insurance markets for idiosyncratic risk is almost identical to that of a representative-agent model … for other issues and other setups, the representative-agent model is not robust. Though we do not claim generality, we suspect that the addition of other frictions is key here. As an illustration we add a certain form of credit-market constraints to the setup with missing insurance markets. We show here that for asset pricing—in particular, the determination of the risk-free interest rate—it can make a big difference whether one uses a representative-agent model or not.”

Anecdotal evidence I’ve been collecting suggests even more fluctuating assessments were offered in 1990s and 2000s graduate macro courses, ranging from “these models are the future of macro” to “these models are useless as they don’t improve on representative agent models.” If so, why this variance? Tractability issues – authors had to invent new computation algorithms as they brought new types of heterogeneity into these models? Confirmation bias – retaining only papers that emphasized similarities between the two types of models? Ideology – rejecting heterogeneous agent models because they opened the door to active risk-offsetting policies?

My other set of questions is predicated on the claim that acknowledging this modeling tradition should shift current debates on macroeconomic models. Faulting macroeconomists for sticking with infinitely lived representative agent models for 40 years is simply incorrect. Now, you might argue that the departures from this benchmark model I have described were insufficient. But then the question becomes: how much heterogeneity is enough? If you believe that the flourishing literature born out of these early efforts (HANK, etc.) remains a minimal departure from the “standard” model no matter how far the approach is stretched to encompass new types of heterogeneity, why isn’t it enough? If a more radical break with these practices is needed, if a shift in the nature rather than the sophistication of models is unavoidable (for instance, a shift to agent-based models or non-microfounded models), why is that?

Another question bears on how much “progress” there is in macro right now and whether it’s fast enough. The standard explanation for the burgeoning heterogeneous agent literature is better computers + better data. I have already speculated elsewhere on why I think this is a necessary but far from sufficient condition to transform economists’ practices (roughly, the effect of running models on faster computers to match them with more fine-grained data is conditional upon the profession’s acceptance of agreed standards of proof, agreed standards of conclusive empirical work, and shared software). But let’s assume this is the case. Then how can macroeconomists be confident that models will be improved enough to “see” the next crisis brewing? How much heterogeneity is this going to take? Right now you can have heterogeneity in shocks, in consumers’ characteristics and maybe in one firm characteristic. How long before tractable models with heterogeneity + financial frictions + non-rational expectations + search + monopolistic market structures can be developed? What if it takes 20 years and the next crisis is 5 years ahead? To put it graphically, how can I be confident that I won’t see a 2020 NYT cartoon picturing policy makers, surrounded by a crowd of homeless, starving people and holding charts of plummeting economic variables, knocking at a door tagged “academic macroeconomists”? The door is closed, with a speech bubble that reads: “Come back in 10 years, we’re not ready!” All my questions, in the end, are about the right strategy to build macro engines versatile enough to pin down not just past crises but also brewing imbalances.

EDIT: I got many comments on the final sentence. It has been read as a defense of “forecasting the next crisis” as the criterion on which the quality of macroeconomic models should be judged. This is not what I meant, but I confess that my use of “see” and “brewing crises” is extremely vague. My views on how macroeconomic models should be evaluated are equally muddled. I think it all boils down to what you put under the label “forecasting.” I have already explained here why it is epistemologically inconsistent to hold economists responsible for bad unconditional forecasts. My tentative criterion is that the models macroeconomists use should allow them to track a variety of important phenomena: back in the 2000s, securitization and bubbles; in the 2010s, trade wars, climate, inequality (all of which macroeconomists take into account) and political factors like shrinking democracy and growing polarization (I don’t know whether macro models now feature this kind of variable). It’s not a positive historical statement about what quality criteria in macroeconomics were and are, just a lay opinion on the present situation.

A model is a device that allows economists to observe, and sometimes explain, the economy by zooming in on the small subset of phenomena they think are most relevant. The bulk of macroeconomic models published in the 2000s did not single out what was happening on financial markets as important for understanding the evolution of macroeconomic aggregates. Economists had positive and normative things to say about how financial markets work, but in a distinct field, finance. A few Rajans tracked the mortgage security market and reflected on its macroeconomic consequences, but my understanding is that macroeconomists as a group simply did not have a collective discourse on the macroeconomic consequences of securitization and risk exposure to bring to the public scene (happy to be disproved). Of course you can never know in advance which phenomena are going to turn out to be relevant. But you can build a set of observational devices that track a larger range of phenomena (this includes statistics: you cannot “see” a new phenomenon, like the rise of the wealth share of the top 1%, unless you can measure it). Is what I’m trying to say with so many words here just that macroeconomists should get better at conditional forecasting?


Working on 1960s macroeconometrics: there’s an echo on the line

Three years ago, a group of historians of economics embarked on a reexamination of the relationships between theoretical and empirical work in macroeconomics. Our goal was inward looking. We were not primarily looking to contribute to present discussions on the state of macro, but to correct what we perceived as a historiographical bias in our own scholarship: the tendency to paint the history of macroeconomics as a succession of theoretical battles between Keynesians, monetarists, new classicals, New Keynesians, etc. This emphasis on theory did not square well with a common thread in the interviews several of us had conducted with 1960s grad students from MIT and elsewhere: a formative experience many of them shared was contributing to one of the large-scale macroeconometric models developed in those years. My own pick was the model jointly developed at the Fed, MIT and the University of Pennsylvania (hereafter the FRB model, also known as the FMP or MPS model). Yet as I complete my second paper on the model (joint with Roger Backhouse, just uploaded here), I find that the dusty debates we document have found an unexpected echo in contemporary exchanges.

I learned two lessons from writing on the FRB model. The first one is that I wasn’t as immune from the epistemic spell of not-yet-defunct economists as I had thought. I came to the project with no hidden opinion on the DSGE approach to macro, one born of a synthesis between the modeling rules spelled out by Sargent and Lucas and a variety of add-ons proposed by so-called New Keynesians aimed at providing a more satisfactory depiction of shocks and response mechanisms. But like most historians of macro, I had been trained as an economist. I had been raised to believe that microfounded models were the clean, rigorous way to frame a discourse on business cycles (the insistence that rational expectations was the gold standard was, I think, already gone by the mid-2000s). If I wanted to trade rigor for predictive power, then I needed to switch to an altogether different practice, VARs (which I effectively did as a central bank intern tasked with predicting short-term moves in aggregate consumption). What I discovered was that my training had biased the historiographical lenses through which I was approaching the history of macroeconometric models: what I was trying to document was the development and use of A model, one defined by a consistent set of behavioral equations and constraints and a stable set of rules whereby such a system was estimated and simulated for policy purposes. The problem, I quickly found out, was that there was no such historical object to research.

What we found in the archives was a collection of equations whose specification and estimation procedures were constantly changing across time and locations. There was no such thing as the FMP model. To begin with, the Fed team and the academic team collaborated closely but developed distinct models that were only merged after three years. And the boundaries of each model constantly evolved as students returned new blocks of equations and simulations blew up. The ordinary business of macroeconometric modeling looked like a giant jigsaw puzzle. This December 1967 letter from Albert Ando to Franco Modigliani is representative:

[Scans of the December 1967 letter from Ando to Modigliani]

 

Viewed from the perspective of modern macro, it was a giant mess, and in our first drafts we thus chose to characterize macroeconometrics as a “messy” endeavor. But being “messy” in the sense of not being theoretically and econometrically founded, and thus unscientific, is exactly why Lucas and Sargent argued these models should be dismissed. Their famous 1979 “After Keynesian Macroeconomics” paper is an all-out attack on models of the FRB kind: they pointed to the theoretical “failure to derive behavioral relationships from any consistently posed dynamic optimization problems,” the econometric “failure of existing models to derive restrictions on expectations” and the absence of convincing identification restrictions, concluding with the “spectacular failure of the Keynesian models in the 1970s.” In his critique paper, Lucas also cursed “intercept adjustment,” also known as “fudging” (revising the intercept to improve forecast accuracy, a practice which spread as building inflationary pressures resulted in false predictions in the 1970s). It was proof that those models were misconceived, he argued.

 

The second lesson I learned from working on primary sources is that macroeconometricians were perfectly aware of the lack of theoretical consistency and the fuzziness of estimation and simulation procedures. More than that, they endorsed it. Every historian knows, for instance, that the quest for microfoundations did not begin with Lucas; we have all repeatedly stumbled on pre-Lucasian statements on the topic. Jacob Marschak opened his 1948 Chicago macro course with this statement: “this is a course in macro-economics. It deals with aggregates…rather than with the demand or supply of single firms or families for single commodities. The relations between aggregates have to be consistent, to be sure, with our knowledge of the behavior of single firms or households with regards to single good.” In 1971, Terence Gorman likewise opened his lectures on aggregation with a warning: “theorists attempt to derive some macro theory from the micro theory, usually allowing it to define the aggregate in question. In practice they are reduced to asking ‘when can this be done.’ The answer is ‘hardly ever.’” Kevin Hoover has argued that there were at least three competing microfoundational programs in the postwar period, Lucas’s use of the representative agent being just one of them. But for macroeconometricians, the lack of theoretical consistency in the Lucasian sense was also the result of doing big science, and of facing a trade-off between theoretical consistency and data fit.

Building a macroeconometric model of the FRB kind involved several teams and more than 50 researchers, and it was impossible for all of them to agree on the specification of all equations: “None of us holds the view that there should be only one model. It would indeed be unhealthy if there were no honest differences among us as to what are the best specifications of some of the sectors of the model, and when such differences do exist, we should maintain alternative formulation until such time as performances of two formulations can be thoroughly compared,” Ando explained to Modigliani in 1967. By 1970, it had become clear that macroeconomists would not agree on the adequate tests to compare alternative specifications either. Empirical practices, goals and trade-offs were too different. The Fed team wanted a model which could quickly provide good forecasts: “We get a considerable reduction in dynamic simulation errors if we change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption,” Fed project leader Frank de Leeuw wrote to Modigliani in 1968. But the latter’s motive for developing the FRB model was different: he wanted to settle a theoretical controversy with Friedman and Meiselman on whether the relation of output to money was more stable than the Keynesian multiplier. He was therefore not willing to compromise theoretical integrity for better forecasting power: “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded to de Leeuw.

Different goals and epistemic values resulted in different trade-offs between theoretical consistency and data fit, between model integrity and flexibility. The intercept fudging disparaged by Lucas turned out to be what clients of the new breed of firms selling forecasts based on macroeconometric models paid for. What businessmen wanted was the informed judgment of macroeconomists, one that the Federal Reserve Board also held in higher esteem than mere “mechanical forecasts.” Intercept corrections were later reframed by David Hendry as an econometric strategy to accommodate structural change. In short, the messiness of macroeconometrics was not perceived as a failure; it was, rather, messiness by design. In his response to Lucas and Sargent, Ando explained that reducing a complex system to a few equations required using different types of evidence and approximations, so that the task of improving them should be done “informally and implicitly.”

That recent discussions on the state of macroeconomics somehow echo the epistemic choices of 1960s macroeconometricians is an interesting turn. Since 2011, Simon Wren-Lewis has been calling for a more “pragmatic” approach to microfoundations. His most recent blog post describes the development of the British COMPACT model as weighing the costs and gains of writing internally non-consistent models (the model features an exogenous credit constraint variable). He calls this approach “data-based” and “eclectic,” and he argues that macro would have been better off had it allowed this kind of approach to coexist with DSGE. Last year, Vitor Constancio, Vice-President of the European Central Bank, noted that “we constantly update our beliefs on the key economic mechanisms that are necessary to fit the data,” concluding that “the model should be reasonably flexible.” Olivier Blanchard also recently acknowledged that macroeconomic models fulfill different goals (descriptive, predictive and prescriptive). He advocated building different models for different purposes: academic DSGE models are still fit for structural analysis, he argued, but “policy modelers should accept the fact that equations that truly fit the data can have only a loose theoretical justification.” In a surprising turn, he argued that “early macroeconomic models had it right: the permanent income theory, the life-cycle theory, and the Q theory provided guidance for the specification of consumption and investment behaviour, but the data then determined the final specification.” Are we witnessing an epistemological revolution? Or a return to epistemological positions that economists thought they had abandoned?


How ‘tractability’ has shaped economic knowledge: a few conjectures

Yesterday, I blogged about a question I have mulled over for years: what is the aggregate consequence, for the knowledge economists collectively produce, of the hundreds of thousands of “I model the government’s objective function with a well-behaved social welfare function X because canonical paper Y does it,” “I assume a representative agent to make the model tractable” or “I log-linearize to make the model tractable” sentences they routinely write? Today I had several interesting exchanges (see here and here) which helped me clarify and spell out my object and hypotheses. Here is an edited version of some points I made on Twitter. These are not the conclusions of some analysis, but the priors with which I approach the question, and which I’d like to test.

1. A ‘tractable’ model is one that you can solve, which means there are several types of tractability: analytical tractability (finding an analytical solution to a theoretical model), empirical tractability (being able to estimate or calibrate your model) and computational tractability (finding numerical solutions). It is sometimes hard to discriminate between theoretical and empirical tractability, or between empirical and computational tractability.

(Note: if you want a definition of “model,” read the typology in the Palgrave Dictionary article by Mary Morgan. If you want to see the typology in action, read this superb paper by Verena Halsmayer on the seven ways Robert Solow conceived economic models.)

2. Economists don’t merely make modeling choices because they believe these are key to imitating the world, to predicting correctly or to producing useful policy recommendations, but also because they otherwise can’t manipulate their models. My previous post reflected on how Robert Lucas’s reconstruction of macroeconomics has been interpreted. While he believed microfoundations to be a fundamental condition for a model to generate good policy evaluation, he certainly didn’t think a representative agent assumption would make macro models better. No economist does. The assumption is meant to avoid aggregation issues. Nor does postulating linearity or normality usually reflect how economists see the world. What I’d like to capture is the effect of those choices economists make “for convenience,” to be able to reach solutions, to simplify, to ease their work, in short, to make a model tractable. While those assumptions are conventional and meant to be lifted as mathematical, theoretical and empirical skills and technology (hardware and software) ‘progress,’ their underlying rationale is often lost as they are taken up by other researchers, spread, and become standard (implicit in the last sentence is the idea that what counts as a tractable model evolves as new techniques and technologies are brought in).

3. An interesting phenomenon, then, is “tractability standards” (Noah Smith suggested calling it “tractability + canonization,” but canonization implies a reflexive endorsement, while standardization conveys the idea that these modeling choices spread without specific attention, that they seem mundane – yes, I’m nitpicking). Tractability standards have been more or less stringent over time, and they haven’t followed a linear pattern whereby constraints are gradually relaxed, allowing economists to design and manipulate more complex (and richer) models decade after decade. My prior on the recent history of tractability in macroeconomics, for instance, is something like this (beware, wishful thinking ahead):

Between the late 1930s and the 1970s, economists started building large-scale macroeconometric models, and the number of equations and the diversity of theories they combined soon swelled out of control. This meant that finding analytical solutions and estimating and simulating those models was hell (especially when all you had was 20 hours of IBM 360 time a week). But there was neither a preferred nor a prohibited way to make a model tractable. Whatever solution you could come up with was fine: you could just take a whole block of equations out if it was messing up your simulation (like the wage equations + labor market block in the case of the MPS model). You could devise a mix of two-stage least squares, limited information maximum likelihood and instrumental variable techniques and run recursive block estimation (like Frank Fisher did). You could pick up your phone and ask Bayesian Zellner to devise a new set of tests for you (like Modigliani and Ando did).

With Lucas, Sargent, Kydland and Prescott came new types of models, and thus new analytical, empirical and computational challenges. But this time, “tractability standards” spread alongside the models (I’m not sure by whom, or how coordinated or intentional it was). If you wanted to publish a macro paper in top journals in the 1980s, you were not allowed to take whatever action you wished to make your model tractable. Representative agents and market clearing were fine; non-microfounded simple models were not. Linearizing was okay, but finding solutions through numerical approximation wasn’t generally seen as satisfactory. Closed-form solutions were preferred. The same was true in some micro fields, like public economics. Representative agents and well-behaved social welfare functions made general equilibrium optimal taxation models “tractable.” That these assumptions were meant to be lifted later was forgotten; models became standardized, and so did research questions. How inequality evolved wasn’t a question you could answer anymore, and maybe inequality wasn’t something you could even “see.”

What has happened in the post-crisis decade is less clear. My intuition is that tractability standards have relaxed again, allowing for more diverse ways to hunt for solutions. But it’s not clear to me why. The usual answer is that better software and faster computers are fostering the spread of numerical techniques, making tractability concerns a relic of the past. I have voiced my skepticism elsewhere. The history of the relations between economists and computers is one in which there is a leap in hardware, software or econometric techniques every 15 years, with economists declaring the end of history, enthusiastically flooding their models with more equations and more refinements… and finding themselves 10 years later with non-tractable models all over again. Reflecting on the installation of an IBM 650 at Iowa State University in 1956, R. Beneke, for instance, joked that once the computer accommodated the inversion of a 198-row matrix, “new families of exotic regression models came on the scene.” Agricultural economists, he remarked, “enjoyed proliferating the regression equation forms they fitted to data sets.” Furthermore, it takes more than computers to spread numerical techniques. After all, these techniques have been in limited use since the late 1980s. An epistemological shift is needed, one that involves relinquishing the closed-form solution fetish. Was this the only option left for economists to move on after the crisis? Or has the rise of behavioral economics, of field and laboratory experiments, and of the structural vs. reduced-form debate opened up the range of tractability choices?

4. Some heterodox economists (Lars Syll here) believe that the whole focus on tractability is a hoax: it shows that restricting oneself to deductivist mathematical models is flawed in the first place. There are other ways to model, he argues, and ontological considerations should take precedence over formalistic tractability. Cahal Moran goes further, arguing that economists should be allowed to reason without modeling. There is a clear fault line here, for Lucas, among others, insisted in a letter to Matsuyama that “explicit modeling can give a spurious sense of omniscience that one has to guard against… but if we give up explicit modeling, what have we got left except ideology? I don’t think either Hayek or Coase have faced up to this question.” Perry Mehrling, on the other hand, believes tractability is more about teaching, communication and publication than about thinking and exploring.

5. Focusing on tractability offers new lenses through which to approach debates on the current state of the discipline (in particular macro). Absent archival or interview smoking guns, constantly ranting that the new classical revolution was undoubtedly a neoliberal or neocon turn, or that the modeling choices of Lucas, Sargent, Prescott or Plosser were of course ideological, produces more heat than light. These choices reflect a mix of political, intellectual and methodological values, a mix of beliefs and tractability constraints. The two aspects might be impossible to disentangle, but it at least makes sense to investigate their joint effects, and to make room for the possibility that the standardization of tractability strategies has shaped economic knowledge.

6. The tractability lens also helps me make sense of what is happening in economics now, and of what might come next. Right now, clusters of macroeconomists are each working on relaxing one or two tractability assumptions: research agendas span heterogeneity, non-rational expectations, financial markets, non-linearities, fat-tailed distributions, etc. But if you put all these add-ons together (assuming you can design a consistent model, and that add-ons are the way forward, which many critics challenge), you’re back to non-tractable. So what is the priority? How do macroeconomists rank these model improvements? And can the profession afford to wait 30 more years, three more financial crises and two trade wars before it can finally say it has a model rich enough to anticipate crises?

 


What is the cost of ‘tractable’ economic models?

Edit: here’s a follow-up post in which I clarify my definition of ‘tractability’ and my priors on this topic

Economists study cycles, but they also create some. Every other month, a British heterodox economist explains why economics is broken, and other British economists respond that the critic doesn’t understand what economists really do (there’s even a dedicated hashtag). The anti and pro arguments have been more or less the same for the past 10 years. This week, the accuser is Howard Reed and the defender Diane Coyle. It would be business as usual were it not for interesting comments along the same lines by Econocracy coauthor Cahal Moran at openDemocracy and by Jo Michell on Twitter. What matters, they argue, is not what economists do but how they do it. The problem is not whether some economists deal with money, financial instability, inequality or gender, but how their dominant modeling strategies allow them to take these into account or rather, they argue, constrain them to leave these crucial issues out of their analysis. In other words, the social phenomena economists choose to study and the questions they choose to ask, which have come under fire since the crisis, are in fact determined by the methods they choose to wield. Here lies the culprit: how economists write their models, how they validate their hypotheses empirically, and what they believe makes a good proof are too monolithic.

One reason I find this an interesting angle is that I read the history of economics over the past 70 years as moving from a mainstream defined by theories to a mainstream defined by models (aka tools aimed at fitting theories to reality, thus involving methodological choices), and eventually to a mainstream defined by methods. Some historians of economics argue that the neoclassical core has fragmented so much, because of the rise of behavioral and complexity economics among others, that we have now entered a post-mainstream state. I disagree. If “mainstream” economics is what gets published in the top-5 journals, maybe you don’t need a representative agent, DSGE or strict intertemporal maximization anymore. What the gatekeepers will scrutinize instead, these days, is how you derive your proof, and whether your research design, in particular your identification strategy, is acceptable. Whether this is a good evolution or not is not something for me to judge, but the costs and benefits of methodological orthodoxy become an important question.

Another reason why the method angle warrants further consideration is that a major fault line in current debates is how much economists should sacrifice to get ‘tractable’ models. I have long mulled over a related question, namely how much ‘tractability’ has shaped economics in the past decades. In a short 2008 paper, Xavier Gabaix and David Laibson list seven properties of good models: parsimony, tractability, conceptual insightfulness, generalizability, falsifiability, empirical consistency, and predictive precision. They don’t rank them, and their conscious and unconscious rankings have probably evolved sharply over time. But while tractability has probably never ranked highest, I believe the unconscious hunt for tractable models may have thoroughly shaped economics. I have hitherto failed to find an appropriate strategy to investigate the influence of ‘tractability.’ But I think no fruitful discussion of the current state of economics can be carried out without answering this question. Let me give you an example:

While the paternity of the theoretical apparatus underlying the new neoclassical synthesis in macro is contested, there is wide agreement that the methodological framework was largely architected by Robert Lucas. What is debated is to what extent Lucas’s choices were intellectual or ideological. Alan Blinder hinted at a mix of both when he commented in 1988 that “the ascendancy of new classicism in academia was… the triumph of a priori theorizing over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism.” Recent commentators like Paul Romer or Brad DeLong are not just trying to assess Lucas’s results and legacy, but also his intentions. Yet the true reasons behind modeling choices are hard to pin down. Bringing in a representative agent meant forgoing the possibility of tackling inequality, redistribution and justice concerns. Was it deliberate? How much does this choice owe to tractability? What macroeconomists were chasing in those years was a renewed explanation of the business cycle. They were trying to write microfounded and dynamic models. Building on intertemporal maximization therefore seemed a good path to travel, Michel de Vroey explains.

The origins of some of the bolts and pipes Lucas put together are now well known. Judy Klein has explained how Bellman’s dynamic programming was quickly implemented at Carnegie’s GSIA. Antonella Rancan has shown that Carnegie was also where Simon, Modigliani, Holt and Muth were debating how agents’ expectations should be modeled. Expectations were also the topic of a conference organized by Phelps, who came up with the idea of capturing imperfect information by modeling agents on islands. But Lucas, like Sargent and others, also insisted that the ability of these models to imitate real-world fluctuations be tested, as their purpose was to formulate policy recommendations: “our task…is to write a FORTRAN program that will accept specific economic policy rules as “input” and will generate as “output” statistics describing the operating characteristics of time series we care about,” he wrote in 1980. Rational expectations imposed cross-equation restrictions, yet estimating these new models substantially raised the computing burden. Assuming a representative agent mitigated computational demands and allowed macroeconomists to sidestep general equilibrium aggregation issues: it made new classical models analytically and computationally tractable. So did quadratic linear decision rules: “only a few other functional forms for agents’ objective functions in dynamic stochastic optimum problems have this same necessary analytical tractability. Computer technology in the foreseeable future seems to require working with such a class of functions,” Lucas and Sargent conceded in 1978.

Was tractability the main reason why Lucas embraced the representative agent (and market clearing)? Or could he have improved tractability through alternative hypotheses, leading to opposite policy conclusions? I have no idea. More important, and more difficult to track, is the role played by tractability in the spread of Lucas’s modeling strategies. Some macroeconomists may have endorsed the new class of Lucas-critique-proof models because they liked its policy conclusions. Others may have retained some hypotheses, then some simplifications, “because it makes the model tractable.” And while the limits of simplifying assumptions are often emphasized by those who propose them, the caveats are forgotten as the assumptions spread. Tractability restricts the range of accepted models and prevents economists from discussing some social issues, and with time, from even “seeing” them. Tractability ‘filters’ economists’ reality. My question is not restricted to macro. Equally important is to understand why James Mirrlees and Peter Diamond chose to reinvestigate optimal taxation in a general equilibrium setting with a representative agent (here, the genealogy harks back to Ramsey), whether this modeling strategy spread because it was tractable, and what the consequences for public economics were. The aggregate effect of “looking for tractable models” is unknown, and yet it is crucial for understanding the current state of economics.


A game of mirrors? Economists’ models of the labor market and the 1970s gender reckoning

Written with Cleo Chassonnery-Zaigouche and John Singleton

The underrepresentation of women in science is drawing increasing attention from scientists as well as from the media. For example, research examining glass ceilings, leaky or small pipelines, the influence of mentorship, biases in refereeing and recommendations, and styles of undergraduate education or textbooks is flourishing in STEM, engineering, the social sciences, and the humanities. Economics is no exception, as exemplified by a paper by Alice Wu released in the summer of 2017 that drew widespread coverage. One thing that nevertheless sets economics and (to greater and lesser extents) its cognate disciplines apart is that research topics such as the gender wage gap, women’s labor supply, and labor market discrimination are phenomena that many researchers in these areas both experience and study. An obvious question, therefore, is how the theories, models, and empirical evidence that economists develop and produce in turn shape their understanding of gender issues within their profession.

The foundation of CSWEP, which we briefly narrated here, stood at the crossroads of various historical and social trends. One was the growing public awareness of discrimination issues and an associated shift within the US legal context. The Equal Pay Act of 1963 and the last-minute inclusion of gender in the 1964 Civil Rights Act brought about a stream of sex discrimination cases, including the famous Bell AT&T case, whose settlement benefitted 15,000 women and minority employees. Phyllis Wallace, a founding member of CSWEP, was the expert coordinator for the Equal Employment Opportunity Commission. Though the 1964 Act excluded employees of public bodies, including government and university hires, legal battles at Ivy League universities resulted in compliance rules and the 1972 Equal Employment Opportunity Act. Professional societies were no exception. Beginning in 1969, the American Historical Association and the American Sociological Association established committees on the status of women in their respective disciplines. Chairing the sociology committee was Elise Boulding, whose husband Kenneth later joined as a founding member of CSWEP. Kenneth Boulding would draft “Role Prejudice as an Economic Problem,” the first part of a paper introducing CSWEP’s first report, “Combatting Role Prejudice and Sex Discrimination.”

Many of the actions pursued by economists were indeed similar to (and inspired by) those of other professional and academic societies, such as making day care available at major conferences, creating mentorship programs, and developing a roster of women in every field of economics to chair and participate in conference panels. But other issues were idiosyncratic to economics. In particular, the problems of gender bias in economics were viewed as economic issues from the beginning, as seen in Boulding’s 1973 article on sex discrimination within the profession. Early CSWEP reports routinely framed their organizational efforts as attempts to study and fix the “supply and demand for women economists,” that is, the labor market for economists. The framing applied in the reports echoed the objectives of the AEA Committee on Hiring Practices to establish better recruitment practices and the preceding work by the Committee on the Structure of the Economic Profession. The reports relied on basic Econ 101 logic at times, but on other occasions the CSWEP originators, most of whom were trained in labor economics, delved more deeply into ongoing debates on the interpretation of earnings differentials, the determinants of women’s labor supply, the extent of discrimination and the causes of occupational segregation, and on ways to fix the labor market for economists. One particularly revealing occasion was a letter exchange between Carolyn Shaw Bell, usually hailed as the driving force behind the women’s caucus that led to CSWEP’s creation, and University of Chicago economist Milton Friedman.

Bell vs Friedman 

Carolyn Shaw Bell (1920-2006) received her Ph.D. in 1949 from the London School of Economics and would spend her academic career at Wellesley College. After war work at the Office of Price Administration with Galbraith, she did empirical work on innovation and income distribution, and contributed to consumer economics. Bell was convinced to accept the inaugural chairpersonship of the Committee (rather than retire) after the American Economic Association voted to establish CSWEP in 1971 and launch an annual survey of women economists. In the summer of 1973, she sought to organize a session at the December ASSA meeting. She wanted to assemble a panel of economists from various, sometimes opposed, backgrounds to comment on the findings. She therefore asked Elizabeth Clayton, a specialist in Soviet economics at the University of Missouri, leftish labor economist David Gordon of the New School for Social Research, and Milton Friedman to participate. CSWEP members expected “out in the open” controversy from the panel.

 

In an August reply to Bell’s invitation, Friedman declined, as he was not planning to attend the meetings (he was to be replaced by George Stigler on the panel). He did so regretfully, he explained, because he held strong views on the CSWEP report. He especially disagreed with the statement that “every economics department shall actively encourage qualified women graduate students without regard to age, marital or family status.” Though he “sympathize[d] very much with the objective of eliminating extraneous considerations from any judgment of ability or performance potential,” Friedman confessed he “never believed in reverse discrimination whether for women or for Jews or for blacks.” To this list he later added discrimination against conservative scholars, which he believed was strong on college and university campuses.

Whether preferential treatment produced “reverse discrimination” against majority groups was a key point of contention over affirmative action policies. The social context was politically charged. While ending some of the “Great Society” programs, the Nixon Administration also set up the first affirmative action policy in 1969: the “Philadelphia Plan,” which required federal contractors and unions to meet targeted goals for minority hires. The Nixon policy was sold as “racial goals and timetables, not quotas,” but criticisms focused on de facto quotas and applicability. Though formal quotas for women did not exist in US universities and racial quotas were ruled out by a 1978 decision, formal and informal affirmative action were debated in similar ways: Did encouraging women and minority applicants discourage white men from applying? Was equal representation the goal of equal opportunity? These were recurring questions.

Friedman’s answer, elaborated in his reply to Bell, was straightforward: affirmative action is inefficient and unethical: “should we… encourage men age 65 to enter graduate study on a par with young men age 20?”, he asked. “Surely training in advanced economics is a capital investment and is justified only if it can be expected that the yield from it will repay the cost… Individuals trained do not bear the full cost of their training. We have limited funds with which to subsidize such training; it is appropriate to use those funds in such way to maximize the yield for the purpose for which the funds were made available. In the main, those funds were made available to promote a discipline rather than to promote the objectives of particular groups,” he continued. “It is relevant to take into account the age of men or women, the marital or family status of men or women, and the sex of potential applicants insofar as that affects the likely yield from the investment in their training,” Friedman emphasized, in an argument that strongly echoed Becker’s human capital theory: prohibiting the use of criteria such as gender and race in investment decisions was inefficient if they contributed to correctly predicting returns.

Overall, Friedman concluded, equal opportunity would not yield equal representation or “balance”:

I have no doubt that there has been discrimination against women. I have no doubt that one of its results has been that those women who do manage to make their mark are much abler than their male colleagues. As a result, it has seemed to me that a justified impression has grown up that women are intellectually superior to men rather than the reverse. I realize this is small comfort to those women who have been denied opportunities, but I only urge you to consider the consequences of reverse discrimination in producing the opposite effect.

Bell outlined the reasons for her disagreement with Friedman in a lengthy response. She insisted that CSWEP favored non-financial "encouragement" over "any preferential financial aid for women." More generally, while she agreed that the "free market lessens the opportunities for discrimination inasmuch as competition gives paramount recognition to economic efficiency," she contended that this reasoning only applied to goods, not to human beings. She agreed with Friedman regarding the criteria for investing in professional training, but objected that there was nothing "in [Friedman's] statement, in the discipline per se or in the existence of scarce resources, to identify those recipients who will, in fact, contribute most to the field." Instead, she argued, the recipients of investment were selected by those "controlling the awards who learned certain cultural patterns, including beliefs about sex roles."

Bell went on to admonish economists to re-examine their own biases: like the employers and employees they studied, they had been "brainwashed": "beginning in the cradle, children… learn over and over again that what is appropriate and relevant for boys is not necessarily appropriate and relevant for girls." These societal norms were biasing market forces in that they influenced both the supply of and the demand for labor, she explained. Career and family expectations, decisions of whether or not to invest in education and training, the choice of education and occupation, as well as the allocation of time were all distorted. "This means that the occupations followed by young men and women do not reflect market considerations," she concluded, only to add that "until we have a society where little girls are not only able to become dentists and surveyors as readily as little boys but are expected to become dentists and surveyors as readily as little boys we cannot in all conscience rely on the dictates of economic efficiency to allocate human beings."

In a rejoinder to Bell’s reply, Friedman reprised one of his most famous arguments: market solutions should be preferred because alternatives systematically lead to the tyranny of the majority:

[Image: excerpt from Friedman's rejoinder to Bell]

In short, Bell was advocating institutional changes to the professional training of and labor market for economists (very concrete procedures, of the kind that would later underpin JOE, the AEA's job listing), while Friedman was arguing from political philosophy. In her final response, Bell rejected the notion that actively countering "the existing system of brainwashing" through affirmative action would be useless, arguing instead that present discrimination resulted in capital investment "which may reduce the mobility of other resources in the future" and therefore was inefficient. Finally, Bell insisted that the CSWEP report advocated voluntary participation in affirmative action plans, and that the "mild suggestions" proposed were far from the "dictatorial imposition of power" that worried Friedman.

 

Conflicting models of the labor market

The exchange quoted above reveals how much thinking about the status of women in economics, and what should be done about it, was embedded in wider economic debates on how to model the role of women in the economy. The initial focus of Bell and Friedman's exchange was the plan advanced by CSWEP. They both tacitly considered it a special case of the larger debate on whether affirmative action would advance the status of women in the US economy, the disagreement deriving from their respective visions of the labor market, of how agents make economic decisions, and of the extent to which gaps in outcomes and other phenomena reflected discrimination. Moreover, the 1970s were a decade in which the field was pervaded with thorny debates, some of which reflected rapid changes in the US labor market itself.

Friedman's arguments drew on a vision of the labor market that was becoming dominant at the time and that he had contributed to shaping through his famous Economics 301 Price Theory course at the University of Chicago. As he was jousting with Bell, Becker's 1957 The Economics of Discrimination had just been republished with a fanfare that contrasted sharply with the resistance the book had encountered 15 years earlier (Friedman had to put considerable pressure on the University of Chicago Press to publish his former student's Ph.D. dissertation). The main thread of Becker's work, one partly inspired by Friedman himself, was to model some employers as rational maximizers with a taste for discrimination. That is, they used "non-monetary considerations in deciding whether to hire, work with, or buy from an individual or group." Those employers, he contended, were disadvantaged, as their taste acted like a tariff in a trade model. Discrimination was thus an inefficient behavior that would be pushed out of competitive markets in equilibrium. The notion that the labor market, in which employees who are rational about their human capital investments and their work/leisure tradeoffs meet cost-conscious employers, is efficient pervades his exchange with Bell.
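In the stylized textbook rendering of Becker's model (the notation here is illustrative, not Becker's own), a prejudiced employer behaves as if hiring a worker from the disfavored group at wage $w$ costs

$$ w(1+d), \qquad d > 0, $$

where $d$ is the "discrimination coefficient." Such an employer hires these workers only at a wage discount, and since turning down cheaper, equally productive labor lowers profits, competition should erode discriminatory employers' market share, which is the efficiency logic Friedman leaned on in his letters.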

But Friedman's vision added a characteristic historical and political twist. The proof that markets were, in the long run, efficient was historical: they had brought improvements in the living conditions of Jews, African-Americans, and Irish people over decades and centuries. And the market did so by protecting them from the tyranny of the majority, so that any attempt to fiddle with the market to accelerate the transition was bound to fail. This argument had come to maturity in Capitalism and Freedom (1962). In the fifth chapter, Friedman took issue with Roosevelt's 1941 Fair Employment Practice Committee (FEPC), tasked with banning discrimination in war-related industries. He wrote:

If it is appropriate for the state to say that individuals may not discriminate in employment because of color or race or religion, then it is equally appropriate for the state, provided a majority can be found to vote that way, to say that individuals must discriminate in employment on the basis of color, race or religion. The Hitler Nuremberg laws and the laws in the Southern states imposing special disabilities upon Negroes are both examples of laws similar in principle to FEPC.

Like Becker, Friedman believed this general framework applied to any kind of discrimination: against Jews, foreigners, women (his correspondence with Bell echoed Capitalism and Freedom almost verbatim), people with specific religious or political beliefs, and blacks. In a lengthy interview with Playboy the previous year, for instance, he again explained that "it's precisely because the market is a system of proportional representation [as opposed to the majority rule in the political system] that it protects the interests of minorities. It's for this reason that minorities like the blacks, like the Jews, like the Amish, like SDS [Students for a Democratic Society], ought to be the strongest supporters of free-enterprise capitalism."

Bell’s vision of labor markets, on the other hand, was not a straightforward reflection of a stabilized research agenda. The field was buzzing with new approaches in these years and her work stood at the confluence of three of them: attempts to understand the consequences of imperfect information on labor supply and demand; new empirical evidence on the wage gap and on the composition, income, and occupation of American households; and the challenges brought to mainstream economic modeling by feminist economists.

For example, her letters and the solutions she pushed to fight the poor representation of women in economics betray a concern with the consequences of imperfect information for access to employment opportunities, expectations, employers' and employees' behavior, and, in the end, the efficiency of market outcomes. In this regard, some of her statements closely echoed Arrow's theory of statistical discrimination, which he had elaborated in a paper given at Princeton in 1971. His idea was that employers use gender as a proxy for unobservable characteristics: beliefs about the average characteristics of groups translate into discrimination against individual members of these groups. As Bell and Friedman were corresponding (a correspondence that Bell circulated to the other CSWEP members and to Arrow), Arrow was refining his screening theory, in which preferences were endogenous to dynamic interactions that may create self-selection, human capital underinvestment, segregation and self-fulfilling prophecies. Arrow's awareness of information imperfections hadn't prevented him from telling Bell that, as AEA president in charge of the 1971 conference program, he couldn't find many qualified women to raise the number of female presenters and discussants. Bell quickly came up with 300 names, 150 of whom expressed interest in presenting a paper. Bell's advocacy for formal procedures producing information on jobs but also on "qualified women" in economics contributed to the establishment of a CSWEP-sponsored roster of women economists and of the JOE soon afterwards.
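A stylized signal-extraction version of statistical discrimination (closer to Phelps's 1972 formulation than to Arrow's own, and offered here purely as an illustration) makes the mechanism explicit. An employer who observes a noisy signal $s = q + \varepsilon$ of a candidate's productivity $q$ forms the assessment

$$ \mathbb{E}[q \mid s, g] = (1-\gamma)\,\mu_g + \gamma\, s, $$

where $\mu_g$ is the believed average productivity of the candidate's group $g$ and $\gamma$ the weight placed on the individual signal. Beliefs about the group thus mechanically enter the evaluation of every individual member, and in Arrow's dynamic versions they can depress human capital investment and become self-fulfilling.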

Bell also participated in the flourishing attempts to document women's and household behavior empirically. While there is no evidence that she was then aware of Blinder and Oaxaca's attempts to use data from the Survey of Economic Opportunity (which eventually became the PSID) to decompose the wage gap, she was involved in commenting on the 1973 Economic Report of the President, in which the Council of Economic Advisers had included a chapter on "the economic role of women." Her familiarity with the economic and sociological empirical literature on earnings differentials subsequently led Bell to gather a substantial body of statistics on US families. She summarized her results in "Let's Get Rid of Families," a 1977 article she decided to publish in Newsweek rather than in an academic journal. Her aim of challenging the notion that the typical US family was built around a breadwinning father and a stay-at-home mother was already apparent in her insistence that economists and citizens alike are "brainwashed" by social norms and beliefs.

Finally, Bell's letters to Friedman suggest that she wasn't merely looking for a microeconomic alternative to the Beckerian theory of discrimination. Her whole contribution was embedded in a more radical criticism of economic theory, one she would later carefully outline in a 1974 paper on "Economics, sex and gender." The way economic decisions are modeled does not allow an accurate representation of how women make economic choices as producers and consumers, she argued. Agents are modeled as choosing between work and leisure, yet "leisure" in fact covers many types of work between which women need to allocate their time. Likewise, women usually did not consume an income they had independently earned, as standard micro models assumed. Not taking this variety of decisions into account in fact reinforces the social model economists tend to take as given, in which women are primarily caregivers who don't exercise any independent economic choice. "Both economic analysis and economic policy dealing with individuals, either in their roles as producers or consumers, have been evolved primarily by men," she concluded.

 

Bell and Friedman's divergent takes on which actions the newly established CSWEP should implement were thus inextricably intertwined with their theoretically and empirically informed views of the labor market. Their exchange further reveals that their views were also tied to their respective methodological beliefs and personal experiences. Their arguments exhibited different blends of principles and data, models and action. Friedman was primarily focused on foundational principles about markets and the free society and explicitly connected them to the discrimination he had experienced as a Jew and a conservative. When the first letter he had sent to Bell was reprinted in a 1998 JEP issue celebrating the 25th anniversary of CSWEP, he further explained that "the pendulum has probably swung too far so that men are the ones currently being discriminated against." Bell, by contrast, grounded her defense of the CSWEP agenda in economic principles and facts, in the beliefs and prejudices found in American society, and in her interactions with AEA officials and her decades of tirelessly mentoring women in economics at Wellesley.

Note: Permission to quote from the Friedman-Bell correspondence was granted by the Hoover Institution.


Not going away: on Al Roth’s 2018 AEA Presidential Address and the ethical shyness of market designers

Encountering Al Roth's ideas has always been a "squaring the circle" experience for me. The man is the epitome of swag (as my students would say), his ideas are the hype, and his past achievements ubiquitous in the media. He has become the antidote to post-2008 econ criticisms, the poster child for a science that is socially useful, that saves lives. And yet, as he proceeds to explain the major tenets of market design, I'm usually left puzzled.

Yesterday's presidential address was no exception (edit: here's the webcast of his talk). Roth outlined many examples of how matching market design can improve citizens' lives, from making the economics job market thicker to avoiding domestic fights in pediatric surgeons' households, to substantially raising the number of kidney transplants in the US, thereby effectively saving lives. Though the rules of each market differ, design principles are often identical. Repugnance, congestion or unraveling (people leaving the market because they have to accept job offers early or because they are put off by the matches proposed) threaten exchange, and market thickness is restored through matching algorithms or signaling schemes (like expressing interest in a limited number of job openings). By the end of his talk, Roth nevertheless expressed some frustration. He wishes to do much more: raise the thickness of the kidney matching market by allowing foreigners to participate in US kidney exchange chains, or organize large-scale refugee resettlement. Yet these more ambitious projects face greater legal challenges, opposition and, ultimately, repugnance, he lamented.
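To make concrete the kind of matching algorithm at stake (Roth's own designs, such as the NRMP redesign with Elliott Peranson, build on more elaborate variants), here is a minimal sketch of Gale-Shapley deferred acceptance; the doctor and hospital names and preferences below are made up for illustration:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-proposing deferred acceptance with one position per receiver."""
    # rank[r][p]: position of proposer p in receiver r's list (lower = preferred)
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                    # proposers without a tentative match
    next_idx = {p: 0 for p in proposer_prefs}      # next receiver each proposer will approach
    held = {}                                      # receiver -> proposer tentatively held
    while free:
        p = free.pop()
        if next_idx[p] >= len(proposer_prefs[p]):  # p exhausted its list: stays unmatched
            continue
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if p not in rank[r]:                       # r finds p unacceptable
            free.append(p)
        elif r not in held:                        # r tentatively holds p
            held[r] = p
        elif rank[r][p] < rank[r][held[r]]:        # r trades up, releasing its previous match
            free.append(held[r])
            held[r] = p
        else:                                      # r rejects p
            free.append(p)
    return {p: r for r, p in held.items()}

# Toy example with hypothetical doctors and hospitals (one position each):
doctors = {"ann": ["city", "rural"], "bob": ["city", "rural"]}
hospitals = {"city": ["bob", "ann"], "rural": ["ann", "bob"]}
print(deferred_acceptance(doctors, hospitals))     # {'bob': 'city', 'ann': 'rural'}
```

The outcome is stable (no doctor-hospital pair would rather be matched to each other than to their assignments), and the proposing side can do no better than report its preferences truthfully, two properties Roth's practical designs lean on heavily.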


 

As Roth's presentation moved from mundane to life-saving achievements, his slides became loaded with ethical statements. Moneyless kidney exchange is "fair," he argued. Global kidney markets shouldn't be considered "exploitative." Yet he never saw fit to discuss the ethical underpinnings of the designs he advances. Is it because, as explained in his introduction, his address focuses on "practical" marketplace design rather than on the identification of the theoretical properties of mechanisms? Or because the underlying ethics is obvious – isn't raising the number of kidney transplants a universally accepted policy end? Or because he believes that the opposition to his international kidney market scheme betrays an unwarranted sensitivity to repugnance? Elsewhere, he has argued that economists should not take for granted citizens' repugnance to engage in some kinds of transactions (money lending and prostitution are other historical examples). That society has banned markets for such transactions has sometimes harmed welfare, and Roth believes it is possible to carefully design such markets in ways that prevent commodification and coercion.

My puzzlement is twofold. As a historian, I find economists' contemporary reluctance to get their hands dirty with ethics (Roth is not alone in this) highly unusual. For, contra the popular received view, the economists usually considered the founders of contemporary economics, like Paul Samuelson or Kenneth Arrow, were constantly arguing about the ethical foundations of their discipline, in particular welfarism. And as an observer of a changing discipline, I fear that this ethical shyness may at best prevent economists from communicating with their public, at worst backfire. Let me elaborate.

 

The Good, the Bad, and the Ugly

Once upon a time, economists were not shy about handling normative issues. Though the XVIIIth and XIXth centuries were largely about separating positive from normative analysis (and preserving a space for the "art of economics"), both were considered equally important parts of "political economy." Reflections on neutrality, objectivity and impartiality, dating back to Adam Smith, were carried out separately. Max Weber was, at the turn of the XXth century, one of the first to relate ethics to subjectivity, through his quest for a reconciliation between an ideal Wertfreiheit (value-freedom) and an inescapable Wertbeziehung (relation to values through relation to the world, aka the human condition).

In the following decades, as the Popperian positivist mindset gradually replaced the Aristotelian hierarchy between ethics and science, economists began hunting down subjectivity, and in the process ended up pushing normative thinking outside the boundaries of their science. In his famous Essay on the Nature and Significance of Economic Science (1932), Lionel Robbins outlined a clear separation between ends and means, is and ought, ethics and economics, normative and positive analysis. He did so by combining these with the fact/value distinction. Science was concerned with the confrontation with facts, he hammered. Interpersonal utility comparisons, which required "elements of conventional evaluation," therefore fell outside the realm of economics.

Robbins's British colleagues attempted to construct a value-free welfare economics. One example is Nicholas Kaldor and John Hicks's Paretian compensation criterion. These endeavors were met with considerable skepticism in America. US economists strove to make their discipline more scientific and objective through an endorsement of Popperian falsifiability (apparent in Friedman's famous 1953 methodological essay), mathematization, data collection and the refinement of empirical techniques aimed at purging economic theories and models of subjectivity, rather than by trying to avoid normative statements. Normative statements were inescapable if the economist was to be of any help to policy makers, researchers concurred. The new welfare economics "cannot be used as a guide to social policy," Arrow complained in Social Choice and Individual Values, his 1951 PhD dissertation. "Concretely, the new welfare economics is supposed to be able to throw light on such questions as to whether the Corn Laws should have been repealed," yet it "gives no real hue to action," Samuelson likewise remarked in his 1947 Foundations of Economic Analysis.

Samuelson and Arrow are usually credited with laying out the theoretical and epistemological foundations for contemporary "mainstream" economics, yet each spent an inordinate amount of time carving out a space for normative analysis rather than getting rid of it. "Ethical conclusions cannot be derived in the same way that scientific hypotheses are inferred/verified… but it is not valid to conclude from this that there is no room in economics for what goes under the name of 'welfare economics.' It is a legitimate exercise of economic analysis to examine the consequences of various value judgments, whether or not they are shared by the theorist, just as the study of comparative ethics is itself a science like any other branch of sociology," Samuelson warned in Foundations. In those years, Gunnar Myrdal was crafting a new epistemology whereby the economist was to identify society's value judgments, make his choice of a value set explicit, and import it into economic analysis. George Stigler likewise claimed that "the economist may … cultivate a second discipline, the determination of the ends of his society particularly relevant to economic policy." Abram Bergson concurred that "the determination of prevailing values for a given community is… a proper and necessary task for the economist." Richard Musgrave would, in his landmark 1959 Theory of Public Finance, later join the chorus: "I have reversed my original view … the theory of the revenue-expenditure process remains trivial unless [the social preferences] scales are determined," he explained.

The postwar period was thus characterized by a broad consensus that the economist should discuss the ethical underpinnings of his models. Objectivity was ensured not by the absence of normative analysis, but by the absence of subjective bias. The values were not the economist's own, but ones he chose from within society, usually resulting from a collective decision or a consensus. In any case, economists were tasked with making choices. Throughout the Cold War, economists did not shy away from arguing over the weights they chose for their cost-benefit analyses, for instance, or over how to define "social welfare" and embed it into one of economists' favorite tools, the social welfare function. Though it was ordinal, it allowed interpersonal comparisons of "irrelevant alternatives" and was meant to allow any policy maker (or dictator) to aggregate the values of the citizens and use them to make policy decisions. "You seriously misunderstand me if you think I have ever believed that only Pareto-optimality is meaningful […] Vulgar Chicagoans sometimes do but their vulgarities are not mine," Samuelson later wrote to Suzumura. It was, Herrade Igersheim documents in a superb paper, his main bone of contention with Arrow. For 50 years, Arrow claimed that his impossibility theorem was a serious blow to Samuelson's tireless promotion of the Bergson-Samuelson social welfare function. And for 50 years, Samuelson argued his construct was essentially different from Arrow's: "Arrow has said more than once that any theory of ethics boils down to how the individuals involved feel about ethics. I strongly disagree. I think every one of us as individuals knows that our orderings are imperfect. They are inconsistent; they are changeable; they come back […] People talk about paternalism as if we were bowing down to a dictator, but it is wrong in ethics to rule out imposition, and even dictatorship, because that is the essence of ethics," he continued in his aforementioned letter to Suzumura (excerpted from Igersheim's paper).

Samuelson and Arrow both endorsed a welfarist and utilitarian ethics (policy outcomes should be judged according to their impact on economic agents' welfare). Beginning in the early 1970s, a host of alternative approaches flourished. Arrow himself moved to Harvard with the explicit purpose of delving deeper into ethical and philosophical topics. He founded a joint workshop with Amartya Sen and John Rawls that explored the latter's notions of fairness and justice. Alternative theories and measures of well-being and of inequality were developed. Joy, capabilities and envy were brought into the picture. And yet, it was in that period that, through benign neglect, ethical concerns were finally pushed to the side of the discipline. Economics remained untouched by the Kuhnian revolution, by the ideas that scientists cannot abstract from metaphysics and that "pure facts" simply don't exist (see Putnam's work). Just the contrary. Though heterodoxies have constantly pointed to the ideological character of some fundamental assumptions in macro and micro alike, economists now routinely consider their work as having no ethical underpinning worth their attention. The 'empirical revolution' – better and more data, more efficient computational devices, thus an improved ability to confront hypotheses with facts – is considered a gatekeeper. And economists have become increasingly shy about engaging in ethical reflection. Nowhere is this state of mind more obvious than in theoretical mechanism and applied market design, a field that, as AEA president-elect Olivier Blanchard pointed out, is not merely concerned with analyzing markets but with actively shaping them.

 

Market designers, neutral, clean and shy since 1972

In their vibrant introduction to a recent journal issue showcasing the intellectual challenges and social benefits of market design, Scott Kominers and Alex Teytelboym chose to put ethical concerns aside. They merely note that "the market designer's job is to optimize the market outcome with respect to society's preferred objective function," while "maintain[ing] an informed neutrality between reasonable ethical positions" regarding the objective function itself. "Of course," they point out in a footnote, "market design experts should – and do – play a crucial role in the public discourse about what the objective function and constraints ought to be." A few references are provided which, with the exception of Sandel and Helm, are at least 50 years old. Yet, as in Roth's address, unspoken ethics is all over the place in their overview: market design allows for "equity" and other goals, they point out, for instance "ensuring that the public purse can benefit from the revenue raised in spectrum license reallocation."

Their edited volume is especially interesting because it includes a separate paper on ethics in market design, by Shengwu Li. It is representative of how economists handle the ethical foundations of mechanism design today: smart, clear and extremely cautious, constantly walking on eggshells. Li argues that (1) "the literature on market design does not, and should not, rely exclusively on preference," BUT (2) since economists have no special ability to resolve ethical disagreement, "market designers should study the connection between designs and consequences, and should not attempt to resolve fundamental ethical questions." In the end, (3) "the theory and practice of market design should maintain an informed neutrality between reasonable ethical positions." The economist, in his view, is merely able to "formalize value judgment, such as whether a market is fair, transparent, increases welfare, and protects individual agency." What he advocates is economists able to "investigate" a much larger set of values than preference utilitarianism to evaluate designs without solving fundamental ethical questions, so as both to guide policy and preserve their sacrosanct "neutrality":

“To a policymaker concerned about designing a fair market, we could ask, ‘what kind of fairness do you mean?’ and offer a range of plausible definitions, along with results that relate fairness to other properties the policy-maker might care about, such as efficiency and incentives.”

A similar sense of agnosticism pervades Matt Jackson's recent piece on the past, present and future prospects of theory in mechanism design in an age of big data. Opening with a list of the metaphors economists have used to describe themselves in the past century, he charts a history of progress from the XIXth-century view of economists as "artists and ethicists" to contemporary "schizophrenic economists." "It is natural that economists' practical ambitions have grown with available tools and data," he explains. Economists' shyness is however best displayed in Jean Tirole's Economics for the Common Good, out this Fall. This, in spite of his effort to devote one of his first chapters to "The Moral Limits of the Market." For the chapter's purpose is to justify the lack of ethical discussion in the rest of the book by appealing to Rawls's veil of ignorance, one Tirole seems to believe economists actively contribute to sew over and over:

It is possible, however, to eliminate some of the arbitrariness inherent in defining the common good… to abstract ourselves from our attributes and our position in society, to place ourselves "behind the veil of ignorance"… The individual interest and the common good interest diverge as soon as my free will clashes with your interests, but they converge in part behind the veil of ignorance. The quest for the common good takes as its starting point our well-being behind the veil of ignorance…

Economics, like other human and social sciences, does not seek to usurp society's role in defining the common good. But it can contribute in two ways. First, it can focus discussion of the objectives embodied in the concept of the common good by distinguishing ends from means… Second, and more important, once a definition of the common good has been agreed upon, economics can help develop tools that contribute to achieving it… In each [chapter], I analyze the role of public and private actors, and reflect on the institutions that might contribute to the convergence of individual and general interest – in short, to the common good.

What Tirole does here is reprise a theme that Cold War economists, faced with the difficulty of choosing ethical foundations for their work, often relied on: the notion of an underlying social consensus. Needless to say, this is not enough to silence the reader's ethical questions when walked through Tirole's proposed policies to fix climate change, Europe's failing labor markets or financial regulation, or to harness digital markets.

 

Historians and sociologists unchained

As expected, philosophers, historians and sociologists of mechanism and market design have been much less shy in commenting on the ethical foundations of the field. In the interest of keeping this already too long piece within acceptable boundaries (and attending a few ASSA sessions today), I'm merely providing a non-exhaustive list of suggested readings here. Philosopher Michael Sandel has confronted market designers' constructs in his book What Money Can't Buy, economic philosophers Marc Fleurbaey and Emmanuel Picavet have provided extensive reflections on the moral foundations of the discipline, and Francesco Guala has masterfully excavated the epistemological underpinnings of the FCC auctions. Sociologists Kieran Healy, Dan Breslau and Juan Pardo-Guerra have investigated the ethics and politics of, respectively, organ transplants, electricity market design, and financial markets. There's also the wealth of literature on performativity (by Donald MacKenzie, Fabian Muniesa, or Nicolas Brisset among many others). Muniesa, for instance, disagrees with my shyness diagnosis. He rather sees market designers as ethically disinhibited.


Neither is shyness to be found in the history of mechanism design published this Fall by Phil Mirowski (who has extensively written on the history of information economics) and Eddie Nik-Khah (whose PhD was an archive-based investigation of the FCC auctions). The core of their book is a classification of the mechanism and market design literature into three trends, each reflecting a distinct approach to information. How economists view and design markets is closely tied to their understanding of the role of agents' knowledge, they explain. The first trend was the Walrasian school, architected at Cowles under the leadership of Arrow, Hurwicz, Reiter and the Marschaks (even Stiglitz and Akerlof to some extent). They considered information as a commodity to be priced and mechanisms as preferably decentralized information-gathering processes of the Walrasian tâtonnement kind. The Bayes-Nash school of mechanism design originated with Vickrey and Raiffa, and was spread by Bob Wilson, who taught the Milgrom generation at Stanford. Information is distributed and manipulated, concealed and revealed. Designing mechanisms is thus meant to help agents make "no regret" decisions under asymmetric and imperfect information, which can be achieved through auctions. The experimentalist school of design, including Smith, Plott, Rassenti and Roth, is more focused on the algorithmic properties of the market. Information is not located within economic agents, but within the market.

They offer several hypotheses to explain this transformation (changing notions of information in natural science, a changing vision of economic agents' cognitive abilities, the growing hold of the Hayekian view that markets are information processors, the shifting politics of the profession). But they have a clear take on its result: mechanism designers serve neoliberal interests, as exemplified by the FCC auction and TARP cases, in which, they argue, economists worked for the commercial interests of telecom or banking businesses rather than for citizens. "Changes in economists' attitudes toward agents' knowledge brought forth changes in how economists viewed their own roles," they conclude:

“Those who viewed individuals as possessing valuable knowledge about the economy generally conceived of themselves as assisting the government in collecting and utilizing it; those who viewed individuals as mistaken in their knowledge tasked themselves as assisting participants in inferring true knowledge; and finally, those who viewed people’s knowledge as irrelevant to the operation of markets tended to focus on building boutique markets.”

Ethics strikes back: economists as engineers in a corporate economy

Mechanism and market designers, and economists more generally, thus believe ethical agnosticism is both desirable and attainable (see also Tim Scanlon's remarks here). It is a belief Roth and Tirole inherited from the engineering culture they were trained in. The 'economist as engineer' reference is all over the place: in Roth's address (he was the one who articulated this view in a famous 2002 article), in Tirole's book, and in Li's paper on ethics in mechanism design, which opens with the following quote by Sen: "it is, in fact, arguable that economics has had two rather different origins […] concerned respectively with 'ethics' on the one hand, and with what may be called 'engineering' on the other." Li's core question, therefore, becomes "How (if at all) should economic engineers think about ethics?"

This view is a bit light, and it might even backfire.

First, because, as Sandel pointed out to Roth here, agnosticism is itself a moral posture. Refusing to consider repugnance as a moral objection rather than a prejudice itself shows economists' repugnance (my term) to engaging with moral philosophy, Sandel argues.

Second, because the question of whose values drive the practices of market designers, and of economists more generally, cannot be easily settled with an appeal to consensus and to "obvious ends." That more lives should obviously be saved, that more citizens should obviously be fed, that inequalities should obviously be fought (which wasn't that obvious 20 years ago), that well-being should obviously be improved seems to dispense economists from further inquiry. Doesn't everyone agree on that? Marc Fleurbaey has encouraged economists to think more explicitly about the concrete measures of wellbeing embodied in economic models and about questions of envy and independent preferences. The utilitarianism that has shaped economic tools over the past century has been challenged in the past decade (see Mankiw's comments here or Li's paper), and the fact that economists' focus is currently shifting to re-emphasize wealth and income inequality, as well as the role of race and gender in shaping economic outcomes, in fact carries a set of collective moral judgments.

Third, because market designers' tools have grown so powerful, there should be a reflection on what their aggregate effects on distribution, fairness and various conceptions of justice and well-being are or should be, and on who should be accountable for these effects. Economists have been held accountable for the 2008 financial collapse. What if a market they helped design badly screws up? What if their powerful algorithms are used for bad purposes? When physicists, psychologists and engineers have sensed that their tools were powerful enough to manipulate people or launch nuclear wars, they have set up disciplinary ethics committees, gone into social activism and tried to educate decision makers and the public. How about economists?

Finally, the market designers' rationale outlined above crucially depends on one key assumption: that the ends those designs are meant to fulfill reflect the common good, a democratic consensus, or at least a collective decision carried out by a benevolent policy-maker. In Tirole's book, the economist's client is society. In Li's paper, the market designer's client is always and only "the policy-maker." But there's tons of research challenging the benevolence of policy-makers. And more fundamentally, what if the funding, institutional and incentive structure of the discipline, and of market design in particular, is shifting toward corporate interests? Historians have shown how Cold War economics was shaped by the interests of the military, then the dominant patron. How acceptable is shaping markets on behalf of private clients such as IT firms?

If these questions are not going away, it is because they are not deficiencies to be fixed through scientific progress, but choices to be made by economists, no matter how big their data and how powerful their modeling tools. Unpacking the epistemological and ethical choices mechanism designers make, the benefits they expect, but also the paths foregone, is important.


The making and dissemination of Milton Friedman’s 1967 AEA Presidential Address

Joint with Aurélien Goutsmedt

In a few weeks, the famous presidential address in which Milton Friedman is remembered to have introduced the notion of an equilibrium rate of unemployment and opposed the use of the Phillips curve in macroeconomic policy will turn 50. It has earned more than 8,000 citations, more than Arrow, Debreu and McKenzie's proofs of the existence of a general equilibrium combined, more than Lucas's 1976 critique. In one of the papers to be presented at the AEA anniversary session in January, Greg Mankiw and Ricardo Reis ask "what explains the huge influence of his work," which they interpret as "a starting point for Dynamic Stochastic General Equilibrium Models." Neither their paper nor Olivier Blanchard's contribution, however, unpacks how Friedman's address captured macroeconomists' minds. This is a task historians of economics – who are altogether absent from the anniversary session – are better equipped to perform, and, as it happens, some recent historical research indeed sheds light on the making and dissemination of Friedman's address.


The making of Friedman’s presidential address

 On a December 1967 Friday evening, in the Washington Sheraton Hall, AEA president Milton Friedman began his presidential address:

"There is wide agreement about the major goals of economic policy: high employment, stable prices, and rapid growth. There is less agreement that these goals are mutually compatible or, among those who regard them as incompatible, about the terms at which they can and should be substituted for one another. There is least agreement about the role that various instruments of policy can and should play in achieving the several goals. My topic for tonight is the role of one such instrument – monetary policy,"

the published version reads. As explained by James Forder, Friedman had been thinking about his address for at least six months. In July, he had written a first draft, entitled "Can full employment be a criterion of monetary policy?" At that time, Friedman intended to debunk the notion that there existed a tradeoff between inflation and unemployment. That "full employment […] can be and should be a specific criterion of monetary policy – that the monetary authority should be 'easy' when unemployment is high […] is so much taken for granted that it will be hard for you to believe that […] this belief is wrong," he wrote. One reason for this was that there is a "natural rate of unemployment […] the level that would be ground out by the Walrasian system of general equilibrium equations," one that is difficult to target. He then proceeded to explain why there was, in fact, no long-run tradeoff between inflation and unemployment.

 


Phillips’s 1958 curve

Most of the argument was conducted without explicit reference to the "Phillips Curve," whose discussion was restricted to a couple of pages. Friedman, who had thoroughly discussed inflation and expectations with William Phillips and Phillip Cagan, among others, while staying at LSE in 1952, explained that Phillips's conflation of real and nominal wages, while understandable in an era of stable prices, was now becoming problematic. Indeed, as inflation pushes real wages (and unemployment) downwards, expectations adapt: "there is always a temporary trade-off between inflation and unemployment; there is no permanent trade-off. The temporary trade-off comes not from inflation per se, but from unanticipated inflation, which generally means, from a rising rate of inflation," he concluded.
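In the textbook notation that later crystallized around the address (a stylized rendering, not Friedman's own equations), the accelerationist argument is usually written as

$$ \pi_t = \pi_t^{e} - \alpha\,(u_t - u^{n}), \qquad \pi_t^{e} = \pi_{t-1}, $$

so that holding unemployment below the natural rate $u^{n}$ requires inflation to keep rising period after period; only the unanticipated component $\pi_t - \pi_t^{e}$ buys any reduction in unemployment, and it vanishes once expectations catch up.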

In the end, however, the address Friedman gave in December covered much more ground. It began with a demonstration that monetary policy cannot peg interest rates, and the section on the natural rate of unemployment was supplemented with reflections on how monetary policy should be conducted. In line with what he had advocated since 1948, Friedman suggested that monetary authorities should abide by three principles: (1) do not make monetary policy a disturbing force; (2) target magnitudes authorities can control; and (3) avoid sharp swings. These three principles were best combined by "adopting publicly the policy of achieving a steady rate of growth in a specified monetary total," which became known as Friedman's "k% rule."

The usual interpretation of Friedman's address is the one conveyed by Mankiw and Reis, that is, a reaction to Samuelson and Solow's 1960 presentation of the Phillips curve as "the menu of choice between different degrees of unemployment and price stability." Mankiw and Reis assume that this interpretation, with the qualification that the tradeoff may vary across time, was so widespread that they consider Samuelson, Solow and their disciples the only audience Friedman meant to address. Yet Forder and Robert Leeson, among others, provide substantial evidence that macroeconomists already exhibited a much more subtle approach to unemployment targeting in monetary policy. The nature and shape of expectations were widely discussed in the US and UK alike. Samuelson, Phelps, Cagan, Hicks or Phillips had repeatedly and publicly explained, in academic publications as well as newspapers, that the idea of a tradeoff should be seriously qualified in theory, and should in any case not guide monetary policy in the late 1960s. Friedman himself had already devoted a whole 1966 Newsweek column to explaining why "there will be an inflationary recession."

This intellectual environment, as well as the changing focus of the final draft of his address, led Forder to conclude that "there is no evidence that Friedman wished to emphasize any argument about expectations or the Phillips curve and […] that he would not have thought such an argument novel, surprising or interesting." We disagree. For a presidential address was a forum Friedman would certainly not have overlooked, especially at a moment when both academic and policy discussions on monetary policy were gaining momentum. The day after the address, Johns Hopkins's William Poole presented a paper on "Monetary Policy in an Uncertain World." In June 1969, the Boston Fed held a conference titled "Controlling Monetary Aggregates." Meant as the first of a "proposed series covering a wide range of financial and monetary problems," its purpose was to foster exchanges on "one of the most pressing of current policy issues – the role of money in economic activity." It brought together Samuelson, David Meiselman, James Tobin, Allan Meltzer, John Kareken on "the Federal Reserve's Modus Operandi," James Duesenberry on "Tactics and Targets of Monetary Policy," and Board member Sherman Maisel on "Controlling Monetary Aggregates." Opening the conference, Samuelson proposed that "the central issue that is debated these days in connection with macro-economics is the doctrine of monetarism," citing, not Friedman's recent address, but his 1963 Monetary History with Anna Schwartz. That same year, the Journal of Money, Credit and Banking was established, followed by the Journal of Monetary Economics in 1973. Economists had assumed a larger role at the Fed since 1965, when Ando and Modigliani were entrusted with the development of a large macroeconometric model and the Green and Blue books were established.

 

Reflecting on "The Role of Monetary Policy" at such a catalyzing moment, Friedman thus tried to engage variegated audiences. This resulted in an address that was theoretical, historical and policy-oriented at the same time, weaving together several lines of argument with the purpose of proposing a convincing package. What makes tracking its dissemination and understanding its influence tricky is precisely that, faced with evolving contexts and scientific debates, those different audiences retained, emphasized and naturalized different bits of the package.

Friedman’s address in the context of the 1970s

Academic dissemination

Friedman's most straightforward audience was academic macroeconomists. The canonical history (echoed by Mankiw and Reis) is that Friedman's address paved the way for the decline of Keynesianism and the rise of New Classical economics, not to say DSGE. But some ongoing historical research carried out by one of us (Aurélien) in collaboration with Goulven Rubin suggests that it was Keynesian economists, rather than New Classical ones, who were instrumental in spreading the natural rate of unemployment (NRU) hypothesis. A key protagonist was Robert Gordon, who had just completed his dissertation on Problems in the Measurement of Real Investment in the U.S. Private Economy under Solow at MIT when Friedman gave his address. He initially rejected the NRU hypothesis, only to later nest it into what would become the core textbook New Keynesian model of the 1970s.

What changed his mind was not the theory. It was the empirics: in the Phillips curves with wage inflation driven by inflation expectations and unemployment that he and Solow separately estimated in 1970, the parameter on inflation expectations was extremely small, which he believed dismissed Friedman's accelerationist argument. Gordon therefore found the impact of changes in the age-sex composition of the labor force on the structural rate of unemployment, highlighted by George Perry, a better explanation for the growing inflation of the late 1960s. By 1973, the parameter had soared enough for the Keynesian economist to change his mind. He imported the NRU into a non-clearing model with imperfect competition and wage rigidities, which allowed for involuntary unemployment and, most importantly, preserved the rationale for active monetary stabilization policies.
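The stakes of that parameter are easiest to see in a stylized wage-Phillips equation of the sort estimated at the time (the notation is illustrative, not Gordon's exact specification):

$$ \pi_t = \lambda\,\pi_t^{e} + f(u_t) + \varepsilon_t. $$

In a steady state where expected and actual inflation coincide, this implies $\pi = f(u)/(1-\lambda)$: a long-run tradeoff survives as long as $\lambda < 1$, whereas $\lambda = 1$ makes the long-run curve vertical, as Friedman claimed. Gordon's early estimates put $\lambda$ far below one; by 1973 it had risen enough to tip the balance.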

The 1978 textbook in which Gordon introduced his AS-AD framework featured a whole chapter on the Phillips curve, in which he explicitly relied on Friedman's address to explain why the curve was assumed to be vertical in the long run. Later editions kept referring to the NRU and to long-run verticality, yet explained them through imperfect competition and wage rigidity mechanisms instead. 1978 was also the year Stanley Fischer and Rudiger Dornbusch's famed Macroeconomics (the blueprint for subsequent macro textbooks) came out. The pair alluded to a possible long-run trade-off but, like Gordon, settled on a vertical long-run Phillips curve. Unlike Gordon, though, they immediately endorsed "Keynesian" foundations.

At the same time, New Classical economists were going down a slightly different, yet famous, route. They labored to 'improve' Friedman's claim by making it consistent with rational expectations, pointing out the theoretical consequences of this new class of models for monetary policy. In 1972, Robert Lucas made it clear that Friedman's k% rule is optimal in his rational expectations model with information asymmetry, and Thomas Sargent and Neil Wallace soon confirmed that "an X percent growth rule for the money supply is optimal in this model, from the point of view of minimizing the variance of real output." Lucas's 1976 critique additionally underscored the gap between the content of Keynesian structural macroeconometric models of the kind the Fed was using and Friedman's argument.

Policy Impact


Friedman and Burns

Several economists in the Washington Sheraton Hall, including Friedman himself, were soon tasked with assessing the relevance of the address for policy. Chairing the session was Arthur Burns, the NBER business cycle researcher and Rutgers economist who had convinced young Friedman to pursue an economic career. He walked out of the room convinced by Friedman's view that inflation was driven by adaptive expectations. In a December 1969 confirmation hearing before Congress, he declared: "I think the Phillips curve is a generalization, a very rough generalization, for short-run movements, and I think even for the short run the Phillips curve can be changed." A few weeks afterwards, he took office as Federal Reserve Board chairman. Edward Nelson documents how, to Friedman's great dismay, Burns's shifting views quickly led him to endorse Nixon's proposed wage-price controls, implemented in August 1971. In reaction, monetarists Karl Brunner and Allan Meltzer founded the Shadow Open Market Committee in 1973. As Meltzer later explained, "Karl Brunner and I decided to organize a group to criticize the decision and point out the error in the claim that controls could stop inflation."

While the price and wage controls were removed in 1974, the CPI suddenly soared by 12% (following the October 1973 oil shock), at a moment when unemployment was on its way to reaching 9% in 1975. The double plague, which British politician Iain Macleod had dubbed "stagflation" in 1965, deeply divided the country (as well as economists, as shown by the famous 1971 Time cover). What should be addressed first, unemployment or inflation? In 1975, Senator Proxmire, chairman of the Senate Committee on Banking, submitted a resolution that would force the Fed to coordinate with Congress, to take into account increased production and "maximum employment" alongside stable prices in its goals, and to disclose "numerical ranges" of monetary growth. Friedman was called to testify, and the resulting Senate report strikingly echoed the "no long-term tradeoff" claim of the 1968 address:

“there appears to be no long-run trade-off. Therefore, there is no need to choose between unemployment and inflation. Rather, maximum employment and stable prices are compatible goals as a long-run matter provided stability is achieved in the growth of the monetary and credit aggregates commensurate with the growth of the economy’s productive potential.”

If there was no long-term trade-off, then explicitly pursuing maximum employment wasn't necessary. Price stability would bring about employment, and Friedman's policy program would be vindicated.

The resulting Concurrent Resolution 133, however, did not prevent the Fed staff from undermining congressional attempts at controlling monetary policy: their strategy was to present a confusing set of five different measures of monetary and credit aggregates. Meanwhile, other assaults on the Fed's mandate were gaining strength. Employment activists, in particular those who, in the wake of Coretta Scott King, were pointing out that black workers were especially hit by mounting unemployment, were organizing protest after protest. In 1973, black California congressman Augustus Hawkins convened a UCLA symposium to draw the contours of "a full employment policy for America." Participants were asked to discuss early drafts of a bill jointly submitted by Hawkins and Minnesota senator Hubert Humphrey, a member of the Joint Economic Committee. Passed in 1978 as the "Full Employment and Balanced Growth Act," it enacted Congressional oversight of monetary policy. It required that the Fed formally report twice a year to Congress, and establish and follow a monetary policy rule that would tame both inflation and unemployment. The consequences of the bill were hotly debated as early as 1976 at the AEA, in the Journal of Monetary Economics, and in Challenge. The heat the bill generated contrasted with its effect on monetary policy, which, again, was minimal. The following year, Paul Volcker became Fed chairman, and in October he abruptly announced that the Fed would set binding rules for reserve aggregate creation and let interest rates drift if necessary.

 

 

A convoluted academia-policy pipeline?

The 1967 address thus initially circulated both in academia and in public policy circles, with effects that Friedman did not always welcome. The natural rate of unemployment was adopted by some Keynesian economists because it seemed empirically robust, or at least useful, yet it was nested in models supporting active discretionary monetary policy. Monetary policy rules gradually became embedded in the legal framework presiding over the conduct of monetary policy, but with the purpose of reorienting the Fed toward the pursuit of maximum employment. Paradoxically, New Classical research, usually considered the key pipeline whereby the address was disseminated within and beyond economics, seemed only loosely connected to policy.

Indeed, one has to read the seminal 1970s papers usually associated with the "New Classical Revolution" closely to find mentions of the troubled policy context. The framing of Finn Kydland and Edward Prescott's "rules vs discretion" paper, in which the use of rational expectations raised credibility and time consistency issues, was altogether theoretical. It closed with the cryptic statement that "there could be institutional arrangements which make it a difficult and time-consuming process to change the policy rules in all but emergency situations. One possible institutional arrangement is for Congress to legislate monetary and fiscal policy rules and these rules become effective only after a 2-year delay. This would make discretionary policy all but impossible." Likewise, Sargent and Wallace opened their 1981 "unpleasant monetarist arithmetic" paper with a discussion of Friedman's presidential address, but quickly added that the paper was intended as a theoretical demonstration of the impossibility of controlling inflation. None of the institutional controversies were mentioned, but the authors ended an earlier draft with this sentence: "we wrote this paper, not because we think that our assumption about the game played by the monetary and fiscal authorities describes the way monetary and fiscal policies should be coordinated, but out of a fear that it may describe the way the game is now being played."

Lucas was the only one to write a paper that explicitly discussed Friedman's monetary program and why it had had "so limited an impact." At the 1978 NBER conference where the paper was presented, he was asked to discuss "what policy should have been in 1973-1975," but declined. The question was "ill-posed," he wrote. The source of the 1970s economic mess, he continued, was to be found in the failure to build appropriate monetary and fiscal institutions, which he proceeded to discuss extensively. Mentioning the "tax revolt," he praised California's Proposition 13, designed to limit property taxes. He then defended Resolution 133's requirement that the Fed announce monetary growth targets in advance, hoping for a more binding extension.

This collective distance contrasts with both monetarist and Keynesian economists' willingness to discuss existing US monetary institutional arrangements in academic writings and in the press alike. It is especially puzzling given that those economists were working within the eye of the (institutional) storm. Sargent, Wallace and Prescott were then in-house economists at the Minneapolis Fed, and the Sargent-Wallace paper mentioned above was published by the bank's Quarterly Review. Though none of them seemed primarily concerned with policy debates, their intellectual influence was, on the other hand, evident from the Minneapolis bank's statements. Its president, Mark Willes, a former Columbia PhD student in monetary economics, was eager to preach the New Classical gospel at the FOMC. "There is no tradeoff between inflation and unemployment," he hammered in a 1977 lecture at the University of Minnesota. He later added that:

"it is of course primarily to the academic community and other research groups that we look for … if we are to have effective economic policy you must have a coherent theory of how the economy works … Friedman doesn't seem completely convincing either. Perhaps the rational expectationists here … have the ultimate answer. At this point only Heaven, Neil Wallace, and Tom Sargent know for sure."

If debates were raging at the Minneapolis Fed as well as within the University of Minnesota's walls, it was because the policies aimed at reaching maximum employment were designed by the Minnesota senator, Humphrey, himself advised by a famous colleague of Sargent and Wallace: Keynesian economist, former CEA chair, and architect of the 1964 Kennedy tax cut Walter Heller.

 

The independent life of “Friedman 1968” in the 1980s and 1990s?

Friedman's presidential address seems to have experienced renewed citations in the 1980s and 1990s, but this is as yet a hypothesis that needs to be documented. Our bet is that macroeconomists came to re-read the address in the wake of the deterioration of economic conditions they associated with Volcker's targeting. After the monetary targeting experiment was discontinued in 1982, macroeconomists increasingly researched actual institutional arrangements and policy instruments. We believe that this shift is best reflected in John Taylor's writings. Leeson recounts how Taylor, a senior student at the time Friedman pronounced his presidential address, focused his research on the theory of monetary policy. His two stints as a CEA economist got him obsessed with how to make monetary policy more tractable. He increasingly leaned toward including monetary practices in the analysis, a process which culminated in the formulation of the Taylor rule in 1993 (a paper more cited than Friedman's presidential address). Shifting academic interests, which can be interpreted as more in line with the spirit, if not the content, of Friedman's address, were also seen in 1980s discussions of nominal income targets. Here, academic debates preceded policy reforms, with the Fed's dual inflation/employment mandate only appearing in a FOMC statement under Ben Bernanke in 2010, in the wake of the financial crisis (see this thread by Claudia Sahm). This late recognition may, again, provide a new readership for the 1968 AEA presidential address, an old lady whose charms appear timeless.
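For reference, the rule Taylor proposed in his 1993 paper sets the federal funds rate $r$ as a function of inflation over the previous four quarters $p$ and the output gap $y$,

$$ r = p + 0.5\,y + 0.5\,(p - 2) + 2, $$

with an implicit 2 percent inflation target and a 2 percent equilibrium real rate, a formulation that ties the policy instrument to observable magnitudes in a way arguably closer to the spirit of Friedman's k% rule than to its letter.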

 

Friedman 1968 title
