Working on 1960s macroeconometrics: there’s an echo on the line

Three years ago, a group of historians of economics embarked on a reexamination of the relationships between theoretical and empirical work in macroeconomics. Our goal was inward-looking. We were not primarily trying to contribute to present discussions on the state of macro, but to correct what we perceived as a historiographical bias in our own scholarship: the tendency to paint the history of macroeconomics as a succession of theoretical battles between Keynesians, monetarists, new classicals, New Keynesians, etc. This emphasis on theory did not square well with a thread running through the interviews of 1960s graduate students from MIT and elsewhere that several of us had conducted: they revealed that a shared formative experience was contributing to one of the large-scale macroeconometric models developed in those years. My own pick was the model jointly developed at the Fed, MIT and the University of Pennsylvania (hereafter the FRB model). Yet as I complete my second paper on the model (joint with Roger Backhouse, just uploaded here), I find that the dusty debates we document have an unexpected echo in contemporary exchanges.

I learned two lessons from writing on the FRB model. The first is that I wasn’t as immune to the epistemic spell of not-yet-defunct economists as I had thought. I came to the project with no hidden opinion on the DSGE approach to macro, one born out of a synthesis between the modeling rules spelled out by Sargent and Lucas and a variety of add-ons proposed by so-called New Keynesians aimed at providing a more satisfactory depiction of shocks and response mechanisms. But like most historians of macro, I had been trained as an economist. I had been raised to believe that microfounded models were the clean, rigorous way to frame a discourse on business cycles (the insistence that rational expectations was the gold standard was, I think, already gone by the mid-2000s). If I wanted to trade rigor for predictive power, then I needed to switch to an altogether different practice, VARs (which I effectively did as a central bank intern tasked with predicting short-term moves in aggregate consumption). What I discovered was that my training had biased the historiographical lenses through which I was approaching the history of macroeconometric models: what I was trying to document was the development and use of A model, one defined by a consistent set of behavioral equations and constraints and a stable set of rules whereby such a system was estimated and simulated for policy purposes. The problem, I quickly found out, was that there was no such historical object to research.

What we found in the archives was a collection of equations whose specification and estimation procedures were constantly changing across time and locations. There was no such thing as the FRB model. To begin with, the Fed team and the academic team collaborated closely but developed distinct models, which were only merged after three years. And the boundaries of each model constantly evolved as students delivered new blocks of equations and simulations blew up. The ordinary business of macroeconometric modeling looked like a giant jigsaw puzzle. This December 1967 letter from Albert Ando to Franco Modigliani is representative:

[Excerpts from Albert Ando’s December 1967 letter to Franco Modigliani]

Viewed from the perspective of modern macro, it was a giant mess, and in our first drafts we thus chose to characterize macroeconometrics as a “messy” endeavor. But being “messy” in the sense of not being theoretically and econometrically founded, and thus unscientific, is exactly why Lucas and Sargent argued these models should be dismissed. Their famous 1979 “After Keynesian Macroeconomics” paper is an all-out attack on models of the FRB kind: they pointed to the theoretical “failure to derive behavioral relationships from any consistently posed dynamic optimization problems,” the econometric “failure of existing models to derive restrictions on expectations,” and the absence of convincing identification restrictions, concluding with the “spectacular failure of the Keynesian models in the 1970s.” In his critique paper, Lucas also cursed “intercept adjustment,” also known as “fudging” (revising the intercept to improve forecast accuracy, a practice that spread as building inflationary pressures produced forecast failures in the 1970s). It was proof, he argued, that those models were misconceived.

 

The second lesson I learned from working on primary sources is that macroeconometricians were perfectly aware of the lack of theoretical consistency and the fuzziness of estimation and simulation procedures. More than that, they endorsed it. Every historian knows, for instance, that the quest for microfoundations did not begin with Lucas, having repeatedly stumbled on pre-Lucasian statements on the topic. Jacob Marschak opened his 1948 Chicago macro course with this statement: “this is a course in macro-economics. It deals with aggregates…rather than with the demand or supply of single firms or families for single commodities. The relations between aggregates have to be consistent, to be sure, with our knowledge of the behavior of single firms or households with regards to single goods.” In 1971, Terence Gorman likewise opened his lectures on aggregation with a warning: “theorists attempt to derive some macro theory from the micro theory, usually allowing it to define the aggregate in question. In practice they are reduced to asking ‘when can this be done.’ The answer is ‘hardly ever.’” Kevin Hoover has argued that there were at least three competing microfoundational programs in the postwar period, Lucas’s use of the representative agent being just one of them. But for macroeconometricians, the lack of theoretical consistency in the Lucasian sense was also the result of doing big science, and of facing a trade-off between theoretical consistency and data fit.

Building a macroeconometric model of the FRB kind involved several teams and more than 50 researchers, and it was impossible for all of them to agree on the specification of every equation: “None of us holds the view that there should be only one model. It would indeed be unhealthy if there were no honest differences among us as to what are the best specifications of some of the sectors of the model, and when such differences do exist, we should maintain alternative formulations until such time as performances of two formulations can be thoroughly compared,” Ando explained to Modigliani in 1967. By 1970, it had become clear that macroeconomists would not agree on the adequate tests to compare alternative specifications either. Empirical practices, goals and trade-offs were too different. The Fed team wanted a model which could quickly provide good forecasts: “We get a considerable reduction in dynamic simulation errors if we change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption,” Fed project leader Frank de Leeuw wrote to Modigliani in 1968. But the latter’s motive for developing the FRB model was different: he wanted to settle a theoretical controversy with Friedman and Meiselman on whether the relation of output to money was more stable than the Keynesian multiplier. He was therefore not willing to compromise theoretical integrity for better forecasting power: “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded to de Leeuw.

Different goals and epistemic values resulted in different tradeoffs between theoretical consistency and data fit, between model integrity and flexibility. The intercept fudging disparaged by Lucas turned out to be what clients of the new breed of firms selling forecasts based on macroeconometric models paid for. What businessmen wanted was the informed judgment of macroeconomists, one that the Federal Reserve Board also held in higher esteem than mere “mechanical forecasts.” Intercept corrections were later reframed by David Hendry as an econometric strategy to accommodate structural change. In short, the messiness of macroeconometrics was not perceived as a failure; it was, rather, messiness by design. In his response to Lucas and Sargent, Ando explained that reducing a complex system to a few equations required using different types of evidence and approximations, so that the task of improving them should be done “informally and implicitly.”
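
To make the practice concrete, here is a stylized rendering of an intercept correction (my own illustration, not a formula taken from the FRB documentation or from Hendry): if an estimated equation has been persistently over- or under-predicting, the forecaster shifts its constant by an average of the most recent residuals,

$$\hat{y}_{T+h} = x_{T+h}'\hat{\beta} + \delta_T, \qquad \delta_T = \frac{1}{m}\sum_{j=0}^{m-1}\left(y_{T-j} - x_{T-j}'\hat{\beta}\right),$$

so that a persistent shift in the data, say building inflationary pressure, is absorbed by the intercept rather than by re-estimating the whole system. Whether this counts as informed judgment, fudging, or robustification is precisely what the protagonists disagreed about.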

That recent discussions on the state of macroeconomics somehow echo the epistemic choices of 1960s macroeconometricians is an interesting turn. Since 2011, Simon Wren-Lewis has been calling for a more “pragmatic” approach to microfoundations. His most recent blog post describes the development of the British COMPACT model as weighing the costs and gains of writing internally inconsistent models – the model features an exogenous credit constraint variable. He calls this approach “data-based” and “eclectic,” and he argues that macro would have been better off had it allowed this kind of approach to coexist with DSGE. Last year, Vitor Constancio, Vice-President of the European Central Bank, noted that “we constantly update our beliefs on the key economic mechanisms that are necessary to fit the data,” concluding that “the model should be reasonably flexible.” Olivier Blanchard also recently acknowledged that macroeconomic models fulfill different goals (descriptive, predictive and prescriptive). He advocated building different models for different purposes: academic DSGE models are still fit for structural analysis, he argued, but “policy modelers should accept the fact that equations that truly fit the data can have only a loose theoretical justification.” In a surprising turn, he argued that “early macroeconomic models had it right: the permanent income theory, the life-cycle theory, and the Q theory provided guidance for the specification of consumption and investment behaviour, but the data then determined the final specification.” Are we witnessing an epistemological revolution? Or a return to epistemological positions that economists thought they had abandoned?


How ‘tractability’ has shaped economic knowledge: a few conjectures

Yesterday, I blogged about a question I have mulled over for years: what is the aggregate consequence, for the knowledge economists collectively produce, of the thousands of “I model the government’s objective function with a well-behaved social welfare function X because canonical paper Y does it,” “I assume a representative agent to make the model tractable,” or “I log-linearize to make the model tractable” sentences they routinely write? Today I had several interesting exchanges (see here and here) which helped me clarify and spell out my object and hypotheses. Here is an edited version of some points I made on Twitter. These are not the conclusions of some analysis, but the priors with which I approach the question, and which I’d like to test.

1. A ‘tractable’ model is one that you can solve, which means there are several types of tractability: analytical tractability (finding a solution to a theoretical model), empirical tractability (being able to estimate or calibrate your model) and computational tractability (finding numerical solutions). It is sometimes hard to discriminate between analytical and empirical, or empirical and computational tractability.

(note: if you want a definition of model, read the typology in the Palgrave Dictionary article by Mary Morgan. If you want to see the typology in action, read this superb paper by Verena Halsmayer on the seven ways Robert Solow conceived of economic models)

2. Economists don’t merely make modeling choices because they believe these are key to imitating the world, to predicting correctly, or to producing useful policy recommendations, but also because they otherwise can’t manipulate their models. My previous post reflected on how Robert Lucas’s reconstruction of macroeconomics has been interpreted. While he believed microfoundations to be a fundamental condition for a model to generate good policy evaluation, he certainly didn’t think a representative agent assumption would make macro models better. No economist does. The assumption is meant to avoid aggregation issues. Neither does postulating linearity or normality usually reflect how economists see the world. What I’d like to capture is the effect of those choices economists make “for convenience,” to be able to reach solutions, to simplify, to ease their work, in short, to make a model tractable. While those assumptions are conventional and meant to be lifted as mathematical, theoretical and empirical skills and technology (hardware and software) ‘progress,’ their underlying rationale is often lost as they are taken up by other researchers, spread, and become standard (implicit in the last sentence is the idea that what counts as a tractable model evolves as new techniques and technologies are brought in).
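
To illustrate the kind of assumption made “for convenience” (my example, not one drawn from the sources discussed here): log-linearization replaces an exact equilibrium condition with a linear relation in log-deviations from steady state, $\hat{x}_t \equiv \ln x_t - \ln \bar{x}$. A Cobb-Douglas production function, for instance, becomes

$$y_t = A_t k_t^{\alpha} n_t^{1-\alpha} \;\;\Longrightarrow\;\; \hat{y}_t = \hat{A}_t + \alpha\,\hat{k}_t + (1-\alpha)\,\hat{n}_t.$$

For genuinely nonlinear conditions the substitution only holds to first order, which is exactly what makes the resulting system solvable with linear methods, and what throws away whatever the model had to say about large shocks or asymmetries.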

3. An interesting phenomenon, then, is “tractability standards” (Noah Smith suggested calling it “tractability + canonization,” but canonization implies a reflexive endorsement, while standardization conveys the idea that these modeling choices spread without specific attention, that they seem mundane – yes, I’m nitpicking). Tractability standards have been more or less stringent over time, and they haven’t followed a linear pattern whereby constraints are gradually relaxed, allowing economists to design and manipulate more complex (and richer) models decade after decade. My prior on the recent history of tractability in macroeconomics, for instance, is something like this (beware, wishful thinking ahead):

Between the late 1930s and the 1970s, economists started building large-scale macroeconometric models, and the number of equations and the diversity of theories combined soon swelled out of control. It meant that finding analytical solutions and estimating and simulating those models was hell (especially when all you had was 20 hours of IBM 360 time a week). But there was neither a preferred nor a prohibited way to make a model tractable. Whatever solution you could come up with was fine: you could just take a whole block of equations out if it was messing up your simulation (like the wage and labor market equations in the case of the MPS model). You could devise a mix of two-stage least squares, limited-information maximum likelihood and instrumental variable techniques and run recursive block estimation (like Frank Fisher did). You could pick up your phone and ask Bayesian econometrician Arnold Zellner to devise a new set of tests for you (like Modigliani and Ando did).
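
For readers who have never met these building blocks, here is a minimal sketch of plain two-stage least squares in present-day Python (my own toy illustration; it is emphatically not the recursive block procedure Fisher devised, only its elementary ingredient):

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Minimal 2SLS: y is (n,), X is (n, k) regressors, Z is (n, m) instruments, m >= k."""
    # First stage: project the regressors onto the column space of the instruments.
    gamma, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_hat = Z @ gamma
    # Second stage: ordinary least squares of y on the instrumented regressors.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta

# Toy example: one endogenous regressor x, one instrument z, plus a constant.
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # x is correlated with u
y = 1.0 + 2.0 * x + u
beta = two_stage_least_squares(y, np.column_stack([np.ones(n), x]),
                               np.column_stack([np.ones(n), z]))
print(beta)  # roughly [1.0, 2.0]
```

Scaling something like this up to hundreds of equations, with blocks estimated recursively and alternative specifications kept alive in parallel, is part of what made the enterprise “big science.”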

With Lucas, Sargent, Kydland and Prescott came new types of models, and thus new analytical, empirical and computational challenges. But this time, “tractability standards” spread alongside the models (I am not sure by whom, or how coordinated or intentional it was). If you wanted to publish a macro paper in top journals in the 1980s, you were not allowed to take whatever action you wished to make your model tractable. Representative agents and market clearing were fine; non-microfounded simple models were not. Linearizing was okay, but finding solutions through numerical approximation wasn’t generally seen as satisfactory. Closed-form solutions were preferred. And the same was true in some micro fields, like public economics. Representative agents and well-behaved social welfare functions made general-equilibrium optimal taxation models “tractable.” That these assumptions were meant to be lifted later was forgotten; models became standardized, and so did research questions. How inequality evolved was no longer a question you could answer, and maybe inequality wasn’t something you could even “see.”

What has happened in the post-crisis decade is less clear. My intuition is that tractability standards have relaxed again, allowing for more diverse ways to hunt for solutions. But it’s not clear to me why. The usual answer is that better software and faster computers are fostering the spread of numerical techniques, making tractability concerns a relic of the past. I have voiced my skepticism elsewhere. The history of the relations between economists and the computer is one in which there’s a leap in hardware, software or econometric techniques every 15 years, with economists declaring the end of history, enthusiastically flooding their models with more equations and more refinements… and finding themselves 10 years later with intractable models all over again. Reflecting on the installation of an IBM 650 at Iowa State University in 1956, R. Beneke, for instance, joked that once the computer accommodated the inversion of a 198-row matrix, “new families of exotic regression models came on the scene.” Agricultural economists, he remarked, “enjoyed proliferating the regression equation forms they fitted to data sets.” Furthermore, it takes more than computers to spread numerical techniques. After all, these techniques have been in limited use since the late 1980s. An epistemological shift is needed, one that involves relinquishing the closed-form solution fetish. Was this the only option left for economists to move on after the crisis? Or has the rise of behavioral economics, of field and laboratory experiments, and of the structural vs reduced-form debate opened up the range of tractability choices?

4. Some heterodox economists (Lars Syll here) believe that the whole focus on tractability is a hoax: for them, it shows that restricting oneself to deductivist mathematical models is flawed. There are other ways to model, Syll argues, and ontological considerations should take precedence over formalistic tractability. Cahal Moran goes further, arguing that economists should be allowed to reason without modeling. There is a clear fault line here, for Lucas, among others, insisted in a letter to Matsuyama that “explicit modeling can give a spurious sense of omniscience that one has to guard against… but if we give up explicit modeling, what have we got left except ideology? I don’t think either Hayek or Coase have faced up to this question.” Perry Mehrling, on the other hand, believes tractability is more about teaching, communication and publication than about thinking and exploring.

5. Focusing on tractability offers new lenses through which to approach debates on the current state of the discipline (in particular macro). Absent archival or interview smoking guns, ranting that the new classical revolution was undoubtedly a neoliberal or neocon turn, or that the modeling choices of Lucas, Sargent, Prescott or Plosser were of course ideological, produces more heat than light. These choices reflect a mix of political, intellectual and methodological values, a mix of beliefs and tractability constraints. The two aspects might be impossible to disentangle, but it at least makes sense to investigate their joint effects, and to make room for the possibility that the standardization of tractability strategies has shaped economic knowledge.

6. The tractability lens also helps me make sense of what is happening in economics now, and of what might come next. Right now, clusters of macroeconomists are each working on relaxing one or two tractability assumptions: research agendas span heterogeneity, non-rational expectations, financial markets, non-linearities, fat-tailed distributions, etc. But if you put all these add-ons together (assuming you can design a consistent model, and that add-ons are the way forward, which many critics challenge), you’re back to an intractable model. So what is the priority? How do macroeconomists rank these model improvements? And can the profession afford to wait 30 more years, 3 more financial crises and two trade wars before it can finally say it has a model rich enough to anticipate crises?

 


What is the cost of ‘tractable’ economic models?

Edit: here’s a follow-up post in which I clarify my definition of ‘tractability’ and my priors on this topic

Economists study cycles, but they also create some. Every other month, a British heterodox economist explains why economics is broken, and other British economists respond that the critic doesn’t understand what economists really do (there’s even a dedicated hashtag). The anti and pro arguments have remained more or less the same for the past 10 years. This week, the accuser is Howard Reed and the defender Diane Coyle. It would be business as usual were it not for interesting comments by Econocracy coauthor Cahal Moran at openDemocracy and by Jo Michell on Twitter along the same lines. What matters, they argue, is not what economists do but how they do it. The problem is not whether some economists deal with money, financial instability, inequality or gender, but how their dominant modeling strategies allow them to take these into account or rather, they argue, constrain them to leave these crucial issues out of their analysis. In other words, the social phenomena economists choose to study and the questions they choose to ask, which have come under fire since the crisis, are in fact determined by the methods they choose to wield. Here lies the culprit: how economists write their models, how they validate their hypotheses empirically, and what they believe counts as a good proof are too monolithic.

One reason I find this an interesting angle is that I read the history of economics over the past 70 years as moving from a mainstream defined by theories to a mainstream defined by models (that is, tools aimed at fitting theories to reality, thus involving methodological choices), and eventually to a mainstream defined by methods. Some historians of economics argue that the neoclassical core has fragmented so much, because of the rise of behavioral and complexity economics among others, that we have now entered a post-mainstream state. I disagree. If “mainstream” economics is what gets published in the top-5 journals, maybe you don’t need a representative agent, DSGE or strict intertemporal maximization anymore. What the gatekeepers will scrutinize instead, these days, is how you derive your proof, and whether your research design, in particular your identification strategy, is acceptable. Whether this is a good evolution or not is not for me to judge, but the costs and benefits of methodological orthodoxy become an important question.

Another reason why the method angle warrants further consideration is that a major fault line in current debates is how much economists should sacrifice to get ‘tractable’ models. I have long mulled over a related question, namely how much ‘tractability’ has shaped economics in the past decades. In a short 2008 paper, Xavier Gabaix and David Laibson list seven properties of good models: parsimony, tractability, conceptual insightfulness, generalizability, falsifiability, empirical consistency, and predictive precision. They don’t rank them, and economists’ conscious and unconscious rankings have probably evolved sharply across time. But while tractability has probably never ranked highest, I believe the unconscious hunt for tractable models may have thoroughly shaped economics. I have hitherto failed to find an appropriate strategy to investigate the influence of ‘tractability.’ But I think no fruitful discussion can be carried out on the current state of economics without answering this question. Let me give you an example:

While the paternity of the theoretical apparatus underlying the new neoclassical synthesis in macro is contested, there is wide agreement that the methodological framework was largely architected by Robert Lucas. What is debated is to what extent Lucas’s choices were intellectual or ideological. Alan Blinder hinted at a mix of both when he commented in 1988 that “the ascendancy of new classicism in academia was… the triumph of a priori theorizing over empiricism, of intellectual aesthetics over observation and, in some measure, of conservative ideology over liberalism.” Recent commentators like Paul Romer or Brad DeLong are not just trying to assess Lucas’s results and legacy, but also his intentions. Yet the true reasons behind modeling choices are hard to pin down. Bringing in a representative agent meant forgoing the possibility of tackling inequality, redistribution and justice concerns. Was it deliberate? How much does this choice owe to tractability? What macroeconomists were chasing, in these years, was a renewed explanation of the business cycle. They were trying to write microfounded and dynamic models. Building on intertemporal maximization therefore seemed a good path to travel, Michel de Vroey explains.

The origins of some of the bolts and pipes Lucas put together are now well known. Judy Klein has explained how Bellman’s dynamic programming was quickly implemented at Carnegie’s GSIA. Antonella Rancan has shown that Carnegie was also where Simon, Modigliani, Holt and Muth were debating how agents’ expectations should be modeled. Expectations were also the topic of a conference organized by Phelps, who came up with the idea of capturing imperfect information by modeling agents on islands. But Lucas, like Sargent and others, also insisted that the ability of these models to imitate real-world fluctuations be tested, as their purpose was to formulate policy recommendations: “our task…is to write a FORTRAN program that will accept specific economic policy rules as ‘input’ and will generate as ‘output’ statistics describing the operating characteristics of time series we care about,” he wrote in 1980. Rational expectations imposed cross-equation restrictions, yet estimating these new models substantially raised the computing burden. Assuming a representative agent mitigated computational demands and allowed macroeconomists to sidestep general equilibrium aggregation issues: it made new classical models analytically and computationally tractable. So did linear-quadratic decision rules: “only a few other functional forms for agents’ objective functions in dynamic stochastic optimum problems have this same necessary analytical tractability. Computer technology in the foreseeable future seems to require working with such a class of functions,” Lucas and Sargent conceded in 1978.
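
The tractability payoff of the linear-quadratic setup can be stated compactly. In the textbook discounted LQ regulator (a generic formulation, not Lucas and Sargent’s exact model), the decision maker chooses controls $u_t$ to solve

$$\max_{\{u_t\}} \; -\,\mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left( x_t' R x_t + u_t' Q u_t \right) \quad \text{subject to} \quad x_{t+1} = A x_t + B u_t + \varepsilon_{t+1},$$

and the optimal policy is linear in the state, $u_t = -F x_t$, with

$$F = \beta\,(Q + \beta B'PB)^{-1} B'PA, \qquad P = R + \beta A'PA - \beta^2 A'PB\,(Q + \beta B'PB)^{-1}B'PA.$$

The decision rule is linear and, by certainty equivalence, independent of the shock variance, so the whole model stays within reach of the linear time-series and matrix-Riccati methods that late-1970s computers could handle.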

Was tractability the main reason why Lucas embraced the representative agent (and market clearing)? Or could he have improved tractability through alternative hypotheses, leading to opposite policy conclusions? I have no idea. More important, and more difficult to track, is the role played by tractability in the spread of Lucas’s modeling strategies. Some macroeconomists may have endorsed the new class of Lucas-critique-proof models because they liked its policy conclusions. Others may have retained some hypotheses, then some simplifications, “because it makes the model tractable.” And while the limits of simplifying assumptions are often emphasized by those who propose them, as they spread, the caveats are forgotten. Tractability restricts the range of accepted models and prevents economists from discussing some social issues, and with time, from even “seeing” them. Tractability ‘filters’ economists’ reality. My question is not restricted to macro. Equally important is to understand why James Mirrlees and Peter Diamond chose to reinvestigate optimal taxation in a general equilibrium setting with a representative agent (here, the genealogy harks back to Ramsey), whether this modeling strategy spread because it was tractable, and what the consequences for public economics were. The aggregate effect of “looking for tractable models” is unknown, and yet it is crucial to understanding the current state of economics.


A game of mirrors? Economists’ models of the labor market and the 1970s gender reckoning

Written with Cleo Chassonnery-Zaigouche and John Singleton

The underrepresentation of women in science is drawing increasing attention from scientists as well as from the media. For example, research examining glass ceilings, leaky or small pipelines, the influence of mentorship, biases in refereeing and recommendations, and styles of undergraduate education or textbooks is flourishing in STEM fields, engineering, the social sciences, and the humanities. Economics is no exception, as a paper by Alice Wu released in the summer of 2017, which drew widespread coverage, exemplified. One thing that sets economics and (to greater and lesser extents) its cognate disciplines apart, however, is that research topics such as the gender wage gap, women’s labor supply, and labor market discrimination are phenomena that many researchers in these areas both experience and study. An obvious question, therefore, is how the theories, models, and empirical evidence that economists develop and produce in turn shape their understanding of gender issues within their own profession. Early debates surrounding the foundation of the Committee on the Status of Women in the Economics Profession (CSWEP) in 1972 are revealing in this regard.

The foundation of CSWEP, which we briefly narrated here, stood at the crossroads of various historical and social trends. One was the growing public awareness of discrimination issues and an associated shift within the US legal context. The Equal Pay Act of 1963 and the last-minute inclusion of gender in the 1964 Civil Rights Act brought about a stream of sex discrimination cases, including the famous Bell AT&T case, whose settlement benefited 15,000 women and minority employees. Phyllis Wallace, a founding member of CSWEP, was the expert coordinator for the Equal Employment Opportunity Commission. Though the 1964 Act excluded employees of public bodies, including government and university hires, legal battles at Ivy League universities resulted in compliance rules and the 1972 Equal Employment Opportunity Act. Professional societies were no exception. Beginning in 1969, the American Historical Association and the American Sociological Association established committees on the status of women in their respective disciplines. Chairing the sociology committee was Elise Boulding, whose husband Kenneth later joined as a founding member of CSWEP. Kenneth Boulding would draft “Role Prejudice as an Economic Problem,” the first part of a paper introducing CSWEP’s first report, “Combatting Role Prejudice and Sex Discrimination.”

Many of the actions pursued by economists were indeed similar to (and inspired by) those of other professional and academic societies, such as making day care available at major conferences, creating mentorship programs, and developing a roster of women in every field of economics to chair and participate in conference panels. But other issues were idiosyncratic to economics. In particular, the problems of gender bias in economics were viewed as economic issues from the beginning, as seen in Boulding’s 1973 article on sex discrimination within the profession. Early CSWEP reports routinely framed their organizational efforts as attempts to study and fix the “supply and demand for women economists,” that is, the labor market for economists. The framing applied in the reports echoes the objectives of the AEA Committee on Hiring Practices to establish better recruitment practices, and the preceding work by the Committee on the Structure of the Economic Profession. The reports relied on basic Econ 101 logic at times, but on other occasions the CSWEP originators, most of whom were trained in labor economics, delved more deeply into ongoing debates on the interpretation of earnings differentials, the determinants of women’s labor supply, the extent of discrimination and the causes of occupational segregation, and on ways to fix the labor market for economists. One particularly revealing occasion was a letter exchange between Carolyn Shaw Bell, usually hailed as the driving force behind the women’s caucus that led to CSWEP’s creation, and University of Chicago economist Milton Friedman.

Bell vs Friedman 

Carolyn Shaw Bell (1920-2006) had received her Ph.D. in 1949 from the London School of Economics and would spend her academic career at Wellesley College. After war work at the Office of Price Administration with Galbraith, she did empirical work on innovation and income distribution, and contributed to consumer economics. Bell was convinced to accept the inaugural chairpersonship of the Committee (rather than retire) after the American Economic Association voted to establish CSWEP in 1971 and launch an annual survey of women economists. In the summer of 1973, she sought to organize a session at the December ASSA meeting. She wanted to assemble a panel of economists from various, sometimes opposed, backgrounds to comment on the findings. She therefore asked Elizabeth Clayton, a specialist in Soviet economics at the University of Missouri, leftish labor economist David Gordon of the New School for Social Research, and Milton Friedman to participate. CSWEP members expected “out in the open” controversy from the panel.

 

In an August reply to Bell’s invitation, Friedman declined, as he was not planning to attend the meetings (he was to be replaced by George Stigler on the panel). He did so regretfully, he explained, because he held strong views on the CSWEP report. He especially disagreed with the statement that “every economics department shall actively encourage qualified women graduate students without regard to age, marital or family status.” Though he “sympathize[d] very much with the objective of eliminating extraneous considerations from any judgment of ability or performance potential,” Friedman confessed he “never believed in reverse discrimination whether for women or for Jews or for blacks.” To this list he later added discrimination against conservative scholars, which he believed was strong on college and university campuses.

Whether preferential treatment produced “reverse discrimination” against majority groups was a key point of contention over affirmative action policies. The social context was politically charged. While ending some of the “Great Society” programs, the Nixon Administration also set up the first affirmative action policy in 1969: the “Philadelphia Plan,” which required federal contractors and unions to meet targeted goals for minority hires. The Nixon policy was sold as “racial goals and timetables, not quotas,” but criticisms focused on de facto quotas and applicability. Though formal quotas for women did not exist in US universities and racial quotas were ruled out by a 1978 decision, formal and informal affirmative action were debated in similar ways: Did encouraging women and minority applicants discourage white men from applying? Was the goal of equal opportunity equal representation? These were recurring questions.

Friedman’s answer, elaborated in his reply to Bell, was straightforward: affirmative action is inefficient and unethical. “Should we… encourage men age 65 to enter graduate study on a par with young men age 20?”, he asked. “Surely training in advanced economics is a capital investment and is justified only if it can be expected that the yield from it will repay the cost… Individuals trained do not bear the full cost of their training. We have limited funds with which to subsidize such training; it is appropriate to use those funds in such way to maximize the yield for the purpose for which the funds were made available. In the main, those funds were made available to promote a discipline rather than to promote the objectives of particular groups,” he continued. “It is relevant to take into account the age of men or women, the marital or family status of men or women, and the sex of potential applicants insofar as that affects the likely yield from the investment in their training,” Friedman emphasized, in an argument that strongly echoed Becker’s human capital theory: prohibiting the use of criteria such as gender and race in investment decisions was inefficient if they contributed to correctly predicting returns.

Overall, Friedman concluded, equal opportunity would not yield equal representation or “balance”:

I have no doubt that there has been discrimination against women. I have no doubt that one of its results has been that those women who do manage to make their mark are much abler than their male colleagues. As a result, it has seemed to me that a justified impression has grown up that women are intellectually superior to men rather than the reverse. I realize this is small comfort to those women who have been denied opportunities, but I only urge you to consider the consequences of reverse discrimination in producing the opposite effect.

Bell outlined the reasons for her disagreement with Friedman in a lengthy response. She insisted that CSWEP favored non-financial “encouragement” over “any preferential financial aid for women.” More generally, while she agreed that the “free market lessens the opportunities for discrimination inasmuch as competition gives paramount recognition to economic efficiency,” she contended that this reasoning only applied to goods, not to human beings. She agreed with Friedman regarding the criteria for investing in professional training, but objected that there was nothing “in [Friedman’s] statement, in the discipline per se or in the existence of scarce resources, to identify those recipients who will, in fact, contribute most to the field.” Instead, she argued, the recipients of investment were selected by those “controlling the awards who learned certain cultural patterns, including beliefs about sex roles.”

Bell went on to admonish economists to re-examine their own biases: like the employers and employees they studied, they had been “brainwashed”: “beginning in the cradle, children… learn over and over again that what is appropriate and relevant for boys is not necessarily appropriate and relevant for girls.” These societal norms were biasing market forces, in that they influenced both the supply of and the demand for labor, she explained. Career and family expectations, decisions of whether or not to invest in education and training, the choice of education and occupation, as well as the allocation of time were all distorted. “This means that the occupations followed by young men and women do not reflect market considerations,” she concluded, only to add that “until we have a society where little girls are not only able to become dentists and surveyors as readily as little boys but are expected to become dentists and surveyors as readily as little boys we cannot in all conscience rely on the dictates of economic efficiency to allocate human beings.”

In a rejoinder to Bell’s reply, Friedman reprised one of his most famous arguments: market solutions should be preferred because alternatives systematically lead to the tyranny of the majority:

[Excerpt from Friedman’s rejoinder to Bell]

In short, Bell was advocating institutional changes to the professional training of, and the labor market for, economists (procedures of a very concrete kind, cf. the premise of JOE), while Friedman was arguing political philosophy. In her final response, Bell rejected the notion that actively countering “the existing system of brainwashing” through affirmative action would be useless, arguing instead that present discrimination resulted in capital investment “which may reduce the mobility of other resources in the future” and was therefore inefficient. Finally, Bell insisted that the CSWEP report advocated voluntary participation in affirmative action plans, and that the “mild suggestions” proposed were far from the “dictatorial imposition of power” that worried Friedman.

 

Conflicting models of the labor market

The exchange quoted above reveals how much thinking about the status of women in economics, and about what should be done about it, was embedded in wider economic debates on how to model the role of women in the economy. The initial focus of Bell and Friedman’s exchange was the plan advanced by CSWEP. They both tacitly considered it a special case of the larger debate on whether affirmative action would advance the status of women in the US economy, the disagreement deriving from their respective visions of the labor market, of how agents make economic decisions, and of the extent to which gaps in outcomes and other phenomena reflected discrimination. Moreover, the 1970s were a decade in which the field was pervaded with thorny debates, some of which reflected rapid changes in the US labor market itself.

Friedman’s arguments drew on a vision of the labor market that was becoming dominant at the time and that he had contributed to shaping through his famous Economics 301 price theory course at the University of Chicago. As he was jousting with Bell, Becker’s 1957 Economics of Discrimination had just been republished with a fanfare that contrasted sharply with the resistance the book had encountered 15 years earlier (Friedman had to put considerable pressure on the University of Chicago Press to publish his former student’s Ph.D. dissertation). The main thread of Becker’s work, one partly inspired by Friedman himself, was to model some employers as rational maximizers with a taste for discrimination. That is, they used “non-monetary considerations in deciding whether to hire, work with, or buy from an individual or group.” Those employers, he contended, were disadvantaged, as their taste acted like a tariff in a trade model. Discrimination was thus an inefficient behavior that would be pushed out of competitive markets in equilibrium. The notion that the labor market, where employees who are rational about their human capital investment and their work/leisure tradeoff meet cost-conscious employers, is efficient pervades his exchange with Bell.
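
In the textbook rendering of Becker’s setup (a standard simplification, not a quotation from the book), an employer with a “discrimination coefficient” $d \geq 0$ against group $B$ acts as if the cost of employing a group-$B$ worker at wage $w_B$ were

$$w_B\,(1+d),$$

so equally productive $B$-workers are hired only at a wage discount satisfying $w_B(1+d) \leq w_A$, and the prejudiced employer forgoes profits relative to an unprejudiced competitor who hires them at $w_B$; this is the sense in which the taste for discrimination acts like a tariff and should be competed away in equilibrium.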

But Friedman’s vision added a characteristic historical and political twist. The proof that markets were, in the long run, efficient was historical: they had brought improvements in the living conditions of Jews, African-Americans, and Irish people over decades and centuries. And the market did so by protecting them from the tyranny of the majority, so that any attempt to fiddle with the market to accelerate the transition was bound to fail. This argument had come to maturity in Capitalism and Freedom (1962). In the fifth chapter, Friedman took issue with Roosevelt’s 1941 Fair Employment Practice Committee (FEPC), tasked with banning discrimination in war-related industries. He wrote:

If it is appropriate for the state to say that individuals may not discriminate in employment because of color or race or religion, then it is equally appropriate for the state, provided a majority can be found to vote that way, to say that individuals must discriminate in employment on the basis of color, race or religion. The Hitler Nuremberg laws and the laws in the Southern states imposing special disabilities upon Negroes are both examples of laws similar in principle to FEPC.

Like Becker, Friedman believed this general framework applied to any kind of discrimination: against Jews, foreigners, women (his correspondence with Bell echoed Capitalism and Freedom almost verbatim), people with specific religious or political beliefs, and blacks. In a lengthy interview with Playboy the previous year, for instance, he again explained that “it’s precisely because the market is a system of proportional representation [as opposed to the majority rule in the political system] that it protects the interests of minorities. It’s for this reason that minorities like the blacks, like the Jews, like the Amish, like SDS [Students for a Democratic Society], ought to be the strongest supporters of free-enterprise capitalism.”

Bell’s vision of labor markets, on the other hand, was not a straightforward reflection of a stabilized research agenda. The field was buzzing with new approaches in these years and her work stood at the confluence of three of them: attempts to understand the consequences of imperfect information on labor supply and demand; new empirical evidence on the wage gap and on the composition, income, and occupation of American households; and the challenges brought to mainstream economic modeling by feminist economists.

For example, her letters and the solutions she pushed to fight the poor representation of women in economics betray a concern with the consequences of imperfect information for access to employment opportunities, expectations, employers’ and employees’ behavior, and, in the end, the efficiency of market outcomes. In this regard, some of her statements closely echoed Arrow’s theory of statistical discrimination, which he had elaborated in a paper given at Princeton in 1971. His idea was that employers use gender as a proxy for unobservable characteristics: beliefs about the average characteristics of groups translate into discrimination against individual members of these groups. As Bell and Friedman were corresponding (a correspondence that Bell circulated to the other CSWEP members and to Arrow), Arrow was refining his screening theory, in which preferences were endogenous to dynamic interactions that may create self-selection, human capital underinvestment, segregation and self-fulfilling prophecies. Arrow’s awareness of information imperfections hadn’t prevented him from telling Bell that, as AEA president in charge of the 1971 conference program, he couldn’t find many qualified women to raise the number of female presenters and discussants. Bell quickly came up with 300 names, 150 of whom expressed interest in presenting a paper. Bell’s advocacy of formal procedures producing information on jobs, but also on “qualified women” in economics, contributed to the establishment of a CSWEP-sponsored roster of women economists and of the JOE soon afterwards.
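
The mechanism can be written in one line. In a textbook signal-extraction version of statistical discrimination (closer to Phelps’s 1972 formulation than to Arrow’s screening model, and offered here purely as an illustration), an employer observes a noisy signal $y = q + \varepsilon$ of a candidate’s productivity $q$ and, under standard normality assumptions, predicts

$$\mathbb{E}[q \mid y, g] = (1-\gamma)\,\mu_g + \gamma\, y, \qquad \gamma = \frac{\sigma_q^2}{\sigma_q^2 + \sigma_\varepsilon^2},$$

where $\mu_g$ is the believed mean productivity of the candidate’s group $g$. Two candidates with identical signals are then assessed differently whenever their groups are believed to have different means: beliefs about averages become differential treatment of individuals.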

Bell also participated in the flourishing attempts to document women’s and household behavior empirically. While there is no evidence that she was then aware of Blinder and Oaxaca’s attempts to use data from the Survey of Economic Opportunity (which eventually became the PSID) to decompose the wage gap, she was involved in commenting on the 1973 Economic Report of the President, in which the Council of Economic Advisers had included a chapter on “the economic role of women.” Her familiarity with the economic and sociological empirical literature on earnings differentials subsequently led Bell to gather a substantial body of statistics on US families. She summarized her results in “Let’s Get Rid of Families,” a 1977 article she decided to publish in Newsweek rather than in an academic journal. Her aim of challenging the notion that the typical US family was built around a breadwinning father and a stay-at-home mother was already apparent in her insistence that economists and citizens alike are “brainwashed” by social norms and beliefs.

Finally, Bell’s letters to Friedman suggest that she wasn’t merely looking for a microeconomic alternative to the Beckerian model of discrimination. Her whole contribution was embedded in a more radical criticism of economic theory, one she would later carefully outline in a 1974 paper on “Economics, sex and gender.” The way economic decisions are modeled does not allow an accurate representation of how women make economic choices as producers and consumers, she argued. Agents are modeled as choosing between work and leisure, yet “leisure” in fact covers many types of work between which women need to allocate their time. Likewise, women usually did not consume an income they had independently earned, as standard micro models assumed. Not taking this variety of decisions into account in fact reinforces the social model economists tend to take as given, in which women are primarily caregivers who don’t exercise any independent economic choice. “Both economic analysis and economic policy dealing with individuals, either in their roles as producers or consumers, have been evolved primarily by men,” she concluded.

 

Bell and Friedman’s divergent takes on which actions the newly established CSWEP should implement were thus inextricably intertwined with their theoretically and empirically informed views of the labor market. Their exchange further reveals that their views were also tied to their respective methodological beliefs and personal experiences. Their arguments exhibited different blends of principles and data, models and action. Friedman was primarily focused on foundational principles about markets and the free society, and explicitly connected them to the discrimination he had experienced as a Jew and a conservative. When the first letter he had sent to Bell was reprinted in a 1998 JEP issue celebrating the 25th anniversary of CSWEP, he further explained that “the pendulum has probably swung too far so that men are the ones currently being discriminated against.” Bell, by contrast, grounded her defense of the CSWEP agenda in economic principles and facts, in the beliefs and prejudices found in American society, as well as in her interactions with AEA officials and her decades at Wellesley of tirelessly mentoring women in economics.

Note: Permission to quote from the Friedman-Bell correspondence was granted by the Hoover Institution.


Not going away: on Al Roth’s 2018 AEA Presidential Address and the ethical shyness of market designers

Encountering Al Roth’s ideas has always been a “squaring the circle” experience for me. The man is the epitome of swag (as my students would say), his ideas are all the hype, and his past achievements are ubiquitous in the media. He has become the antidote to post-2008 criticisms of economics, the poster child for a science that is socially useful, that saves lives. And yet, as he proceeds to explain the major tenets of market design, I’m usually left puzzled.

Yesterday’s presidential address was no exception (edit: here’s the webcast of his talk). Roth outlined many examples of how matching market design can improve citizens’ lives, from making the economics job market thicker to avoiding domestic fights in pediatric surgeons’ households, to substantially raising the number of kidney transplants in the US, thereby effectively saving lives. Though the rules of each market differ, the design principles are often identical. Repugnance, congestion or unraveling (people leaving the market because they have to accept job offers early or because they are put off by the matches proposed) threaten exchange, and market thickness is restored by implementing matching algorithms or signaling schemes (like expressing interest in a limited number of job openings). By the end of his talk, Roth nevertheless expressed some frustration. He wishes to do much more: raise the thickness of kidney matching markets by allowing foreigners to participate in US kidney exchange chains, or organize large-scale refugee resettlement. Yet these more ambitious projects face greater legal challenges, opposition and, ultimately, repugnance, he lamented.
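
For readers who have never seen what “implementing a matching algorithm” amounts to, here is a toy sketch of the deferred acceptance procedure that underlies many of these clearinghouses (my own illustration in Python, with made-up names; actual designs add many institution-specific wrinkles):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance for one-to-one matching.

    Both arguments map each agent to an ordered list of acceptable partners,
    most preferred first. Returns a dict: receiver -> matched proposer.
    """
    # rank[r][p]: position of proposer p in receiver r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                   # proposers not provisionally matched
    next_choice = {p: 0 for p in proposer_prefs}  # next partner each proposer will try
    match = {}                                    # receiver -> proposer, provisional

    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                              # p has exhausted his list, stays unmatched
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if p not in rank[r]:
            free.append(p)                        # r finds p unacceptable
        elif r not in match:
            match[r] = p                          # r provisionally holds p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])                 # r trades up; her old partner re-enters
            match[r] = p
        else:
            free.append(p)                        # r rejects p

    return match

# Toy job market: candidates propose to departments (hypothetical names).
candidates = {"Ann": ["MIT", "Penn"], "Bob": ["MIT", "Penn"], "Cleo": ["Penn", "MIT"]}
departments = {"MIT": ["Bob", "Ann", "Cleo"], "Penn": ["Ann", "Cleo", "Bob"]}
print(deferred_acceptance(candidates, departments))  # Ann goes to Penn, Bob to MIT, Cleo unmatched
```

The point of the exercise is stability: no candidate-department pair would rather be matched to each other than to their assigned partners, which is one of the properties Roth credits with keeping participants from unraveling out of a marketplace.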


 

As Roth’s presentation moved from mundane to life-saving achievements, his slides became loaded with ethical statements. Moneyless kidney exchange is “fair,” he argued. Global kidney markets shouldn’t be considered “exploitative.” Yet he never saw fit to discuss the ethical underpinnings of the designs he advances. Is it because, as explained in his introduction, his address focused on “practical” marketplace design rather than on the identification of the theoretical properties of mechanisms? Or because the underlying ethics is obvious – isn’t raising the number of kidney transplants a universally accepted policy end? Or because he believes that the opposition to his international kidney market scheme betrays an unwarranted sensitivity to repugnance? Elsewhere, he has argued that economists should not take for granted citizens’ repugnance toward engaging in some kinds of transactions (money lending or prostitution are other historical examples). That society has banned markets for such transactions has sometimes harmed citizens’ welfare, and Roth believes it is possible to design such markets carefully enough that commodification and coercion won’t happen.

My puzzlement is twofold. As a historian, I find economists’ contemporary reluctance to get their hands dirty with ethics (Roth is not alone in this) highly unusual. For, contra the popular received view, the economists usually considered the founders of contemporary economics, like Paul Samuelson or Kenneth Arrow, were constantly arguing about the ethical foundations of their discipline, in particular welfarism. And as an observer of a changing discipline, I fear that this ethical shyness may at best prevent economists from communicating with their public, and at worst backfire. Let me elaborate.

 

The Good, the Bad, and the Ugly

Once upon a time, economists were not shy about handling normative issues. Though the eighteenth and nineteenth centuries were largely about separating positive from normative analysis (and preserving a space for the “art of economics”), both were considered equally important parts of “political economy.” Reflections on neutrality, objectivity and impartiality, dating back to Adam Smith, were carried out separately. Max Weber was, at the turn of the twentieth century, one of the first to relate ethics to subjectivity, through his quest for a reconciliation between an ideal Wertfreiheit (value-freedom) and an inescapable Wertbeziehung (relation to values through one’s relation to the world, that is, the human condition).

In the following decades, as the Popperian positivist mindset gradually replaced the Aristotelian hierarchy between ethics and science, economists began hunting for subjectivity, and in the process ended up pushing normative thinking outside the boundaries of their science. In his famous Essay on the Nature and Significance of Economic Science (1932), Lionel Robbins outlined a clear separation between ends and means, is and ought, ethics and economics, and normative and positive analysis. He did so by combining these with the fact/value distinction. Science was concerned with the confrontation with facts, he hammered. Interpersonal utility comparisons, which required “elements of conventional evaluation,” therefore fell outside the realm of economics.

Robbins’s British colleagues attempted to construct a value-free welfare economics. One example is Nicholas Kaldor and John Hicks’s Paretian compensation criterion. These endeavors were met with considerable skepticism in America. US economists strove to make their discipline more scientific and objective through an endorsement of Popperian falsifiability (apparent in Friedman’s famous 1953 methodological essay), mathematization, data collection and the refinement of empirical techniques aimed at purging economic theories and models of subjectivity, rather than by trying to avoid normative statements. Those were inescapable if the economist was to be of any help to policy makers, researchers concurred. The new welfare economics “cannot be used as a guide to social policy,” Arrow complained in Social Choice and Individual Values, his 1951 PhD dissertation. “Concretely, the new welfare economics is supposed to be able to throw light on such questions as to whether the Corn Laws should have been repealed,” yet it “gives no real hue to action,” Samuelson likewise remarked in his 1947 Foundations of Economic Analysis.

Samuelson and Arrow are usually credited with laying out the theoretical and epistemological foundations of contemporary “mainstream” economics, yet each spent an inordinate amount of time carving out a space for normative analysis rather than getting rid of it. “Ethical conclusions cannot be derived in the same way that scientific hypotheses are inferred/verified… but it is not valid to conclude from this that there is no room in economics for what goes under the name of ‘welfare economics.’ It is a legitimate exercise of economic analysis to examine the consequences of various value judgments, whether or not they are shared by the theorist, just as the study of comparative ethics is itself a science like any other branch of sociology,” Samuelson warned in Foundations. In those years, Gunnar Myrdal was crafting a new epistemology whereby the economist was to identify society’s value judgments, make his choice of a value set explicit, and import it into economic analysis. George Stigler likewise claimed that “the economist may … cultivate a second discipline, the determination of the ends of his society particularly relevant to economic policy.” Abram Bergson concurred that “the determination of prevailing values for a given community is… a proper and necessary task for the economist.” Richard Musgrave would, in his landmark 1959 Theory of Public Finance, later join the chorus: “I have reversed my original view … the theory of the revenue-expenditure process remains trivial unless [the social preferences] scales are determined,” he explained.

The postwar period was thus characterized by a large consensus that the economist should discuss the ethical underpinnings of his models. Objectivity was ensured not by the absence of normative analysis, but by the absence of subjective bias. The values were not the economist’s own, but ones he chose from within society, usually resulting from a collective decision or a consensus. In any case, economists were tasked with making choices. Throughout the Cold War, economists did not shy away from arguing over the weights they chose for their cost-benefit analyses, for instance, or about how to define “social welfare” and embed it into one of economists’ favorite tools, the social welfare function. Though it was ordinal, it allowed interpersonal comparisons and was meant to allow any policy maker (or dictator) to aggregate the values of the citizens and use them to make policy decisions. “You seriously misunderstand me if you think I have ever believed that only Pareto-optimality is meaningful […] Vulgar Chicagoans sometimes do but their vulgarities are not mine,” Samuelson later wrote to Suzumura. It was, Herrade Igersheim documents in a superb paper, his main bone of contention with Arrow. For 50 years, Arrow claimed that his impossibility theorem was a serious blow to Samuelson’s tireless promotion of the Bergson-Samuelson social welfare function. And for 50 years, Samuelson argued his construct was essentially different from Arrow’s: “Arrow has said more than once that any theory of ethics boils down to how the individuals involved feel about ethics. I strongly disagree. I think every one of us as individuals knows that our orderings are imperfect. They are inconsistent; they are changeable; they come back […] People talk about paternalism as if we were bowing down to a dictator, but it is wrong in ethics to rule out imposition, and even dictatorship, because that is the essence of ethics,” he continued in his aforementioned letter to Suzumura (excerpted from Igersheim’s paper).
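The construct at stake in that dispute is the Bergson-Samuelson social welfare function, which in its standard textbook form (my rendering, not a quotation from either author) reads

\[ W = F\big(U_1(x), U_2(x), \ldots, U_n(x)\big), \]

an ordinal ranking of social states x built from individual utilities, where the choice of the aggregator F embodies precisely the value judgments the economist was supposed to pick up from society. The fifty-year quarrel with Arrow turned on whether the impossibility theorem, which concerns rules for aggregating individual orderings under conditions such as the independence of irrelevant alternatives, applies to such a function.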

Samuelson and Arrow both endorsed a welfarist and utilitarian ethics (policy outcomes should be judged according to their impact on economic agents’ welfare). Beginning in the early 1970s, a host of alternative approaches flourished. Arrow himself moved to Harvard with the explicit purpose of delving deeper into ethical and philosophical topics. He founded a joint workshop with Amartya Sen and John Rawls that explored the latter’s notions of fairness and justice. Alternative theories and measures of well-being and of inequality were developed. Joy, capabilities and envy were brought into the picture. And yet, it was in that period that, through benign neglect, ethical concerns were finally pushed to the margins of the discipline. Economics remained untouched by the Kuhnian revolution, by the ideas that scientists cannot abstract from metaphysics and that “pure facts” simply don’t exist (see Putnam’s work). Just the contrary. Though heterodoxies have constantly pointed to the ideological character of some fundamental assumptions in macro and micro alike, economists now routinely consider their work as having no ethical underpinning worth their attention. The ‘empirical revolution’ – better and more data, more efficient computational devices, thus an improved ability to confront hypotheses with facts – is considered a gatekeeper. And economists have become increasingly shy about engaging in ethical reflection. Nowhere is this state of mind more obvious than in theoretical mechanism and applied market design, a field that, as AEA president-elect Olivier Blanchard pointed out, is not merely concerned with analyzing markets but with actively shaping them.

 

Market designers, neutral, clean and shy since 1972

In their vibrant introduction to a recent journal issue showcasing the intellectual challenges and social benefits of market design, Scott Kominers and Alex Teytelboym chose to put ethical concerns aside. They merely note that “the market designer’s job is to optimize the market outcome with respect to society’s preferred objective function,” while “maintain[ing] an informed neutrality between reasonable ethical positions” regarding the objective function itself. “Of course,” they point out in a footnote, “market design experts should – and do – play a crucial role in the public discourse about what the objective function and constraints ought to be.” A few references are provided which, with the exception of Sandel and Helm, are at least 50 years old. Yet, as in Roth’s address, unspoken ethics is all over the place in their overview: market design allows for “equity” and other goals, they point out, for instance “ensuring that the public purse can benefit from the revenue raised in spectrum license reallocation.”

Their edited volume is especially interesting because it includes a separate paper on the ethics of market design, by Shengwu Li. It is representative of how economists handle the ethical foundations of mechanism design today: smart, clear and extremely cautious, constantly walking on eggshells. Li argues that (1) “the literature on market design does not, and should not, rely exclusively on preference,” BUT (2) since economists have no special ability to resolve ethical disagreement, “market designers should study the connection between designs and consequences, and should not attempt to resolve fundamental ethical questions.” In the end, (3) “the theory and practice of market design should maintain an informed neutrality between reasonable ethical positions.” The economist, in his view, is merely able to “formalize value judgment, such as whether a market is fair, transparent, increases welfare, and protects individual agency.” What he advocates is economists able to “investigate” a much larger set of values than preference utilitarianism to evaluate designs without solving fundamental ethical questions, so as both to guide policy and to preserve their sacrosanct “neutrality”:

“To a policymaker concerned about designing a fair market, we could ask, ‘what kind of fairness do you mean?’ and offer a range of plausible definitions, along with results that relate fairness to other properties the policy-maker might care about, such as efficiency and incentives.”
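Li’s claim that economists can “formalize value judgments” is easy to illustrate with a standard notion from the fairness literature (a textbook definition, not one drawn from his paper): an allocation x = (x_1, …, x_n) is envy-free when no agent prefers another agent’s bundle to her own,

\[ u_i(x_i) \ge u_i(x_j) \quad \text{for all } i, j. \]

Offering a policymaker several such definitions, together with results on how each fares against efficiency and incentive requirements, is the kind of “informed neutrality” Li has in mind.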

A similar sense of agnosticism pervades Matt Jackson’s recent piece on the past, present and future prospects of theory in mechanism design in an age of big data. Opening with a list of the metaphors economists have used to describe themselves in the past century, he charts a history of progress from the XIXth century view of economists as “artists and ethicists” to contemporary “schizophrenic economists.” “It is natural that economists’ practical ambitions have grown with available tools and data,” he explains. Economists’ shyness is however best displayed in Jean Tirole’s Economics for the Common Good, out this Fall, in spite of his effort to devote one of his first chapters to “The Moral Limits of the Market.” For the chapter’s purpose is to justify the lack of ethical discussion in the rest of the book by appealing to Rawls’s veil of ignorance, one Tirole seems to believe economists actively contribute to sewing over and over:

It is possible, however, to eliminate some of the arbitrariness inherent in defining the common good… to abstract ourselves from our attributes and our position in society, to place ourselves “behind the veil of ignorance”… The individual interest and the common good interest diverge as soon as my free will clashes with your interests, but they converge in part behind the veil of ignorance. The quest for the common good takes as its starting point our well-being behind the veil of ignorance ….

Economics, like other human and social sciences, does not seek to usurp society’s role in defining the common good. But it can contribute in two ways. First, it can focus discussion of the objectives embodied in the concept of the common good by distinguishing ends from means…. Second, and more important, once a definition of the common good has been agreed upon, economics can help develop tools that contribute to achieving it … In each [chapter], I analyze the role of public and private actors, and reflect on the institutions that might contribute to the convergence of individual and general interest – in short, to the common good.

What Tirole does here is reprise a theme that Cold War economists, faced with the difficulty of choosing ethical foundations for their work, often relied on: the notion of an underlying social consensus. Needless to say, this is not enough to silence the reader’s ethical questions when walked through Tirole’s proposed policies to fix climate change, Europe’s failing labor markets or financial regulation, or to harness digital markets.

 

Historians and sociologists unchained

As expected, philosophers, historians and sociologists of mechanism and market design have been much less shy in commenting on the ethical foundations of the field. In the interest of keeping this already too long piece within acceptable boundaries (and attending a few ASSA sessions today), I’m merely providing a non-exhaustive list of suggested readings here. Philosopher Michael Sandel has confronted market designers’ constructs in his book What Money Can’t Buy, economic philosophers Marc Fleurbaey and Emmanuel Picavet have provided extensive reflections on the moral foundations of the discipline, and Francesco Guala has masterfully excavated the epistemological underpinnings of the FCC auctions. Sociologists Kieran Healy, Dan Breslau and Juan Pardo-Guerra have investigated the ethics and politics of, respectively, organ transplants, electricity market design, and financial markets. There’s also the wealth of literature on performativity (by Donald MacKenzie, Fabian Muniesa, or Nicolas Brisset among many others). Muniesa, for instance, disagrees with my shyness diagnosis. He rather sees market designers as ethically disinhibited.


Neither is shyness to be found in the history of mechanism design published this Fall by Phil Mirowski (who has extensively written on the history of information economics) and Eddie Nik-Khah (whose PhD was an archive-based investigation of the FCC auctions). The core of their book is a classification of the mechanism and market design literature into three trends, each reflecting a distinct approach to information. How economists view and design markets is closely tied to their understanding of the role of agents’ knowledge, they explain. The first trend was the Walrasian school, architected at Cowles under the leadership of Arrow, Hurwicz, Reiter and the Marschaks (even Stiglitz and Akerlof to some extent). They considered information as a commodity to be priced and mechanisms as preferably decentralized information-gathering processes of the Walrasian tâtonnement kind. The Bayes-Nash school of mechanism design originated with Vickrey and Raiffa, and was spread by Bob Wilson, who taught the Milgrom generation at Stanford. Information is distributed and manipulated, concealed and revealed. Designing mechanisms is thus meant to help agents make “no regret” decisions under asymmetric and imperfect information, and this can be achieved through auctions. The experimentalist school of design, including Smith, Plott, Rassenti and Roth, is more focused on the algorithmic properties of the market. Information is not located within economic agents, but within the market.

They offer several hypotheses to explain this transformation (changing notions of information in the natural sciences, a changing vision of economic agents’ cognitive abilities, the growing hold of the Hayekian view that markets are information processors, the shifting politics of the profession). But they have a clear take on the result of this transformation: mechanism designers serve neoliberal interests, as exemplified by the FCC auction and TARP cases in which, they argue, economists worked for the commercial interests of the telecom or banking businesses rather than for citizens. “Changes in economists’ attitudes toward agents’ knowledge brought forth changes in how economists viewed their own roles,” they conclude:

“Those who viewed individuals as possessing valuable knowledge about the economy generally conceived of themselves as assisting the government in collecting and utilizing it; those who viewed individuals as mistaken in their knowledge tasked themselves as assisting participants in inferring true knowledge; and finally, those who viewed people’s knowledge as irrelevant to the operation of markets tended to focus on building boutique markets.”

Ethics strikes back: economists as engineers in a corporate economy

Mechanism/market designers, and economists more broadly, thus believe ethical agnosticism is both desirable and attainable (see also Tim Scanlon’s remarks here). It is a belief Roth and Tirole inherited from the engineering culture they were trained in. The ‘economist as engineer’ reference is all over the place: in Roth’s address – he was the one who articulated this view in a famous 2002 article –, in Tirole’s book, and in Li’s paper on ethics in mechanism design, which opens with the following quote by Sen: “it is, in fact, arguable that economics has had two rather different origins […] concerned respectively with ‘ethics’ on the one hand, and with what may be called ‘engineering’ on the other.” Li’s core question, therefore, becomes “How (if at all) should economic engineers think about ethics?”

This view is a bit light, and it might even backfire.

First, because, as pointed out by Sandel to Roth here, agnosticism is itself a moral posture. Refusing to consider repugnance as a moral objection rather than a prejudice itself shows economists’ repugnance (my term) to engage with moral philosophy, Sandel argues.

Second, because the question of whose values drive the practices of market designers, and of economists more broadly, cannot be easily settled with an appeal to consensus and to “obvious ends.” That more lives should obviously be saved, that more citizens should obviously be fed, that inequalities should obviously be fought (which wasn’t that obvious 20 years ago), that well-being should obviously be improved seems to dispense economists from further inquiry. Doesn’t everyone agree on that? Marc Fleurbaey has encouraged economists to think more explicitly about the concrete measures of well-being embodied in economic models and about questions of envy and independent preferences. The utilitarianism that has shaped economic tools in the past century has been challenged in the past decade (see Mankiw’s comments here or Li’s paper), and the fact that economists’ focus is currently shifting to re-emphasize wealth and income inequality as well as the role of race and gender in shaping economic outcomes in fact carries a set of collective moral judgments.

Third, because market designers’ tools have grown so powerful that there should be a reflection on what their aggregate effects on distribution, fairness and various conceptions of justice and well-being are or should be, and on who should be accountable for these effects. Economists have been held accountable for the 2008 financial collapse. What if a market they had helped design badly screwed up? What if their powerful algorithms were used for bad purposes? When physicists, psychologists and engineers have sensed that their tools were powerful enough to manipulate people or launch nuclear wars, they have set up disciplinary ethics committees, gone into social activism and tried to educate decision makers and the public. What about economists?

Finally, the market designers’ rationale outlined above crucially depends on one key assumption: that the ends those designs are meant to fulfill reflect the common good, a democratic consensus, or at least a collective decision carried out by a benevolent policy-maker. In Tirole’s book, the economist’s client is society. In Li’s paper, the market designer’s client is always and only “the policy-maker.” But there’s tons of research challenging the benevolence of policy-makers. And more fundamentally, what if the funding, institutional and incentive structure of the discipline, and of market design in particular, is shifting toward corporate interests? Historians have shown how Cold War economics was shaped by the interests of the military, then the dominant patron. How acceptable is shaping markets on behalf of private clients such as IT firms?

If these questions are not going away, it is because they are not deficiencies to be fixed through scientific progress, but choices to be made by economists, no matter how big their data and how powerful their modeling tools. Unpacking the epistemological and ethical choices mechanism designers make, the benefits they expect, but also the paths foregone, is important.


The making and dissemination of Milton Friedman’s 1967 AEA Presidential Address

Joint with Aurélien Goutsmedt

In a few weeks, the famous presidential address in which Milton Friedman is remembered to have introduced the notion of an equilibrium rate of unemployment and opposed the use of the Phillips curve in macroeconomic policy will turn 50. It has earned more than 8,000 citations, more than Arrow, Debreu and McKenzie’s proofs of the existence of a general equilibrium combined, more than Lucas’s 1976 critique. In one of the papers to be presented at the AEA anniversary session in January, Greg Mankiw and Ricardo Reis ask “what explains the huge influence of his work,” one they interpret as “a starting point for Dynamic Stochastic General Equilibrium Models.” Neither their paper nor Olivier Blanchard’s contribution, however, unpacks how Friedman’s address captured macroeconomists’ minds. This is a task historians of economics – who are altogether absent from the anniversary session – are better equipped to perform, and as it happens, some recent historical research indeed sheds light on the making and dissemination of Friedman’s address.


The making of Friedman’s presidential address

 On a December 1967 Friday evening, in the Washington Sheraton Hall, AEA president Milton Friedman began his presidential address:

“There is wide agreement about the major goals of economic policy: high employment, stable prices, and rapid growth. There is less agreement that these goals are mutually compatible or, among those who regard them as incompatible, about the terms at which they can and should be substituted for one another. There is least agreement about the role that various instruments of policy can and should play in achieving the several goals. My topic for tonight is the role of one such instrument – monetary policy,”

the published version reads. As explained by James Forder, Friedman had been thinking about his address for at least six months. In July, he had written down a first draft, entitled “Can full employment be a criterion of monetary policy?” At that time, Friedman intended to debunk the notion that there existed a tradeoff between inflation and unemployment. That “full employment […] can be and should be a specific criterion of monetary policy – that the monetary authority should be ‘easy’ when unemployment is high […] is so much taken for granted that it will be hard for you to believe that […] this belief is wrong,” he wrote. One reason for this was that there is a “natural rate of unemployment […] the level that would be ground out by the Walrasian system of general equilibrium equations,” one that is difficult to target. He then proceeded to explain why there was, in fact, no long run tradeoff between inflation and unemployment.

 


Phillips’s 1958 curve

Most of the argument was conducted without explicit reference to the “Phillips Curve,” whose discussion was restricted to a couple of pages. Friedman, who had, while staying at the LSE in 1952, thoroughly discussed inflation and expectations with William Phillips and Phillip Cagan among others, explained that the former’s conflation of real and nominal wages, while understandable in an era of stable prices, was now becoming problematic. Indeed, as inflation pushes real wages (and unemployment) downwards, expectations adapt: “there is always a temporary trade-off between inflation and unemployment; there is no permanent trade-off. The temporary trade-off comes not from inflation per se, but from unanticipated inflation, which generally means, from a rising rate of inflation,” he concluded.
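Friedman’s point is nowadays usually summarized with a textbook expectations-augmented Phillips curve (a later rendering, not the notation of the address itself):

\[ \pi_t = \pi_t^{e} - \alpha\,(u_t - u^{*}), \qquad \alpha > 0, \]

where u* is the natural rate. As long as expected inflation lags actual inflation, unemployment can be pushed below u*; once expectations catch up and actual and expected inflation coincide, unemployment returns to u* whatever the inflation rate, so only unanticipated – in practice accelerating – inflation buys lower unemployment.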

In the end, however, the address Friedman gave in December covered much more ground. The address began with a demonstration that monetary policy cannot peg interest rates, and the section on the natural rate of unemployment was supplemented with reflections on how monetary policy should be conducted. In line with what he had advocated since 1948, Friedman suggested that monetary authorities should abide by three principles: (1) do not make monetary policy a disturbing force; (2) target magnitudes authorities can control; and (3) avoid sharp swings. These three principles were best combined by “adopting publicly the policy of achieving a steady rate of growth in a specified monetary total,” which became known as Friedman’s “k% rule.”

The usual interpretation of Friedman’s address is the one conveyed by Mankiw and Reis, that is, a reaction to Samuelson and Solow’s 1960 presentation of the Phillips curve as “the menu of choice between different degrees of unemployment and price stability.” Mankiw and Reis assume that this interpretation, with the qualification that the tradeoff may vary across time, was so widespread that they consider Samuelson, Solow and their disciples the only audience Friedman meant to address. Yet Forder and Robert Leeson, among others, provide substantial evidence that macroeconomists already exhibited a much more subtle approach to unemployment targeting in monetary policy. The nature and shape of expectations were widely discussed in the US and UK alike. Samuelson, Phelps, Cagan, Hicks or Phillips had repeatedly and publicly explained, in academic publications as well as newspapers, that the idea of a tradeoff should be seriously qualified in theory, and should in any case not guide monetary policy in the late 1960s. Friedman himself had already devoted a whole 1966 Newsweek column to explaining why “there will be an inflationary recession.”

This intellectual environment, as well as the changing focus of the final draft of his address, led Forder to conclude that “there is no evidence that Friedman wished to emphasize any argument about expectations or the Phillips curve and […] that he would not have thought such an argument novel, surprising or interesting.” We disagree. For a presidential address was a forum Friedman would certainly not have overlooked, especially at a moment when both academic and policy discussions on monetary policy were gaining momentum. The day after the address, Johns Hopkins’s William Poole presented a paper on “Monetary Policy in an Uncertain World.” Six months afterwards, the Boston Fed held a conference titled “Controlling Monetary Aggregates.” Meant as the first of a “proposed series covering a wide range of financial and monetary problems,” its purpose was to foster exchanges on “one of the most pressing of current policy issues – the role of money in economic activity.” It brought together Samuelson, David Meiselman, James Tobin, Allan Meltzer, John Kareken on “the Federal Reserve’s Modus Operandi,” James Duesenberry on “Tactics and Targets of Monetary Policy,” and Board member Sherman Maisel on “Controlling Monetary Aggregates.” Opening the conference, Samuelson proposed that “the central issue that is debated these days in connection with macro-economics is the doctrine of monetarism,” citing not Friedman’s recent address but his 1963 Monetary History, coauthored with Anna Schwartz. That same year, the Journal of Money, Credit and Banking was established, followed by the Journal of Monetary Economics in 1973. Economists had assumed a larger role at the Fed since 1965, when Ando and Modigliani were entrusted with the development of a large macroeconometric model, and the Green and Blue books were established.

 

Reflecting on “The Role of Monetary Policy” at such a catalyzing moment, Friedman thus tried to engage varied audiences. This resulted in an address that was theoretical, historical and policy-oriented at the same time, weaving together several lines of argument with the purpose of proposing a convincing package. What makes tracking its dissemination and understanding its influence tricky is precisely that, faced with evolving contexts and scientific debates, those different audiences retained, emphasized and naturalized different bits of the package.

Friedman’s address in the context of the 1970s

Academic dissemination

Friedman’s most straightforward audience was academic macroeconomists. The canonical history (echoed by Mankiw and Reis) is that Friedman’s address paved the way for the decline of Keynesianism and the rise of New Classical economics, not to say DSGE. But some ongoing historical research carried out by one of us (Aurélien) in collaboration with Goulven Rubin suggests that it was Keynesian economists – rather than New Classical ones – who were instrumental in spreading the natural rate of unemployment (NRU) hypothesis. A key protagonist was Robert Gordon, who had just completed his dissertation on Problems in the Measurement of Real Investment in the U.S. Private Economy under Solow at MIT when Friedman gave his address. He initially rejected the NRU hypothesis, only to later nest it into what would become the core textbook New Keynesian model of the 1970s.

What changed his mind was not the theory. It was the empirics: in the Phillips curve – with wage inflation driven by inflation expectations and unemployment – that he and Solow separately estimated in 1970, the parameter on inflation expectations was extremely small, which he believed dismissed Friedman’s accelerationist argument. Gordon therefore found the impact of the change in the age-sex composition of the labor force on the structural rate of unemployment, highlighted by George Perry, a better explanation for the growing inflation of the late 1960s. By 1973, the parameter had soared enough for the Keynesian economist to change his mind. He imported the NRU into a non-clearing model with imperfect competition and wage rigidities, which allowed for involuntary unemployment and, most importantly, preserved the rationale for active monetary stabilization policies.
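In stylized form (my notation, not Gordon’s exact specification), the debate turned on the size of the coefficient \beta in a wage equation such as

\[ w_t = \beta\,\pi_t^{e} + f(u_t) + \varepsilon_t. \]

An estimated \beta close to zero implies an exploitable long-run tradeoff between inflation and unemployment, whereas \beta = 1 makes the long-run Phillips curve vertical, as Friedman’s accelerationist argument requires; Gordon’s conversion tracked the estimated coefficient as it rose toward one.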

The 1978 textbook in which Gordon introduced his AS-AD framework featured a whole chapter on the Phillips curve, in which he explicitly relied on Friedman’s address to explain why the curve was assumed to be vertical in the long run. Later editions kept referring to the NRU and to long-run verticality, yet explained them rather through imperfect competition and wage rigidity mechanisms. 1978 was also the year Stanley Fischer and Rudiger Dornbusch’s famed Macroeconomics (the blueprint for subsequent macro textbooks) came out. The pair alluded to a possible long-run trade-off but, like Gordon, settled on a vertical long-run Phillips curve. Unlike Gordon, though, they immediately endorsed “Keynesian” foundations.

At the same time, New Classical economists were going down a slightly different, yet famous, route. They labored to ‘improve’ Friedman’s claim by making it consistent with rational expectations, pointing out the theoretical consequences of this new class of models for monetary policy. In 1972, Robert Lucas made it clear that Friedman’s k% rule was optimal in his rational expectations model with information asymmetry, and Thomas Sargent and Neil Wallace soon confirmed that “an X percent growth rule for the money supply is optimal in this model, from the point of view of minimizing the variance of real output.” Lucas’s 1976 critique additionally underscored the gap between the content of Keynesian structural macroeconometric models of the kind the Fed was using and Friedman’s argument.

Policy Impact


Friedman and Burns

Several economists in the Washington Sheraton Hall, including Friedman himself, were soon tasked with assessing the relevance of the address for policy. Chairing the AEA session was Arthur Burns, the NBER business cycle researcher and Rutgers economist who had convinced young Friedman to pursue a career in economics. He walked out of the room convinced by Friedman’s view that inflation was driven by adaptive expectations. In a December 1969 confirmation hearing before Congress, he declared: “I think the Phillips curve is a generalization, a very rough generalization, for short-run movements, and I think even for the short run the Phillips curve can be changed.” A few weeks afterwards, he became Federal Reserve Board chairman. Edward Nelson documents how, to Friedman’s great dismay, Burns’s shifting views quickly led him to endorse Nixon’s proposed wage-price controls, implemented in August 1971. In reaction, monetarists Karl Brunner and Allan Meltzer founded the Shadow Open Market Committee in 1973. As Meltzer later explained, “Karl Brunner and I decided to organize a group to criticize the decision and point out the error in the claim that controls could stop inflation.”

While the price and wage controls were removed in 1974, the CPI suddenly soared by 12% (following the October 1973 oil shock), at a moment when unemployment was on its way to reaching 9% in 1975. The double plague, which British politician Iain Macleod had dubbed “stagflation” in 1965, deeply divided the country (as well as economists, as shown by the famous 1971 Time cover). What should be addressed first, unemployment or inflation? In 1975, Senator Proxmire, chairman of the Senate Committee on Banking, submitted a resolution that would force the Fed to coordinate with Congress, to take production increases and “maximum employment” into account alongside stable prices among its goals, and to disclose “numerical ranges” of monetary growth. Friedman was called to testify, and the resulting Senate report strikingly echoed the “no long-term tradeoff” claim of the 1968 address:

“there appears to be no long-run trade-off. Therefore, there is no need to choose between unemployment and inflation. Rather, maximum employment and stable prices are compatible goals as a long-run matter provided stability is achieved in the growth of the monetary and credit aggregates commensurate with the growth of the economy’s productive potential.”

If there was no long-term trade-off, then explicitly pursuing maximum employment wasn’t necessary. Price stability would bring about employment, and Friedman’s policy program would be vindicated.

The resulting Concurrent Resolution 133, however, did not prevent the Fed staff from undermining congressional attempts at controlling monetary policy: their strategy was to present a confusing set of five different measures of monetary and credit aggregates. Meanwhile, other assaults on the Fed mandate were gaining strength. Employment activists, in particular those who, in the wake of Coretta Scott King, were pointing out that black workers were especially hit by mounting unemployment, were organizing protest after protest. In 1973, black California congressman Augustus Hawkins convened a UCLA symposium to draw the contours of “a full employment policy for America.” Participants were asked to discuss early drafts of a bill jointly submitted by Hawkins and Minnesota senator Hubert Humphrey, a member of the Joint Economic Committee. Passed in 1978 as the “Full Employment and Balanced Growth Act,” it enacted congressional oversight of monetary policy. It required that the Fed formally report twice a year to Congress, and establish and follow a monetary policy rule that would tame both inflation and unemployment. The consequences of the bill were hotly debated as early as 1976 at the AEA, in the Journal of Monetary Economics, or in Challenge. The heat the bill generated contrasted with its effect on monetary policy, which, again, was minimal. The following year, Paul Volcker became Fed chairman, and in October, he abruptly announced that the Fed would set binding rules for reserve aggregate creation and let interest rates drift if necessary.

 

 

A convoluted academia-policy pipeline?

The 1967 address thus initially circulated both in academia and in public policy circles, with effects that Friedman did not always welcome. The natural rate of unemployment was adopted by some Keynesian economists because it seemed empirically robust, or at least useful, yet it was nested in models supporting active discretionary monetary policy. Monetary policy rules became gradually embedded in the legal framework presiding over the conduct of monetary policy, but this was with the purpose of reorienting the Fed toward the pursuit of maximum employment. Paradoxically, New Classical research, usually considered the key pipeline whereby the address was disseminated within and beyond economics, seemed only loosely connected to policy.

Indeed, one has to read the seminal 1970s papers usually associated with the “New Classical Revolution” closely to find mentions of the troubled policy context. The framing of Finn Kydland and Edward Prescott’s “rules vs discretion” paper, in which the use of rational expectations raised credibility and time consistency issues, was altogether theoretical. It closed with the cryptic statement that “there could be institutional arrangements which make it a difficult and time-consuming process to change the policy rules in all but emergency situations. One possible institutional arrangement is for Congress to legislate monetary and fiscal policy rules and these rules become effective only after a 2-year delay. This would make discretionary policy all but impossible.” Likewise, Sargent and Wallace opened their 1981 “unpleasant monetarist arithmetic” paper with a discussion of Friedman’s presidential address, but quickly added that the paper was intended as a theoretical demonstration of the impossibility of controlling inflation. None of the institutional controversies were mentioned, but the authors ended an earlier draft with this sentence: “we wrote this paper, not because we think that our assumption about the game played by the monetary and fiscal authorities describes the way monetary and fiscal policies should be coordinated, but out of a fear that it may describe the way the game is now being played.”

Lucas was the only one to write a paper that explicitly discussed Friedman’s monetary program, and why it had “so limited an impact.” Presented at a 1978 NBER conference, the paper was supposed to address “what policy should have been in 1973-1975,” but Lucas declined. The question was “ill-posed,” he wrote. The source of the 1970s economic mess, he continued, was to be found in the failure to build appropriate monetary and fiscal institutions, which he proceeded to discuss extensively. Mentioning the “tax revolt,” he praised California’s Proposition 13, designed to limit property taxes. He then defended Resolution 133’s requirement that the Fed announce monetary growth targets in advance, hoping for a more binding extension.

This collective distance contrasts with both Monetarist and Keynesian economists’ willingness to discuss existing US monetary institutional arrangements in academic writings and in the press alike. It is especially puzzling given that those economists were working within the eye of the (institutional) storm. Sargent, Wallace and Prescott were then in-house economists at the Minneapolis Fed, and the Sargent-Wallace paper mentioned above was published in the bank’s Quarterly Review. Though none of them seemed primarily concerned with policy debates, their intellectual influence was, on the other hand, evident from the Minneapolis board’s statements. Bank president Mark Willes, a former Columbia PhD student in monetary economics, was eager to preach the New Classical gospel at the FOMC. “There is no tradeoff between inflation and unemployment,” he hammered in a 1977 lecture at the University of Minnesota. He later added that:

“it is of course primarily to the academic community and other research groups that we look for … if we are to have effective economic policy you must have a coherent theory of how the economy works … Friedman doesn’t seem completely convincing either. Perhaps the rational expectationists here … have the ultimate answer. At this point only Heaven, Neil Wallace, and Tom Sargent know for sure.”

If debates were raging at the Minneapolis Fed as well as within the University of Minnesota’s boundaries, it was because the policies designed to reach maximum employment were crafted by the Minnesota senator, Humphrey, himself advised by a famous colleague of Sargent and Wallace: Keynesian economist, former CEA chair and architect of the 1964 Kennedy tax cut Walter Heller.

 

The independent life of “Friedman 1968” in the 1980s and 1990s?

Friedman’s presidential address seems to have experienced a renewed citation pattern in the 1980s and 1990s, but this is as yet a hypothesis that needs to be documented. Our bet is that macroeconomists came to re-read the address in the wake of the deterioration of economic conditions they associated with Volcker’s targeting. After the monetary targeting experience was discontinued in 1982, macroeconomists increasingly researched actual institutional arrangements and policy instruments. We believe this shift is best reflected in John Taylor’s writings. Leeson recounts how Taylor, a senior student at the time Friedman delivered his presidential address, focused his research on the theory of monetary policy. His two stints as a CEA economist got him obsessed with how to make monetary policy more tractable. He increasingly leaned toward including monetary practices in the analysis, a process which culminated in the formulation of the Taylor rule in 1993 (a paper more cited than Friedman’s presidential address). Shifting academic interests, which can be interpreted as more in line with the spirit, if not the content, of Friedman’s address, were also seen in 1980s discussions of nominal income targets. Here, academic debates preceded policy reforms, with the Fed’s dual inflation/employment mandate only appearing in an FOMC statement under Ben Bernanke in 2010, in the wake of the financial crisis (see this thread by Claudia Sahm). This late recognition may, again, provide a new readership to the 1968 AEA presidential address, an old lady whose charms appear timeless.
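For reference, the rule Taylor proposed in his 1993 paper sets the federal funds rate as a simple function of inflation and the output gap, with a 2% inflation target and a 2% equilibrium real rate:

\[ i_t = \pi_t + 0.5\,y_t + 0.5\,(\pi_t - 2) + 2, \]

where \pi_t is inflation over the previous four quarters and y_t the percentage deviation of real GDP from trend.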

 



Are economists sexist?

A stone thrown into the economists’ pond

Sexism: the word has been on the lips of all American economists for the past few weeks, ever since Justin Wolfers drew attention, on August 8, to the master’s thesis written by Alice Wu, a student at Berkeley (Wolfers’s article is translated here by Martin Anota). Wu performed text-mining on the millions of posts of Economic Job Market Rumors, an anonymous forum initially designed to share information on the hiring of economists, which has since become a contested place, a virtual water cooler where all sorts of rumors and technical advice are exchanged and where the main writings of American economists are discussed. Using machine-learning techniques, she identified the words that best predict whether a post is about a man or about a woman (a minimal sketch of that kind of exercise follows the list below). For the former, these words are generally associated with their work (though one notes the presence of terms such as “homosexual”). For the latter, the list is chilling, and no translation is necessary:

hotter, lesbian, bb (internet speak for “baby”), sexism, tits, anal, marrying, feminazi, slut, hot, vagina, boobs, pregnant, pregnancy, cute, marry, levy, gorgeous, horny, crush, beautiful, secretary, dump, shopping, date, nonprofit, intentions, sexy, dated and prostitute.
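To give a sense of the kind of exercise involved – a minimal, hypothetical sketch, not Wu’s actual code, data or feature set – the logic is to turn each post into word counts, fit a penalized classifier on a “post is about a woman” label, and read off the words carrying the largest weights:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder corpus: in the real exercise, posts come from the EJMR forum
posts = ["example post about a female economist", "example post about a male economist"]
labels = [1, 0]  # 1 = post about a woman, 0 = post about a man (hypothetical labels)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)  # bag-of-words counts, one column per word

# An L1-penalized logistic regression keeps only the most predictive words
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(X, labels)

# Words with the largest positive coefficients are the strongest "female" predictors
words = vectorizer.get_feature_names_out()
ranked = sorted(zip(clf.coef_[0], words), reverse=True)
print(ranked[:10])

Wu’s actual study is far more careful about how posts are labeled and how the penalization is chosen; the point here is only the structure of the output, a ranked list of gendered predictors like the one quoted above.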

Reactions, on Twitter and then on various economists’ blogs, were not long in coming.

Part of the discussion focused on what had made such a forum useful before its colonization by trolls – information asymmetry, the very hierarchical character of the discipline, anonymity – and on the nonetheless marginal character of such excesses: was the whole forum worth throwing away? Should it be moderated, shut down, replaced? Even the new president of the American Economic Association, Olivier Blanchard, weighed in on the subject. But because some objected that such an anonymous forum could not constitute a representative sample, the exchanges broadened: the everyday sexism of the whole profession was pointed out, along with the harshness of seminars, remarks made in job interviews, the lack of credit given to women economists in the press, and so on.

The low share of women in economics: a quantified phenomenon that is hard to explain

The consequences of this sexism were then discussed, in particular the low representation of women in economics and a tendency toward deterioration since the 2000s. In the United States, women today account for 31% of economics PhD students, 23% of tenure-track faculty, 30% of assistant professors and 15% of full professors. This is admittedly more than in 1972: women then accounted for 11% of PhD students and 6% of faculty, including 2% of full professors, in the country’s 42 main departments. But it is less than the number of women working in executive positions in Silicon Valley (itself the object of many scandals) or voting on the juries that award the Oscars. These levels of feminization are far below those of the other social sciences, and rank economics among the most unequal disciplines, alongside engineering and computer science. Above all, the salary gap between male and female assistant and full professors has exploded since 1995 (a female full professor’s salary has gone from 95% of her male counterpart’s to 75% today). The phenomenon is atypical. So is the source of the imbalance. While most sciences suffer from a leaking pipeline, the number of women shrinking as they move from undergraduate and master’s studies to the PhD and then up the academic hierarchy, economics also struggles to attract female undergraduates. In the United States, women make up less than 35% of undergraduate economics students; economics is thus the only discipline in which the proportion of women among PhD recipients is higher than among BA recipients. The proportion of women among PhD students has also dropped by 6 percentage points since the 1990s.

The situation is no better in Great Britain, where the proportion of women in undergraduate economics, below 30%, has also been falling in recent years. This proportion seems strongly correlated with the share of pupils studying economics in secondary school. By contrast, the percentage of women economists in government is steadily rising. Only 19% of the economists registered on RePEc – a worldwide database of more than 50,000 publishing economists, whose representativeness is unknown – are women. Soledad Zignago documents large differences across countries (from 4% to 50% women) and across fields. She shows that women are even less represented in the “top 100” rankings the site regularly publishes. There is only one woman, Carmen Reinhart, among the 100 most cited economists in the United States, and five if one restricts attention to the last 10 years.

This low feminization is not necessarily tied to sexism, which, though rarely defined in these debates, seems to be perceived as a residual factor. Understood as an unfounded belief in the presumed inferiority of a group of human beings, that is, as a bias, it would be the explanatory variable to turn to once one has accounted for possible differences in mathematical ability, in preferences and career/family tradeoffs, in socialization or in productivity. For most of the factors above are either nonexistent (differences in mathematical ability) or insufficient to explain the whole of the promotion and wage inequalities between male and female economists. But what may cause these residual inequalities, called discrimination (and therefore how they can effectively be fought), is no clearer. Problems of imperfect information and frictions are invoked, but also sexist norms and culture strong enough to alter the evaluation of women economists’ work and their career trajectories. Erin Hengel has shown, for instance, that women are held to higher readability standards to publish in the prestigious journal Econometrica, and that the review process for their articles takes on average six months longer than for men’s. Heather Sarsons has studied the tenure decisions made by American promotion committees and found that women are penalized when they coauthor their research papers. Men are promoted 75% of the time, whether they write alone or in teams. By contrast, only women publishing solo papers have an equivalent promotion rate. For those who choose to coauthor, the rate drops to 50%.

The contribution of historical analysis

Putting the status of women economists in Anglo-Saxon countries into historical perspective allows additional hypotheses to be formulated. For women are not absent from the history of economics. They wrote up to 20% of the dissertations validated by the American Economic Association in the 1930s, but that figure fell to 4.5% in 1950. The causes: university rules that barred women from doctoral studies in many economics departments, the difficulty of finding a position there, and the opportunities that opened up in the same period in departments of social work and home economics and in the agencies of a government undergoing a statistical revolution. Though erased from the official history, they took a large part in the development of empirical economics. The first economist to program regression software was Lucy Slater; the simulation expert of the 1950s was Irma Adelman; and the first controlled experiment on the negative income tax was run by Heather Ross. These developments are reminiscent of the even starker ones experienced by computer science. Historians of computing have shown that programming was, after the Second World War, above all a women’s occupation, and that women were pushed out of the field as their specialty became more scientific, more prestigious, and more profitable. The aptitude tests put in place contributed to creating a gendered identity – the good programmer is asocial, systematic, a geek, and so on. Understanding how the fate of women economists is tied to the development of economics, and of applied economics in particular, therefore requires analyzing not only sex inequalities but also gender identities, the hierarchies between subfields of a single discipline, and the trajectory of that discipline within the symbolic hierarchy of the scientific field.

It was, finally, against a background of social unrest and of theoretical and empirical controversies that American economists began to take an interest in problems of female representation in their own ranks. For what is specific to economists is that discrimination is not first an experienced problem but an object of study. This creates an interesting mirror effect to investigate. The debates on women’s labor supply in the early 1970s pitted the proponents of a rather Beckerian approach (an explanation based on preferences and on the specialization choices made within a household) against the female proponents of more empirical, Marxist or feminist approaches, which emphasized labor market segregation and stressed the importance of studying its institutions. It was these researchers who denounced the glass ceiling and the difficulties faced by women academics, organized a caucus and obtained the passage of a series of resolutions. The result was the creation of a job market (American economists’ famous Job Market) and of a Committee on the Status of Women in the Economics Profession (CSWEP) tasked with an annual statistical survey. This initiative was supported by the president of the time, Kenneth Arrow, who, as Cleo Chassonery-Zaïgouche’s work shows, was himself working on an alternative to the Beckerian theory: a theory of statistical discrimination emphasizing imperfect information and hiring costs. The AEA archives show that, in the minds of all these protagonists, the models used to understand discrimination were not separate from the discussions on the status of women economists.

And what about women economists in France?

In France, by contrast, the silence seems deafening. The question is not even whether economists are sexist, but, first of all, what the place of women in economics is, and whether they face discrimination. If the question is not even asked, it is because there is virtually no data on the subject. For if the causes and consequences of the discipline’s low feminization are now being discussed in the United States and Great Britain, it is thanks to the data-gathering efforts of the CSWEP, the Royal Economic Society, and, more recently, the association of women in finance – the least feminized field of economics. Such data simply do not exist in France.

Aggregate figures show that while more than half of undergraduate students, all disciplines combined, are women, women account for only 40% of maîtres de conférences and 20% of university professors. This finding, supplemented by research on the competitive exams for recruitment in higher education (which reveal positive discrimination), on academic promotions, and on the representation of women in science (see this synthesis by Thomas Breda), has given rise to a series of government conferences and an action plan on gender equality in higher education and research. In addition, many economists in France are working toward a better understanding of discriminatory mechanisms, as shown by this report of the Conseil d’analyse économique, this special issue of Regards Croisés sur l’Economie, the success of the PRESAGE program run jointly by SciencesPo and the OFCE, which aims to coordinate research and teaching around gender issues, or this book written by Jézabel Couppey-Soubeyran and Marianne Rubinstein to get women interested in economic reasoning.

These research programs use different methodologies, discussed for instance in the course taught by Hélène Périvier. But these tools are not applied to the study of the status of women economists. Women account for 26% of the French economists registered on RePEc, and 14 women are in the top 100. A study by Clément Bosquet, Pierre-Philippe Combes and Cécilia Garcia-Penalosa, based on data from the agrégation du supérieur competition in economics for university professor (PU) positions and from the CNRS research director (DR) competition, shows that women economists’ probability of holding a PU position is 22 points lower than men’s (18% versus 40%), and their probability of holding a CNRS DR position is 27 points lower (18% versus 45%). The probability of passing the competition is roughly the same; the difference lies at the application stage: women’s propensity to apply is 37% lower than men’s for the agrégation and 45% lower for the CNRS DR competition. 86% of this differential is attributable to the candidates’ sex, all else being equal. Here again, it is hard to do more than venture economic, sociological or psychological hypotheses to explain the sources of this differential.

To my (limited) knowledge, that is about it. No data are available on the websites of the AFSE and the AFEP, the two main associations of academic economists. If the question is not asked, it is perhaps because of differences between France and Anglo-Saxon countries in statistical traditions and in institutional responses to discrimination (cf. the debate on ethnic statistics). It is perhaps also that the culture of French academia makes the question unaskable, at the risk of being immediately labeled as “the annoying woman who works on women’s stuff” (a risk I am painfully aware of as I write this post). And yet the question of the (statistical and symbolic) representation of women economists in France is a subject that calls for the building of databases, for theoretical creativity borrowing from the other human sciences, and for empirical challenges, and that ultimately opens up possibilities for reallocating intellectual resources toward new research questions and new techniques: enough to thrill any economist.
