Why I tweet

Tweeting from scratch only requires a computer and an hour's time. Set up an account, choose a name for your handle, read a few "how to" guides to get a sense of what twitter etiquette is, how to use a #hashtag, how to ask a question and pursue a conversation, and you're ready to put your thoughts, links and pictures online, and to read those of any other user. Those of the people you follow will appear on your screen in the order they are posted, creating a timeline. You can like a tweet (which is akin to bookmarking), retweet to make it visible to those who follow you, comment, reply and enter a discussion. You can see whatever a nation's president, a journalist, a central banker, a blogger, a Nobel Prize recipient, a rockstar economist or a registered colleague sees fit to package in 140 characters and put online. You can also engage with any of them.

This is where most historians of economics might be tempted to quit reading. For what kind of scholarly idea can be expressed and circulated in 140 characters? Are years of research, construction of objects and subtle methodological distinctions reducible to a 15-word sentence? This social media thing is total nonsense! The purpose of this post, therefore, is to convince the reader to rethink her resistance to using social networks for scholarly purposes, by outlining the various functions twitter can serve in historical research. It is not intended as a "how to" guide or a set of warnings. First, because tutorials specifically designed for historians and discussions of twitter's shortcomings – from trolling to abuse, limited impact, ephemeral attention, dopamine surges and fleeting illusions of mastery – already abound on the web. Second, because each platform aimed at creating and sharing content and networking has its own set of interaction rules and technological constraints. These rules are constantly transformed, and several platforms, including twitter, might not survive 2017. The present discussion therefore seeks to set aside the institutional and technical constraints associated with tweeting and to focus on the new practices and questions it generates, and their significance for historians of economics.

For tweeting is not merely about compressing an idea into 140 characters and sending it into the wild, though that exercise is interesting in itself. It allows the dissemination of working papers and professional news, and fosters the development of new objects such as the thread or tweetstorm, a series of tweets which, taken together, introduce a more elaborate opinion, a narrative, a set of papers or a list of references. Writing a tweetstorm is an interesting writing exercise for a historian. Each tweet has to be consistent and organized enough that the reader will want to read the next one, down to the bottom of the thread. 140 characters do not allow subtle logical articulation and transitions, so overall consistency requires shaping a kind of flow, a simple yet compelling narrative arc, possibly a chronological one. Doing so forces the writer to weed her story until its spinal cord is excavated and strengthened.

The most straightforward benefit of twitter is improved scholarly communication, but this plays out differently depending on the state of each discipline. In the history of economics specifically, it raises questions of objects and audiences. Less known, but equally promising, is how twitter can serve as a tool for researching and writing the history of economics.

Communicating the history of economics

Our first readers are our colleagues. The gradual marginalization of the history of economics since the 1970s has made places where several historians can interact and trade ideas scarce. Our community is scattered, with members often working alone in an intellectual and institutional environment that is at best curious, sometimes hostile, often indifferent. Restoring a sense of community is momentarily achieved through disciplinary conferences, and such was the purpose of the establishment of the SHOE list in 1995 and of the young scholars' Playground blog in 2007. As Adam Kotsko noted in the early days of social media, blogging is "especially great for academics who would otherwise be quite isolated from other academics with similar interests." I believe that twitter offers a less costly, more flexible and more permanent infrastructure to support an online community. It allows researchers from various locations and disciplinary and institutional backgrounds to share news, calls for papers, working papers, publications, PhD defenses, hires. The History of Economics Society and the European Society for the History of Economic Thought have thus recently teamed up to set up a twitter account. It also allows conferences to be live-tweeted: if a paper is publicly available online, or if panelists feel comfortable with social media, scholars in the audience tweet major ideas and the most significant questions. This helps those who could not attend keep track of the reception of new research and of ongoing debates.

Twitter thus works as an online "faculty lounge." Since exchanges are (1) public and (2) searchable, these virtual lounges are less closed and exclusive than physical ones. They escape disciplinary boundaries, which is especially fitting for a discipline whose survival is predicated on the reaffirmation of an identity, yet not a disciplinary one. What unites the twitter community is its objects, not its institutional structure. It does not matter whether you come from an economics, history, history of science, sociology or anthropology department, or elsewhere. The online structure of scholarly twitter also eases bibliographic search and comparative study. When I work on how economics is classified, on why the American Economic Association set up a prize system after World War II, or on changing notions of what makes "good data," I systematically wonder what the situation is in other sciences. Querying major history of science publications does not always yield a satisfactory outcome. Twitter allows one to jump from account to account, from research program to research program, and to identify ongoing work on physics classification or on the social history of the Fields Medal. It also eases the identification of those artifacts which stand in between big fundamental texts and archival traces of the daily lives of scientists, and which happened to shape a generation's approach without being set in stone: an American Economic Association presidential address that was not much cited but influenced a generation of graduate students, how the Lucas critique was weaponized, which textbooks were in use during the 1980s, or some exchange in which the naming of a new generation of macroeconomic models generated meaningful disagreement.

Other audiences can be reached through twitter: students, journalists, citizens, and of course, economists. Here, the platform offers a new opportunity to resolve a longstanding tension. On the one hand, historians of economics complain that their scholarship is largely ignored by economists; on the other, dissemination is usually held to be a separate, secondary and often lower kind of activity compared with research. The problem we face is not one of contempt or disinterest anymore. It's one of invisibility. The 2017 economics graduate student or assistant professor does not hold history of economics in low esteem; she does not even know such scholarship exists, even less where to find it. Yet the thirst for history has not disappeared. Students want to know why and how their discipline became mathematized, how to define a model, or who this "Haavelmo" is. Twitter can be a substitute for those vanished history courses: it allows an economist's attention to be hacked, hooked, and channeled to a piece of history that could become significant for her. "Social media platforms have disintermediated communication between scholars and publics," Kieran Healy notes (it might not be true, though, that economic twitter has no power structure. A new kind of structure has emerged, one which does not mirror the tight hierarchy that characterizes academic economics but is hierarchized nonetheless).

Hacking attention, however, requires: (1) open, or at least easy, access (time is scarce, attention is fleeting; it is often a matter of now or never, of immediately accessing a paper) and (2) articulated content, overarching stories and clear narrative arcs. Historians of economics are good at studying how Irving Fisher or Charles Kindleberger conceived debt, how Robert Solow produced his growth model or how Lawrence Klein thought individual behavior and macroeconomic aggregates related to one another. At providing broader narratives on how measurements, theories and models of growth have changed throughout the centuries, or on how well they have predicted over the last 100 years, much less so. There is little incentive to write surveys, to hook pieces of research together. Twitter allows putting references together, discussing a set of papers together, without the costs of writing a full-fledged survey.

What twitter allows should however not be conceived as reconnecting with a lost audience. There are as many reasons to be interested in the history of economics as there are registered users on twitter. Predicting what topic will "work" and what will not is bound to fail – except Friedman and the Chicago School, always a hit. And altering research interests to please a fantasized audience is the surest path to losing our hard-won intellectual independence. A suggestion, then, is to adopt Corey Robin's admonition: the public intellectual "is writing for an audience that does not yet exist […] she is writing for a reader she hopes to bring into being." While Robin uses this idea to target fashionable and successful writers, it is also useful as a guide for scholars working in a marginalized area: do not strive to regain a lost audience, but bring one into being. Because tweeting is to some extent shouting into the wild, it paradoxically dispenses the historian from targeting a specific audience.

Researching the history of economics

Economics, past and present

Social media platforms have enabled the observation, quantification and measurement of all kinds of social behaviors, interactions and engagements. Through its APIs, twitter data were initially largely accessible to social scientists, who have scraped huge quantities of data, and the platform has quickly emerged as a valuable repository. Scraping and visualizing data is becoming standard practice in some branches of sociology, and the term "digital ethnography" has been coined to designate a new set of practices whereby social behavior is observed through twitter. Of course, contemporary economists' behaviors and networks as measured through twitter, and the ideas they trade online, have not yet become history. But that does not mean twitter data are not relevant material for historians. First, because they allow us to distinguish permanent from changing features in economists' methods, discourses and practices. New storage, processing and real-time recording technologies and companies may have ushered economics into an age of big data, but the debates I witness are strikingly reminiscent of the Cowles vs NBER "Measurement Without Theory" controversy and the question of economics as an inductive vs deductive science.
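To give a concrete sense of the kind of quantification such scraped data affords, here is a minimal sketch in Python. The tweet records and hashtags below are invented for illustration; actual data would first have to be pulled from twitter's API, which requires authentication.

```python
from collections import Counter
import re

# Hypothetical sample of scraped tweets; real records would come from
# twitter's API and carry many more fields (dates, retweets, etc.).
tweets = [
    {"user": "econhistorian", "text": "New working paper on the Clark medal #histecon #econtwitter"},
    {"user": "macrowatcher", "text": "Measurement without theory, again #econtwitter #bigdata"},
    {"user": "econhistorian", "text": "Live-tweeting today's session on postwar macro #histecon"},
]

def hashtags(text):
    """Extract lowercase hashtags from a tweet's text."""
    return [tag.lower() for tag in re.findall(r"#(\w+)", text)]

# Tally hashtag frequencies across the sample -- the simplest kind of
# measurement of online scholarly conversation.
counts = Counter(tag for t in tweets for tag in hashtags(t["text"]))
print(counts["histecon"])  # this invented tag appears in two tweets
```

From such tallies one can move to co-occurrence matrices and network visualizations, the bread and butter of the "digital ethnography" mentioned above.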

Second, the boundaries between the past and the present are unstable and permeable, in particular in contemporary history. Till Düppe defines the latter as dealing "with the past that is still remembered by some of those among us." "Some" might be the former students or children of the economists we study. They might be those economists themselves, retired or, as we move toward the present, still active. Twitter offers data which, with appropriate tooling up in sociological and ethnographic methods, help meet some of the challenges Düppe highlights. First, observing economists' exchanges highlights how they wield and weaponize their history, both the canonical one and the one we produce. It becomes clearer whose protagonists our narratives serve, and that economists cooperate with historians in part to influence them too. Even those projects which are not aimed at restoring credit alter the relationships of protagonists or communities with one another. By becoming historical objects, the sunspot literature, disequilibrium economics or contingent valuation become worthy of attention and, in the end, distinct, consistent and worthy scientific endeavors. Proponents of self-described "heterodox" approaches have long understood this; there are more histories of heterodox economics than of mainstream ones.

Twitter offers the possibility of interacting directly with graduate students, government and think tank economists, academics, central bankers, and more. Yet doing so might change our practice as much as we may want to change theirs. At the very least, it makes tensions over purpose, methods and identities more salient. Over the past decades, we have evolved from being economists doing history to being historians studying economics. We have emancipated ourselves in terms of objects and methods. Archival work and interviews have spread, and the deployment of quantitative techniques more akin to digital humanities than to econometrics or experiments is on the rise. Our disciplinary identity has expanded to the edges of history of science, sociology and intellectual history. Yet, in spite of calls to move to history departments, I suspect that history of economics contributors are still mostly located in economics departments, teach economics, are evaluated according to economics rankings, and define themselves as economists (that's my case). Most important, we do write history with a purpose, however largely unconscious: changing economists' theories and practices, providing facts to anchor current debates on the state of the dismal science, instilling more reflexivity into their intellectual and institutional practices, improving policy-makers', citizens' and journalists' ability to decipher and assess economists' work. We may have diverse audiences in mind, but we want to be relevant, and twitter offers us a better grasp of current debates and angst. Whether our research topics should be allowed to shift is a matter of debate, but twitter cues may help us pitch our chosen stories to improve our outreach.

From writing history for public uses to writing history “in public”

According to sociologist Kieran Healy, social media "tend to move the discipline from a situation where some people self-consciously do 'public sociology' to one where most sociologists unselfconsciously do sociology in public." This, he explains, is because "new social media platforms have made it easier to be seen," creating "a distinctive field of public conversation, exchange and engagement." Twitter does not merely enable historians of economics to trade references, discuss alternative "modeling" practices of monetary theories or disagree on the influence of the Cold War or the Civil Rights movement on the objects and methods of economists. It requires them to do so in public. This is often considered a shortcoming – being challenged in public might highlight some weakness in the analysis and create a reputation for sloppiness. Resistance to airing disciplinary dirty laundry online also derives from the notion that scientific credibility is tied to the ability to achieve and publicize some kind of disciplinary "consensus."

I don't share this worry. Science is predicated on the belief that truth is not sui generis; it involves puzzles, trials and errors. Doubting and arguing in public is a sign of individual soundness and disciplinary self-confidence. It is being comfortable with scientific method. Tweeting is "thinking in progress," and it is recognized as such (though I have never seen formal guidelines, the etiquette seems to allow tweets to be quoted in blog posts, and blog posts to be quoted in academic papers). Researching the history of economics in public is also a way to help other scholars relate to our practices. Laying out a puzzle – why have the subfields that most benefited from computerization, such as large-scale macroeconometrics or computational general equilibrium, become marginalized as the PC spread? –, posting an exchange between Paul Samuelson and Milton Friedman, or a figure representing a principal component analysis, a co-citation network or the result of text-mining Newsweek articles, and arguing over interpretation, all frame history as a process whereby quantitative and qualitative data are gathered, exploited and interpreted. Rough data – qualitative and quantitative – are put on display, suggesting both commonalities and specificities in the methods historians need to use to make them speak. Finally, opening the narrative black box by writing history in public and circulating working papers allows fellow historians to engage in a sort of early online public referee process, and economists to react in a public and articulated way.
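For readers unfamiliar with the co-citation networks mentioned above, a minimal sketch of how such counts are built; the three reference lists are invented for illustration, and an actual study would parse them from bibliographies or a citation database.

```python
from collections import Counter
from itertools import combinations

# Invented reference lists of three articles.
bibliographies = [
    ["Lucas 1972", "Kydland-Prescott 1982", "Sims 1980"],
    ["Lucas 1972", "Kydland-Prescott 1982"],
    ["Sims 1980", "Lucas 1972"],
]

# Two works are co-cited when they appear in the same bibliography; the
# co-citation count is the number of bibliographies citing both. The
# weighted pairs form the edges of a co-citation network, which can then
# be visualized or clustered.
cocitations = Counter()
for refs in bibliographies:
    for pair in combinations(sorted(refs), 2):
        cocitations[pair] += 1

print(cocitations[("Kydland-Prescott 1982", "Lucas 1972")])  # cited together in 2 lists
```

Sorting each reference list before pairing ensures that a given pair of works is always counted under the same key, regardless of citation order.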

In short, twitter is a tool for researching and communicating the history of economics. The latter can be done at little cost, with the uncertain yet real prospect of high spillover effects. This is a good reason for all historians who like a good economics argument to give it a try, and join their some 50 colleagues already registered. Tweeting also raises all sorts of questions about our research practices and audiences. It forces those who routinely eschew reflexive endeavors (such as the author) to articulate their perspective on the present state and future of their discipline. For writing the history of economics in public and disseminating it ultimately requires a good deal of enthusiasm for the quality of what is being published, and optimism about its possible social benefits.

Note: this is a draft paper for a historiography volume. Comments welcome. 


Defining Excellence in economics: 70 Years of John Bates Clark Medals

Andrej Svorenčík and I have a new working paper on the history of the John Bates Clark Medal (SSRN and SocArXiv links). We combine archival evidence on the establishment and early years of the award with quantitative analysis of the profiles of the laureates to study the intellectual and institutional determinants of "excellence" in economics: how economists disagreed on what counts as a "fundamental contribution" in economics, and how they handled topical, methodological, institutional and gender diversity issues.

We are still struggling with how to interpret our data, so comments are very much welcome.

Below are excerpts from the introduction. There will be another post dealing with my unanswered questions, methodological struggles and findings on this project.

In 2017 the John Bates Clark Medal (JBC Medal) turned seventy, and the 39th medalist was selected for this prestigious award. Established in 1947 by the American Economic Association (AEA) to reward an American economist under the age of forty for the "most significant contribution to economic thought and knowledge," it has become a widely acknowledged professional and public marker of excellence in economics research. It is frequently dubbed the "baby Nobel Prize," as twelve awardees later went on to receive the Bank of Sweden Award in Economic Sciences in Honor of Alfred Nobel (hereafter Nobel Prize). It provides an excellent window into how economists define excellence because it is as much a recognition of the medalists' achievements as it is a reflection of what is considered to be the current state and prospects of the discipline. For the Committee on Honors and Awards (hereafter CHA) and the Executive Committee of the AEA, selecting a laureate involves identifying, evaluating and ranking new trends in economic research as they develop and are represented by young scholars under forty.

The Medal has become such a coveted prize, commanding the attention of the entire economics profession and the public, that it went from being awarded biennially to annually in 2009. It might thus seem surprising how little is known about the reasons for its establishment and about its tumultuous past. Even less is known about the debates it provoked, such as those pertaining to its selection criteria. After the first three unanimous choices of laureates – Paul Samuelson (1947), Kenneth Boulding (1949), and Milton Friedman (1951) – the Medal was increasingly challenged. It was not awarded in 1953, then almost discontinued three times before it finally gained acceptance and stabilized during the 1960s.


1947 ballots (Samuelson elected against Boulding and Stigler)

Our purpose in this paper is not to study the medal as an incentive, but as a signal of the changing definition of excellence in economics, as well as a marker of how merit and privilege are intertwined in scientific recognition. Indeed, as Robert Friedman argues in his study of the history of the Nobel prizes, "excellence is not an unambiguous concept, not even in science" (2001, p. ix). The Nobel Prize has become the ultimate symbol of scientific excellence and a shorthand indicator for genius. But even though exceptional talent is a shared feature of scientists who become laureates, Friedman adds that "prizes, by definition, are political, are a form of governing marked as much by interests and intrigues as by insightful judgment" (2001, p. 1). His extensive survey of discussions surrounding the chemistry, physics and biology prizes shows how some awards (or the lack thereof) reflected the changing scientific, cultural, political and personal agendas of the members of the Swedish committee. The Nobel Prize in Economics was no exception. Offer and Söderberg (2016) and Mirowski (2016) relate how the prize was born out of the frustration of economists at the Riksbank with their lack of independence in setting Swedish monetary policy.

Michael Barany's 2015 history of the Fields Medal likewise showcases a general point: that myths surrounding prizes often conceal a messier reality, and that their history conveys rich information about a discipline's standards and identity. Barany argues that the Fields Medal was not established as a substitute for a missing Nobel Prize in mathematics, but as a way to unify a discipline riven with political and methodological divides in the 1930s. While "exceptional talent seems a prerequisite for a Fields Medal," he argues, "so does being the right kind of person in the right place at the right time." Acknowledging various types of contingencies "does not diminish the impressive feats of individual past medalists." The laureates as a group represent "the products of societies and institutions in which mathematicians have not been mere bystanders" (p. 19).

It is such an approach that we want to follow in this paper in order to understand the evolving nature of excellence in economics. The archival evidence we have gathered shows that the establishment of the John Bates Clark Medal, and early disputes over what represents excellence in economics, speak volumes about the internal dynamics of economics and its situation among other sciences since the 1940s and 1950s. Further, both Barany and Friedman emphasize the lack of diversity both within selecting committees and among laureates in terms of gender, educational background and employment, yet neither provides a thorough quantitative analysis of these claims about missing diversity. In order to understand how the nature and diversity of the "right person in the right place" have evolved across the decades, we have supplemented our qualitative evidence with a quantitative analysis of the trajectories and characteristics of the 39 laureates.




How about every historian of science nominating a candidate for the Golden Goose Award?

This morning, while searching for material on the history of mechanism design, I stumbled on the Golden Goose Award webpage. Though I'm told it is quite important in scientific/policy circles, I had never heard of it.

Founded in 2012 by a pool of private and public patrons, it was aimed at countering the influence of Senator William Proxmire's "Golden Fleece Awards" on the public's and policy-makers' perception of science. Between 1975 and 1988, Proxmire set out to spotlight and lambast public institutions and research projects he believed were a ridiculous waste of time and money: an NSF-funded study on love, some physical measurements of airline stewardesses, the development of facial coding systems, a study of why prisoners want to escape, or Ronald Reagan's second inauguration. The Golden Goose committee thus wanted to dissipate the idea that federally funded research that sounded odd or obscure to non-scientists was necessarily useless, by highlighting cases in which such research happened to have a major impact on society.

The stories behind laureate research programs are recounted through texts and videos. These include a 1950 study of the sex life of screwworm flies (which was wrongly considered a target of Proxmire and contributed to eradicating the fly, saving millions of dollars in cattle treatment and losses), the honeybee algorithm, the National Longitudinal Study of Adolescent to Adult Health, the marshmallow test, and two economics research programs: the "market design" award went to Alvin Roth, David Gale and Lloyd Shapley in 2013 (one year after the Nobel), and the "auction design" award went to Preston McAfee, Paul Milgrom and Robert Wilson the following year.

I don't know about prizes in other disciplines, but I feel the Golden Goose could be bolder on the economics research it singles out. Not that I want to diminish the outstanding achievements of market and auction design, but my sense is that this research was not the most in need of public spotlight. The history of mechanism design is still in its infancy and much contested. It is an area whose protagonists have been eager to write their own history. Historians largely disagree on how to interpret the famous Federal Communications Commission radio spectrum auctions (Francesco Guala and Michel Callon reconstruct them as a case of performative research. Eddie Nik-Khah disagrees and argues that telecommunication business imperatives displaced scientific ones. See also his forthcoming book with Phil Mirowski). My issue is with portraying mechanism design as a field previously perceived as abstract, obscure or irrelevant. Some research in progress suggests that the Stanford environment in which mechanism design was developed benefited from sustained and stable relationships with military, then industrial and tech, clients, which were comfortable with the scientists they funded pursuing theoretical ideas with uncertain applicability. The research program involved economists initially trained in operational research departments, who might have carried new conceptions of theory, applications, and risk-return tradeoffs. As NSF social science funding came under attack at the turn of the 1980s, economic theorists singled out a mechanism design lab experiment as their flagship example of "socially useful" research. And after the 2007 financial crisis broke out and economists' expertise came under attack, matching market and auction design became ubiquitous in their defense of their discipline's social benefits (here are a few examples).

While it is certainly good to have Robert Wilson's key role in architecting Stanford's brand of game theory and mechanism design finally recognized, I nevertheless remain skeptical that this research has ever been construed as obscure, odd or silly. I'm willing to concede that I may be too naïve, given the permanent threat to federally funded research (see Coburn's 2011 attacks and the summer 2016 debates on the social benefits of NSF-funded economic research). The point is that the Golden Goose award jury could make bolder choices, in economics as in other sciences.

Educating policy makers and the public on how science is made is the purpose of the Golden Goose award. And it's a purpose shared by historians of the hard, tech, STEM, medical, computer or social sciences. They spend countless hours uncovering the diverse and complex relationships between theory and applications, induction and deduction, how much is planned and how much is accidental. Operational research historian Will Thomas told me he'd like more research on "delayed applications" (whether because of the lack of adequate theories, computer infrastructure, money or else, or because of unexpected applications). Historians are also tasked with uncovering the many external hindrances scientists face in pursuing research programs, from claims of being too abstract to claims of being too specific (Proxmire targeted not just highly abstract science, but also empirical research that seemed too specific to ever be applied elsewhere or generalized). Scientists have routinely faced pressure from public, private and military organizations and agnotology lobbies to alter, hide or dismiss scientific results. Nevertheless, historians sometimes conclude, they persisted. Historical inquiry finally offers a unique window into the difficulty of defining, identifying and quantifying science's "social benefits."

Golden Goose alumnus Josh Shiode confirmed that the jury welcomes nominations by historians of science. There are neither disciplinary nor temporal restrictions (it is not necessary, for instance, that the scientists whose research is nominated be still alive). The three nomination criteria are:

  • federally funded research
  • projects that may have appeared unusual or obscure, sounded "funny," or whose value could have been questioned
  • major impact on society

Nominating research projects seems an excellent way for historians of science to educate the public.



Speculations on the stabilization and dissemination of the “DSGE” trade name (in progress)

Some research I've done for the history of macroeconometric modeling conference that will be held in Utrecht next week led me to wonder who coined and disseminated the term "Dynamic Stochastic General Equilibrium." Not the class of models, whose development since Lucas and Prescott's 1971 paper has been the topic of tons of surveys. Fellow historian Pedro Duarte has a historical meta-survey of the flow of literature in which macroeconomists have commented on the state of macro and shaped the idea of a consensus during the 1990s and 2000s. Neither am I hunting for the many competing words used to designate the cluster of models born from the foundational papers by Robert Lucas or Finn Kydland and Ed Prescott, from Real Business Cycle to Dynamic General Equilibrium to stochastic models. What I want to get at is how the exact DSGE wording stabilized. Here is the result of a quick, tentative investigation conducted with the help of JSTOR (an unreliable database for historical research) and twitter.

According to JSTOR, it was Robert King and Charles Plosser who, in their famous 1984 paper titled "Real Business Cycles," used the term DSGE for the first time, though with a comma (their 1982 NBER draft did not contain the term): "Analysis of dynamic, stochastic general equilibrium models is a difficult task. One strategy for characterizing equilibrium prices and quantities is to study the planning problem for a representative agent," they explained upon deriving equilibrium prices and quantities.

Assuming the JSTOR database is exhaustive enough (information on its exact coverage is difficult to find) and that the term didn’t spread through books or graduate textbooks (which is a big stretch), dissemination in print was slow at first.
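The tallying behind this kind of diffusion count is simple. Here is a minimal sketch, with made-up publication years standing in for actual JSTOR search results:

```python
from collections import Counter

# Publication years of articles matching the exact phrase -- these values
# are illustrative placeholders, NOT actual JSTOR results.
match_years = [1984, 1989, 1991, 1994, 1996, 1996, 1996, 1998,
               2003, 2003, 2003, 2003, 2005, 2005]

per_year = Counter(match_years)

# Cumulative diffusion curve: how many articles had used the term by each year.
cumulative, total = {}, 0
for year in range(min(match_years), max(match_years) + 1):
    total += per_year.get(year, 0)
    cumulative[year] = total
```

Plotting `per_year` or `cumulative` against the year is what produces the kind of slow-then-sudden takeoff curve described below; the hard part, of course, is the coverage and reliability of the underlying database, not the counting.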

[Chart: yearly count of articles using the term, from JSTOR]

For more than a decade, only a handful of articles containing the term were published each year. Lars Hansen and Jim Heckman used “Dynamic Stochastic General Equilibrium” without the acronym in a 1996 JEP survey on calibration. While Hansen casually used the term in many publications throughout the 1990s, Eric Leeper, Chris Sims, Tao Zha, Robert Hall and Ben Bernanke used the term and its acronym much more aggressively in a widely cited 1996 BPEA survey of advances in monetary policy research (I thank Now Here Not There for the pointer). In a telling footnote, the authors explain that “the DSGE approach is more commonly known as the real business cycle approach. But while it initially used models without nominal rigidities or any role for monetary policy, the methodology has now been extended to models that include nominal rigidities.” In other words, RBC models were being hybridized with new Keynesian insights in the hope of shaping a synthesis, and their name was evolving alongside their substance. In 1998, Francis Diebold published an article on macroeconomic forecasting in which he moved from a descriptive to a prescriptive use of the name. DSGE was “the descriptively accurate name” for this class of models originating in Lucas 1972, with fully articulated preferences, technologies and rules, “built on a foundation of fully-specified stochastic dynamic optimization, as opposed to reduced-form decision rules” (to avoid the Lucas critique).

I’ve been told that, by that time, the term was already in wide currency. But many other terms also circulated, and at some point the need for a new label became pressing and competition intensified. In November 1999, the Society for Economic Dynamics published its first newsletter. In it, Stern professor David Backus explained that finding a new name for the models he, Jordi Gali, Mark Gertler, Richard Clarida, Julio Rotemberg and Mike Woodford were manipulating was much needed: “I don’t think there’s much question that RBC modeling shed its ‘R’ long ago, and the same applies to IRBC modeling. There’s been an absolute explosion of work on monetary policy, which I find really exciting. It’s amazing that we finally seem to be getting to the point where practical policy can be based on serious dynamic models, rather than reduced form IS/LM or AS/AD … So really we need a better term than RBC. Maybe you should take a poll,” Backus declared.

And indeed, Chris Edmond told me, a poll was soon organized on the QM&RBC website, curated by Christian Zimmermann. Members were presented with 7 proposals. Dynamic General Equilibrium Model (DGE) gathered 76% of votes. “Stochastic Calibrated Dynamic General Equilibrium” (SCADGE) was an alternative proposed by Julio Rotemberg, who explained that the addition of “stochastic” was meant to distinguish their models from Computable General Equilibrium. The proposal collected almost 10% of the votes. Then came Quantitative Equilibrium Model (QED), which Michael Woodford believed was a good name for the literature as a whole, though not for an individual model, and which Prescott liked as well; RBC; Kydland Prescott Model (KPM), which Randall Wright found “accurate and fair”; and Serious Equilibrium Model. Prescott and Tim Kehoe liked the idea of having the term “applied” in the new name, Pete Summers wanted RBC for “Rather Be Calibrating” and Frank Portier suggested “Intertemporal Stochastic Laboratory Models.”

It wasn’t yet enough to stabilize a new name. Agendas underpin names, and in those years agendas were not unified. Clarida, Gali and Gertler’s famous 1999 “Science of Monetary Policy” JEL piece used the term “Dynamic General Equilibrium” model, but they pushed the notion that the new class of models they surveyed reflected a “New Keynesian Perspective” blending nominal price rigidities with new classical models. In his 2003 magnum opus Interest and Prices, Woodford eschewed the DGE and DSGE labels alike in favor of the idea that his models represented a “New Neoclassical Synthesis.” It was only in 2003 that the number of published papers using the term DSGE went beyond a handful, and in 2005 that the acronym appeared in titles. I don’t know yet whether there was a late push to better publicize the “stochastic” character of the new monetary models, and if so, who was behind it. Recollections would be much appreciated here.


“The American Economic Association declares that economics is not a man’s field”: the missing story


The December 1971 meeting of the American Economic Association in New Orleans was a watershed, Denison University’s Robin Bartlett remembers. The room in which Barbara Bergmann chaired a session on “What economic equality for women requires” was packed, as was the meeting of the Caucus of Women Economists, which elected Wellesley’s Carolyn Bell as chair and entrusted her with the task of presenting a resolution to the business meeting. On the night of the 28th, she brought a set of resolutions to president John Kenneth Galbraith, president-elect Kenneth Arrow and the executive committee. “Resolved that the American Economic Association declares that economics is not a man’s field,” the first sentence read.

After heated debates, “not exclusively” was slipped into the sentence, but most resolutions were passed unaltered. In an effort to “adopt a positive program to eliminate sex discrimination among economists,” they provided for the establishment of a “Committee on the Status of Women in the Economics Profession” (CSWEP), whose first task would be to gather data and produce a report on the status of women in the profession. They also instituted “an open listing of all employment opportunities,” the JOE. The CSWEP’s inaugural report, “Combatting Role Prejudice and Sex Discrimination,” was published in the 1973 American Economic Review. It highlighted that in the top 42 PhD-granting economics departments of the country, a mere 11% of graduate students and 6% of faculty (including 2% of professors) were women.

45 years later, according to the 2016 CSWEP report, women now represent around 30% of PhD students and 23.5% of tenure-track faculty. That is three times more than in 1972, but less than the proportion of female Silicon Valley managers or Oscar jurors. And it is a rate much lower than in other social sciences, more akin to what is found in engineering and computer science. Worse, the wage gap between men and women at the assistant and full professor levels has soared in the past 20 years (a woman full professor now earns 75% of what an equivalent man earns, vs 95% in 1995). Economics is also an outlier in the kind of inequality mechanism at work. While most sciences suffer from a leaking pipeline, economics rather exhibits a “tiny” pipeline: only 35% of econ undergraduates are women, which makes economics the only discipline with a higher proportion of female students at the PhD level than at the BA level. And the former is down 6 points since the 1990s.


Several explanations have been proposed and tested. Differences in comparative advantage have been rejected by the data, so economists have turned to productivity gaps and discrimination models, and to the analysis of biases in reviewing, interviewing, hiring and citation practices. Results were weak and contradictory. Psychologists have investigated the possible effect of biological differences (for instance in mathematical and spatial abilities) and of differentiated early socialization. Researchers have also hypothesized that women hold different preferences regarding flexibility vs wage and work vs family arbitrages, or people vs things research environments. But none of these factors suffices to explain the wage gap, the difficulty in luring female undergraduates into studying economics and the full-professorship glass ceiling (this research is extensively surveyed by Ceci, Ginther, Kahn and Williams. See also Bayer and Rouse). My suggestion is that historicizing the place and status of women in economics can shed light on longer trends and generate new hypotheses to explain the current situation. Let me explain.


The uncertain place of women in economics


Millicent Fawcett

Existing histories of women in economics suggest that women economists enjoyed a higher status at the turn of the XXth century than in the postwar period. Granted, female economists had to overcome all sorts of culturally and institutionally entrenched forms of discrimination and sexism. According to Kirsten Madden, this led them to develop adaptation strategies that included “superperformance,” “subordination” and “separatism.” The result, Evelyn Forget documents, was that 12% of the economics PhDs listed by the AER in 1912 were defended by women, up to 20% in 1920. Most of these doctorates were awarded by Columbia, Chicago, Vassar and Wellesley. There were privileged topics, such as consumption, development or home economics, but women’s interests spanned all fields, including theory, and were published in top journals. Women also largely contributed to economics from outside academia. The books in which Harriet Martineau popularized Ricardo’s views sold much better than those written by the illustrious founding father himself. In the early XXth century, Beatrice Potter Webb, Millicent Garrett Fawcett and Eleanor Rathbone fiercely debated the economics underlying “equal pay for equal work,” Cleo Chassonery-Zaigouche relates.

From the 1930s onward, however, women were increasingly marginalized. Forget offers several explanations. First, social work and home economics became separate academic fields. The establishment of dedicated departments and vocational programs was supported by the development of land-grant institutions, and attracted those women who were systematically denied tenure in economics departments. Others seized the new opportunities offered by the expansion of governmental needs for statistics and empirical work on consumption, price indices, poverty, unemployment and wages. Many of the students trained by Margaret Reid at the University of Chicago, Dorothy Brady, and Rose Friedman, for instance, chose a civil service career over an academic one. By 1950, the proportion of PhDs defended by women was down to 4.5%. The recovery was slow. Women were allowed into a larger number of graduate programs – at Harvard, for instance, they would receive a doctorate from Radcliffe – but they were confined to assistant positions. It was the growing awareness of discrimination issues and the establishment of the CSWEP, Forget speculates, that eventually opened the gates.


Is the fate of women economists tied to the changing status of applied work?

What led me to reflect on the history of women economists is the sheer number of them I encountered in the archives. Most of them, however, were nothing like the role model for economics badassery, namely Joan Robinson. They were named Margaret Reid, Dorothy Brady, Anne Carter, Lucy Slater, Irma Adelman, Nancy Ruggles, Barbara Bergmann, Myra Strober, Marina Whitman or Heather Ross. At best they had an entry in the Dictionary of Women Economists or in an Eminent Economists collection, or an interview published in a field journal, but they were usually absent from economists’ and historians’ big narratives. Most of them hadn’t written the kind of 20-page “Theory of…” article the Nobel committee is so fond of, but instead produced datasets and procedures to collect them, lines of code, the first regression software, simulations, early randomized experiments and new ways to measure consumption, inequality, development, wealth, education or health. They were applied economists at a time when many of the topics they researched and the tools they used did not enjoy the prestige they had commanded at the beginning of the century and would regain in its last decades.

Though women’s enduring self-selection into some topics like labor, discrimination, inequalities, development, consumption or home economics has been investigated by Agnes le Tollec or Evelyn Forget, there is no systematic study of how the status of women in economics is tied to that of applied economics. My problem, to begin with, is that I entertain contradictory hypotheses about how the rising prestige of applied economics might have affected women economists. The straightforward assumption is that more applied economics being funded by the NSF and foundations, published in top journals, and used in policy discussions enabled more women to become tenure-track economists (the surge in the 1970 to 1990 percentages documented by the CSWEP). But this process was also characterized by a shift in applied economists’ location, with more government and Fed micro and macro researchers publishing in academic journals. CSWEP surveys have covered academia only so far, and it is possible that the recent stagnation in women’s tenure-track share is paired with a growing feminization of governmental and international agencies, Federal Reserve banks or independent research bodies.

Possible explanations for the slackening feminization of economics might also be found in the history of computer science, a discipline confronted with a similar problem. Yet it is precisely the professionalization and scientization of this increasingly lucrative and prestigious discipline that led to the marginalization of women, historians of computing claim. Back in the 1940s, Janet Abbate outlines, computers were women: that is, women were routinely employed to compute, to invert matrices, and, when the first analog and then electronic machines were put to work, to punch holes, to input punchcards and to code. They famously worked on the first ENIAC, calculated the trajectory of John Glenn’s orbital flight and coded the onboard flight software of the Apollo 11 moon mission. In the 1960s, the shortage of programmers allowed them to combine part-time computer jobs with raising kids. In 1967, Grace Hopper explained in the Cosmopolitan article below that “Women are ‘naturals’ at computer programming,” Nathan Ensmenger notes. They flocked to the new undergraduate computer science programs throughout the country, and represented up to 40% of undergrads in 1985.


Cosmopolitan (1967, from Ensmenger)

But another phenomenon was at work in these years. The professionalization, academicization and scientization of computer science brought a redefinition of programmers’ identity, though not a linear one. From the 1950s, a picture of the good programmer as systematic, logical, task-oriented, “detached,” a chess player, a solver of mathematical puzzles, and masculine was gaining traction. This gendered identity was embedded in the various aptitude tests and personality profiles used by companies to recruit their programmers. As programming rose in status and became more lucrative, this identity spread to academic program managers, then to the teams who marketed the first PCs for boys in the 1980s and, eventually, to prospective college students. After 1985, the number of women computer science undergrads declined steadily, down to 17% in the 2010s.




Economists applying their tools to understand their gender issue

Did the growing prestige of applied economics from the 1960s onward result in a similar gender identity shift? I don’t know; the construction of a self-image is elusive and difficult to track. But the CSWEP record, with its mix of quantitative surveys and qualitative testimonies, might well be a good place to chase it. The history of the CSWEP also points to other contexts that have shaped how economists understand their own sex imbalance. The 1973 CSWEP inaugural report opened, not with survey results, but with a lengthy introduction drafted by Kenneth Boulding, then a member of the committee. Its title and opening sentences epitomized the strategy Boulding had adopted to capture his audience’s attention:

[Screenshot: title and opening sentences of Boulding’s introduction]

In other words, discrimination within economics is an economic problem, one that calls for economic analysis and cures. Boulding proposed to consider discrimination as part of a larger process of “role learning and role acceptance,” and went on to rationalize the CSWEP’s proposals through the “betterment production function”: “what are the inputs which produce this output, and particularly, what are those inputs that can be most easily expanded and that have the highest marginal productivity? … Four broad classes of inputs may be named: information, persuasion, reward and punishments,” he wrote. The CSWEP was thus established at a moment when economists of all stripes were developing theories and tools to study discrimination, some of which they were naturally drawn to apply to themselves. Problem was, no one seemed to agree on what the relevant tools and theories were. At that time, Arrow was developing a theory of statistical discrimination at RAND, one that grew into a criticism of Becker’s taste-based model. He tried to explain wage differences by imperfect information, then by recruitment costs. Other frameworks challenged the Beckerian “new household economics” more radically, in particular Marxist and nascent feminist theories in which sex (a biological characteristic) was distinguished from gender (a social construct). In exchanges known as the “Domestic Labor Debate,” Marianne Ferber and Barbara Bergmann, among others, challenged Becker’s idea that household specialization reflected women’s rational choice and emphasized the limitations placed by firms on labor opportunities. They also claimed that economists should pay more attention to the historical foundations of economic institutions and endorse a more interdisciplinary approach. These debates were reflected inside the CSWEP. The 1975 symposium on the implications of occupational segregation presented the audience with the views of virtually all committee members.
How these theoretical, empirical and methodological debates played out in the understanding of the status of women within the economics discipline, and in that status itself, is also a question a systematic history of the CSWEP could answer.


Big data in social sciences: a promise betrayed?

In just 5 years, the mood at conferences on social science and big data has shifted, at least in France. Back in the early 2010s, these venues were buzzing with exchanges about the characteristics of the “revolution” (the 4Vs), with participants marveling at the research insights afforded by the use of tweets, website ratings, Facebook likes, Ebay prices or online medical records. It was a time when, in spite of warnings about the challenges and perils ahead, grant applications, graduate courses and publications were suddenly invaded by new tools to extract, analyze and visualize data. These discussions are neither over nor even mature yet, but their tone has changed. The enthusiasm with a tint of arrogance has given way to a cautious reflexivity wrapped up in a general zeitgeist of uncertainty and angst, even anger. Or so was the feeling I took away from the ScienceXXL conference I attended last week. Organized by demographer Arnaud Bringé and sociologists Anne Lambert and Etienne Ollion at the French National Institute for Demographic Studies, it was conceived as an interdisciplinary practitioners’ forum. Debates on sources, access, tools and uses were channeled via a series of feedback sessions offered by computer scientists, software engineers, demographers, statisticians, sociologists, economists, political scientists and historians. And this made especially salient the underlying need to yoke new practices to an epistemological re-evaluation of the nature and uses of data, of the purpose of social science, and of the relationships between researchers and government, independent agencies, business and citizens.

Lucidity: big data is neither easier nor faster nor cheaper


The most promising trend I saw during the workshop is a better integration of users, disciplines and workflows. “Building a database doesn’t create its own uses” was much reiterated, but responses were offered. One is the interdisciplinary construction of a datascape, that is, a tool that integrates the data corpus and the visualization instrument. Paul Girard introduced RICardo, which allows the exploration of XIX/XXth centuries trade data. Eglantine Schmitt likewise explained that the development of a text-mining software required “choosing an epistemological heritage” on how words are defined and how the interpretative work is performed, and “tooling it up” for current and future uses, subject to technical constraints. What surprised me, I shall confess, was the willingness of research engineers and data and computer scientists to incorporate the epistemological foundations of social sciences into their work and to collect lessons learned from centuries of qualitative research. Several solutions to further improve collaboration between social and computer scientists were discussed. The hackathon/sprint model prevents teams from dividing up tasks, and the interaction it forces yields an understanding of others’ ways of thinking and practices. The downside is that it promotes “fast science,” while data need time to be understood and digested. Data dumps and associated contests on websites such as Kaggle, by contrast, allow longer-term projects.

Perceived future challenges were a better integration of 1) qualitative and quantitative methods (cases of fruitful cross-fertilization mentioned were the Venice Time Machine project and Moretti’s Distant Reading; evaluations of culturomics were more mixed) and 2) old and new research (to determine whether the behavioral patterns observed are really new phenomena produced by social networks and digitalized markets, or are consistent with those traditional behaviors identified with older techniques). Also pointed out was the need to identify and study social phenomena that are impossible to capture through quantification and datafication. This suggests that a paradoxical consequence of the massive and constant data dump allowed by real-time recording of online behavior could be a rise in the prestige of extremely qualitative branches of analysis, such as ethnography.


Unsurprisingly, debates on quantitative tools, in particular regarding the benefits and limits of traditional regression methods vs machine learning, quickly escalated. Conference exchanges echoed larger debates on the black-box character of algorithms, the lack of guarantee that their result is optimal and the difficulty in interpreting results, three shortcomings that some researchers believe make machine learning incompatible with social science DNA. Etienne Ollion & Julien Boelaert pictured random forests as epistemologically consistent with the great sociological tradition of “quantitative depiction” pioneered by Durkheim or Park & Burgess. They explained that ML techniques allow more iterative, exploratory approaches, and the mapping of heterogeneous variable effects across the data space. Arthur Charpentier rejected attempts to conceal the automated character of ML. These techniques are essentially built to outsource the task of getting a good fit to machines, he insisted. My impression was that there is a sense in which ML is to statistics what robotization is to society: a job threat demanding a compelling reexamination of what is left for human statisticians to do, of what is impossible to automate.
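To see what “heterogeneous variable effects across the data space” means concretely, here is a toy sketch with invented numbers (plain OLS standing in for the regression side, and manual partitioning standing in for what tree-based methods do automatically): a single global slope averages away effects that differ in sign across regions of the data.

```python
# Toy contrast between one global regression slope and "local" slopes fitted
# on different regions of the data space. All numbers are invented for
# illustration; this is not the analysis presented at the conference.

def ols_slope(xs, ys):
    """Slope of a simple least-squares fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

x = [0, 1, 2, 3, 10, 11, 12, 13]
y = [0, 2, 4, 6, 30, 29, 28, 27]   # the effect of x flips sign across the sample

global_slope = ols_slope(x, y)      # one averaged coefficient
low = ols_slope(x[:4], y[:4])       # slope in the low-x region: +2.0
high = ols_slope(x[4:], y[4:])      # slope in the high-x region: -1.0
```

A random forest, by recursively partitioning the data, recovers something like the two local slopes rather than the single averaged one; this is the sense in which its output resembles “depiction” more than parameter estimation.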

Tool debates fed into soul-searching on the nature and goals of social sciences. The focus was on prediction vs explanation. How well can we hope to predict with ML, some asked? Prediction is not the purpose of social sciences, others retorted, echoing Jake Hofman, Amit Sharma and Duncan Watts’s remark that “social scientists have generally deemphasized the importance of prediction relative to explanation, which is often understood to mean the identification of interpretable causal mechanisms.” These were odd statements for a historian of economics working on macroeconometrics. The 1960s/1970s debates around the making and uses of Keynesian macroeconometric models I have excavated highlight the tensions between alternative purposes: academics primarily wanted to understand the relationships between growth, inflation and unemployment, and to make conditional predictions of the impact of shifts in taxes, expenditures or the money supply on GDP. Beyond policy evaluation, central bankers also wanted their models to forecast well. Most macroeconometricians also commercialized their models, and what sold best were predictive scenarios. My conclusion is that prediction has been as important, if not more so, than explanation in economics (and I don’t even discuss how Friedman’s predictive criterion got under economists’ skin in the postwar period). If, as Hofman, Sharma and Watts argue, “the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction,” then the post-2008 crash crisis in economics should serve as a warning against such crystal-ball hubris.

Access (denied)

Uncertainty, angst and a hefty dose of frustration dominated discussions on access to data. Participants documented access denials from a growing number of commercial websites after using data-scraping bots, twitter’s APIs getting increasingly restrictive, and administrations and firms routinely refusing to share their data and, absent adequate storage/retrieval routines, data-mining and computational expertise, and a stable and intelligible legal framework, even destroying large batches of archives. Existing infrastructures designed to allow researchers’ access to public and administrative data are sometimes ridiculously inadequate. In some cases, researchers cannot access data firsthand and have to send their algorithms to intermediary operators who run them, meaning no research topic or hypothesis can emerge from observing and playing with the data. Accessing microdata through the Secure Data Access Center means you might have to take pictures of your screen, as regression output, tables, and figures are not always exportable. Researchers also feel their research designs are not understood by policy- and law-makers. On the one hand, data sets need to be anonymized to preserve citizens’ privacy, but on the other, only identified data allow dynamic analyses of social behaviors. Finally, as Danah Boyd and Kate Crawford had predicted in 2011, access inequalities are growing, with the prospect of greater concentration of money, prestige, power and visibility in the hands of a few elite research centers. Not so much because access to data is being monetized (at least so far), but because privileged access to data increasingly depends on networks and reputation, creating a Matthew effect.

Referring to Boyd and Crawford, one participant sadly concluded that he felt the promises of big data that had drawn him to the field were being betrayed.

Harnessing the promises of big data: from history to current debates

The social scientists in the room shared a growing awareness that working with big data is neither easier nor faster nor cheaper. What they were looking for, it appeared, was not merely feedback, but frameworks to harness the promises of big data and guidelines for public advocacy. Yet crafting such guidelines requires some understanding of the historical, epistemological and political dimensions of big data. This involves reflecting on changing (or enduring) definitions of “big” and of “data” across time and interest groups, including scientists, citizens, businesses or governments.

When data gets big

“Bigness” is usually defined by historians not in terms of terabits, but as a “gap” between the amount and diversity of data produced and the available intellectual and technical infrastructure to process them. Data gets big when it becomes impossible to analyze, creating an information overload. And this has happened several times in history: the advent of the printing press, the growth in population, the industrial revolution, the accumulation of knowledge, the quantification that came along with scientists’ participation in World War II. A gap appeared when the 1890 census data couldn’t be tabulated within ten years, and the gap was subsequently reduced by the development of punch-card tabulating machines. By the 1940s, libraries’ size was doubling every 16 years, so that classification systems needed to be rethought. In 1964, the New Statesman declared the age of “information explosion.” Though its history is not yet settled, the term “big data” appeared in NASA documents at the end of the 1990s, and was then used by statistician Francis Diebold in the early 2000s. Are we in the middle of the next gap? Or have we entered an era in which technology is permanently lagging behind the amount of information produced?

Because they make “bigness” historically contingent, histories of big data tend to de-emphasize the distinctiveness of the new data-driven science and to temper claims that some epistemological shift in how scientific knowledge is produced is under way. But they illuminate characteristics of past information overloads, which help make sense of contemporary challenges. Some participants, for instance, underlined the need to localize the gap (down to the size and capacities of their PCs and servers) so as to understand how to reduce it, and who should pay for it. This way of thinking is reminiscent of the material cultures of big data studied by historians of science. They show that bigness is a notion primarily shaped by technology and materiality, whether paper, punch cards, microfilms, or the hardware, software and infrastructures scientific theories were built into after the war. But there’s more to big data than just technology. Scientists have also actively sought to build large-scale databases, and a “rhetoric of big” has sometimes been engineered by scientists, governments and firms alike for prestige, power and control. Historians’ narratives also elucidate how closely intertwined with politics the material and technological cultures shaping big data are. For instance, the reason why Austria-Hungary adopted punch-card machinery to handle censuses earlier than Prussia did, Christine von Oertzen explains, was determined by labor politics (Prussia rejected mechanized work to provide disabled veterans with jobs).

Defining data through ownership

The notion of “data” is no less social and political than that of “big.” In spite of the term’s etymology (data means “given”), the data social scientists covet, and access to them, are largely determined by questions of use and ownership. Not agreeing on who owns what for what purpose is what generates instability in epistemological, ethical, and legal frameworks, and what creates this ubiquitous angst. For firms, data is a strategic asset and/or a commodity protected by property rights. For them, data are not to be accessed or circulated, but to be commodified, contractualized and traded in monetized or non-monetized ways (and, some would argue, stolen). For citizens and the French independent regulatory body in charge of defending their interests, the CNIL, data is viewed through the prism of privacy. Access to citizens’ data is something to be safeguarded, secured and restricted. For researchers, finally, data is a research input on the basis of which they seek to establish causalities, make predictions and produce knowledge. And because they usually see their agenda as pure, and scientific knowledge as a public good, they often think the data they need should also be considered a public good, free and open to them.

In France, recent attempts to accommodate these contradictory views have created a mess. Legislators have striven to strengthen citizens’ privacy and their right to be forgotten against Digital Predators Inc. But article 19 of the resulting Digital Republic Bill, passed in 2016, states that, under specific conditions, the government can order private businesses to transfer survey data for public statistics and research purposes. The specifics will be determined by “application decrees,” not yet written and of paramount importance to researchers. At the same time, French legislators have also increased governmental power to snoop on (and control) the private lives of citizens in the wake of terror attacks, and rights over business, administrative and private data are also regulated by a wide array of health, insurance and environmental bills, case law, trade agreements and international treaties.

As a consequence, firms are caught between contradictory requirements: preserving data to honor long-term contracts vs. deleting data to guarantee their clients’ “right to be forgotten.” Public organizations navigate between the need to protect citizens, their exceptional rights to require data from citizens, and incentives to misuse them (for surveillance and policing purposes). And researchers are sandwiched between their desire to produce knowledge, describe social behaviors and test new hypotheses, and their duty to respect firms’ property rights and citizens’ privacy rights. The latter requirement raises fundamental ethical questions, also debated during the ScienceXXL conference. One is how to define consent, given that digital awareness is not distributed equally across society. Some participants argued that consent should be explicit (for instance, to scrape data from Facebook or dating websites). Others asked why digital scraping should be regulated while ethnographic field observation isn’t, the two being equivalent research designs. Here too, these debates would gain from a historical perspective, one offered by histories of consent in medical ethics (see Joanna Radin and Cathy Gere on the use of indigenous health and genetic data).

All in all, scientific, commercial and political definitions of “big” and “data” are interrelated. As Bruno Strasser illustrates with the example of crystallography, “labeling something ‘data’ produces a number of obligations” and prompts a shift from privacy to publicity. Conversely, Elena Aronova’s research highlights that postwar attempts to gather geophysical data from oceanography, seismology, solar activity or nuclear radiation were shaped by the context of research militarization. These data were considered a “currency” to be accumulated in large volumes, and their circulation was characterized more by Cold War secrecy than by international openness. The uncertain French technico-legal framework can also be compared to that of Denmark, whose government has lawfully architected big data monitoring without citizens’ opposition: each citizen has a unique ID carried through medical, police, financial and even phone records, an “epidemiologist’s dream” come true.

Social scientists in search of a common epistemology

If they want to harness the promises of big data, then, social scientists cannot avoid entering the political arena. A prerequisite, however, is to forge a common understanding of what data are and what they are for. And conference exchanges suggest we are not there yet. At the end of the day, what participants agreed on is that the main characteristic of these new data isn’t just size, but the fact that they are produced for purposes other than research. But that’s about all they agreed on. For some, it means that data like RSS feeds, tweets, Facebook likes or Amazon prices are not as clean as those produced through sampling or experiments, and that more effort and creativity should be put into cleaning datasets. For others, cleaning is distorting: gaps and inconsistencies (like multiple birth dates, or odd occupations in demographic databases) provide useful information on the phenomena under study.

That scraped data are not representative also commanded wide agreement, but while some saw this as a limitation, others considered it an opportunity to develop alternative quality criteria. Neither are data taken from websites objective, and the audience was again divided on what conclusion to draw. Are these data “biased”? Does their subjective character make them more interesting? Rebecca Lemov’s history of how mid-twentieth-century American psycho-anthropologists tried to set up a “database of dreams” reminds us that capturing and cataloguing the subjective part of human experience is a persistent scientific dream. In an ironic twist, the historians and statisticians in the room ultimately agreed that what a machine cannot (yet) be taught is how data are made, and that this matters more than how data are analyzed. The key to harnessing the promise of big data, in the end, is to treat data not as a research input, but as the center of scientific investigation.

Relevant links on big data and social science (in progress)

2010ish “promises and challenges of big data” articles: [Bollier], [Manovich], [Boyd and Crawford]

Who coined the term “big data”? (NYT), a short history of big data, a timeline

Science special issue on Prediction (2017)

Max Planck Institute project on historicizing data and 2013 conference report

Elena Aronova on historicizing big data ([VIDEO], [BLOG POST], [PAPER])

2014 STS conference on collecting, organizing, trading big data, with podcast

A few links on big data and the social sciences (in French)

“Au-delà des Big Data,” by Étienne Ollion & Julien Boelaert

À quoi rêvent les algorithmes, by Dominique Cardon

Special issue of the journal Statistiques et Sociétés (2014)

Forthcoming special issue of the journal Économie et Statistique

On the RICardo datascape, by Paul Girard


The ordinary business of macroeconometric modeling: working on the MIT-Fed-Penn model (1964-1974)

Against monetarism?

In the early days of 1964, George Leland Bach, former dean of the Carnegie Business School and consultant to the Federal Reserve, arranged a meeting between the Board of Governors and seven economists, including Stanford’s Ed Shaw, Yale’s James Tobin, Harvard’s James Duesenberry and MIT’s Franco Modigliani. The hope was to tighten relationships between the Fed’s economic staff and “academic monetary economists.” The Board’s concerns were indicated by a list of questions sent to the panel: “When should credit restraint begin in an upswing?” “What role should regulation of the maximum permissible rate on time deposits play in monetary policy?” “What weight should be given to changes in the ‘quality’ of credit in the formation of monetary policy?”

Fed chairman William McChesney Martin’s tenure had opened with the negotiation of the 1951 Accord restoring the Fed’s independence, which he had since constantly sought to assert and strengthen. In recent years, however, the constant pressure CEA chairman Walter Heller exerted to keep short-term rates low (so as not to offset the expansionary effects of his proposed tax cut) had forced Martin into playing defense. The Board was now in an awkward position. On the one hand, after years in which the emphasis had been on fiscal stimulus, inflationary pressures were building up and the voices of those economists pushing for active monetary stabilization were increasingly heard. Economists like Franco Modigliani, trained in the Marschakian tradition, were hardly satisfied with existing macroeconometric models of the Brookings kind, with their overwhelming emphasis on budget channels and atrophied money/finance blocks.

On the other hand, Milton Friedman, who was invited to talk to the Board a few weeks after the panel, was pushing a monetarist agenda that promised to kill the Fed’s hard-fought autonomy in steering the economy. Money supply only affected output and employment in a transitory way, he explained, and it was a messy process because of lags in reacting to shifts in interest rates. Resurrecting the prewar quantity theory of money, Friedman insisted that the money supply affected output through financial and non-financial asset prices. He and David Meiselman had just published an article in which they demonstrated that the correlation between money and consumption was higher and more stable than that between consumption and autonomous expenditures. MIT’s Robert Solow and John Kareken had questioned Friedman and Meiselman’s interpretation of lags and their empirical treatment of causality, and their colleagues Modigliani and Albert Ando were working on their own critique of Friedman and Meiselman’s consumption equation. This uncertain situation was summarized in the first sentences of Duesenberry’s comments to the 1964 panel:

Decision making in the monetary field is always difficult. There are conflicts over the objectives of monetary policy and over the nature of monetary influences on income, employment, prices and the balance of payments. The size and speed of impact of the effects of central bank actions are also matters of dispute. The Board’s consultants try to approach their task in a scientific spirit but we cannot claim to speak with the authority derived from a wealth of solid experimental evidence. We must in presenting our views emphasize what we don’t know as well as what we do know. That may be disappointing but as Mark Twain said: “it ain’t things we don’t know that hurt, it’s the things we know that ain’t so.”


Winning the theory war implied researching the channels whereby monetary policy influenced real aggregates, but winning the policy war implied putting these ideas to work. During a seminar held under the tutelage of the SSRC’s Committee on Economic Stability, economists came to the conclusion that the previously funded Brookings model fell short of integrating the monetary and financial sphere with the real one, and Modigliani and Ando soon proposed to fashion another macroeconometric model. For the Keynesian pair, the model was explicitly intended as a workhorse against Friedman’s monetarism. At the Fed, Daniel Brill, head of the division of research and statistics, and Frank de Leeuw, a Harvard PhD who had written down the Brookings model’s monetary sector, had come to the same conclusion and started to build their own model. It was decided to merge the two projects. Funded by the Fed through the Social Science Research Council, the resulting model came to be called the MPS, for MIT-Penn (where Ando had moved in 1967)-SSRC. Intended as a large-scale quarterly model, its 1974 operational version exhibited around 60 simultaneous behavioral equations (against several hundred for some versions of the Wharton and Brookings models), and up to 130 in 1995, when it was eventually replaced. Like companion Keynesian models, it rested on a Solovian growth model for supply, which determined the characteristics of the steady state, and on a more refined set of demand equations with six major blocks: final demand (with the consumption and investment equations), income distribution, tax and transfers, labor market, price determination, and a huge financial sector. Non-conventional monetary transmission mechanisms (that is, other than the cost-of-capital channel) were emphasized.


Model comparison, NBER 1976

To work out these equations, Modigliani and Ando tapped the MIT pool of graduate students. Larry Meyer, for instance, was in charge of the housing sector (that is, modeling how equity and housing values are impacted by monetary policy), Dwight Jaffee worked on the impact of credit rationing on housing, Georges de Menil handled the wage equation with a focus on the impact of unions on wages, Charles Bischoff provided a putty-clay model of plant and equipment investment, and Gordon Sparks wrote the demand equation for mortgages. Senior economists were key contributors too: Ando concentrated on fiscal multiplier estimates, while Modigliani researched how money influenced wages, and how to model expectations so as to generate a consistent theory of interest rate determination, with students Richard Sutch and then Robert Shiller. Growing inflation and the oil shock later forced them to rethink the determination of prices and wages and the role inflation played in transmission mechanisms, and to add a Phillips curve to the model. The Fed also asked several recruits, including Enid Miller, Helen Popkin, Alfred Tella and Peter Tinsley, to work on the banking and financial sector and on transmission mechanisms, in particular portfolio adjustments. The latter effort was led by de Leeuw and Edward Gramlich, who had just graduated from Yale under Tobin and Art Okun. Responsibilities for data compilation, coding and running simulations were also split between academics and the Fed, with Penn assistant professor Robert Rasche playing a key role.


The final model was much influenced by Modigliani’s theoretical framework. The project generated streams of papers investigating various transmission mechanisms, including the effect of interest rates on housing and plant investment and on durable goods consumption, credit rationing, the impact of expectations of future changes in asset prices on the term structure and on the structure of banks’ and households’ portfolios, and Tobin’s q. The MPS model did not yield the expected results. Predictive performance was disappointing, estimated money multipliers were small, lags were important, and though its architects were not satisfied with the kind of adaptive expectations embedded in the behavioral equations, they lacked the technical apparatus to incorporate rational expectations. In short, the model didn’t really back aggressive stabilization policies.

Modigliani’s theoretical imprint on the MPS model, and his use of its empirical results in policy controversies, are currently being investigated by historian of macroeconomics Antonella Rancan. My own interest lies not with the aristocratic theoretical endeavors and big onstage debates, but with the messy daily business of crafting, estimating and maintaining the model.

From theoretical integrity to messy practices

A first question is how such a decentralized process led to a consistent result. I don’t have an exhaustive picture of the MPS project yet, but it seems that graduate students picked a topic, then worked in relative isolation for months, gathering their own data and surveying the literature on the behavior of banks, firms, unions, consumers or investors before sending back a block of equations. Because these blocks each had a different structure, characteristics and properties, disparate methods were summoned to estimate them: sometimes TSLS, sometimes LIML or IV. Finally, because the quality of the forecasts was bad, a new batch of senior researchers reworked the housing, consumption, financial and investment blocks in 1969-1973. How is this supposed to yield a closed hundred-equation model?

Bringing consistency to hundreds of equations with disparate underlying theories, data and estimation methods was a recurring concern for postwar macroeconometric modelers. At Brookings, the problem was to aggregate tens of subsectors. “When the original large scale system was first planned and constructed, there was no assurance that the separate parts would fit together in a consistent whole,” a 1969 Brookings report reads. Consistency was brought by a coordinating team and through the development of common standards, Michael McCarthy explains: large database capabilities with easy access and efficient update procedures, common packages (AUTO-ECON), efficient procedures for checking the accuracy of the code (the residual check procedure), and common simulation methods. But concerns with unification only appear post-1969 in the Modigliani-Ando-Fed correspondence. Modigliani was traveling a lot, involved in the development of an Italian macromodel, and did not seem to care very much about the nooks and crannies of data collection and empirical research. Was a kind of consistency achieved through the common breeding of model builders, then? Did Modigliani’s monetary and macro courses at MIT create a common theoretical framework, so that he did not have to provide specific guidelines as to which behavioral equations were acceptable and which were not? Or were MIT macroeconomists’ practices shaped by Ed Kuh and Richard Schmalensee’s empirical macro course, and by the TROLL software?
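The logic of a residual check is simple enough to sketch: plug actual historical values into each equation as coded in the model, and compare the implied residuals with those reported when the equation was originally estimated; a mismatch flags a coding error rather than a bad forecast. The equation, data and numbers below are hypothetical, invented for illustration, not taken from the Brookings or MPS archives.

```python
# Residual-check sketch: feed actual historical values into an equation
# exactly as it was coded in the model, and compare the implied
# residuals with those reported at estimation time. A mismatch means
# the equation was mis-typed, not that the model forecasts badly.
# Equation, coefficients and data are purely illustrative.

def coded_consumption(y_lag):
    return 20.0 + 0.6 * y_lag          # equation as typed into the model code

# Historical data and the residuals reported by the original estimation:
history = [(100.0, 80.5), (110.0, 86.2), (120.0, 91.9)]   # (lagged income, actual c)
reported_residuals = [0.5, 0.2, -0.1]

def residual_check(tol=1e-9):
    implied = [c - coded_consumption(y) for y, c in history]
    return all(abs(a - b) < tol for a, b in zip(implied, reported_residuals))

ok = residual_check()   # True: the coded equation reproduces the estimates
```

Run over every equation of a several-hundred-equation system, such a procedure catches transcription slips between the estimation output and the simulation code, which is presumably why it became a common standard.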



To mess things up further, Fed and academic researchers had different objectives, which translated into diverging, sometimes antagonistic practices. In his autobiography, Modigliani claimed that “the Fed wanted the model to be developed outside, the academic community to be aware of this decision, and the result not to reflect its idea of how to operate.” Archival records show otherwise. Not only were Fed economists very much involved in model construction and simulations, data collection and software management, but they further reshaped equations to fit their agenda. Intriligator, Bodkin and Hsiao list three objectives macroeconometric modeling tries to achieve: structural analysis, forecasting and policy evaluation, that is, a descriptive, a predictive and a prescriptive purpose. Any macroeconometric model thus embodies tradeoffs between these uses. This is seen in the many kinds of simulations Fed economists were running, each answering a different question. “Diagnostic simulations” were aimed at understanding the characteristics of the model: whole blocks were taken as exogenous, so as to pin down causes and effects in the rest of the system. “Dynamic simulations” required feeding forecasts from the previous period into the model for up to 38 quarters, and checking whether the model blew up (it often did) or remained stable and yielded credible estimates for GDP or unemployment. “Stochastic simulations” were carried out by specifying initial conditions, then making out-of-sample forecasts. Policy simulations relied on shocking an exogenous variable after the model had been calibrated.
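The mechanics of a dynamic simulation can be sketched in a few lines: the model is solved period after period, with its own lagged forecasts fed back in instead of actual data, so that errors compound and any instability shows up quickly. The two-equation toy model below is hypothetical (invented coefficients, not the MPS equations); it merely illustrates why a lagged-income coefficient below one keeps the 38-quarter run from blowing up.

```python
# Toy dynamic simulation: a two-equation model solved recursively,
# feeding its own forecasts back in as lagged values. Coefficients
# are hypothetical, chosen only to illustrate the mechanics.

def dynamic_simulation(y0, c0, g_path):
    """Simulate consumption c and income y forward, using the model's
    own lagged forecasts rather than actual data."""
    y, c = [y0], [c0]
    for g in g_path:                  # g: exogenous government spending
        c_t = 20.0 + 0.6 * y[-1]      # consumption depends on lagged income
        y_t = c_t + g                 # income identity
        c.append(c_t)
        y.append(y_t)
    return y, c

# A 38-quarter run, as in the MPS exercises:
y_path, c_path = dynamic_simulation(y0=100.0, c0=80.0, g_path=[40.0] * 38)
# With a lagged-income coefficient of 0.6 (< 1), the deviation from the
# steady state y* = (20 + 40) / (1 - 0.6) = 150 decays geometrically
# instead of "blowing up."
```

With a coefficient above one, the same loop diverges within a few quarters, which is the behavior the Fed teams were checking for.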

How the equations were handled also reflected different tradeoffs between analytical consistency and forecasting performance. True, Board members needed some knowledge of how monetary policy affects prices, employment and growth, in particular of its scope, channels and lags. But they were not concerned with theoretical debates. They would indifferently consult with Modigliani, Duesenberry, Friedman or Meltzer. Fed economists avoided the terms “Keynesian” and “monetarist.” At best, they joked about “radio debates” (FM-AM stood for Friedman/Meiselman-Ando/Modigliani). More fundamentally, they were clearly willing to trade theoretical consistency for improved forecasting ability. In March 1968, for instance, de Leeuw wrote that dynamic simulations were improved if current income was dropped from the consumption equation:

We change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption. This reduction of error occurs regardless of which total consumption equation we use. These two kinds of changes taken together probably mean that when we revise the model the multipliers will build up more gradually than in our previous policy simulations, and also that the government expenditure multiplier will exceed the tax multiplier. You win!

 But Modigliani was not happy to sacrifice theoretical sanity in order to gain predictive power. “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded.

The Fed team was also more comfortable than Modigliani and Ando with fudging, that is, adding an ad hoc quantity to the intercept of an equation to improve its forecasts. As explained by Arnold Kling, this was made necessary by the structural shift associated with mounting inflationary pressures of all kinds, including the oil crisis. After 1971, macroeconometric models were systematically under-predicting inflation. Ray Fair later noted that analyses of the Wharton and OBE models showed that ex-ante forecasts from model builders (with fudge factors) were more accurate than the ex-post forecasts of the models (with actual data). “The use of actual rather than guessed values of the exogenous variables decreased the accuracy of the forecasts,” he concluded. According to Kling, the hundreds of fudge factors added to large-scale models were precisely what clients were paying for when buying forecasts from Wharton, DRI or Chase. They were “providing us with the judgment of Eckstein, Evans and Adams […] and these judgments are more important to most of their customers than are the models themselves,” he pondered.
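Mechanically, fudging is straightforward: a constant adjustment (an “add factor”) is tacked onto an equation’s intercept, typically set from the equation’s recent residuals or the forecaster’s judgment, so that the equation tracks the latest data. A minimal sketch, with a hypothetical equation and invented numbers:

```python
# Add-factor ("fudge factor") adjustment: shift an equation's intercept
# by its recent average residual so that forecasts track the latest
# data. Equation, coefficients and observations are hypothetical.

def fitted(x, intercept=2.0, slope=0.5):
    """The equation as originally estimated."""
    return intercept + slope * x

# Recent observations the estimated equation keeps under-predicting:
recent = [(10.0, 7.8), (12.0, 8.9), (14.0, 9.7)]   # (x, actual y)

residuals = [y - fitted(x) for x, y in recent]
add_factor = sum(residuals) / len(residuals)       # judgmental constant adjustment

def fudged_forecast(x):
    """Ex-ante forecast: estimated equation plus the add factor."""
    return fitted(x) + add_factor
```

Here the average recent residual is 0.8, so every forecast is shifted up by that amount; in a large model, hundreds of such judgmental constants sat on top of the estimated structure, which is what Fair’s ex-ante/ex-post comparison was measuring.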


Material from Modigliani’s MPS folders, Rubinstein Library, Duke University

Diverging goals therefore nurtured conflicting model adjustments. Modigliani and Ando primarily wanted to settle an analytical controversy, while the Fed used the MPS as a forecasting tool. How much the MPS was intended as a policy aid is more uncertain. By the time the model was in full operation, Arthur Burns had replaced Martin as chairman. Though a highly skilled economist – he had coauthored Wesley Mitchell’s business cycle studies – his diaries suggest that his decisions were largely driven by political pressures. Kling notes that “the MPS model plays no role in forecasting at the Fed.” The forecasts were included in the Greenbook, the memorandum used by the chair for FOMC meetings. “The staff is not free to come up with whatever forecast it thinks is most probable. Instead, the Greenbook must support the policy direction favored by the Chairman,” writes Kling. Other top Fed officials were openly dismissive of the whole macroeconometric endeavor. Lyle Gramley, for instance, wouldn’t trust the scenarios derived from simulations. Later dubbed the “inflation tamer,” he had a simple policy agenda: bring inflation down. As a consequence of these divergences, two models were, in fact, curated side by side throughout the decade: an academic one (A) and a Fed one (B). With time, they exhibited growing differences in steady states and transition properties. During the final years of the project some unification was undertaken, but several MPS models kept circulating throughout the 1970s and 1980s.

Against the linear thesis

Archival records finally suggest that there is no such thing as a linear downstream relationship from theory to empirical work. Throughout the making of the MPS, empirical analysis and computational constraints seem to have spurred innovations in macroeconomic and econometric theory. One example is the new work carried out by Modigliani, Ando, Rasche, Cooper, Gramlich and Shiller, in the face of poor predictions, on the effects of expectations of price increases on investment, on credit constraints in the housing sector, and on saving flows. Economists were also found longing for econometric tests enabling the selection of one model specification over others. The MPS model was constantly compared with those developed by the Brookings, Wharton, OBE, BEA, DRI or St. Louis teams. Public comparisons were carried out through conferences and volumes sponsored by the NBER. But in 1967, St. Louis monetarists also privately challenged the MPS Keynesians to a duel. In those years, you had to specify what counted as a fatal blow, choose the location and the weapon, but also its operating mechanism. In a letter to Modigliani, Meltzer clarified their respective hypotheses on the relationship between short-term interest rates and the stock of interest-bearing government debt held by the public. He then proceeded to define precisely what data they would use to test these hypotheses, but he also negotiated the test design itself. “Following is a description of some tests that are acceptable to us. If these tests are acceptable to you, we ask only (1) that you let us know […] (2) agree that you will [send] us copies of all of the results obtained in carrying out these tests, and (3) allow us to participate in decisions about appropriate decisions of variable.”


Ando politely asked for compiled series, negotiated the definition of some variables, and agreed to three tests. This unsatisfactory armory led Ando and Modigliani to nudge econometricians: “we must develop a more systematic procedure for choosing among the alternative specifications of the model than the ones that we have at our disposal. Arnold Zellner of the University of Chicago has been working on this problem with us, and Phoebus Dhrymes and I have just obtained a National Science Foundation grant to work on this problem,” Modigliani reported in 1968 (I don’t understand why Zellner specifically).


Punchcard instructions (MPS folders)

More generally, it is unclear how the technical architecture, including computational capabilities, simulation procedures and FORTRAN coding, shaped the models, their results and their performance. 1960s reports are filled with computer breakdowns and coding nightmares: “the reason for the long delay is […] that the University of Pennsylvania computer facilities have completely broken down since the middle of October during the process of conversion to a 360 system, and until four days ago, we had to commute to Brookings in Washington to get any work done,” Ando lamented in 1967. Remaining artifacts such as FORTRAN logs, punchcard instructions, endless washed-out output reels and hand-made figures speak to the tediousness of the simulation process. All this must have been especially excruciating for those model builders who purported to settle the score with a monetarist who wielded parsimonious models with a handful of equations and loosely defined exogeneity.



Output reel (small fraction, MPS folders)

As is well known, these computational constraints stimulated scientists’ creativity (Gauss-Seidel solution algorithms implemented through the SIM package, the Erdman residual check procedure, etc.). Did they foster other creative practices, other types of conversations? Has the standardization of model evaluation brought by the enlargement of the test toolbox and the development of econometric software packages improved macroeconomic debates since Ando, Modigliani, Brunner and Meltzer’s times? As Roger Backhouse and I have documented elsewhere, historians are only beginning to scratch the surface of how the computer changed economics. While month-long tedious simulations now virtually take two clicks to run, data import included, this has neither helped the spread of simulations, nor prevented the marginalization of Keynesian macroeconometrics, the current crisis of DSGE modeling, or the rise of quasi-experimental techniques that economize on computing power.
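For readers unfamiliar with it, Gauss-Seidel solves a block of simultaneous equations by cycling through them, re-solving each endogenous variable in turn from the latest values of the others, until successive sweeps agree within a tolerance; this is how SIM-style packages solved a model’s equations for each quarter. A minimal sketch on a hypothetical three-equation block (invented coefficients, not the MPS equations):

```python
# Gauss-Seidel iteration on a small simultaneous block: each endogenous
# variable is re-solved in turn, immediately using the freshly updated
# values of the others, until successive sweeps agree to a tolerance.
# The three equations are illustrative, not the MPS block.

def gauss_seidel(g=50.0, tol=1e-10, max_sweeps=500):
    c = i = y = 0.0                    # initial guesses for the endogenous variables
    for sweep in range(max_sweeps):
        c_new = 10.0 + 0.6 * y         # consumption equation
        i_new = 5.0 + 0.1 * y          # investment equation
        y_new = c_new + i_new + g      # income identity, using updated c and i
        if max(abs(c_new - c), abs(i_new - i), abs(y_new - y)) < tol:
            return y_new, sweep        # converged
        c, i, y = c_new, i_new, y_new
    raise RuntimeError("no convergence within max_sweeps")

y_star, sweeps = gauss_seidel()        # y_star approaches 65 / 0.3
```

Convergence here is geometric because the system’s feedback coefficient (0.7) is below one; with stronger feedbacks the sweeps can diverge, which is one reason simulations of the full model so often “blew up.”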


MPS programme (MPS folder)

Overall, my tentative picture of the MPS model is not one of a large-scale consistent Keynesian model. Rather, it is one of multiple compromises and of back and forth between theory, empirical work and computation. The MPS is not even a model, but a collection of equations whose scope and contours could be adapted to the purpose at hand.

Note: this post is a mess. It is a set of research notes drawing on anarchic and fragmentary archives, for my coauthor to work on. Our narrative might change as additional data are gathered. Some questions might be irrelevant. The econometrics narrative is probably off base. But the point is to elicit corrections, comments, suggestions and recollections from those who participated in the making of the MPS or any contemporary large-scale macroeconometric model in the 1960s and 1970s.
