Big data in social sciences: a promise betrayed?

In just five years, the mood at conferences on social science and big data has shifted, at least in France. Back in the early 2010s, these venues were buzzing with exchanges about the characteristics of the “revolution” (the 4Vs), with participants marveling at the research insights afforded by the use of tweets, website ratings, Facebook likes, eBay prices or online medical records. It was a time when, in spite of warnings about the challenges and perils ahead, grant applications, graduate courses and publications were suddenly invaded by new tools to extract, analyze and visualize data. These discussions are neither over nor even mature yet, but their tone has changed. The enthusiasm, with its tinge of arrogance, has given way to a cautious reflexivity wrapped up in a general zeitgeist of uncertainty and angst, even anger. Or so was the feeling I took away from the ScienceXXL conference I attended last week. Organized by demographer Arnaud Bringé and sociologists Anne Lambert and Etienne Ollion at the French National Institute for Demographic Studies, it was conceived as an interdisciplinary practitioners’ forum. Debates on sources, access, tools and uses were channeled through a series of feedback sessions offered by computer scientists, software engineers, demographers, statisticians, sociologists, economists, political scientists and historians. And this, in fact, made especially salient the underlying need to yoke new practices to an epistemological re-evaluation of the nature and uses of data, of the purpose of social science, and of the relationships between researchers and government, independent agencies, business and citizens.

Lucidity: big data is neither easier nor faster nor cheaper

Interdisciplinarity

The most promising trend I saw during the workshop is a better integration of users, disciplines and workflows. “Building a database doesn’t create its own uses” was much reiterated, but responses were offered. One is the interdisciplinary construction of a datascape, that is, a tool that integrates the data corpus and the visualization instrument. Paul Girard introduced RICardo, which allows the exploration of 19th and 20th century trade data. Eglantine Schmitt likewise explained that developing a text-mining software package required “choosing an epistemological heritage” on how words are defined and how interpretative work is performed, and to “tool it up” for current and future uses, subject to technical constraints. What surprised me, I must confess, was the willingness of research engineers, data scientists and computer scientists to incorporate the epistemological foundations of the social sciences into their work and to collect lessons learned from centuries of qualitative research. Several solutions to further improve collaboration between social and computer scientists were discussed. The hackathon/sprint model prevents teams from dividing up tasks, and forces the kind of interaction that yields an understanding of one another’s ways of thinking and practices. The downside is that it promotes “fast science,” while data need time to be understood and digested. Data dumps and associated contests on websites such as Kaggle, by contrast, allow longer-term projects.

Perceived future challenges were a better integration of 1) qualitative and quantitative methods (cases of fruitful cross-fertilization mentioned were the Venice Time Machine project and Moretti’s Distant Reading; evaluations of culturomics were more mixed) and 2) old and new research (to determine whether the behavioral patterns observed are genuinely new phenomena produced by social networks and digitalized markets, or are consistent with behaviors already identified with older techniques). Also pointed out was the need to identify and study social phenomena that are impossible to capture through quantification and datafication. This suggests that a paradoxical consequence of the massive and constant data dump enabled by the real-time recording of online behavior could be a rise in the prestige of the most qualitative branches of analysis, such as ethnography.

Methodenstreit

Unsurprisingly, debates on quantitative tools, in particular the benefits and limits of traditional regression methods vs machine learning, quickly escalated. Conference exchanges echoed larger debates on the black-box character of algorithms, the lack of any guarantee that their results are optimal, and the difficulty of interpreting those results, three shortcomings that some researchers believe make machine learning incompatible with the DNA of social science. Etienne Ollion and Julien Boelaert pictured random forests as epistemologically consistent with the great sociological tradition of “quantitative depiction” pioneered by Durkheim or Park and Burgess. They explained that ML techniques allow more iterative, exploratory approaches and make it possible to map heterogeneous variable effects across the data space. Arthur Charpentier rejected attempts to conceal the automated character of ML: these techniques are essentially built to outsource the task of getting a good fit to machines, he insisted. My impression was that there is a sense in which ML is to statistics what robotization is to society: a job threat demanding a compelling reexamination of what is left for human statisticians to do, of what is impossible to automate.
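To make the contrast concrete, here is a minimal sketch on synthetic data with scikit-learn (my illustration, not material from the conference) of the exploratory workflow Ollion and Boelaert describe: a regression returns one averaged coefficient, while a random forest’s partial dependence can reveal an effect that varies across the data space.

```python
# Minimal sketch (synthetic data, not from the conference): OLS vs random forest
# when the effect of a variable differs across subpopulations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 70, n)
income = rng.normal(30, 10, n)
# The "effect" of income is heterogeneous: positive for the young, negative for the old.
y = 0.5 * income * (age < 40) - 0.2 * income * (age >= 40) + rng.normal(0, 1, n)
X = np.column_stack([age, income])

ols = LinearRegression().fit(X, y)
print("OLS income coefficient (a single averaged effect):", round(ols.coef_[1], 3))

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
# Partial dependence maps the response the forest has learned; computing it separately
# on the young and old subsamples exposes the heterogeneity the OLS coefficient hides.
for label, mask in [("age < 40", age < 40), ("age >= 40", age >= 40)]:
    pd_income = partial_dependence(rf, X[mask], features=[1])
    curve = pd_income["average"][0]
    print(f"{label}: partial dependence ranges from {curve.min():.1f} to {curve.max():.1f}")
```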

Tool debates fed into soul-searching on the nature and goals of the social sciences. The focus was on prediction vs explanation. How well can we hope to predict with ML, some asked? Prediction is not the purpose of social science, others retorted, echoing Jake Hofman, Amit Sharma and Duncan Watts’s remark that “social scientists have generally deemphasized the importance of prediction relative to explanation, which is often understood to mean the identification of interpretable causal mechanisms.” These were odd statements for a historian of economics working on macroeconometrics. The 1960s/1970s debates around the making and uses of Keynesian macroeconometric models I have excavated highlight the tensions between alternative purposes: academics primarily wanted to understand the relationships between growth, inflation and unemployment, and to make conditional predictions of the impact of shifts in taxes, expenditures or the money supply on GDP. Beyond policy evaluation, central bankers also wanted their models to forecast well. Most macroeconometricians also commercialized their models, and what sold best were predictive scenarios. My conclusion is that prediction has been as important, if not more so, than explanation in economics (and I don’t even discuss how Friedman’s predictive criterion got under economists’ skin in the postwar period). If, as Hofman, Sharma and Watts argue, “the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction,” then the post-2008 crash crisis in economics should serve as a warning against such crystal-ball hubris.

Access (denied)

Uncertainty, angst and a hefty dose of frustration dominated discussions on access to data. Participants documented access denials from a growing number of commercial websites after they had used data-scraping bots, Twitter APIs becoming increasingly restrictive, administrations and firms routinely refusing to share their data and, absent adequate storage and retrieval routines, data-mining and computational expertise, and a stable and intelligible legal framework, even destroying large batches of archives. The existing infrastructure designed to give researchers access to public and administrative data is sometimes ridiculously inadequate. In some cases, researchers cannot access data firsthand and have to send their algorithms to intermediary operators who run them, meaning no research topic or hypothesis can emerge from observing and playing with the data. Accessing microdata through the Secure Data Access Center means you might have to take pictures of your screen, as regression output, tables and figures are not always exportable. Researchers also feel their research designs are not understood by policy- and law-makers. On the one hand, data sets need to be anonymized to preserve citizens’ privacy; on the other, only identified data allow dynamic analyses of social behaviors. Finally, as Danah Boyd and Kate Crawford predicted in 2011, access inequalities are growing, with the prospect of greater concentration of money, prestige, power and visibility in the hands of a few elite research centers. Not so much because access to data is being monetized (at least so far), but because privileged access to data increasingly depends on networks and reputation, and creates a Matthew effect.

Referring to Boyd and Crawford, one participant sadly concluded that he felt the promises of big data that had drawn him to the field were being betrayed.

Harnessing the promises of big data: from history to current debates

The social scientists in the room shared a growing awareness that working with big data is neither easier nor faster nor cheaper. What they were looking for, it appeared, was not merely feedback, but frameworks to harness the promises of big data and guidelines for public advocacy. Yet crafting such guidelines requires some understanding of the historical, epistemological and political dimensions of big data. This involves reflecting on changing (or enduring) definitions of “big” and of “data” across time and across interest groups, including scientists, citizens, businesses and governments.

When data gets big

“Bigness” is usually defined by historians not in terms of terabytes, but as a “gap” between the amount and diversity of data produced and the intellectual and technical infrastructure available to process them. Data gets big when it becomes impossible to analyze, creating an information overload. And this has happened several times in history: the advent of the printing press, the growth in population, the industrial revolution, the accumulation of knowledge, the quantification that came along with scientists’ participation in World War II. A gap appeared when the 1890 census data could not be tabulated within ten years, and the gap was subsequently reduced by the development of punch-card tabulating machines. By the 1940s, libraries were doubling in size every 16 years, so that classification systems needed to be rethought. In 1964, the New Statesman declared the age of the “information explosion.” Though its history is not yet settled, the term “big data” appeared in NASA documents at the end of the 1990s, and was then picked up by statistician Francis Diebold in the early 2000s. Are we in the middle of the next gap? Or have we entered an era in which technology permanently lags behind the amount of information produced?

Because they make “bigness” historically contingent, histories of big data tend to de-emphasize the distinctiveness of the new data-driven science and to temper claims that some epistemological shift in how scientific knowledge is produced is under way. But they illuminate characteristics of past information overloads, which help make sense of contemporary challenges. Some participants, for instance, underlined the need to localize the gap (down to the size and capacities of their PCs and servers) so as to understand how to reduce it, and who should pay for it. This way of thinking is reminiscent of the material cultures of big data studied by historians of science. They show that bigness is a notion primarily shaped by technology and materiality, whether paper, punch cards, microfilms, or the hardware, software and infrastructures into which scientific theories were built after the war. But there is more to big data than technology. Scientists have also actively sought to build large-scale databases, and a “rhetoric of big” has sometimes been engineered by scientists, governments and firms alike for prestige, power and control. Historians’ narratives also elucidate how closely the material and technological cultures shaping big data are intertwined with politics. For instance, the reason why Austria-Hungary adopted punch-card machinery to handle censuses earlier than Prussia did, Christine von Oertzen explains, was labor politics (Prussia rejected mechanized work so as to provide disabled veterans with jobs).

Defining data through ownership

The notion of “data” is no less social and political than that of “big.” In spite of the term’s etymology (data means “given”), the data social scientists covet, and access to them, are largely determined by questions of use and ownership. Disagreement over who owns what, and for what purpose, is what generates instability in epistemological, ethical and legal frameworks, and what creates this ubiquitous angst. For firms, data is a strategic asset and/or a commodity protected by property rights. For them, data are not to be accessed or circulated, but to be commodified, contractualized and traded in monetized or non-monetized ways (and, some would argue, stolen). For citizens and the French independent regulatory body in charge of defending their interests, the CNIL, data is viewed through the prism of privacy: access to citizens’ data is something to be safeguarded, secured and restricted. For researchers, finally, data is a research input on the basis of which they seek to establish causality, make predictions and produce knowledge. And because they usually see their agenda as pure and scientific knowledge as a public good, they often think the data they need should also be considered a public good, free and open to them.

In France, recent attempts to accommodate these contradictory views have created a mess. Legislators have striven to strengthen citizens’ privacy and their right to be forgotten against Digital Predators Inc. But Article 19 of the resulting Digital Republic Bill, passed in 2016, states that, under specific conditions, the government can order private businesses to transfer survey data for public statistics and research purposes. The specifics will be determined by “application decrees,” which have not yet been written and are of paramount importance to researchers. At the same time, French legislators have also increased the government’s power to snoop on (and control) the private lives of its citizens in the wake of terror attacks, and rights over business, administrative and private data are also regulated by a wide array of health, insurance and environmental bills, case law, trade agreements and international treaties.

As a consequence, firms are caught between contradictory requirements: preserving data to honor long-term contracts vs deleting data to guarantee their clients’ “right to be forgotten.” Public organizations navigate between the need to protect citizens, their exceptional rights to require data from citizens, and incentives to misuse them (for surveillance and policing purposes). And researchers are sandwiched between their desire to produce knowledge, describe social behaviors and test new hypotheses, and their duty to respect firms’ property rights and citizens’ privacy rights. The latter requirement raises fundamental ethical questions, also debated during the ScienceXXL conference. One is how to define consent, given that digital awareness is not distributed equally across society. Some participants argued that consent should be explicit (for instance, to scrape data from Facebook or dating websites). Others asked why digital scraping should be regulated when ethnographic field observation isn’t, the two being equivalent research designs. Here too, these debates would gain from a historical perspective, one offered in histories of consent in medical ethics (see Joanna Radin and Cathy Gere on the use of indigenous health and genetic data).

All in all, scientific, commercial and political definitions of what counts as “big” and as “data” are interrelated. As Bruno Strasser illustrates with the example of crystallography, “labeling something ‘data’ produces a number of obligations” and prompts a shift from privacy to publicity. Conversely, Elena Aronova’s research highlights that postwar attempts to gather geophysical data from oceanography, seismology, solar activity or nuclear radiation were shaped by the context of research militarization. Such data were considered a “currency” to be accumulated in large volumes, and their circulation was characterized more by Cold War secrecy than by international openness. The uncertain French technico-legal framework can also be compared with that of Denmark, whose government has lawfully architected big data monitoring without citizens’ opposition: each citizen has a unique ID carried through medical, police, financial and even phone records, an “epidemiologist’s dream” come true.

Social scientists in search of a common epistemology

If they want to harness the promises of big data, then, social scientists cannot avoid entering the political arena. A prerequisite, however, is to forge a common understanding of what data are and what they are for. And conference exchanges suggest we are not there yet. At the end of the day, what participants agreed on is that the main characteristic of these new data is not just size, but the fact that they are produced for purposes other than research. But that is about all they agreed on. For some, it means that data like RSS feeds, tweets, Facebook likes or Amazon prices are not as clean as data produced through sampling or experiments, and that more effort and creativity should be put into cleaning datasets. For others, cleaning is distorting: gaps and inconsistencies (like multiple birth dates or odd occupations in demographic databases) provide useful information on the phenomena under study.

That scraped data are not representative also commanded wide agreement, but while some saw this as a limitation, others considered it an opportunity to develop alternative quality criteria. Nor are data taken from websites objective. The audience was again divided on what conclusion to draw. Are these data “biased”? Does their subjective character make them more interesting? Rebecca Lemov’s history of how mid-twentieth-century American psycho-anthropologists tried to set up a “database of dreams” reminds us that capturing and cataloguing the subjective part of human experience is a persistent scientific dream. In an ironic twist, the historians and statisticians in the room ultimately agreed that what a machine cannot (yet) be taught is how the data are made, and that this matters more than how the data are analyzed. The way to harness the promise of big data, in the end, is to consider data not as a research input, but as the center of scientific investigation.

Relevant links on big data and social science (in progress)

2010ish “promises and challenges of big data” articles: [Bollier], [Manovich], [Boyd and Crawford]

“Who coined the term big data” (NYT), a short history of big data, a timeline

Science special issue on Prediction (2017)

Max Planck Institute project on historicizing data and 2013 conference report

Elena Aronova on historicizing big data ([VIDEO], [BLOG POST], [PAPER])

2014 STS conference on collecting, organizing, trading big data, with podcast

Some links on big data and the social sciences (in French)

“Au delà des Big Data,” by Etienne Ollion & Julien Boelaert

À quoi rêvent les algorithmes, by Dominique Cardon

Special issue of the journal Statistiques et Sociétés (2014)

Forthcoming special issue of the journal Economie et Statistiques

On the RICardo datascape, by Paul Girard


The ordinary business of macroeconometric modeling: working on the MIT-Fed-Penn model (1964-1974)

Against monetarism?

In the early days of 1964, George Leland Bach, former dean of the Carnegie Business School and consultant to the Federal Reserve, arranged a meeting between the Board of Governors and seven economists, including Stanford’s Ed Shaw, Yale’s James Tobin, Harvard’s James Duesenberry and MIT’s Franco Modigliani. The hope was to tighten relationships between the Fed’s economic staff and “academic monetary economists.” The Board’s concerns were indicated by a list of questions sent to the panel: “When should credit restraint begin in an upswing?” “What role should regulation of the maximum permissible rate on time deposits play in monetary policy?” “What weight should be given to changes in the ‘quality’ of credit in the formation of monetary policy?”

Fed chairman William McChesney Martin’s tenure had opened with the negotiation of the 1951 Accord, which restored the Fed’s independence, an independence he had since constantly sought to assert and strengthen. In the preceding years, however, the constant pressure CEA chairman Walter Heller exerted to keep short-term rates low (so as not to offset the expansionary effects of his proposed tax cut) had forced Martin into playing defense. The Board was now in a weird position. On the one hand, after the emphasis had been on fiscal stimulus, inflationary pressures were building up and the voices of those economists pushing for active monetary stabilization were increasingly heard. Economists like Franco Modigliani, trained in the Marschakian tradition, were hardly satisfied with existing macroeconometric models of the Brookings kind, with their overwhelming emphasis on budget channels and their atrophied money and finance blocks.

On the other hand, Milton Friedman, who was invited to talk to the Board a few weeks after the panel, was pushing a monetarist agenda which promised to kill the Fed’s hard-fought autonomy in steering the economy. The money supply only affected output and employment in a transitory way, he explained, and it was a messy process because of lags in reacting to shifts in interest rates. Resurrecting the prewar quantity theory of money, Friedman insisted that the money supply affected output through financial and non-financial asset prices. He and David Meiselman had just published an article in which they argued that the correlation between money and consumption was higher and more stable than that between consumption and autonomous expenditures. MIT’s Robert Solow and John Kareken had questioned Friedman and Meiselman’s interpretation of lags and their empirical treatment of causality, and their colleagues Modigliani and Albert Ando were working on their own critique of the Friedman-Meiselman consumption equation. This uncertain situation was summarized in the first sentences of Duesenberry’s comments to the 1964 panel:

Decision making in the monetary field is always difficult. There are conflicts over the objectives of monetary policy and over the nature of monetary influences on income, employment, prices and the balance of payments. The size and speed of impact of the effects of central bank actions are also matters of dispute. The Board’s consultants try to approach their task in a scientific spirit but we cannot claim to speak with the authority derived from a wealth of solid experimental evidence. We must in presenting our views emphasize what we don’t know as well as what we do know. That may be disappointing but as Mark Twain said: “it ain’t things we don’t know that hurt, it’s the things we know that ain’t so.”


Winning the theory war implied researching the channels whereby monetary policy influenced real aggregates, but winning the policy war implied putting these ideas to work. The Fed Board therefore asked Modigliani and Ando to fashion yet another macroeconomic model, one that would offer a better integration of the monetary and financial spheres with the real one. For the Keynesian pair, the model was explicitly intended as a workhorse against Friedman’s monetarism. Funded by the Fed through the Social Science Research Council, the model came to be called the MPS, for MIT-Penn (where Ando had moved in 1967)-SSRC. Intended as a large-scale quarterly model, its 1974 operational version exhibited around 60 simultaneous behavioral equations (against several hundred for some versions of the Wharton and Brookings models), and up to 130 in 1995, when it was eventually replaced. Like companion Keynesian models, its supply equations were based on a Solovian growth model, which determined the characteristics of the steady state, and on a more refined set of demand equations, with six major blocks: final demand, income distribution, taxes and transfers, labor market, price determination, and a huge financial sector (with consumption and investment equations). Nonconventional monetary transmission mechanisms (that is, other than the cost-of-capital channel) were emphasized.
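As a reminder of what anchoring the supply side to a Solovian growth model entails, here is the textbook version (my notation, not the MPS equations themselves): capital per effective worker $k$ evolves as

$$\dot{k}_t = s\, f(k_t) - (n + g + \delta)\, k_t,$$

so the steady state solves $s f(k^*) = (n + g + \delta) k^*$; with Cobb-Douglas technology $f(k) = k^{\alpha}$ this gives $k^* = \left(\tfrac{s}{n+g+\delta}\right)^{1/(1-\alpha)}$, pinning down the long-run path around which the demand blocks were meant to fluctuate.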


Model comparison, NBER 1976

To work these equations out, Modigliani and Ando tapped the MIT pool of graduate students. Larry Meyer, for instance, was in charge of the housing sector (that is, modeling how equity and housing values are impacted by monetary policy), Dwight Jaffee worked on the impact of credit rationing on housing, Georges de Menil handled the wage equation with a focus on the impact of unions on wages, Charles Bischoff provided a putty-clay model of plant and equipment investment, and Gordon Sparks wrote the demand equation for mortgages. Senior economists were key contributors too: Ando concentrated on fiscal multiplier estimates, while Modigliani researched how money influenced wages and how to model expectations so as to generate a consistent theory of interest rate determination, with students Richard Sutch and then Robert Shiller. Growing inflation and the oil shock later forced them to rethink the determination of prices and wages and the role inflation played in transmission mechanisms, and to add a Phillips curve to the model. The Fed also asked several recruits, including Enid Miller, Helen Popkin, Alfred Tella and Peter Tinsley, to work on the banking and financial sector and on transmission mechanisms, in particular portfolio adjustments. The latter were led by Frank De Leeuw, a Harvard PhD who had written the Brookings model’s monetary sector, and Edward Gramlich, who had just graduated from Yale under Tobin and Art Okun. Responsibilities for data compilation, coding and running simulations were also split between academics and the Fed, with Penn assistant professor Robert Rasche playing a key role.


The final model was much influenced by Modigliani’s theoretical framework. The project generated streams of papers investigating various transmission mechanisms, including the effect of interest rates on housing and plant investment and on durable goods consumption, credit rationing, the impact of expectations of future changes in asset prices on the term structure and on the structure of banks’ and households’ portfolios, and Tobin’s q. The MPS model did not yield the expected results. Predictive performance was disappointing, estimated money multipliers were small, lags were long, and though its architects were not satisfied with the kind of adaptive expectations embedded in the behavioral equations, they lacked the technical apparatus to incorporate rational expectations. In short, the model didn’t really back aggressive stabilization policies.

Modigliani’s theoretical imprint on the MPS model, and his use of its empirical results in policy controversies, are currently being investigated by historian of macroeconomics Antonella Rancan. My own interest lies not with the aristocratic theoretical endeavors and big onstage debates, but with the messy daily business of crafting, estimating and maintaining the model.

From theoretical integrity to messy practices

A first question is how such a decentralized process led to a consistent result. I don’t have an exhaustive picture of the MPS project yet, but it seems that graduate students picked a topic, then worked in relative isolation for months, gathering their own data and surveying the literature on the behavior of banks, firms, unions, consumers or investors before sending back a block of equations. Because these blocks each had a different structure, characteristics and properties, disparate methods were summoned to estimate them: sometimes TSLS, sometimes LIML or IV. Finally, because the quality of the forecasts was bad, a new batch of senior researchers reworked the housing, consumption, financial and investment blocks in 1969-1973. How is this supposed to yield a closed hundred-equation model?
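For readers unfamiliar with the acronyms, here is a minimal sketch of two-stage least squares (TSLS) on synthetic data (my illustration, not an actual MPS block): one behavioral equation is estimated at a time, with an instrument standing in for an endogenous right-hand-side variable.

```python
# Minimal sketch (synthetic data, not an MPS equation): two-stage least squares by hand.
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=n)                       # instrument (e.g. an exogenous policy variable)
u = rng.normal(size=n)                       # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # endogenous regressor, correlated with u
y = 1.0 + 2.0 * x + u                        # structural equation, true coefficient = 2

def ols(X, target):
    """Least-squares coefficients of target on the columns of X."""
    return np.linalg.lstsq(X, target, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

b_ols = ols(X, y)                                       # biased, since x is correlated with u
x_hat = Z @ ols(Z, x)                                   # first stage: fitted values of x
b_tsls = ols(np.column_stack([np.ones(n), x_hat]), y)   # second stage: regress y on fitted x
print("OLS slope:", round(b_ols[1], 2), "| TSLS slope:", round(b_tsls[1], 2))
```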

Bringing consistency to hundreds of equations with disparate underlying theories, data and estimation methods was a recurring concern for postwar macroeconometric modelers. At Brookings, the problem was to aggregate tens of subsectors. “When the original large scale system was first planned and constructed, there was no assurance that the separate parts would fit together in a consistent whole,” a 1969 Brookings report reads. Consistency was brought by a coordinating team and through the development of common standards, Michael McCarthy explains: large database capabilities with easy access and efficient update procedures, common packages (AUTO-ECON), efficient procedures for checking the accuracy of the code (the residual check procedure), and common simulation methods. But concerns with unification only appeared post-1969 in the Modigliani-Ando-Fed correspondence. Modigliani was traveling a lot, involved in the development of an Italian macromodel, and did not seem to care very much about the nooks and crannies of data collection and empirical research. Was a kind of consistency achieved through the common breeding of model builders, then? Did Modigliani’s monetary and macro courses at MIT create a common theoretical framework, so that he did not have to provide specific guidelines as to which behavioral equations were acceptable and which were not? Or were MIT macroeconomists’ practices shaped by Ed Kuh and Richard Schmalensee’s empirical macro course, and by the TROLL software?

IBM 360

To mess things up further, Fed and academic researchers had different objectives, which translated into diverging, sometimes antagonistic practices. In his autobiography, Modigliani claimed that “the Fed wanted the model to be developed outside, the academic community to be aware of this decision, and the result not to reflect its idea of how to operate.” Archival records show otherwise. Not only were Fed economists very much involved in model construction and simulations, data collection and software management, but they further reshaped equations to fit their agenda. Intriligator, Bodkin and Hsiao list three objectives macroeconometric modeling tries to achieve: structural analysis, forecasting and policy evaluation, that is, a descriptive, a predictive and a prescriptive purpose. Any macroeconometric model thus embodies tradeoffs between these uses. This is seen in the many kinds of simulations Fed economists were running, each answering a different question. “Diagnostic simulations” were aimed at understanding the characteristics of the model: whole blocks were taken as exogenous, so as to pin down causes and effects in the rest of the system. “Dynamic simulations” required feeding forecasts from the previous period back into the model for up to 38 quarters and checking whether the model blew up (it often did) or remained stable and yielded credible estimates for GDP or unemployment. “Stochastic simulations” were carried out by specifying initial conditions, then making out-of-sample forecasts. Policy simulations relied on shocking an exogenous variable after the model had been calibrated.
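A minimal sketch of what a dynamic simulation involves, on a toy two-equation model of my own (not the MPS equations): the model’s own lagged forecasts are fed back in, period after period, and one checks whether the path stays on track or explodes.

```python
# Minimal sketch (toy model, not the MPS): a dynamic simulation feeds simulated lags,
# not observed data, back into the model and checks the stability of the resulting path.
import numpy as np

def dynamic_simulation(quarters=38, phi=0.6, beta=0.2, y0=100.0):
    """Iterate a toy consumption/income block forward using its own lagged forecasts."""
    y = [y0]
    for _ in range(quarters):
        c_next = 10.0 + phi * y[-1]            # consumption from *simulated* lagged income
        y_next = c_next + beta * y[-1] + 1.0   # income identity with a crude accelerator term
        y.append(y_next)
    return np.array(y)

for beta in (0.2, 0.5):  # a mild and an aggressive feedback coefficient
    path = dynamic_simulation(beta=beta)
    verdict = "blows up" if abs(path[-1]) > 10 * path[0] else "remains stable"
    print(f"beta={beta}: the path {verdict} after 38 quarters (final value {path[-1]:.1f})")
```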

How the equations were handled also reflected different tradeoffs between analytical consistency and forecasting performance. True, Board members needed some knowledge of how monetary policy affects prices, employment and growth, in particular its scope, channels and lags. But they were not concerned with theoretical debates. They would indifferently consult with Modigliani, Duesenberry, Friedman or Meltzer. Fed economists avoided the terms “Keynesian” and “monetarist.” At best, they joked about “radio debates” (FM-AM stood for Friedman/Meiselman vs Ando/Modigliani). More fundamentally, they were clearly willing to trade theoretical consistency for improved forecasting ability. In March 1968, for instance, De Leeuw wrote that dynamic simulations were improved if current income was dropped from the consumption equation:

We change the total consumption equation by reducing the current income weight and increasing the lagged income weight […] We get a slight further reduction of simulation error if we change the consumption allocation equations so as to reduce the importance of current income and increase the importance of total consumption. This reduction of error occurs regardless of which total consumption equation we use. These two kinds of changes taken together probably mean that when we revise the model the multipliers will build up more gradually than in our previous policy simulations, and also that the government expenditure multiplier will exceed the tax multiplier. You win!

 But Modigliani was not happy to sacrifice theoretical sanity in order to gain predictive power. “I am surprised to find that in these equations you have dropped completely current income. Originally this variable had been introduced to account for investment of transient income in durables. This still seems a reasonable hypothesis,” he responded.
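In stylized form (my notation, not the actual MPS specification), the disagreement was over the weights in a consumption function such as

$$C_t = \alpha + \beta_0 Y_t + \beta_1 Y_{t-1} + \varepsilon_t :$$

De Leeuw’s revision shrank $\beta_0$ and raised $\beta_1$, which makes the multipliers build up more gradually in dynamic simulations, while Modigliani defended a sizable $\beta_0$ on the theoretical ground that transient income is partly spent on durables.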

The Fed team was also more comfortable than Modigliani and Ando with fudging, that is, adding an ad hoc quantity to the intercept of an equation to improve forecasts. As explained by Arnold Kling, this was made necessary by the structural shifts associated with mounting inflationary pressures of all kinds, including the oil crisis. After 1971, macroeconometric models were systematically under-predicting inflation. Ray Fair later noted that analyses of the Wharton and OBE models showed that the ex-ante forecasts of model builders (with fudge factors) were more accurate than the ex-post forecasts of the models (with actual data). “The use of actual rather than guessed values of the exogenous variables decreased the accuracy of the forecasts,” he concluded. According to Kling, the hundreds of fudge factors added to large-scale models were precisely what clients were paying for when buying forecasts from Wharton, DRI or Chase. The models were “providing us with the judgment of Eckstein, Evans and Adams […] and these judgments are more important to most of their customers than are the models themselves,” he ponders.
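In the same stylized notation as above, fudging amounts to carrying a judgmental “add factor” $a_{T+1}$, chosen by the forecaster rather than estimated, into the forecast:

$$\hat{C}_{T+1} = \left(\hat{\alpha} + a_{T+1}\right) + \hat{\beta}_0 \hat{Y}_{T+1} + \hat{\beta}_1 Y_T.$$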


Material from Modigliani’s MPS folders, Rubinstein Library, Duke University

Diverging goals therefore nurtured conflicting model adjustments. Modigliani and Ando primarily wanted to settle an analytical controversy, while the Fed used the MPS as a forecasting tool. How much the MPS was intended as a policy aid is more uncertain. By the time the model was in full operation, Arthur Burns had replaced Martin as chairman. Though a highly skilled economist – he had coauthored Wesley Mitchell’s business cycle study – his diaries suggest that his decisions were largely driven by political pressures. Kling notes that “the MPS model plays no role in forecasting at the Fed.” The forecasts were included in the Greenbook, the memorandum used by the chair for FOMC meetings. “The staff is not free to come up with whatever forecast it thinks is most probable. Instead, the Greenbook must support the policy direction favored by the Chairman,” writes Kling. Other top Fed officials were openly dismissive of the whole macroeconometric endeavor. Lyle Gramley, for instance, wouldn’t trust the scenarios derived from simulations. Later dubbed the “inflation tamer,” he had a simple policy agenda: bring inflation down. As a consequence of these divergences, two models were in fact curated side by side throughout the decade: an academic one (A) and a Fed one (B). With time, they exhibited growing differences in steady states and transition properties. During the final years of the project, some unification was undertaken, but several MPS models kept circulating throughout the 1970s and 1980s.

Against the linear thesis

Archival records finally suggest that there is no such thing as a linear downstream relationship from theory to empirical work. Throughout the making of the MPS, empirical analysis and computational constraints seem to have spurred innovations in macroeconomic and econometric theory. One example is the new work carried out by Modigliani, Ando, Rasche, Cooper, Gramlich and Shiller on the effects of expectations of price increases on investment, on credit constraints in the housing sector and on saving flows in the face of poor predictions. Economists were also found longing for econometric tests enabling the selection of one model specification over others. The MPS model was constantly compared with those developed by the Brookings, Wharton, OBE, BEA, DRI or St Louis teams. Public comparisons were carried out through conferences and volumes sponsored by the NBER. But in 1967, St Louis monetarists also privately challenged the MPS Keynesians to a duel. In those years, you had to specify what counted as a fatal blow, and to choose the location, the weapon, but also its operating mechanism. In a letter to Modigliani, Meltzer clarified their respective hypotheses on the relationship between short-term interest rates and the stock of interest-bearing government debt held by the public. He then proceeded to define precisely what data they would use to test these hypotheses, but he also negotiated the test design itself: “Following is a description of some tests that are acceptable to us. If these tests are acceptable to you, we ask only (1) that you let us know […] (2) agree that you will [send] us copies of all of the results obtained in carrying out these tests, and (3) allow us to participate in decisions about appropriate decisions of variable.”


Ando politely asked for compiled series, negotiated the definition of some variables, and agreed to three tests. This unsatisfactory armory led Ando and Modigliani to nudge econometricians: “we must develop a more systematic procedure for choosing among the alternative specifications of the model than the ones that we have at our disposal. Arnold Zellner of the University of Chicago has been working on this problem with us, and Phoebus Dhrymes and I have just obtained a National Science Foundation grant to work on this problem,” Modigliani reported in 1968 (I don’t understand why Zellner specifically).


Punchcard instructions (MPS folders)

More generally, it is unclear how the technical architecture, including computational capabilities, simulation procedures and FORTRAN coding, shaped the models, their results and their performance. 1960s reports are filled with computer breakdowns and coding nightmares: “the reason for the long delay is […] that the University of Pennsylvania computer facilities have completely broken down since the middle of October during the process of conversion to a 360 system, and until four days ago, we had to commute to Brookings in Washington to get any work done,” Ando lamented in 1967. Remaining artifacts such as FORTRAN logs, punch-card instructions and endless washed-out output reels or hand-made figures speak to the tediousness of the simulation process. All this must have been especially excruciating for model builders who purported to settle the score with a monetarist who wielded parsimonious models with a handful of equations and loosely defined exogeneity.



Output reel (small fraction, MPS folders)

As is well known, these computational constraints stimulated scientists’ creativity (Gauss-Seidel solution routines implemented through the SIM package, the Erdman residual check procedure, etc.). Did they foster other creative practices, other types of conversation? Has the standardization of model evaluation brought about by the enlargement of the test toolbox and the development of econometric software packages improved macroeconomic debates since the times of Ando, Modigliani, Brunner and Meltzer? As Roger Backhouse and I have documented elsewhere, historians are only beginning to scratch the surface of how the computer changed economics. While tedious month-long simulations now virtually take two clicks to run, data import included, this has neither helped the spread of simulations nor prevented the marginalization of Keynesian macroeconometrics, the current crisis of DSGE modeling, and the rise of quasi-experimental techniques that economize on computation.
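For readers who have never met it, here is the Gauss-Seidel idea in generic form (a sketch of the algorithm, not the SIM package itself): each equation is solved in turn for “its” variable, reusing the freshest values from the current sweep, until the changes between sweeps fall below a tolerance.

```python
# Minimal sketch (generic algorithm, not the SIM package): Gauss-Seidel iteration on a
# small linear system A x = b, the kind of routine used to solve each period of a
# simultaneous-equation model.
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=500):
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use values already updated in this sweep: that is the Gauss-Seidel trick.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:  # residual check on the change between sweeps
            return x
    return x

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
print(gauss_seidel(A, b))        # converges because A is diagonally dominant
print(np.linalg.solve(A, b))     # cross-check against a direct solve
```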


MPS programme (MPS folder)

Overall, my tentative picture of the MPS is not that of a large-scale, consistent Keynesian model. Rather, it is one of multiple compromises and back-and-forth between theory, empirical work and computation. It is not even a single model, but a collection of equations whose scope and contours can be adapted to the purpose at hand.

Note: this post is a mess. It is a set of research notes drawing on anarchic and fragmentary archives, for my coauthor to freak out over and work on. Our narrative might change as additional data are gathered. Some questions might be irrelevant. The econometrics narrative is probably off base. But the point is to elicit corrections, comments, suggestions and recollections from those who participated in the making of the MPS or of any other large-scale macroeconometric model in the 1960s and 1970s.


How much do current debates owe to conflicting definitions of economics?

It is not clear to me how the literature on the current state of economics got out of control. The genre is as old as the discipline itself and has grown cyclically with crises. But the last crisis broke out as the economic blogosphere was taking shape, and this time the swelling tide of crisis-in-economics articles hasn’t (yet) been curbed by a new General Theory or the outbreak of a world war. Which is why I welcomed Jo Michell’s recent idea of building a typology of defenses of economics (and maybe of attacks on it) with the gratitude of a kitesurfer being handed her first F-One Trax Carbon board. What I want to suggest in this post is that sorting out current debates also requires a better understanding of which definitions of economics critics and champions of mainstream economics hold (I define the mainstream as what is published in the top-5 journals).

Changing definitions of economics

This idea is, as usual, nurtured by the history of the discipline. That accepted definitions of economics have undergone many changes is well documented (see this survey by Roger Backhouse and Steve Medema; edit: they track changes in the definition of economics through textbooks). Economics had initially been conceived as the science of wealth, production and exchange. Marshall famously defined it as the “study of mankind in the ordinary business of life […] on the one side a study of wealth; and on the other, and more important side, a part of the study of man.” The quote shows that by the late 19th century an individualistic element had appeared, foreshadowing the sea change Lionel Robbins brought. His famous definition of economics as “the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses” not only wrote ethics out of the discipline, but also shifted its focus toward scarcity and resource allocation. In those transitional decades, Frank Knight thought economists should focus on “social organization,” Jim Buchanan on exchange, and Ronald Coase on institutions. In a key twist, George Stigler and Paul Samuelson both wedded scarcity to maximization, and Robbins’ definition gradually fed into a third one: economics as the science of rational decision-making. This expanded the boundaries of the discipline: every decision, from marriage to education, prostitution and health care, could be considered a legitimate object for economists. Some, like Gary Becker and Gordon Tullock, called this expansion economic imperialism. Backhouse and Medema’s account ends in the late 1970s, but another shift has arguably taken place in the last decades: the replacement of a subject-based definition with a tool-based one. The hallmark of the economist’s approach would not be its subject matter – any human phenomenon is eligible – but its use of a set of tools designed to confront theories with quantitative data through models. See for instance this recent post by Miles Kimball: “economics needs to tackle all the big questions in the social sciences,” his title reads, adding that “what is needed [for economists to influence policy] is a full-fledged research program that does the hard work of modeling and quantifying.”

From definitions to topics, methods and interdisciplinary practices: 1952

Most interesting for my purpose is how these successive, sometimes competing definitions of economics have informed what economists think are the proper subject matter, methods and boundaries of their science. And these views are nowhere more clearly articulated than when economists discuss their relationships to other sciences. Take 1952, which I believe was the most important year in the history of economics. It was a time of transition when the above definitions were found clashing, as seen in the conference on “Economics and the Behavioral Sciences” held in New York under the auspices of the Ford Foundation. The minutes of the conference read:

Marschak: two fold definition of economics; (1) optimization (rational behaviour); (2) dealing with material goods rather than with other fields of decisions.

Boulding: Yes. The two don’t have anything to do with each other. It is the material goods that characterize economics. To state that the recent increase in American money wages was due to the increase of quantity of circulation has nothing to do with rational behaviour.

Marschak: it has. This was stated already by David Hume who speculated on what will people do if everyone finds overnight that his cash balance has doubled.

Herbert Simon was in the room, and his notes are slightly different from the typed minutes. He wrote “allocation of scarce resources / best decisions / the handling of material goods,” and seemingly referred to the first two as “rational.”


Simon’s notes (source)

Kenneth Boulding, Simon and Jacob Marschak had all been much involved in interdisciplinary ventures since the war. Later, they would each spend a year at the Center for Advanced Study in the Behavioral Sciences, established by the Ford Foundation in the wake of the aforementioned conference. Yet they held diverging views of what economics is about. This led them to articulate different pictures of the relationship between economists and other scientists, a topic back on the scene today.


Boulding: a gifted theorist – he received the second John Bates Clark medal in 1949, a year before publishing A Reconstruction of Economics, aimed at merging Keynesian analysis with a balance-sheet theory of the firm – as well as a Quaker and a pacifist, Boulding spent a lifetime reflecting on how to avoid wars, including scientific ones. His study of growth led him to sociology and political science, before he established an “integration seminar” at Michigan. “Boulding’s advocacy of integration by symbiosis between the various social sciences rather than by edification of a superdiscipline encompassing other could be taken as offering a different model for international relations: an integrated world, like an integrated social science, required collaboration, not subjugation,” Philippe Fontaine explains. The result of this integration of social science and pacifist Quaker faith was a triangle, aimed at representing society in terms of three main organizers (‘love,’ ‘fear,’ ‘exchange’), published months after his participation in the New York conference. He later proposed a general theory of behavior based on the concept of the message-image relationship.

Simon is largely absent from the giants’ shoulders that 2017 critics and champions of economics like to summon, and yet he strikes me as just the kind of character whose vision should be discussed right now. By his own assessment, he developed an early (that is, undergraduate) interest in decision-making in organizations, and borrowed from whichever science could help him understand it: political science, management, economics, organizational sociology, social and cognitive psychology and computer science. Simon saw no disciplinary boundaries: “there are a lot of decisions in economics, there are a lot of decisions in political science, and there are a lot of decisions in psychology. In sum, there are a lot of decisions in doing science. It is all the same subject. My version of the science of everything,” he told Mie Augier at the end of his life. The science of everything he had tried to teach and institutionalize at Carnegie’s GSIA since the 1950s was, unlike Boulding’s, unified by mathematization and quantification (must-reads on Simon include Crowther-Heyck, Augier, and Sent on his economics; edit: I forgot Simon’s autobiography).

Though Simon and Marschak often found themselves on the same side of postwar debates about the future of economics – for instance in their focus on decision-making and their insistence that mathematization is a prerequisite for interdisciplinary work – their scientific visions were nonetheless different. A Menshevik activist turned Marxist turned mathematical economist, Marschak headed the Cowles Commission during the 1940s. At the time of the conference, he was only beginning to contemplate interdisciplinary projects as a way to enhance his models of decision under uncertainty. If the economist can fruitfully collaborate with other scientists, Marschak wrote to the Ford Foundation’s Thomas Carroll months before the 1952 conference, it is because he brings a distinctive perspective to the table. Marschak’s nascent interdisciplinary bent was predicated upon a strong disciplinary identity. His letter is worth quoting at length:

Economics is normally defined formally as dealing with the best allocation of limited resources; or (somewhat more generally) as concerned with the choosing of the best among limited set of opportunities. The word “best” refers to consistent preferences of an individual, or of a group; a business corporation, a nation. In the latter case there arise important problems of semi-ethical nature (welfare economics) […]

Many an economist (sic) tries to study comprehensively all human actions that pertain to material goods, including such things as the administration of price control, the psychology of stock speculation, the political feasibility of a monetary reform, the process of collective bargaining. Such an economist, if endowed with good common sense, can say as much any journalist so endowed. But he could say much more as a result of joint work with a psychologist, sociologist, or political scientist. In such a cooperation, the distinctive contribution of the economist consists in asking: how would the buyers, sellers, bankers, stockholders, workers, farmers behave if they consistently made choices that are the best from their own point of view? And what are the policy measures that are best from the nation’s point of view?

The economic principle of consistent choices (also known as “rational behaviour principle”) admittedly does not have power to describe all behaviour, not even in the field of commodity choices. For example, a truly rational model of a stock speculator would have to be someone like a mathematical statistician continually practicing a form of so-called sequential analysis: he will make each subsequent decision dependent, in a predetermined optimal fashion, upon the ever growing sequence of observations. While it may be advisable for a speculator to behave according to this model, no speculator does! The rational model, useful for advice, is, in this case, of little use for prediction of actual behaviour. The economist working on the theory of speculation will have to do, or learn, some social psychology, from books or from colleagues. Yet, even in this case, the economist will be able to make a contribution stating from his economic principles. The cross-disciplinary group studying speculators will be looking for a realistic compromise between the picture of mathematical statisticians engaging in speculation and the picture of stampeding buffalos. While the economist will contribute his particular method of looking for rationality, his psychological colleagues will enlighten him on how to design a series of experiments reproducing the essential elements of the investment situation. Because, as stated in your memorandum, collaboration should be seen, not as a mutual borrowing of propositions but as the interchange of methods!

[…] The economists’ peculiar concern with optimal behaviour, while astonishing and irritating to non-economists, is actually a distinct and useful contribution of economic thinking

From definitions to topics, methods and interdisciplinary practices: 2017

To me, Miles Kimball’s post has clear Marschakian overtones: first, define what your identity as an economist is; then go and engage other social sciences on any question you wish. Kimball writes:

That doesn’t mean the economists should ignore the work done by other social scientists, but neither should they be overly deferential to that work. Social scientists outside of economics have turned over many stones and know many things. Economists need to absorb the key bits of that knowledge. And the best scholars in any social science field are smarter than mediocre economists. But in many cases, economists who are not dogmatic can learn about social science questions outside their normal purview and see theoretical and statistical angles to studying those questions that others have not. 

By contrast, Unlearning Economics’s latest critical essay seems informed by the notion that economics is about explaining how the economy works. Reclaiming the economy is also how I interpret CORE’s recent proposal to reshape introductory economics courses: the textbook opens with histories of capitalism, wealth, growth and technology, and scarcity and choice only show up in the third chapter. Other blueprints for a “radical remaking” have a Simonish flavour, perhaps not surprisingly given that their advocates are often biologists or physicists by training. And I read many French critics as plotting to subdue economics to sociology, whether Bourdieusian, Foucauldian, Latourian or Mertonian (kidding, all Mertonians but one are dead).

I’m thus left wondering to what extent current debates about the state of economics are nurtured by conflicting definitions of economics. Here’s my speculation: those economists who believe economics is in good shape usually endorse the rational-decision definition. Yet in the past decades, they have shifted toward a toolbox vision of their practices. They thus view interdisciplinarity as tool exchange. Meanwhile, critics are pushing back toward a definition of economics that was in wide currency in the early twentieth century, one concerned with understanding the economy as a system of production and distribution, rooted in capitalist accumulation, technological change, and so on. They believe economists should borrow from other scientists whatever models, concepts and theories will improve their understanding of how the economy works. Those who believe the economy cannot be isolated from the social system in which it is embedded additionally plead for a deeper integration of the social (and sometimes natural) sciences. And that is why critics and champions often talk past each other.

Putting this hypothesis to the test requires a typology of competing definitions of economics. But more systematically spelling out what your definition of “economics” is before lamenting or celebrating its current state will certainly raise the quality of current debates. Now I’m off to read Economism, The Econocracy and Economics Rules, with the hope of riding the wave.


How not to screw up your economic expertise: lessons from the Kennedy tax cut grandmaster, Walter Heller


What is the “crisis in economic expertise” about?

Trump’s decision to demote whoever might be nominated chairman of the Council of Economic Advisers (CEA) from his cabinet has been interpreted as the final blow in a tough year – one in which economists’ advice was systematically ignored by voters – within a tough post-financial-crisis decade. Economists are under the impression that since 2008 their expertise has been increasingly challenged, and they have offered several analyses and remedies: more micro, more data, more attention to distribution and less to efficiency, more humility, more awareness of the moral and political element in economic expertise, more diversity and more interdisciplinarity – economic education included. Few of these, however, rely on the whopping literature on the history and sociology of scientific expertise.

A systematic review would take a book, so let’s jump to how that literature helps elucidate what the problem with economic expertise is. Essentially, scientists produce knowledge about the world which they believe is, if not true, at least robust, reliable and objective, because it is produced systematically, validated according to vetted methodologies, and often quantified. Putting this knowledge to work is what expertise is about. It is thus intimately tied to 1) application, and 2) building relationships with non-scientific communities, including catching attention, building trust and establishing markers of expertise (such as a PhD). Audiences and clients for economists’ expertise are many: public bodies, in particular policy-makers (from presidents to independent regulatory agencies to statistical bureaus and the Fed; I tentatively put courts in that category); but also private ones (businesses, banks, IT firms, insurance companies), the media, and “the public,” citizens who make economic decisions and, most important, vote.

Over the past decade, the “crisis in expertise” has been largely about losing voters’ trust. Yet there is no evidence that economists have ever been able to influence the public at large, nor that they have put much effort into trying to do so (Friedman and Galbraith might be exceptions here). Anxieties raised by Trump’s dismissal of economic expertise are more about losing policy-makers’ attention, though the current US president might be an exception in that his views echo the public at large rather than the standard policy-maker. There is no sign that economic expertise has lost currency with journalists, even less with the private sector: economists get better wages than any other discipline (but law?), receive insane amounts of money for private consulting, and IT firms have gone a long way toward luring economists out of academia. One important qualification here is that economists don’t merely produce knowledge; they also produce tools of analysis, technologies. One possibility, thus, is that the type of economic expertise in demand has shifted: substantial policy or management advice is losing ground, while technical skills to implement tools and provide data analysis are on the rise.

Histories of how economists painfully gained reputation and trust during the XXth century abound. Most of them focus on public policy: data on private businesses are more difficult to obtain, and tracking economists’ influence on the public is elusive. And none of them fail to mention the canonical proof that economists’ expertise is, or has been, influential: it was Walter Heller, 4th CEA chairman, who convinced J.F. Kennedy and L.B. Johnson to propose a massive income and business tax cut, passed by Congress in 1964.


Walter Heller, expert grandmaster

The facts are well known: Eisenhower’s legacy was a sluggish decade, with growth stuck at 2.5% per year and unemployment at 8%. Recurring budget deficits, which topped $12 billion in 1959, prevented much-needed defense, education and welfare expenditures. Kennedy’s campaign was consequently focused on the promise of restoring growth, of “get[ting] this country moving again.” The candidate had nevertheless straightforwardly rejected the fiscal stimulus proposed by those economists, including Paul Samuelson, who had participated in his Democratic Advisory Committee. Kennedy came to the Oval Office with the notion, inherited from his father, that the budget should be balanced and the money supply tightly controlled. Under the influence of his CEA chairman, Walter Heller, Kennedy became more favorable to sustaining a budget deficit, and by early 1963 he had submitted to Congress the largest peacetime voluntary budget deficit: $12 billion. He proposed to reduce income tax rates from the 20–91% range to 14–65% and the corporate income tax rate from 52% to 47%, and to abolish loopholes and preferential deductions so as to enlarge the tax base. He promised that, should Congress pass his tax cuts, the 1965 budget would be balanced. The proposal was finally enacted in 1964, under Johnson. 1965 saw the smallest federal deficit of the decade ($1 billion), strong growth, and unemployment down to 4%. The trend persisted throughout the decade, with inflationary pressures slowly building in response to Johnson’s spending frenzy.

Though the contribution of the tax cut to this period of prosperity, and to subsequent imbalances, is still fiercely debated, its positive spillovers on the whole profession command wide agreement. Heller’s CEA helped shift economists’ image from ivory-tower technicians to useful experts and strengthened public trust. It has been heralded as the canonical example of economists’ ability to increase society’s welfare, a symbol of a (some would say lost) golden age. The scope of Heller’s influence, in fact, extended way beyond the tax cut. He was instrumental in putting poverty on the presidential agenda, and, as recently unearthed by Laura Holden and Jeff Biddle, he was the one who turned human capital theory into an argument in favor of federal funding for education. His peculiar status as the “economic experts’ expert” was immediately recognized. He made Time’s cover twice in two years. No other CEA chair made the cover of the magazine before late 1976, and none ever made it twice as CEA chair. But if the fallouts of his expertise are well known, its determinants are less so. The nagging question remains: how did he do it?

 

Beyond technocratic advice 

Let’s begin with the non-replicable aspect: Kennedy’s knack for economics, his willingness to discuss policy as well as theoretical aspects, his eagerness to read and digest memos and newspaper articles, his systematic mind. The legend says that when James Tobin told him that he might not be the best pick as CEA member because he was a “sort of ivory-tower economist,” Kennedy replied “that’s the best kind. I’m a sort of ivory-tower president.” Granted, this is not likely to happen anytime soon, but the president is not the only policy-maker economists want to influence, right? And the fact that Kennedy was drawn to economics didn’t make Heller’s job easier. Not only was the president surrounded by advisors with conflicting economic policy views (see below), but it wasn’t clear, back then, that the role of the CEA as defined in the 1946 Employment Act was to promote specific policies. First CEA chairman Edwin Nourse and Eisenhower’s chairman Arthur Burns conceived their role as being mere advisors to the president, providing technical reports and private forecasts and refraining from making public statements or testifying before Congress. The only exception was Truman’s second chairman, Leon Keyserling, whose more activist stance created a stir. It was nevertheless the one most congenial to Walter Heller’s vision of the role of economists within society.

The son of a civil engineer committed to public service, Heller was, by his own admission, one of those children of the Great Depression who turned to economics because “explaining why [the economy was flat on its back] and trying to do something about it, seemed a high calling.” Economists from the University of Wisconsin, where Heller got his PhD, boasted a strong record of successfully influencing Wisconsin’s policy-making, not least his PhD advisor, the fiscalist Harold Groves. Heller’s wartime contribution to fashioning tax increases at the Treasury, his participation in the Marshall Plan and his lobbying for federally funded education on behalf of the National Economic Education in the late 1950s strengthened his identity as a “policy-oriented economist,” a “do-something-about-it economist.” By the time he was nominated CEA chair, he was ready not only to provide forecasts and technical advice, but also to advocate for those policies he believed were supported by good science, to convince the president, to testify before Congress, and to engage the media and the public.

He was also willing to give much latitude to his two fellow council members to do the same. As extensively documented by Michael Bernstein, macroeconomist James Tobin and budget specialist Kermit Gordon fully shared Heller’s conception of the role of an economic expert. Tobin later explained that “economics has always been a policy-oriented subject” and that applications of theories to “the urgent … issues of the days” were essential. In a 1961 Time article, the three frontiersmen thus described themselves as “pragmatists.” Promoting the tax cut really was a team effort: all three council members had extensive discussions with Kennedy on policy as well as on the common theoretical foundations they had borrowed from the New Economics articulated by Paul Samuelson at MIT.


Heller, Tobin and Gordon with Kennedy, 1962 (source)


“The President’s economic education” and the art of memos

Heller genuinely believed that his policy advocacy was rooted in solid science, and that his task essentially consisted in educating the president. Though it was only later that he came to call himself an “educator of presidents,” the education trope was already pervasive in his favorite educative tool: the memo. His team literally flooded the president with more than 1,000 memos over the Kennedy/Johnson presidency. Heller’s were of a special kind: short, devoid of technical jargon but not of figures, with a clear and apparent structure, and with the main arguments systematically underlined. They usually began with a quantified depiction of the economic situation, followed by a brief policy proposal and an extensive response to possible counterarguments.

These were so convincing that, Heller remembered, Johnson once held up one of his memos at a Cabinet meeting and said: “Here’s one of Walter Heller’s memos. See how it’s set up? That’s the way I want you all to write your memos.” Below is what I believe was one of the memos that convinced Kennedy to endorse the 1963 Economic Report and the Special Message to the Congress on Tax Reform that Heller had helped draft.

 

[Memo scan: JFK Presidential Office Files, jfkpof-063a-008-p0024]

What Heller did in those memos was:

1) arguing that the tax cut was a means consistent with Kennedy’s overarching policy ends, that is, national defense and growth (it was an argumentative strategy he had already successfully wielded on education funding, Holden and Biddle show). The above December 1962 memo began with “top of economic agenda – must match our progress in foreign policy and defense with a restoration of full vigor of our domestic economy.” This strategy is echoed in the introductory sentences of Kennedy’s Special message to the Congress:

“the most urgent task facing our Nation at home today is to end the tragic waste of unemployment and unused resources –to step up the growth and vigor of our national economy- to increase job and investment opportunities- to improve our productivity – and thereby to strengthen our nation’s ability to meet its worldwide commitments for the defense and growth of freedom.” 

2) Having argued that his proposed economic policy was in line with the President’s broader aims, Heller proceeded to frame complex policy choices in simple economic terms: it was all about bridging “the gap.” Already in memos issued in early 1961, Heller hammered that the key question was “how do we close the gap between existing and potential levels of employment, production and income.” He used the term so much that after a 1961 hearing Joe Pechman told him, “gee, you ought to stop talking so much about the gap because it just isn’t doing any good.”

3) Though Heller refrained from using technical terms in his memos, he did not shy away from quantification. Early on, he equated “bridging the gap” with a more specific target, a 4% unemployment rate. It was, Heller hypothesized from his knowledge of the previous decade, the rate that allowed the highest non-inflationary growth. At the end of 1961, he sensed that he needed a better picture of how increasing capacity utilization could help reach this target. He therefore asked CEA staffer Arthur Okun to quantify the “output gap.” Okun’s resulting working paper, famously remembered for introducing “Okun’s law,” testifies to the influence of policy concerns on economic research. In line with Heller’s objectives, Okun set “full employment without inflationary pressure” at 4% unemployment, without further justification. Another target would change the figures, not the calculus, he warned. In the introduction, he further explained that “if programs to lower unemployment from 5 ½ to 4 percent of the labor force are viewed as attempts to raise the economy’s ‘grade’ from 94 ½ to 96 [use of production capacity], the case for them may not seem compelling. Focus on the ‘gap’ helps to remind policy-makers of the large reward associated with such an improvement.” Using three different techniques to estimate the relationship between unemployment and real GNP, he unequivocally concluded that each extra percentage point in the unemployment rate above four percent has been associated with about a three percent decrement in real GNP. It was not the only case where Heller’s quest for sound theoretical and empirical bases for the policies he was advocating stimulated new research. At about the same time, he asked Burton Weisbrod, senior staff economist at the CEA, to expand his quantitative analysis of the external benefits of education, Holden and Biddle relate.
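Spelled out, the rule of thumb quoted above amounts to a gap relationship of roughly the following form (a minimal sketch in modern notation, not Okun’s exact specification, which varied across his three estimation techniques):

$$ Y^{*} \;\approx\; Y\left[\,1 + 0.03\,(u - 4)\,\right] $$

where $Y$ is actual real GNP, $Y^{*}$ potential output at 4 percent unemployment, and $u$ the unemployment rate in percent. An unemployment rate of 5½ percent thus implies an output shortfall of roughly 0.03 × 1.5 ≈ 4.5 percent of actual GNP, the kind of “large reward” Okun wanted policy-makers to keep in sight.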

4) Heller’s final step was to de-dramatize the consequence of a tax cut, namely budget deficits. He did so by showing that countries exhibiting more rapid growth than the US, such as France, Italy or Germany, were not shy of running deficits to support aggregate demand. He also followed a gradual approach, first convincing Kennedy not to raise taxes to fund the additional $1 billion in military expenses needed to respond to the building of the Berlin Wall in the summer of 1961. He also set out to counter the “fiscal irresponsibility” argument, occasionally going downright political: “under present programs and outlook, a deficit in fiscal ’63 is already in the cards,” he wrote in December 1962. “Once fiscal virginity is lost, the size of the deficit matters very little to the critics of ‘fiscal irresponsibility.’ The Eisenhower $12 billion deficit should restrain the stone-throwing of Republican critics. Our deficit would be less, and it would come at the right time.”

Educating (or neutralizing) the whole decision chain

Persuading the executive branch

Educating the president was only part of Heller’s job. The whole decision chain had to be persuaded. Just as important, then, was educating skeptical presidential advisors, and neutralizing those who wouldn’t surrender. In those years, macroeconomic expertise within the executive branch was scattered across the CEA, Douglas Dillon and Robert Roosa’s Treasury, David Bell’s Bureau of the Budget and the Federal Reserve Board, whose chair, William McChesney Martin, served between 1951 and 1970. Their task was to provide forecasts, advice and coordination, and to prepare the budget. Beyond routine disagreement on forecasts, these economists held divergent visions of the major economic threat Kennedy had to deal with. Dillon, Roosa and Martin were worried about the growing imbalance in foreign payments and the associated risk of a gold drain, and Martin also closely monitored the deterioration of the value of the dollar. They also believed that the high level of unemployment was the consequence of the “changing structure of the labor force” rather than of slack demand.

To dismiss the “official Republican diagnosis (or excuse)” that growing unemployment was due to the changing structure of the labor force, Heller claimed that science was on his side. An early 1961 memo accordingly contrasted “the ‘correct’ analysis […] would be that most of our unemployment would respond to over-all measures designed to stimulate demand and investment […] would call for substantial additional spending, tax cuts and deficits” with “the ‘incorrect’ policy position that most of the unemployment and under-capacity operation are the result of structural factors.” Heller also emphasized the non-partisan character of his policies by providing long lists of the individuals and organizations across the political spectrum he had managed to convince that a tax cut was the best policy. A December 1962 briefing book listed the Committee for Economic Development, the AFL-CIO, New York Governor Nelson Rockefeller, the National Association of Business Economists, and, ironically, most of Eisenhower’s CEA members. The CEA also regularly provided memos debunking newspaper articles dealing with excess-demand inflation and the risk of government-spending-induced inflation.

Heller took care to copy those memos to Kennedy’s closest policy aides. Ted Sorensen, Myer Feldman and Richard Goodwin, who had fiercely opposed budget deficits during the campaign, came to agree with the CEA, as did Treasury and Bureau of the Budget officials. They had been enrolled, with Fed chairman Martin, in the monthly meetings of what was dubbed “the quadriad,” whose agenda and exchanges were always set (and closely monitored) by the CEA. Through his memos, Heller even managed to defeat an alternative proposal to replace the $10 billion tax cut with a $9 billion expenditure increase. No small feat: the idea was carried by Kenneth Galbraith, who had known Kennedy since his Harvard student days and was much closer to him than Heller, Tobin or Gordon ever were. Yet Heller tersely added a “why cut taxes rather than go the Galbraith way?” to his next memo. The answer was as short as it was efficient: “how could we spend an extra $9 billion in a year or two? Attempts to enlarge spending at the rate required to do the economic job would lead to waste, bottlenecks, profiteering and scandal.” Moreover, extra spending would make the government vulnerable to suspicions of “over-centralization, power grab of the cities, the educational system.” A tax-cut-induced deficit was more acceptable to the world financial community, he added, “ie, far less likely to touch off new gold outflows.”

Neutralizing the Fed

Neither was Heller shy about testifying before Congress’s Joint Economic Committee in an effort to win its support for the forthcoming bill. In the end, the only enduring resistance came from Fed chairman Martin. The longstanding fight for influence between Martin and Heller wasn’t restricted to the tax cut issue. Martin wasn’t trained as an economist, and was therefore impervious to Heller’s arguments. He had also taken early steps to assert the Fed’s independence from the executive branch, and constantly acted to reassert it: when Kennedy was elected, he did not offer his resignation, as was the practice in those years. To counter the deteriorating balance of payments, stabilize the value of the dollar and contain the inflationary pressures he believed would derive from a tax cut, Martin intended to raise interest rates. In the early months of the presidency, he made it clear that he didn’t see fit to offset the upward pressures on interest rates associated with the fledgling recovery.

Heller’s counter-attack was multifaceted. Longer and more technical memos to the president were left to Tobin, whose command of monetary policy was unequaled. In his own memos, the chair took a broader view, emphasizing that the success of the tax cut required the implementation of an appropriate policy “mix.” He was walking a tightrope: “monetary policy should be used, as needed, for balance-of-payments or price stability reasons,” he conceded, “but don’t offset the expansionary effect of tax cuts,” he immediately underlined. He argued that monetary policy should be discussed within quadriad meetings for the sake of “economic policy coordination,” and suggested filling the boards of directors of the 12 Federal Reserve Banks with New Frontiersmen like Tobin or Solow. He repeatedly tried to convince Martin that, while short-term interest rates should be raised as needed to avoid a gold drain, the Fed should buy long-term bonds so as to keep long-term interest rates low (“buying long”). This would stimulate investment and risk-taking, he argued. Heller also brought their disagreement to the media, an unusual practice in those years: in the 1961 Time article, he declared: “high interest rates and budget surpluses are incompatible: an Administration has to choose one or the other. Since both tend to hold down demand, tight money and budget surplus acting together have a gravely depressing impact on the economy.”

Sensing that he would never convince Martin, Heller labored toward (1) finding alternatives to control inflationary pressures: to this end, he set up wage and price guideposts whereby wage increases should be guided by expected gains in productivity, and in the spring of 1962 he convinced Kennedy to oppose price increases in the steel industry; and (2) alleviating the balance-of-payments constraint. The gold drain had been accelerating since the beginning of 1962, with the consequence that Martin was taking measures to raise the short-term interest rate. Heller convinced Kennedy to make a public statement to restore faith in the dollar. “The United States will not devalue its dollar … I have confidence in it, and I think that if others examine the wealth of this country and its determination to bring its balance of payments into order, which it will do, I think that they will feel that the dollar is a good investment and as good as gold,” Kennedy declared during a transatlantic TV broadcast on July 23, 1962. Heller never succeeded in bringing Martin into line, and Fed rates doubled during Kennedy’s presidency. He nevertheless felt he had avoided more dramatic hikes on short-term and, more important for the policy mix, long-term rates.

Engaging the public

Heller’s final target was the public: voters, consumers, economic agents. In the early months of his tenure, he wrote in a memo to Kennedy that “a committee could contribute to public education on […] ‘modern’ solution such as deficit financing and expanded government programs, thus overcoming in part the results of eight years of miseducation and retrogression in economic thinking under the Eisenhower Administration.” Heller devoted considerable energy to giving talks to citizens, labor and professional organizations, and also seized every opportunity to preach the gospel through the media. He, Tobin and Samuelson (who had refused to chair the CEA but kept an eye on its progress) regularly published popularization articles in Business Week, Time, Life, and so forth. Heller made no mystery of his attempts to “re-educate” the public. In March 1961, he told Time journalists that:

“The strain of fiscal conservatism has become strong, perhaps because it has been so well nurtured during the last eight years. There is a deep strain of conservative bias built into the congressional system.” “The Eisenhower heritage persuades Washington’s new economists that they must re-educate the US to make the most of its economy.”

In a December 1962 memo, he explicitly outlined why educating the public was both crucial and difficult, in terms that resonate today:

“Problem of public attitude greater here, perhaps because of greater public participation in government decisions; Also, Americans are more prone to a tendency of ‘each man his own economist.’ In other countries, they’re more likely to ‘leave it to the experts.’ And who’s to say that our situation is worse, for a democracy?”

It was Kennedy himself who eventually suggested that “the Council do some serious thinking about how to use the White House as a pulpit for public education.” In his memos, Heller therefore looked for ways to overcome the “American people and the Congress’s strong aversion to budget deficit.” His solution was to “repeat [the] ‘deficit of inertia vs creative deficit for expansion’ argument,” and this was precisely how Kennedy’s January 1963 message to Congress was framed: “our choice today is not between a tax cut and a balanced budget. Our choice is between chronic deficits resulting from chronic slack, on the one hand, and transitional deficit temporarily enlarged by tax revision designed to promote full employment and thus make possible an ultimately balanced budget,” the president asserted.


Heller resigned in November 1964, in spite of Johnson’s request that he stay for another term. He was succeeded by Gardner Ackley, and remained a close advisor to the president. Ironically, he soon found himself on Martin’s side. As Johnson proceeded with his War on Poverty program, Heller sensed that the overheated economy had to be cooled by a tax increase. Absent such a measure in the 1965 budget, Martin was right, he felt, to warn that he would raise interest rates. This time, Heller failed to convince the president. CEA historian Michael Bernstein has argued that Heller’s CEA was the apex of economists’ public influence. Yet those sociologists who have spent decades haunted by the sources of economists’ unwarranted influence generally agree that it is still strong, only different. While economists’ direct influence on the content of policies has been limited, or at least uneven, they have contributed to the “economicization” of public policy. Economists’ influence, Elizabeth Berman and Dan Hirschman explain in a recent survey, lay in shaping the data that inform policy decisions (GDP, CPI indexes, the unemployment rate), the range of questions that could be asked (increasingly focused on efficiency), the institutions that asked those questions, and the socioeconomic tools used to implement and evaluate policies, from cost-benefit analysis to auctions and scoring techniques. I’m rather left wondering which of Heller’s lessons contemporary economists have learned, and which they should work on.


The problem with “economists-failed-to-predict-the-2008-crisis” macrodeath articles

This week has delivered one more interesting batch of economics soul-searching posts. On Monday, the Bloomberg View editorial board outlined its plans to make economics more of a science (by “tossing out” models that are “refuted by the observable world” and relying “on experiments, data and replication to test theories and understand how people and companies really behave.” You know, things economists have probably never tried…). John Lanchester then reflected on the recent macro smackdowns by the Bank of England’s Andy Haldane and the World Bank’s Paul Romer. And INET has launched a timely “Experts on Trial” series. In the first of these essays, Sheila Dow outlined how economists could forecast better (by emulating physics less and relying on a greater variety of approaches) and why economists should make peace with the inescapable moral dimension of their discipline. In the second piece, Alessandro Roncaglia argued that considering economists as princes or servants of power is authoritarian, and that giving them such an asymmetric role within society is dangerous.

Rich and thoughtful as this macrodeath literature is, it leaves me, again, frustrated. A common feature of virtually all articles dealing with the crisis in economics is that they are built around economists’ failure to predict the 2008 financial crisis. And yet, they hardly dig into the sources, meaning and consequences of this failure (note: in this post, I’ll consider that a forecast is a specific quantitative and probabilistic type of prediction, and I’ll use the two terms interchangeably. Shoot, philosopher). The failure to forecast is usually construed as a failure to model, leading to suggestions to improve modeling either by upgrading existing models with frictions, search and matching, financial markets, new transmission mechanisms, more variables, etc., or by going back to older models, or by changing paradigms altogether. Yet economists’ approach to forecasting relies on much more than modeling strategies, history whispers.

Agreeing to forecast, disagreeing on how and why

Macroeconomics was born out of finance fortune-tellers’ early efforts to predict changes in stock prices and economists’ efforts to explain and tame agricultural and business cycles. In 1932, Alfred Cowles expressed his frustration in a paper entitled “Can Stock Market Forecasters Forecast?” No, he concluded:

A review of the various statistical tests, applied to the records for this period, of these 24 forecasters, indicates that the most successful records are little, if any, better than what might be expected to result from pure chance. There is some evidence, on the other hand, to indicate that the least successful records are worse than what could reasonably be attributed to chance.

Two years after Ragnar Frisch, Charles Roos and Irving Fisher had laid the foundations of the Econometric Society, Cowles liaised with the three men and established the Cowles Commission in Colorado Springs. It is not clear to me how pervasive a goal forecasting was in the first decades of macroeconomics and econometrics, how much it drove theoretical thinking, or what role it played in the import of a probabilistic framework into economics. Historical works on Frisch and Haavelmo, for instance, suggest it is difficult to disentangle conditional forecasting from explaining and policy-making. Predicting was one of the five “mental activities” Frisch thought the economist should perform, alongside describing, understanding, deciding and (social) engineering (see Dupont and Bjerkholt’s paper). Forecasting wasn’t always associated with identifying causal relationships, as exemplified by the longstanding debate between chartists and fundamentalists in finance, but for early macroeconometricians the two went hand in hand. That explaining, forecasting and planning were inextricably interwoven in Lawrence Klein’s mind is well documented by Erich Pinzon Fuchs in his dissertation. He quotes Klein saying his

“main objective [was] to construct a model that [would] predict, in the [broader] sense of the term. At the national level, this means that practical policies aimed at controlling inflationary or reflationary gaps will be served. A good model should be one that [could] eventually enable us to forecast, within five percent error margins roughly eighty percent of the time, such things as national production, employment, the price level…”

The notion that economics is about predicting is, however, not usually associated primarily with Klein’s name, but with Milton Friedman’s. In his much-discussed 1953 methodological essay, Friedman proposed that the “task [of positive economics] is to provide a system of generalizations that can be used to make correct predictions about the consequences of any change in circumstances. Its performance is to be judged by the precision, scope and conformity with experience of the predictions it yields.” These predictions “need not be forecasts of future events,” he continued; “they may be about phenomena that have occurred but observations on which have not yet been made or are not known to the person making the prediction.” And this is what makes economics policy-relevant, he concluded: “any policy conclusion necessarily rests on a prediction.” Klein and Friedman’s shared statement that the purpose of economic modeling is to predict has come to be widely accepted, yet it is not clear how many competing views of what the purpose of economics should be were circulating in those years.

Most important, their longstanding dispute on statistical illusions reveals that they agreed neither on the purpose of forecasting, nor on the proper method, nor even on what a “good” forecast was. Klein believed macroeconometric models should be as exhaustive as possible, Pinzon Fuchs documents, that they should accurately depict reality. This belief was tied to his desire to conceive engines for social planning, models that could provide guidance as to which exogenous variables the government should alter to achieve full employment. In the NBER tradition, Friedman rather endorsed simpler models with few equations. He considered Klein’s complex machinery a failure and endorsed Carl Christ’s idea that these models should be tested through out-of-sample prediction. Erich argues that Friedman was merely trying to understand how the economic system works. I rather interpret his work as an attempt to identify stable behaviors and self-stabilizing mechanisms. As Friedman believed government intervention was inefficient, he did not need the endogeneity or exogeneity of his variables to be precisely specified, which infuriated his Keynesian opponents. “The Friedman and Meiselman game of testing a one-equation one-variable model … cannot be expected to throw any light on such basic issues as how our economic system works, or how it can be stabilized,” Albert Ando and Franco Modigliani complained in the 1960s. More fundamentally, Friedman doubted that statistical testing was fit for evaluating economic models. The true test was history, he often said, which might explain why, to Klein’s astonishment, he switched to advocating goodness-of-fit testing with Becker in the late 1950s. Methodological pragmatism, or opportunism, depending on how you want to see it.

What is the failure-to-predict about: statistical methods? Models? Institutions? Epistemology? 

As this historical example suggests, claiming that macro is in crisis because of economists’ failure to predict the financial crisis is too vague a diagnosis to point to possible remedies. For what is this “failure-to-predict” about? Is it a statistical issue? For instance, a failure to estimate models with fat-tailed variable distributions, or to handle a sudden unseen switch in the mean of that distribution (what Hendry calls “location shifts”). Or is it a theoretical issue? For instance, failing to explain why stock market returns are fat-tailed, to build firms’ and households’ exposure to financial risk and its systemic consequences into macro models, to take shadow banking into account, to identify the drivers of productivity. A bigger failure to model institutions, complexity, heterogeneity? Improving theoretical modeling is the bulk of what is discussed in the macrodeath literature.
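To make the statistical reading of that failure concrete, here is a minimal sketch on purely synthetic data (it stands in for no actual macro model): a 95% forecast band calibrated on well-behaved Gaussian observations breaks down both when shocks turn out to be fat-tailed and when the mean of the series shifts, the “location shift” case.

```python
# Illustration only: synthetic data, no claim about any actual macroeconomic model.
import numpy as np

rng = np.random.default_rng(0)

# "Pre-crisis" sample: i.i.d. Gaussian shocks, stable mean
pre = rng.normal(loc=0.0, scale=1.0, size=500)
mu, sigma = pre.mean(), pre.std()
band = (mu - 1.96 * sigma, mu + 1.96 * sigma)   # nominal 95% forecast interval

# Scenario 1: fat-tailed shocks (Student-t with 3 degrees of freedom)
fat = rng.standard_t(df=3, size=500)
# Scenario 2: same Gaussian shocks, but the mean has shifted down ("location shift")
shifted = rng.normal(loc=-2.0, scale=1.0, size=500)

def miss_rate(x, band):
    """Share of observations falling outside the forecast band."""
    return np.mean((x < band[0]) | (x > band[1]))

print("nominal miss rate: 0.05")
print("fat-tailed miss rate:", round(miss_rate(fat, band), 3))       # well above 0.05
print("post-shift miss rate:", round(miss_rate(shifted, band), 3))   # far above 0.05
```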

What is less discussed, by contrast, is whether changes in economic structures, or in the perceived ways for government to intervene in the economy (for instance through macroprudential regulation, QE, etc.), have made economists’ usual predictions irrelevant, useless or less accurate. Keynesian macroeconometricians built models aimed at conditional forecasting (that is, at answering questions such as: what happens to the economy if the government raises interest rates?), though central bankers have sometimes used them for unconditional forecasts (to get next year’s GDP figures). But the “failure-to-predict” criticism deals with unconditional forecasts, as was the case with part of the Phillips curve debate during the 1970s. Financial economists have also traditionally been mostly concerned with unconditional forecasts. I’m thus left wondering whether the rise of financial dimensions in public intervention has led to misusing DSGE models, or has fostered the development of macro models aimed at hybrid forecasting.
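The distinction deserves a toy illustration. The sketch below is entirely hypothetical (made-up coefficients, no relation to any estimated macroeconometric model): a conditional forecast of growth takes an assumed policy-rate path as given, while an unconditional forecast has to predict that path too, adding an extra layer of uncertainty and of potential error.

```python
# Toy model with invented coefficients; for illustrating the distinction only.
import numpy as np

rng = np.random.default_rng(42)

a, b, c = 2.0, 0.5, -0.3        # hypothetical coefficients of a growth "equation"
y_last, r_last = 2.0, 1.0       # last observed growth rate and policy rate

def simulate(horizon, rate_path=None):
    """One stochastic path of growth; if rate_path is None, the rate is forecast too."""
    y, r = y_last, r_last
    path = []
    for t in range(horizon):
        # Conditional forecast: take the assumed policy-rate path as given.
        # Unconditional forecast: the rate itself follows a (made-up) AR(1) process.
        r = rate_path[t] if rate_path is not None else 0.9 * r + rng.normal(0, 0.25)
        y = a + b * y + c * r + rng.normal(0, 0.5)
        path.append(y)
    return path

H, N = 8, 5000
assumed_rates = [1.0, 1.5, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0]   # a policy scenario

conditional = np.mean([simulate(H, assumed_rates) for _ in range(N)], axis=0)
unconditional = np.mean([simulate(H) for _ in range(N)], axis=0)

print("conditional forecast of growth:  ", np.round(conditional, 2))
print("unconditional forecast of growth:", np.round(unconditional, 2))
```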

Finally, this “failure-to-predict” literature might point to a deeper epistemological shift. It is, of course, visible in some economists’ rejection of DSGE modeling and endorsement of alternative models (agent-based, evolutionary, or complexity-based) and interdisciplinary frames, in their call to go back to Minsky or Kindleberger, or even to good old IS/LM. But among those economists who have traditionally endorsed DSGE macro, there also seems to be a shift away from forecasting (unconditional and conditional) as the main goal of macroeconomics, or of economics at large. In 2009, for instance, Mark Thoma commented that “our most successful use of models has been in cleaning up after shocks rather than predicting, preventing or insulating against them through pre-crisis preparation,” and the blogosphere and newspaper op-eds are rife with similar statements. These can be interpreted as rhetorical gestures, defensive moves, or early symptoms of an epistemological turn. Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson and David Schmeidler have, for instance, recently worked out a formal model in which they suggest that economic theory is not merely useful for providing predictions, but also as a guide to and critique of economic reasoning, as a decision aid (this ties in with their broader call for case-based reasoning).

Granted, this is all very muddled. I am probably making artificial distinctions (for instance, I couldn’t decide whether Gabaix’s work on power laws and granularity belongs to statistical or theoretical analysis), and I am certainly misunderstanding key concepts and models. But my point is, it should be the purpose of the macrodeath literature to un-muddle my thoughts.  What I’m asking for is two types of articles:

(1) articles on economists’ failure to predict the crisis that are explicit about what their target is and how their championed substitute approaches will yield better conditional/unconditional predictions. Or, if their alternative paradigms reject prediction as the key purpose of economic analysis, why, and what’s next.

(2) histories of how economists have theorized and practiced forecasting since World War II. A full-fledged history of forecasting in economics and finance is a little too ambitious to begin with. What I’m interested in is why and when empirical macroeconomists, in particular macroeconometricians, endorsed (conditional?) prediction as their key objective, what resistance they encountered, what the debates were over how to produce, evaluate and use forecasts, whether models built for conditional forecasting were used by central bankers and by their own producers for unconditional forecasting (think Wharton Inc and DRI), whether this shaped their relationships with finance and banking specialists, and how they reacted to the first salvo of public criticisms (the economic crisis of the 1970s, the Phillips curve breaking down, etc.). Additionally, recasting public and governmental anxiety about forecasting in the wider context of changing conceptions and uses of “the future” may help understand the challenges postwar economists faced.

Note: These great fantastic TERRIFIC pictures are based on suggestions by @Arzoglou, @dvdbllrd, and . One of these pictures, and only one, features real economists whose band name is an insider’s pun.


Economists facing civil society’s distrust: should they wage a war of numbers?

A crisis of trust, a crisis of expertise, a crisis of numbers

Economists are on the verge of a nervous breakdown, warns Anne-Laure Delatte, deputy director of the CEPII. The anxiety was indeed palpable at the American Economic Association meeting, the gathering of several thousand economists held every year on the first weekend of January. There was hardly a session where the word Trump was not whispered with concern, and yet this was before he froze the research and communication budgets of the Environmental Protection Agency and NASA, and before his team countered experts’ quantifications with a series of “alternative facts.” His contempt for numbers, economic ones in particular, is not new. In the fall, he had already called the official unemployment figure (around 5%) a complete fiction; for him, the true rate was closer to 40%. As a result, American economists worry about the damage the new administration could inflict on the statistics that are their basic material: funding for some surveys and censuses could be cut or reduced, which would degrade the quality of data collection. The computation of some statistics could also be altered, for instance by inflating or deflating denominators, to better fit Trump’s image of the situation of the United States.

Another worry is the privatization of public statistics. The privatization of data has been on economists’ minds for several years now: ever since the digitalization of many markets and behaviors made it possible to record, in real time, billions of data points on prices, transactions, tastes, reactions and agents’ psychology, what is called big data; ever since scientists, economists included, began developing new techniques such as machine learning to analyze these data, trading their market design expertise for privileged access to them, and even giving up prestigious academic careers for positions within the GAFA. But what American economists now fear is that some public demographic censuses will be outsourced to private agencies, without any obligation for the latter to disclose the raw data or the methodology of the statistical computations they perform.

The crisis economists are facing is in fact more nagging, and deeper. Throughout 2016, American and European researchers watched, wringing their hands, the defeat of their expertise. Their petitions against Brexit, then against Trump, piled up, ignored by voters. This prompted a flood of questions: why are economists no longer listened to? To what extent is this distrust part of a broader shift toward a “post-factual” or “post-truth” era?

Economists’ (embattled) expertise rests on the facts they are able to produce. These facts are generally quantitative in nature, since they consist in the selection and processing of data, themselves often collected or produced and then stored in digital form. This crisis of expertise is therefore, among other things, a crisis of public statistics, and more broadly of numbers, of observation, measurement, quantification and the communication thereof. Civil society’s distrust of numbers cuts across Western countries, from London to Brussels to Washington. It is visible even in the French presidential campaign: candidates tend to avoid putting figures on their electoral promises, or settle for vague numbers designed to strike the imagination: cutting 500,000 civil servants, a universal income of 750 euros.

The articles linked above all offer the same explanation for this growing distrust. The problem is not so much that economists supposedly work for special interests, that they are “bought,” but rather their failure to predict the 2008 financial crisis, compounded, Mark Thoma explains, by the false promises made to the working and middle classes about the benefits of globalization, of tax cuts for the better-off, and of trade openness. This failure is itself interpreted in several ways: as a consequence of economists’ tendency to analyze the aggregate effects of trade liberalization or growth (generally perceived as positive) and to neglect the negative effects on certain occupational categories, or at least to dispose of the problem with a footnote stating that it would suffice to “implement compensating transfers.” Citizens’ refusal to be described by aggregates or averages does not only affect the conclusions economists derive from their models, but also the statistics they use: 68% of Americans do not trust the statistics published by the federal government. In France, the latest Cevipof trust barometer showed that 60% of respondents do not trust the inflation and growth figures, and 70% do not trust the immigration, unemployment and crime figures produced by INSEE. Yet the institute enjoys a good image among 71% of respondents. Citizens, in short, do not recognize themselves in the statistics; they do not see themselves in them.

This problem of perceived representativeness may be compounded by a problem of actual representativeness. Some economists admit that they struggle to quantify satisfactorily economic realities that are constantly changing. The idea is that, precisely because of globalization and technological transformation, national statistics have a hard time capturing the identity of economic agents and phenomena: statistics should be both regionalized and internationalized, and notions of intensity and quality should be taken into account. The data collected by the GAFA would thus be of better quality, not simply because there are more of them, but because they are different in kind and would therefore give access to a new type of knowledge, free of theoretical priors. But these data are proprietary, which raises countless problems of ethics, privacy and confidentiality, and, on the scientific side, of access, independence and replicability. [1]

Other explanations of the crisis of confidence are mentioned only in passing: the fact that quantified indicators have lost all value from being brandished in vain (the “3% budget deficit”), or, on the contrary, have lost their objective and neutral character from being used as instruments of management and control, of scoring, benchmarking and classification, in short, from being instrumentalized; and finally the role of the media, and the possibility that erroneous but sensational figures drive out reliable ones.

Reflexivity and historical perspective

Economists’ answer to this crisis? More numbers, on both sides of the Atlantic. After all, haven’t there recently been real successes in the study of the redistributive effects of globalization or of tax systems? Aren’t we able to “see” and quantify the decline in the number of working-age men actually present in the American labor market, and to link this phenomenon to rising health problems, even to a decline in their life expectancy? To quantify intergenerational inequality? Haven’t these works entered the White House, found their way into the president’s speeches, even into protesters’ slogans? Anne-Laure Delatte thus concludes her column with a profession of faith:

Experts have betrayed through dogmatism. Should we therefore remain silent? Leave the floor to others, those who do not believe in numbers and facts? Or rather enter into resistance against the dogmatism of some and the obscurantism of others? That is the choice of several French institutes, including the one I belong to, which are entering the presidential campaign armed only with the tools of economic analysis (1). We have chosen to inform the debates with figures and with results drawn from academic research, and to do so with pedagogy and humility.

This echoes the explanations of Michael Klein, who has chosen to respond to the new American administration’s “alternative facts” by launching a website, Econofact. “Facts are stubborn,” writes Klein, who has asked “prestigious academic economists” to write memos on manufacturing employment, on the economic effects of migration, on trade and on exchange rate regimes. His goal, he explains, is to stress that while people may choose their own opinions, they cannot choose their own facts. More facts, then, with more communication. And more humility, a word that comes up constantly in these various columns. Laudable as these proposals are, they seem insufficient to stem the malaise. “Economists have betrayed through dogmatism,” Anne-Laure Delatte concludes. But dogmatism feeds on a lack of reflexivity. Reflexivity about how economic numbers are produced, used and communicated. Reflexivity that requires not just annual roundtables, but above all a knowledge of the discipline’s own history, of the debates that made economics what it is today.

Conveniently, the quantity of work that anglophone historians of economics have produced on observation, measurement and economic quantification is considerable. And the quantity of francophone work on the same themes by sociologists and historians of the state is more breathtaking still.[2] A group of sociologists, among them Alain Desrosières, founded a genuine French school of the sociology of quantification, whose objects are the theoretical, technical and institutional conditions under which public statistics are produced and used by governments. The history of private accounting, and of the growing use of quantified indicators, scores, classifications and benchmarks, has also been the subject of much research. The current trend is toward a synthesis of these two literatures, which means analyzing the porosity between public and private quantification and between national traditions, and studying the circulation of quantification practices across continents, periods and professional spheres.

Admittedly, those who venture to promote the history of economics among the discipline’s practitioners often feel they are crying in the desert. But this is a matter of distrust as much as of indifference. I am told, off the record, that economists have indeed tried to talk with historians, philosophers or sociologists, even to read some of their articles, but that, really, the categories sometimes used to describe economists’ work, and the openly critical tone, just do not go down well. The average economist spends their days tearing their hair out over DYNARE code, estimating search and matching models, designing lab experiments to understand agents’ biases in the face of different levels of risk, running tests on anonymized CVs to grasp the mechanisms of gender discrimination in hiring, or conducting voting experiments showing that our presidential electoral rules are anything but efficient. And from time to time, they try to explain what they do to audiences of students and curious citizens, or check the scripts of excellent popularization videos, all for 2,500 euros net a month after 15 years in academia, sometimes, the ultimate luxury, with a “scientific excellence” bonus on top. They therefore do not quite see what the “neoclassical paradigm” supposed to characterize this diversity of approaches consists in (most of the research just mentioned does not rely on a homo oeconomicus maximizing under constraint with perfect information), and even less how they are taking part in a neoliberal conspiracy or have sold out to big capital. They feel that sociologists’ target is, at best, a few powerful economists, at worst, the economists of 70 years ago.

Fair enough. But these analyses of economists’ practices, whatever their epistemological framework and interpretation, rely on historical material that reveals the intellectual paths, debates, wanderings, accidents, obstinacies, influences and resistances that shaped today’s practices. They make it possible to understand that the collective use of representative-agent models or non-cooperative games, the recourse to controlled experiments or structural methods, the use of cost-benefit analysis, and the methods chosen to measure, collect and process data are not neutral, and can have large and lasting unintended effects. Above all, they feed the reflection on how to respond to the current distrust. In particular, these histories of how numbers are made in economics show that numbers reflect what economists choose to “see.” They help us understand what influences, obscures or displaces researchers’ attention, and how researchers establish reliable facts. The conditions under which an economic “fact” circulates well (that is, while preserving its integrity and remaining useful) have also been the subject of historical research.

“Statistical objects are both real and constructed” (Alain Desrosières)

Economic data, then, reflect what economists choose to “see.” And that is precisely what they are blamed for today: failing to see rising inequality, the consequences of globalization for certain segments of the population, financial instability. Yet from the Second World War onward, their expertise was increasingly sought after. How did they build their credibility? How did they lose it? Why this “blindness”? How can they “see better”? Exhaustive works abound (bibliographies here, here and here; I point out French-language sources in the rest of the text), but what can we draw from a few examples?

The debates that have punctuated the computation of the flagship economic statistic, GDP, such as the exclusion of non-market production or the difficulty of valuing the environment, are fairly well known, if only because they have found a contemporary echo in the many alternative indicators developed in recent years.[3] But the case of price and cost-of-living indices is just as interesting. Tom Stapleford (EN) explains that the American cost-of-living index (the CPI) was developed by the Bureau of Labor Statistics in response to the expansion of the administrative apparatus of government. The aim was for it to help rationalize the adjustment of wages and benefits. But it was soon also used in wage bargaining in the private sector, and then in attempts to resolve social conflicts through “rational” instruments. The CPI is therefore anything but an “objective” number, Stapleford concludes. It is an instrument of quantification shaped by practical problems, by bureaucratic conflicts, by theories (the replacement of cardinal by ordinal utility), and by political agendas, from the justification of the 1933 wage cuts to its use in the debates over macroeconomic stabilization.

Michel Armatte (ch. 2, FR) recounts that the French cost-of-living indices were traditionally built by tracking the price changes of a fixed basket of goods. The list of goods in the basket was therefore the object of many controversies, all the more so as it played a fundamental role in the division of value added. Wage indexation was sometimes prohibited, sometimes pegged to a second index computed on a cheaper basket. In reaction, the CGT ended up creating its own index in 1972, meant to reflect the cost of living of a four-person working-class family renting in the Paris region and to serve as a basis for wage negotiations. While the French index is regularly accused of being underestimated, the American index was, by contrast, perceived as structurally overestimated. The much-contested Boskin report, published in 1996, concluded that failing to account for substitution across goods and across retail outlets, for improvements in product quality, and for the introduction of new products led to an overestimation of inflation of some 1.3 percentage points, which was said to have cost the government 135 billion dollars. The report recommended adopting a constant-utility rather than a constant-basket index, which led to the adoption of the hedonic price methods developed by, among others, Zvi Griliches (see this article, EN).
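For readers unfamiliar with the vocabulary, here is what the fixed-basket index at issue looks like in modern textbook notation (a sketch, not Armatte’s or the statistical offices’ exact formulas):

$$ P_{L}^{t} \;=\; \frac{\sum_{i} p_{i}^{t}\, q_{i}^{0}}{\sum_{i} p_{i}^{0}\, q_{i}^{0}} $$

that is, the cost of the base-period basket $q^{0}$ at current prices $p^{t}$, relative to its cost at base-period prices $p^{0}$. Because the basket is frozen, the index ignores consumers’ substitution toward goods whose relative prices fall, one of the channels through which the Boskin report argued that a fixed-basket CPI overstates the increase in a constant-utility cost of living.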

But if there is one example of quantification whose relevance and social impact are not in question, it is the recent work on income inequality. The reasons for the success of Thomas Piketty’s book and of his work with Anthony Atkinson, Emmanuel Saez and Gabriel Zucman, but also of the work of Raj Chetty, Branko Milanovic, Miles Corak and Alan Krueger, have kept columnists busy in recent years. How this work shifted researchers’ and the public’s attention from issues of poverty and growth toward new “stylized facts” about inequality, redistribution and taxation is still poorly understood. But Dan Hirschman’s work (EN) helps explain why the top 10%, 1% or 0.1% remained invisible until the 2000s.

The inequality data economists care about are indeed determined by the theories they seek to confirm. In the postwar period, Hirschman explains, macroeconomists were obsessed with the division of value added between capital and labor, while labor economists mostly wanted to know whether differences in human capital between skilled and unskilled workers lay behind wage differentials. Gender and racial inequality were also sensitive topics. Although tax data had been used in early analyses of the income distribution, they no longer attracted attention, so much so that the Bureau of Economic Analysis stopped producing these series, making the problem invisible. The other source of income data, the Census Bureau's Current Population Survey, did not allow researchers to "see" top incomes either: for confidentiality reasons, these data were top-coded, that is, simply recorded as exceeding a given threshold, with no further detail. In the 1990s, a few economists such as Feenberg and Poterba, or Krugman, identified a rise in the share of wealth held by top earners, but it took the exploitation of large quantities of tax data by Thomas Piketty and Emmanuel Saez to obtain new series on the evolution of the share of income earned by the richest 5% and 1%.[4]
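
To see how top-coding can hide what happens at the very top, here is a minimal simulation sketch; the Pareto-distributed incomes and the cap are arbitrary assumptions for illustration, not actual CPS parameters:

```python
# Minimal sketch: how top-coding hides the top income share.
# Incomes are simulated from a heavy-tailed distribution; the cap is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
incomes = 30_000 * (1 + rng.pareto(a=2.0, size=100_000))  # hypothetical incomes

def top_share(x, pct=0.01):
    """Share of total income earned by the top pct of observations."""
    x = np.sort(x)
    k = max(1, int(len(x) * pct))
    return x[-k:].sum() / x.sum()

cap = 150_000                          # confidentiality threshold
top_coded = np.minimum(incomes, cap)   # anything above the cap is recorded as the cap

print(f"top 1% share, raw data:       {top_share(incomes):.1%}")    # heavy tail visible
print(f"top 1% share, top-coded data: {top_share(top_coded):.1%}")  # much lower
```

In the top-coded series the richest observations are all flattened to the cap, so the measured top share barely moves even if the true upper tail grows; this is the sense in which the survey data could not "see" top incomes.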

In short, the inability to see the redistributive stakes of economic transformations was not simply due to "working with aggregate data." Using micro data had been common practice since at least the 1960s/1970s, a period of progress in the collection and storage of survey data, such as the PSID in the United States, and in panel data econometrics. Inequality already interested economists, but the questions asked were structured by theoretical frameworks (human capital theory), by the demands of public authorities (then focused on poverty, or on gender and racial inequality), and by the unintended effects of technical decisions (such as top-coding). Some historians note, moreover, that the current interest in data showing the share of income and wealth held by the top 1% now tends to render racial and gender inequality invisible.

“Numbers serve to intervene, not simply to represent” (Ted Porter)

As these two examples show, economic statistics reflect theoretical and technical controversies as much as the needs of public and private institutions. "Quantification is a social technology," Porter stresses (EN). Statistics are shaped by and for government knowledge and power, Alain Desrosières (FR) likewise explained: the drive to quantify social facts is part and parcel of changing modes of government. Governments may thus come to use numbers politically, as weapons. This is what the contributions gathered in Benchmarking, edited by Isabelle Bruno and Emmanuel Didier, demonstrate. They document the inflationary growth of indicators and rankings (of hospitals, universities, regions, firms). But this management-by-numbers has perverse effects, they stress: "such is the strength of benchmarking, and what makes it so distinctive: it does not merely translate reality into statistical terms in order to act, it stimulates that action and channels it toward a 'best' whose definition is removed from the agents' own autonomy."


In a second book, Isabelle Bruno and Emmanuel Didier, joined by Julien Prévieux and other authors, propose a form of response to this new management: resisting oppression by numbers involves (1) deconstructing existing statistics and (2) creating new numbers, or "statactivism." One example, the CGT's production of its own cost-of-living index, was mentioned above. The book dissects many others: proposing alternative well-being indicators, computing ecological footprints, counting suicides to assess a firm's management, estimating the cost of deporting refugees, setting up an inequality and poverty barometer, the BIP40 (see Olivier Pilmis's review of the two books). These descriptions of the institutional and intellectual conditions under which counter-statistics emerge, spread and gain influence might interest those economists who wish to reclaim the intellectual, media and public space. Is this the role of activist organizations? Is it the role of scientists, and if so, under what conditions? Can the new stylized facts about inequality produced in recent years be read as a form of statactivism? The 99%/1% opposition was, after all, staged concomitantly (though apparently independently) by researchers and by Adbusters activists.

“The lives of travelling facts” (Mary Morgan)

For it is not enough to see differently in order to produce new data; the data must still be selected, organized and presented so that they form facts. And, if possible, facts that "travel well," that is, Mary Morgan explains, facts capable of preserving their integrity and of being fruitful (e.g. useful to academic conversations as well as public debates). From a volume covering very diverse types of facts (economic, biological, physical, etc.), coedited with Peter Howlett, Morgan draws three conclusions.


First, the importance of "travelling companions," which can be "labels, packaging, vehicles or chaperones." That packaging, especially visual packaging, is a sine qua non of success is apparent in the care devoted to the graphical representations of recent work on inequality: an animal metaphor or a reference to the classics to strike the imagination, a break with standard statistical semiology to highlight particular data. Another example is the success of Our World in Data, the website run by Max Roser. Roser believes the media tend to dwell on negative facts. He draws on the work of Johan Galtung, according to whom the publication frequency of the media (weekly, then in real time) prevents them from identifying and covering positive long-term trends. His project therefore consists in assembling very long-run series on health, education, conflict, living standards and so on, and visualizing them according to a carefully crafted strategy. There would also be much to say about the various vehicles economists have used to circulate the economic facts they deem important, for instance the TV series hosted by John Kenneth Galbraith and Milton Friedman (see this book on economists and their publics).

Second, Morgan notes, the "terrains" across which facts circulate, and their "boundaries," also matter. These can be disciplinary, professional, historical, geographical or cultural. Why the reception of Thomas Piketty's Capital in the Twenty-First Century was so much better in the United States than in France is, for instance, hard to establish; the debate around well-being and its measurement, by contrast, seems to have found more fertile ground in France. Finally, a fact's ability to travel depends on its intrinsic characteristics, attributes and functions. These are often acquired along the way, and show through the adjectives used to describe certain facts: "understandable, surprising, reproducible, stubborn, evident, crucial, unbelievable, important, strange." Some of these adjectives denote intrinsic qualities, others affective ones. All in all, the brutal entry into a "post-truth" world, and above all the slow decline of trust in their expertise, are currently forcing economists to think about strategies of defense and counter-attack. To do so, they can draw on those deployed by their predecessors as well as by other kinds of professionals, provided, of course, that they take an interest in their own history.

Notes

[1] The opposition between public/aggregate/open/small data and private/disaggregated/big/proprietary data seems largely overstated to me. One need only think of the millions of data points generated in countries where the education and/or health systems are public, as well as of demographic and tax data, to which social scientists only rarely have access. This does not, however, rule out the possibility of competition between public and private data, nor the need for states to think through the trade-off between confidentiality and openness to researchers.

[2] An overview of this body of work can be found in the reading list put together by François Briatte, Samuel Goëta and Joël Gombin, in those one can glean from Emilien Ruiz's website, on the website of the AGLOS project (which aims to federate an international and interdisciplinary network for the study of different countries' statistical apparatuses), or among the references listed on the page of the seminar Chiffres privés, chiffres publics coordinated by Béatrice Touchelay. The journal Statistiques et Sociétés is devoted to this research. For an overview of the "school" founded by Alain Desrosières, see this volume (in English) paying tribute to him, or this special issue. French historians, for their part, have rather focused on the techniques (from econometrics to experiments), theories (choice, games) and models (macroeconomic ones in particular) produced by economists. This literature is also crucial to understanding the current crisis of expertise, but it is not the subject here.

[3] See Géraldine Thiry's dissertation (FR) or those of Benjamin Mitra-Khan and Dan Hirschman (EN). See also this bibliography.

[4] The article deals with the American context, and therefore leaves aside the possibility that interest in income inequality emerged in Great Britain in the 1960s, notably under the impulse of Anthony Atkinson.


The making of economic facts: a reading list

That Donald Trump’s first presidential decisions included gagging the EPA, USDA and NASA, asking his advisors to provide “alternative facts” on inauguration attendance, and questioning the unemployment rate is raising serious concerns among economists. Mark Thoma and former BLS statistician Brent Moulton, among others, fear that the new government may soon challenge the independence of public statistical agencies, drain access to the data economists rely on, attempt to tweak them, or just ax whole censuses and trash data archives, Harper style.


One reaction to this has been to put more, or better communicated, economic facts online. Such is the purpose of the Econofact website, launched by Tufts economist Michael Klein. “Facts are stubborn,” he writes, so he asked “top academic economists” to write memos “covering the topics of manufacturing, currency manipulation, the border wall, World Trade Organization rules, the trade deficit, and charter schools.” The purpose, he explains, is “to emphasize that you can choose your own opinions, but you cannot choose your own facts.” The move is in line with other attempts by scientists and people working in academia, at NASA, the National Parks or the Merriam-Webster dictionary to uphold and reaffirm facts, in particular on climate change.

Looking at the website, though, I’m left wondering who the intended audience is, and whether this is the most effective way to engage a broad public. As noted by Klein himself, citizens seem to crave “information,” but within the broader context of a growing distrust of scientific expertise, statistics and facts. All sorts of expertise are affected, but that doesn’t mean the responses should be identical. Because in practice, if not in principle, each science has its own way of building “facts,” and citizens’ disbelief of climate, demographic, sociological or economic facts may have different roots. It is not clear, for instance, that economic statistics are primarily rejected because of perceived manipulation and capture by political interests. What citizens dismiss, rather, is the aggregating process the production of statistics entails, and economists’ habit of discussing averages rather than standard deviations, growth rather than distribution. Statistics historically sprang out of scientists’ efforts to construct an “average man,” but people don’t want to be averaged anymore. They want to see themselves in the numbers.

It might be a good time, therefore, to acknowledge that economic statistics proceed from what economists intentionally or unconsciously choose to see, observe, measure, quantify and communicate; to reflect on why domestic production has been excluded from GDP calculations and whether national accounts embody specific visions of government spending; to ponder the fact that income surveys and tax data were not coded and processed in a way that made “the 1%” visible until the 2000s because, until then, economists were concerned with poverty and the consequences of education inequality rather than with top income inequality; to think about the Boskin Commission’s 1996 decision to settle for a constant-utility price index to prevent inflation overstatement, and its consequences for the way economists measure the welfare derived from goods consumption (and productivity). And it’s not just that the observation, measurement and quantification process underpinning “economic facts” has constantly been debated and challenged. It has also been politicized, even weaponized, by governments and profit or non-profit organizations alike. Economic data should be viewed as negotiated and renegotiated compromises rather than numbers set in stone. This doesn’t diminish their reliability, quite the contrary. They are constructed, yet constructed better than any “alternative” rogue organizations have in store.

The production of government statistics has evolved considerably over decades, if not centuries. It results from longstanding theoretical and technical disputes as well as conflicting demands and uses, some very different across countries. Even more dramatic have been the changes in the production and uses of financial and business economic data. Below is a non-exhaustive list of books and articles offering overarching perspectives on economic (and social science) data as well as specific case studies.

Note: Some of these references primarily deal with quantification, others with observation or measurement; some with the making of economic data, others with the production of “facts” (aka selected, filtered, organized and interpreted data).

General framing

  1. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life by Ted Porter. Fast read, excellent overview. Porter explains that, contrary to the received view, the quantification of social facts was largely driven by administrative demands, by policy makers’ willingness to enforce a new kind of “mechanical objectivity.” “Quantification is a social technology,” he explains. For a longer view, see Mary Poovey’s A History of the Modern Fact, which tracks the development of systematic knowledge based on numerical representation back to XVIth century double-entry bookkeeping.
  2. A collective volume on the history of observation in economics, edited by Harro Maas and Mary Morgan. They provide a broad historical overview in their introduction, and insist on the importance of studying the space in which observation takes place, the status and technicalities of the instruments used, as well as the process whereby trust between the economists-observers and the data-users is built.
  3. Marcel Boumans has spent a lifetime reflecting on quantification and measurement in economics. In his 2005 book How Economists Model the World into Numbers and an associated article, he defines economic models as “tools for measurement,” just as the thermometer is for the physical sciences (this idea is borrowed from the Morgan-Morrison tradition; see also this review of the book by Kevin Hoover). His 2012 book likewise details historical examples of observation conducted outside the laboratory (aka when economic phenomena cannot be isolated from their environment). His purpose is to use history to frame epistemological evaluations of the economic knowledge produced “in the field.” The book discusses, among others, Morgenstern’s approach to data and Krantz, Suppes, Luce and Tversky’s axiomatic theory of measurement.
  4. French sociologist Alain Desrosières has pioneered the sociology of economic quantification through his magnum opus The Politics of Large Numbers: A History of Statistical Reasoning and countless articles. The gist of his comparative analysis of the statistical apparatuses developed in France, Germany, the US and the UK is that statistics are shaped by and for government knowledge and power. His legacy lives on through the work of Isabelle Bruno, Florence Jany-Catrice and Béatrice Touchelay, among others. They have recently edited a book on how recent public management has moved from large numbers to specific indicators and targets.

 


Economic facts: case studies

1. There is a huge literature on the history of national accounting and the debates surrounding the development of GDP statistics. See Dan Hirschman’s reading list as well as his dissertation on the topic, and the conference he co-organized last Fall with Adam Leeds and Onur Özgöde. See also this post by Diane Coyle.

2. Dan Hirschman’s study of economic facts also comprises an analysis of stylized facts in social sciences, and a great account of the technical and contextual reasons why economists couldn’t see “the 1%” during most of the postwar period, then developed inequality statistics during the 2000s. It has been covered by Justin Fox here.

3. There are also histories of cost-of-living and price indexes. Historian Tom Stapleford has written a beautiful history of the Cost of Living in America. He ties the development of the statistics, in particular at the Bureau of Labor Statistics, to the growth of the American bureaucratic administrative system. The CPI was thus set up to help rationalize the adjustment of benefit payments, but it was also used for wage negotiations in the private sector, in an attempt to tame labor conflicts through the use of “rational” tools. The CPI is thus nothing like an “objective statistic,” Stapleford argues, but a quantifying device shaped by practical problems, bureaucratic conflicts (the merger of public statistical offices), economic theory (the shift from cardinal to ordinal utility), institutional changes and political agendas (the legitimation of wage cuts in 1933, the need to control for war spending, its use in postwar macroeconomic stabilization debates). See also Stapleford’s paper on the development of hedonic prices by Zvi Griliches and others. Spencer Banzhaf recounts economists’ struggles to make quality-adjusted price indexes fair and accurate.

4. Histories of agricultural and environmental statistics also make for good reads. Emmanuel Didier relates how USDA reporters contributed to the making of US agricultural statistics, and Federico D’Onofrio has written his dissertation on how Italian economists collected agricultural data at the turn of the XXth century through enquiries, statistics and farm surveys. Spencer Banzhaf recounts economists’ struggles to value life, quantify recreational demand, and measure the value of environmental goods through contingent valuation. A sociological perspective on how to value nature is provided by Marion Fourcade.

On public statistics, see also Jean-Guy Prevost, Adam Tooze on Germany before World War II, and anthropologist Sally Merry’s perspective. Zachary Karabell’s Leading Indicators: A Short History of the Numbers that Rule Our World is aimed at a larger audience.

Shifting the focus away from public statistics

As I was gathering references for this post, I realized how much the focus of historians and sociologists of economics is on the production of public data and its use in state management and discourse. I don’t really buy the idea that governments alone are responsible for the rise in the production of economic data over the last centuries. Nor am I a priori willing to consider economists’ growing reliance upon proprietary data produced by IT firms as “unprecedented.” Several of the following references on private economic data collection were suggested by Elisabeth Berman, Dan Hirschman, and Will Thomas.

  1. Economic data, insurance and the quantification of risk: see How Our Days Became Numbered: Risk and the Rise of the Statistical Individual by Dan Bouk (a history of life insurance and public policy in the XXth century; reviews here and here). For a perspective on the XIXth century, see Sharon Ann Murphy’s Investing in Life. Jonathan Levy covers two centuries of financial risk management in Freaks of Fortune.

 


2. For histories of finance data, look at the performativity literature. I especially like this paper by Juan Pablo Pardo-Guerra on how computerization transformed the production of financial data.

3. A related literature deals with the sociology of scoring (for instance Fourcade and Healy’s work here and here).

4. Equally relevant to understanding the making of economic facts is the history of business accounting. See Paul Miranti’s work, for instance his book with Jonathan Barron Baskin. See also Bruce Carruthers and Wendy Espeland’s work on double-entry accounting and economic rationality. Espeland also discusses the relation of accounting to corporate control with Hirsch here, and to accountability and law with Berit Vannebo here (their perspective is discussed by Robert Crum).

 
