The problem with “economists-failed-to-predict-the-2008-crisis” macrodeath articles

This week has delivered one more interesting batch of economics soul-searching posts. On Monday, the Bloomberg View editorial board outlined its plans to make economics more of a science (by "tossing out" models that are "refuted by the observable world" and relying "on experiments, data and replication to test theories and understand how people and companies really behave." You know, things economists have probably never tried…). John Lanchester then reflected on the recent macro smackdown by the Bank of England's Andy Haldane and the World Bank's Paul Romer. And INET has launched a timely "Experts on Trial" series. In the first of these essays, Sheila Dow outlined how economists could forecast better (by emulating physics less and relying on a greater variety of approaches) and why economists should make peace with the inescapable moral dimension of their discipline. In the second piece, Alessandro Roncaglia argued that considering economists as princes or servants of power is authoritarian, and that giving them such an asymmetric role within society is dangerous.

Rich and thoughtful as this macrodeath literature is, it leaves me, again, frustrated. A common feature of virtually all articles dealing with the crisis in economics is that they are built around economists' failure to predict the 2008 financial crisis. And yet, they hardly dig into the sources, meaning and consequences of this failure (note: in this post, I'll consider that a forecast is a specific quantitative and probabilistic type of prediction, and I'll use the two terms interchangeably. Shoot, philosopher). The failure to forecast is usually construed as a failure to model, leading to suggestions to improve modeling either by upgrading existing models with frictions, search and matching, financial markets, new transmission mechanisms, more variables, etc., or by going back to older models, or by changing paradigms altogether. Yet economists' approach to forecasting relies on much more than modeling strategies, history whispers.

Agreeing to forecast, disagreeing on how and why

Macroeconomics was born out of finance fortune-tellers' early efforts to predict changes in stock prices and economists' efforts to explain and tame agricultural and business cycles. In 1932, Alfred Cowles expressed his frustration in a paper entitled "Can Stock Market Forecasters Forecast?" No, he concluded:

A review of the various statistical tests, applied to the records for this period, of these 24 forecasters, indicates that the most successful records are little, if any, better than what might be expected to result from pure chance. There is some evidence, on the other hand, to indicate that the least successful records are worse than what could reasonably be attributed to chance.

Two years after Ragnar Frisch, Charles Roos and Irving Fisher had laid the foundations of the Econometric Society, Cowles liaised with the three men and established the Cowles Commission in Colorado Springs. It is not clear to me how pervasive a goal forecasting was in the first decades of macroeconomics and econometrics, how much it drove theoretical thinking, and what role it played in the import of a probabilistic framework into economics. Historical works on Frisch and Haavelmo, for instance, suggest it is difficult to disentangle conditional forecasting from explaining and policy-making. Predicting was one of the five "mental activities" Frisch thought the economist should perform, alongside describing, understanding, deciding and (social) engineering (see Dupont and Bjerkholt's paper). Forecasting wasn't always associated with identifying causal relationships, as exemplified by the longstanding debate between chartists and fundamentalists in finance, but for early macroeconometricians the two went hand in hand. That explaining, forecasting and planning were inextricably interwoven in Lawrence Klein's mind is well documented by Erich Pinzon Fuchs in his dissertation. He quotes Klein saying his

"main objective [was] to construct a model that [would] predict, in the [broader] sense of the term. At the national level, this means that practical policies aimed at controlling inflationary or reflationary gaps will be served. A good model should be one that [could] eventually enable us to forecast, within five percent error margins roughly eighty percent of the time, such things as national production, employment, the price level…"

The notion that economics is about predicting is, however, not usually associated primarily with Klein's name, but with Milton Friedman's. In his much-discussed 1953 methodological essay, Friedman proposed that the "task [of positive economics] is to provide a system of generalizations that can be used to make correct predictions about the consequences of any change in circumstances. Its performance is to be judged by the precision, scope and conformity with experience of the predictions it yields." These predictions "need not be forecasts of future events," he continued; "they may be about phenomena that have occurred but observations on which have not yet been made or are not known to the person making the prediction." And this is what makes economics policy-relevant, he concluded: "any policy conclusion necessarily rests on a prediction." Klein's and Friedman's shared statement that the purpose of economic modeling is to predict has come to be widely accepted, yet it is not clear how many competing views of what the purpose of economics should be circulated in those years.

Most important, their longstanding dispute on statistical illusions reveals that they agreed neither on the purpose of forecasting, nor on the proper method, nor even on what a "good" forecast was. Klein believed macroeconometric models should be as exhaustive as possible, Pinzon Fuchs documents, that they should accurately depict reality. This belief was tied to his desire to build engines for social planning, models that could provide guidance as to which exogenous variable the government should alter to achieve full employment. In the NBER tradition, Friedman rather endorsed simpler models with few equations. He considered Klein's complex machinery a failure and endorsed Carl Christ's idea that these models should be tested through out-of-sample prediction. Erich argues that Friedman was merely trying to understand how the economic system works. I rather interpret his work as an attempt to identify stable behaviors and self-stabilizing mechanisms. As Friedman believed government intervention was inefficient, he did not need the endogeneity or exogeneity of his variables to be precisely specified, which infuriated his Keynesian opponents. "The Friedman and Meiselman game of testing a one-equation one-variable model… cannot be expected to throw any light on such basic issues as how our economic system works, or how it can be stabilized," Albert Ando and Franco Modigliani complained in the 1960s. More fundamentally, Friedman doubted that statistical testing was fit for evaluating economic models. The true test was history, he often said, which might explain why, to Klein's astonishment, he switched to advocating goodness-of-fit testing with Becker in the late 1950s. Methodological pragmatism, or opportunism, depending on how you want to see it.

What is the failure-to-predict about: statistical methods? Models? Institutions? Epistemology? 

As this historical example suggests, claiming that macro is in crisis because of economists' failure to predict the financial crisis is too vague a diagnosis to point to possible remedies. For what is this "failure-to-predict" about? Is it a statistical issue? For instance, a failure to estimate models with fat-tailed variable distributions, or to handle a sudden unseen switch in the mean of that distribution (what Hendry calls "location shifts"). Or is it a theoretical issue? For instance, failing to explain why stock market returns are fat-tailed, to build firms' and households' exposure to financial risk and its systemic consequences into macro models, to take shadow banking into account, to identify the drivers of productivity. A bigger failure to model institutions, complexity, heterogeneity? Improving theoretical modeling is the bulk of what is discussed in the macrodeath literature.
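
To make the "location shift" point concrete, here is a minimal synthetic sketch (the setup and all numbers are invented by me, not taken from Hendry): a forecaster estimates the mean of a series on pre-crisis data and keeps projecting it after the mean of the process has shifted, so post-shift forecast errors become systematically biased rather than averaging out.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic series whose unconditional mean shifts at t = 200 (a "location shift").
pre = rng.normal(loc=2.0, scale=1.0, size=200)    # pre-crisis regime, mean 2
post = rng.normal(loc=-1.0, scale=1.0, size=50)   # post-shift regime, mean -1
series = np.concatenate([pre, post])

# A naive forecaster keeps projecting the pre-crisis sample mean.
forecast = pre.mean()
errors = series[200:] - forecast

print(f"Pre-crisis mean used as forecast: {forecast:.2f}")
print(f"Average post-shift forecast error: {errors.mean():.2f}")  # systematically negative
```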

By contrast, the possibility that changes in economic structures or in perceived ways for governments to intervene in the economy (for instance through macroprudential regulations, QE, etc.) have made economists' regular predictions irrelevant, useless or less accurate is less discussed. Keynesian macroeconometricians built models aimed at conditional forecasting (that is, dealing with questions such as: what happens to the economy if the government raises interest rates?), though central bankers have sometimes used these for unconditional forecasts (to get next year's GDP figures). But the "failure-to-predict" criticism deals with unconditional forecasts, as was the case with part of the Phillips curve debate during the 1970s. Finance economists have also traditionally been mostly concerned with unconditional forecasts. I'm thus left wondering whether the rise of financial dimensions in public intervention has led to misusing DSGE models, or has fostered the development of macro models aimed at hybrid forecasting.
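
A toy illustration of the conditional/unconditional distinction, under an invented linear relation (none of the numbers come from any actual model): the conditional forecast fixes the policy path, while the unconditional forecast also has to average over what the policy rate itself will turn out to be, and so carries extra uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented structural relation: growth = 3 - 0.5 * policy_rate + shock
def growth(policy_rate, shock):
    return 3.0 - 0.5 * policy_rate + shock

shocks = rng.normal(0.0, 0.5, size=10_000)

# Conditional forecast: "what happens to growth if the rate is set at 2%?"
cond = growth(2.0, shocks)

# Unconditional forecast: the future rate is itself uncertain (hypothetical distribution).
rates = rng.normal(2.0, 1.0, size=10_000)
uncond = growth(rates, shocks)

print(f"Conditional  (rate fixed at 2%): mean {cond.mean():.2f}, std {cond.std():.2f}")
print(f"Unconditional (rate uncertain):  mean {uncond.mean():.2f}, std {uncond.std():.2f}")
```

The two mean forecasts coincide here by construction, but the unconditional one is noticeably more uncertain, which is one reason the two exercises should not be judged by the same yardstick.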

Finally, this "failure-to-predict" literature might point to a deeper epistemological shift. The shift is, of course, visible in some economists' rejection of DSGE modeling and their endorsement of alternative models (agent-based, evolutionary, or complexity) and interdisciplinary frames, in their calls to go back to Minsky or Kindleberger, or even to good old IS/LM. But among those economists who have traditionally endorsed DSGE macro, there also seems to be a shift away from forecasting (unconditional and conditional) as the main goal of macroeconomics or economics at large. In 2009, for instance, Mark Thoma commented that "our most successful use of models has been in cleaning up after shocks rather than predicting, preventing or insulating against them through pre-crisis preparation," and the blogosphere and newspaper op-eds are rife with similar statements. These can be interpreted as rhetorical gestures, defensive moves, or early symptoms of an epistemological turn. Itzhak Gilboa, Andrew Postlewaite, Larry Samuelson and David Schmeidler have, for instance, recently worked out a formal model in which they suggest that economic theory is useful not merely for providing predictions, but also as a guide to and critique of economic reasoning, as a decision aid. (This ties in with their broader call for case-based reasoning.)

Granted, this is all very muddled. I am probably making artificial distinctions (for instance, I couldn't decide whether Gabaix's work on power laws and granularity belongs to statistical or theoretical analysis), and I am certainly misunderstanding key concepts and models. But my point is, it should be the purpose of the macrodeath literature to un-muddle my thoughts. What I'm asking for is two types of articles:

(1) articles on economists’ failure to predict the crisis that are explicit about what their target is and how their championed substitute approaches will yield better conditional/unconditional predictions. Or, if their alternative paradigms reject prediction as the key purpose of economic analysis, why, and what’s next.

(2) histories of how economists have theorized and practiced forecasting since World War II. A full-fledged history of forecasting in economics and finance is a little too ambitious to begin with. What I'm interested in is why and when empirical macroeconomists, in particular macroeconometricians, endorsed (conditional?) prediction as their key objective, what resistance they encountered, what the debates were over how to produce, evaluate and use forecasts, whether models built for conditional forecasting were used by central bankers and by their own producers for unconditional forecasting (think Wharton Inc. and DRI), whether it shaped their relationships with finance and banking specialists, and how they reacted to the first salvo of public criticisms (the 1970s economic crisis, the breakdown of the Phillips curve, etc.). Additionally, recasting public and governmental anxiety about forecasting in the wider context of changing conceptions and uses of "the future" may help us understand the challenges postwar economists faced.

Note: These great fantastic TERRIFIC pictures are based on suggestions by @Arzoglou, @dvdbllrd, and . One of these pictures, and only one, features real economists whose band name is an insider's pun.


Economists and civil society's distrust: should they wage a war of numbers?

A crisis of confidence, a crisis of expertise, a crisis of numbers

Economists are on the verge of a nervous breakdown, warns Anne-Laure Delatte, deputy director of CEPII. The worry was indeed palpable at the American Economic Association meeting, the gathering of several thousand economists held every year on the first weekend of January. There was hardly a session in which the word Trump was not whispered anxiously, and that was before he froze the research and communication budgets of the Environmental Protection Agency and NASA, and before his team countered experts' quantifications with a series of "alternative facts." His contempt for numbers, economic ones in particular, is not new. In the fall, he had already called the official unemployment figure (around 5%) a complete fiction; in his view, the true rate was closer to 40%. American economists are therefore worried about the damage the new administration could inflict on the statistics that constitute their raw material: funding for some surveys and censuses could be cut or reduced, degrading the quality of data collection, and the calculation of some statistics could be altered, for instance by inflating or deflating denominators, to better fit Trump's picture of the state of the United States.

Another worry is the privatization of public statistics. Data privatization has been on economists' minds for several years now: ever since the digitization of many markets and behaviors began allowing the real-time recording of billions of data points on prices, transactions, tastes, reactions and agents' psychology, what has come to be called big data; ever since scientists, economists included, started developing new techniques such as machine learning to analyze these data, trading their market-design expertise for privileged access to them, and even giving up prestigious academic careers for positions within the GAFA companies. But what American economists fear today is that some public demographic censuses could be handed over to private agencies, with no obligation for the latter to disclose either the raw data or the methodology behind the statistical calculations.

The crisis economists face is in fact more nagging and more profound. Throughout 2016, American and European researchers watched, wringing their hands, as their expertise was defeated. Their petitions against Brexit, then against Trump, followed one another, ignored by voters. This prompted a flood of questions: why are economists no longer listened to? To what extent is this distrust part of a broader shift toward a 'post-factual' or 'post-truth' era?

Economists' (embattled) expertise rests on the facts they are able to produce. These facts are generally quantitative, since they consist in selecting and processing data, themselves often collected or produced and then stored in numerical form. This crisis of expertise is therefore, among other things, a crisis of public statistics, and more broadly of numbers, observation, measurement, quantification and the way all of these are communicated. Civil society's distrust of numbers cuts across Western countries, from London to Brussels to Washington. It is visible even in the French presidential campaign: candidates tend to avoid putting figures on their electoral promises, or settle for vague numbers meant to strike the imagination: cutting 500,000 civil servants, a universal income of 750 euros.

The articles linked above all offer the same explanation for this growing distrust. The problem is not so much that economists work for special interests or have been "bought," but rather their failure to predict the 2008 financial crisis, compounded, Mark Thoma explains, by "the false promises made to the working and middle classes about the benefits of globalization, tax cuts for the wealthy, and trade openness." This failure is itself interpreted in several ways: as a consequence of economists' tendency to analyze the aggregate effects of trade liberalization or growth (generally seen as positive) and to neglect the negative effects on certain occupational categories, or at least to wave the problem away with a footnote stating that it would suffice to "implement compensating transfers." Citizens' refusal to be described by aggregates or averages affects not only the conclusions economists derive from their models, but also the statistics they use: 68% of Americans do not trust the statistics published by the federal government. In France, the latest Cevipof trust barometer showed that 60% of respondents do not trust INSEE's inflation and growth figures, and 70% do not trust its immigration, unemployment and crime figures. Yet 71% of respondents have a good image of the institute. Citizens, in other words, do not recognize themselves in the statistics; they do not see themselves in them.

This problem of perceived representativeness may be compounded by a problem of actual representativeness. Some economists admit they struggle to quantify, in a satisfactory way, economic realities that are constantly changing. The idea is that, precisely because of globalization and technological transformation, national statistics fail to capture the identity of economic agents and phenomena: statistics would need to be both regionalized and internationalized, and notions of intensity and quality would need to be taken into account. The data collected by the GAFA companies would thus be of better quality, not simply because there are more of them, but because they are different in kind and would therefore give access to a new type of knowledge, free of theoretical priors. But these data are proprietary, which raises countless problems of ethics, privacy and confidentiality, and, on the scientific side, of access, independence and replicability. [1]

Other explanations of the crisis of confidence are mentioned only in passing: the fact that quantified indicators have lost all value from being waved around in vain (the "3% budget deficit"), or, on the contrary, have lost their objective and neutral character from being used as tools of management and control, of scoring, benchmarking and classification, in short, from being instrumentalized; and, finally, the role of the media, and the possibility that erroneous but sensational numbers drive out reliable ones.

Reflexivity and historical perspective

Economists' response to this crisis? More numbers, on both sides of the Atlantic. After all, hasn't the study of the redistributive effects of globalization or of tax systems recently met with genuine success? Aren't we able to "see" and quantify the decline in the number of working-age men actually present in the American labor market, and to link this phenomenon to rising health problems, even to declining life expectancy? To quantify intergenerational inequality? Hasn't this work made its way into the White House, into presidential speeches, even into protesters' slogans? Anne-Laure Delatte thus ends her column with a profession of faith:

Experts have betrayed through dogmatism. Should they therefore fall silent? Leave the floor to others, those who do not believe in numbers and facts? Or rather enter into resistance against the dogmatism of some and the obscurantism of others? That is the choice of several French institutes, including the one I belong to, which are entering the presidential campaign armed only with the tools of economic analysis (1). We have chosen to inform the debates with figures and results drawn from academic research. And to do so with pedagogy and humility.

This echoes the explanation given by Michael Klein, who chose to respond to the new American administration's "alternative facts" by launching a website, Econofact. "Facts are stubborn," writes Klein, who asked "top academic economists" to write memos on manufacturing employment, the economic effects of migration, trade and exchange-rate regimes. His goal, he explains, is "to emphasize that you can choose your own opinions, but you cannot choose your own facts." More facts, then, with more communication. And more humility, a word that comes up constantly in these various columns. Laudable as these proposals are, they seem insufficient to stem the malaise. "Economists have betrayed through dogmatism," Anne-Laure Delatte concludes. But dogmatism feeds on a lack of reflexivity. Reflexivity about how economic numbers are produced, used and communicated. Reflexivity that comes not just from annual roundtables, but above all from knowing one's disciplinary history, the debates that made economics what it is today.

As it happens, the body of work that Anglophone historians of economics have produced on observation, measurement and economic quantification is considerable. And the body of Francophone work on the same themes by sociologists and historians of public affairs is more impressive still.[2] A group of sociologists, Alain Desrosières among them, founded a genuine French school of the sociology of quantification, whose objects are the theoretical, technical and institutional conditions under which public statistics are produced and used by governments. The history of private accounting, and of the growing use of quantified indicators, scores, classifications and benchmarks, has also been the subject of much research. The current trend is toward a synthesis of these two literatures, which means analyzing the porosity between public and private quantification and between national traditions, and studying how quantification practices circulate across continents, eras and professional spheres.

Granted, those who venture to promote the history of economics among the discipline's practitioners often feel they are preaching in the desert. But distrust is at play as much as disinterest. I am told, off the record, that economists have indeed tried talking with historians, philosophers or sociologists, even reading some of their articles, but that, really, the categories sometimes used to describe economists' work and the openly critical tone just do not go down well. Ordinary economists spend their days tearing their hair out over DYNARE code, estimating search-and-matching models, designing laboratory experiments to understand agents' biases under different levels of risk, running audit studies with anonymized CVs to grasp the mechanisms of gender discrimination in hiring, or setting up voting experiments confirming that our presidential voting rules are anything but efficient. And from time to time they try to explain what they do to audiences of students and curious onlookers, or check the scripts of excellent popularization videos, all for 2,500 euros net per month after 15 years in higher education and research, sometimes, ultimate luxury, with a "scientific excellence" bonus on top. They therefore do not quite see what the "neoclassical paradigm" supposedly characterizing this diversity of approaches is (most of the research just mentioned does not rely on a homo oeconomicus maximizing under constraint with perfect information), still less how they are taking part in a neoliberal conspiracy or have sold out to big capital. They have the impression that sociologists' target is, at best, a handful of powerful figures, at worst, the economists of 70 years ago.

Fair enough. But these analyses of economists' practices, whatever their epistemological framework and interpretation, draw on historical material that reveals the intellectual paths, debates, wanderings, accidents, obstinacies, influences and resistances that shaped today's practices. They make it possible to understand that the collective use of representative-agent models or non-cooperative games, the recourse to controlled experiments or structural methods, the use of cost-benefit analysis, and the ways data are measured, collected and processed are not neutral, and can have important and lasting unintended effects. Above all, they feed the reflection on how to respond to the current distrust. In particular, these histories of how numbers are made in economics show that numbers reflect what economists choose to "see." They help us understand what influences, obscures or shifts researchers' attention, and how researchers establish reliable facts. The conditions under which an economic "fact" circulates well (that is, while preserving its integrity and remaining useful) have also been the subject of historical research.

"Statistical objects are both real and constructed" (Alain Desrosières)

Economic data, then, reflect what economists choose to "see." And that is precisely what they are blamed for today: failing to see rising inequality, the consequences of globalization for certain segments of the population, financial instability. Yet from the Second World War onward, their expertise was increasingly in demand. How did economists build their credibility? How did they lose it? Why this "blindness"? How can they "see better"? Comprehensive works abound (bibliographies here, here and here; I flag French-language sources in the rest of the text), but what can be drawn from a few examples?

The debates that have punctuated the calculation of the flagship economic statistic, GDP, such as the exclusion of non-market production or the difficulty of valuing the environment, are fairly well known, if only because they have found a contemporary echo in the many alternative indicators developed over the past few years.[3] But the case of price and cost-of-living indexes is just as interesting. Tom Stapleford (EN) explains that the American cost-of-living index (the CPI) was developed by the Bureau of Labor Statistics in response to the expansion of the government's administrative apparatus. The aim was for the index to help rationalize the adjustment of wages and benefit payments. But it was soon also used in private-sector wage bargaining, and then in attempts to settle labor conflicts through "rational" instruments. The CPI is thus nothing like an "objective" number, Stapleford concludes. It is a quantification device shaped by practical problems, bureaucratic conflicts, theory (the replacement of cardinal by ordinal utility) and political agendas, from the justification of wage cuts in 1933 to its use in debates over macroeconomic stabilization.

Michel Armatte (ch. 2, FR) recounts that French cost-of-living indexes were traditionally built by tracking the price of a fixed basket of goods. The list of goods in the basket was therefore the subject of many controversies, all the more so since it played a fundamental role in the division of value added. Wage indexation was sometimes prohibited, and sometimes pegged to a second index computed on a more frugal basket. In reaction, the CGT union ended up creating its own index in 1972, meant to reflect the cost of living of a working-class family of four renting in the Paris region, and to serve as a basis for wage negotiations. While the French index is regularly accused of being understated, the American index was, by contrast, perceived as structurally overstated. The much-contested report of the Boskin Commission, published in 1996 and chaired by Michael Boskin, formerly chairman of George H. W. Bush's Council of Economic Advisers, concluded that failing to account for substitution across goods and outlets, for improvements in product quality, and for the introduction of new products led inflation to be overstated by some 1.3 percentage points, which would have cost the state 135 billion dollars. The report's recommendations amounted to adopting a constant-utility rather than a constant-basket index, which led to the adoption of the hedonic price method developed by, among others, Zvi Griliches (see this article, EN).
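
To see how a fixed basket can overstate the rise in the cost of living when consumers substitute toward cheaper goods, here is a minimal illustration with made-up prices and quantities (two goods only; the numbers are mine, not the Boskin Commission's). The Laspeyres index prices the old basket at new prices, while the Fisher index, one of the "superlative" indexes used to approximate a constant-utility index, partly accounts for substitution.

```python
# Toy substitution-bias illustration: all prices and quantities are invented.
p0 = {"A": 1.0, "B": 1.0}   # base-period prices
q0 = {"A": 10, "B": 10}     # base-period quantities
p1 = {"A": 2.0, "B": 1.1}   # current prices (good A becomes much dearer)
q1 = {"A": 4, "B": 16}      # current quantities (consumers substitute toward B)

goods = p0.keys()
laspeyres = sum(p1[g] * q0[g] for g in goods) / sum(p0[g] * q0[g] for g in goods)
paasche = sum(p1[g] * q1[g] for g in goods) / sum(p0[g] * q1[g] for g in goods)
fisher = (laspeyres * paasche) ** 0.5   # geometric mean of the two

print(f"Laspeyres (fixed basket):  {laspeyres:.3f}")  # overstates the cost-of-living rise
print(f"Paasche (current basket):  {paasche:.3f}")    # understates it
print(f"Fisher (superlative):      {fisher:.3f}")     # closer to a constant-utility index
```

With these invented numbers the fixed-basket index shows a 55% price rise while the superlative index shows about 41%; real-world magnitudes are of course far smaller, but that is the flavor of the bias the Commission was after.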

But if there is one example of quantification whose relevance and social impact are not being questioned, it is the recent work on income inequality. The reasons for the success of Thomas Piketty's book and of his work with Anthony Atkinson, Emmanuel Saez and Gabriel Zucman, as well as of the work of Raj Chetty, Branko Milanovic, Miles Corak and Alan Krueger, have kept columnists busy in recent years. How this research shifted the attention of researchers and of the general public from problems of poverty and growth toward new "stylized facts" about inequality, redistribution and taxation is still poorly understood. But Dan Hirschman's work (EN) helps explain why the top 10%, 1% or 0.1% remained invisible until the 2000s.

The inequality data economists care about are indeed determined by the theories they seek to confirm. In the postwar period, Hirschman explains, macroeconomists were obsessed with the division of value added between capital and labor, while labor economists mostly wanted to know whether differences in human capital between skilled and unskilled workers explained wage differentials. Gender and racial inequality were also salient topics. Although tax data had been exploited in the earliest analyses of income distribution, they no longer attracted attention, so much so that the Bureau of Economic Analysis stopped producing those series, rendering the problem invisible. The other income data, supplied by the Census's Current Population Survey, did not allow one to "see" top incomes either: for confidentiality reasons, they were top-coded, that is, simply recorded as being above a given threshold, with no further detail. In the 1990s, a few economists such as Feenberg and Poterba, or Krugman, spotted an increase in the share going to top incomes, but it took Thomas Piketty and Emmanuel Saez's exploitation of large amounts of tax data to obtain new series on the evolution of the share of income earned by the richest 5% and 1%.[4]
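
A minimal, entirely synthetic sketch of the top-coding point (the income distribution, the cap and the sample size are all invented): once incomes above a survey cap are recorded only as "at the cap," the measured top 1% share collapses, even though nothing has changed in the underlying distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def top1_share(incomes):
    """Share of total income going to the top 1% of observations."""
    k = max(1, len(incomes) // 100)
    top = np.sort(incomes)[-k:]
    return top.sum() / incomes.sum()

# Synthetic incomes with a heavy (Pareto) upper tail; parameters are arbitrary.
n = 100_000
incomes = 20_000 * (1 + rng.pareto(a=1.8, size=n))

cap = 150_000  # hypothetical top-code threshold, as in a confidential survey
top_coded = np.minimum(incomes, cap)

print(f"Top 1% share, full (tax-style) data: {top1_share(incomes):.1%}")
print(f"Top 1% share, top-coded survey data: {top1_share(top_coded):.1%}")
```

In this toy run the censored data put the measured top 1% share at a fraction of its true value: what happens above the cap simply cannot be seen.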

In short, the inability to see the distributive stakes of economic transformations was not simply a matter of "working with aggregate data." Using micro data had been common practice at least since the 1960s and 1970s, a period of progress in the collection and storage of survey data, such as the PSID in the United States, and in panel-data econometrics. Inequality already interested economists, but the questions they asked were structured by theoretical frameworks (human capital theory), by the demands of public authorities (then focused on poverty, or on gender and racial inequality), and by the unintended effects of technical decisions (such as top-coding). Some historians note, moreover, that the current focus on data showing the share of income and wealth held by the 1% now tends, in turn, to render racial and gender inequality invisible.

"Numbers serve to intervene, not merely to represent" (Ted Porter)

As these two examples show, economic statistics reflect theoretical and technical controversies as much as the needs of public and private institutions. "Quantification is a social technology," Porter stresses (EN). Statistics are shaped by and for governmental knowledge and power, Alain Desrosières likewise explained (FR). The drive to quantify the social world is part of broader transformations in modes of government. Governments may thus be led to use numbers politically, as weapons. This is what the contributions gathered in Benchmarking, edited by Isabelle Bruno and Emmanuel Didier, show. They document the runaway growth of indicators and rankings (of hospitals, universities, regions, firms). But this management by numbers has perverse effects, they stress: "such is the strength of benchmarking, and what makes it so distinctive: it does not merely translate reality into statistical terms in order to act, it stimulates that action and channels it toward a 'best' whose definition is removed from the agents' autonomy."


In a second book, Isabelle Bruno and Emmanuel Didier, joined by Julien Prévieux and other authors, propose a form of response to this new management: resisting oppression by numbers involves (1) deconstructing existing statistics and (2) creating new numbers, what they call statactivism. One example, the CGT's production of its own cost-of-living index, was mentioned above. The book dissects many others: proposing alternative well-being indicators, computing ecological footprints, counting suicides to evaluate a company's management, estimating the cost of deporting refugees, setting up an inequality and poverty barometer, the BIP40 (see Olivier Pilmis's review of the two books). These accounts of the institutional and intellectual conditions under which counter-statistics emerge, spread and gain influence could be of interest to those economists who wish to reclaim intellectual, media and public space. Is this the role of activist associations? Is it the role of scientists, and if so, under what conditions? Can the new stylized facts on inequality produced in recent years be read as a form of statactivism? The 99%/1% opposition was, after all, staged concomitantly (though apparently independently) by researchers and by Adbusters activists.

“The lives of travelling facts” (Mary Morgan)

For producing new data is not just a matter of seeing differently: the data still have to be selected, organized and presented so that they form facts. And, if possible, facts that "travel well," that is, Mary Morgan explains, facts able to preserve their integrity and to be fruitful (i.e., useful to academic conversations as well as to public debates). From a volume covering very diverse kinds of facts (economic, biological, physical, etc.), co-edited with Peter Howlett, Morgan draws three conclusions.

First, the importance of "travel companions," which can be "labels, packaging, vehicles or chaperones." That packaging, visual packaging in particular, is a sine qua non of success shows in the care lavished on the graphical representations of recent work on inequality: an animal metaphor or a reference to the classics to strike the imagination, breaks with standard statistical semiology to highlight particular data. Another example is the success of Our World in Data, the website run by Max Roser. Roser believes the media tend to dwell on negative facts. He refers to the work of Johan Galtung, for whom the media's publication frequency (weekly, then real-time) prevents them from identifying and covering positive long-run trends. His project therefore consists in assembling very long-run series on health, education, conflict, living standards and so on, and visualizing them according to a carefully crafted strategy. Much could also be said about the various vehicles economists have used to circulate the economic facts they deemed important, for instance the TV series hosted by John Kenneth Galbraith and Milton Friedman (see this book on economists and their publics).

Second, Morgan notes, the "terrains" over which facts travel and their "boundaries" also matter. These can be disciplinary, professional, historical, geographical or cultural. The reasons why Thomas Piketty's Capital in the Twenty-First Century was received so much better in the United States than in France, for instance, are hard to establish; reflection on well-being and its measurement, by contrast, seems to have found more fertile ground in France. Finally, a fact's ability to travel depends on its intrinsic characteristics, attributes and functions. These are often acquired along the way, and show up in the adjectives used to describe certain facts: "understandable, surprising, reproducible, stubborn, evident, crucial, unbelievable, important, strange." Some of these adjectives denote intrinsic qualities, others affective aspects. All in all, the abrupt entry into a "post-truth" world, and above all the slow decline of trust in their expertise, are forcing economists to think about strategies of defense and counter-attack. For that, they can draw on the strategies deployed by their predecessors and by other kinds of professionals, provided, of course, that they take an interest in their history.

Notes

[1] The opposition between public/aggregated/open/small data and private/disaggregated/large/proprietary data seems to me largely overdone. Just think of the millions of data points generated in countries where the education and/or health systems are public, or of demographic and tax data, to which social scientists only rarely have access. This does not, however, rule out competition between public and private data, or the need for states to think through the trade-off between confidentiality and openness to researchers.

[2] An overview of this work can be found in the reading list put together by François Briatte, Samuel Goëta and Joël Gombin, in the lists that can be gleaned from Emilien Ruiz's website, on the website of the AGLOS project, which aims to federate an international and interdisciplinary network studying the statistical apparatuses of different countries, or among the references listed on the page of the seminar Chiffres privés, chiffres publics coordinated by Béatrice Touchelay. The journal Statistique et Société is devoted to this research. For an overview of the "school" founded by Alain Desrosières, see this tribute volume (in English) or this special issue. French historians of economics, for their part, have focused more on the techniques (from econometrics to experimentation), the theories (choice, games) and the models (macroeconomic ones in particular) produced by economists. That literature is also crucial for understanding the current crisis of expertise, but it is not the subject here.

[3] See Géraldine Thiry's dissertation (FR) or those of Benjamin Mitra-Kahn and Dan Hirschman (EN). See also this bibliography.

[4] The article deals with the American context, and thus leaves aside the possibility that interest in income inequality emerged in Great Britain in the 1960s, notably under the impetus of Anthony Atkinson.


The making of economic facts: a reading list

That Donald Trump's first presidential decisions included gagging the EPA, USDA and NASA, asking his advisors to provide "alternative facts" on inauguration attendance, and questioning the unemployment rate is raising serious concerns among economists. Mark Thoma and former BLS statistician Brent Moulton, among others, fear that the new government may soon challenge the independence of public statistical agencies, drain access to the data economists feed on, attempt to tweak them, or just ax whole censuses and trash data archives, Harper-style.


One reaction to this has been to put more, and better communicated, economic facts online. Such is the purpose of the Econofact website, launched by Tufts economist Michael Klein. "Facts are stubborn," he writes, so he asked "top academic economists" to write memos "covering the topics of manufacturing, currency manipulation, the border wall, World Trade Organization rules, the trade deficit, and charter schools." The purpose, he explains, is "to emphasize that you can choose your own opinions, but you cannot choose your own facts." The move is in line with other attempts by scientists and people working in academia, at NASA, the National Parks or the Merriam-Webster dictionary to uphold and reaffirm facts, in particular on climate change.

Looking at the website, though, I'm left wondering who the intended audience is, and whether this is the most effective way to engage a broad public. As Klein himself notes, citizens seem to crave "information," but within the broader context of a growing distrust of scientific expertise, statistics and facts. All sorts of expertise are affected, but that doesn't mean the responses should be identical. Because in practice, if not in principle, each science has its own way of building "facts," and citizens' disbelief of climate, demographic, sociological or economic facts may have different roots. It is not clear, for instance, that economic statistics are primarily rejected because of perceived manipulation and capture by political interests. What citizens dismiss, rather, is the aggregating process that the production of statistics entails, and economists' habit of discussing averages rather than standard deviations, growth rather than distribution. Statistics historically sprang from scientists' efforts to construct an "average man," but people don't want to be averaged anymore. They want to see themselves in the numbers.

It might be a good time, therefore, to acknowledge that economic statistics proceed from what economists intentionally or unconsciously choose to see, observe, measure, quantify and communicate; to reflect on why domestic production was excluded from GDP calculations and on whether national accounts embody specific visions of government spending; to ponder the fact that income surveys and tax data were not coded and processed in a way that made "the 1%" visible until the 2000s because, until then, economists were concerned with poverty and the consequences of education inequality rather than with top income inequality; to think about the Boskin Commission's 1996 decision to settle for a constant-utility price index to prevent inflation overstatement, and its consequences for the way economists measure the welfare derived from goods consumption (and productivity). And it's not just that the observation, measurement and quantification processes underpinning "economic facts" have constantly been debated and challenged. They have also been politicized, even weaponized, by governments and by profit and non-profit organizations alike. Economic data should be viewed as negotiated and renegotiated compromises rather than as numbers set in stone. This doesn't diminish their reliability, quite the contrary. They can be constructed, yet constructed better than any "alternative" that rogue organizations have in store.

The production of government statistics has evolved considerably over decades, if not centuries. It results from longstanding theoretical and technical disputes as well as from conflicting demands and uses, some of them very different across countries. Even more dramatic have been the changes in the production and uses of financial and business data. Below is a non-exhaustive list of books and articles offering overarching perspectives on economic (and social science) data as well as specific case studies.

Note: Some of these references primarily deal with quantification, others with observation or measurement; some with the making of economic data, others with the production of "facts" (that is, selected, filtered, organized and interpreted data).

General framing

  1. Trust in Numbers: The Pursuit of Objectivity in Science and Public Life by Ted Porter. Fast read, excellent overview. Porter explains that, contrary to the received view, the quantification of social facts was largely driven by administrative demands, by policy makers' willingness to enforce a new kind of "mechanical objectivity." "Quantification is a social technology," he explains. For a longer view, see Mary Poovey's A History of the Modern Fact, which tracks the development of systematic knowledge based on numerical representation back to XVIth-century double-entry bookkeeping.
  2. A collective volume on the history of observation in economics, edited by Harro Maas and Mary Morgan. They provide a broad historical overview in their introduction, and insist on the importance of studying the space in which observation takes place, the status and technicalities of the instruments used, and the process whereby trust between economist-observers and data users is built.
  3. Marcel Boumans has spent a lifetime reflecting on quantification and measurement in economics. In his 2005 book How Economists Model the World into Numbers and an associated article, he defines economic models as "tools for measurement," just as the thermometer is for the physical sciences (the idea is borrowed from the Morgan-Morrison tradition; see also this review of the book by Kevin Hoover). His 2012 book likewise details historical examples of observation conducted outside the laboratory (that is, when economic phenomena cannot be isolated from their environment). His purpose is to use history to frame epistemological evaluations of the economic knowledge produced "in the field." The book discusses, among others, Morgenstern's approach to data and Krantz, Suppes, Luce and Tversky's axiomatic theory of measurement.
  4. French sociologist Alain Desrosières pioneered the sociology of economic quantification through his magnum opus The Politics of Large Numbers: A History of Statistical Reasoning and countless articles. The gist of his comparative analysis of the statistical apparatuses developed in France, Germany, the US and the UK is that statistics are shaped by and for government knowledge and power. His legacy lives on through the work of Isabelle Bruno, Florence Jany-Catrice and Béatrice Touchelay, among others. They have recently edited a book on how recent public management has moved from large numbers to specific indicators and targets.

 


Economic facts: case studies

1. There is a huge literature on the history of national accounting and the debates surrounding the development of GDP statistics. See Dan Hirschman's reading list as well as his dissertation on the topic, and the conference he co-organized last fall with Adam Leeds and Onur Özgöde. See also this post by Diane Coyle.

2. Dan Hirschman's study of economic facts also comprises an analysis of stylized facts in social sciences, and a great account of the technical and contextual reasons why economists couldn't see "the 1%" during most of the postwar period and then developed inequality statistics during the 2000s. It has been covered by Justin Fox here.

3. There are also histories of cost-of-living and price indexes. Historian Tom Stapleford has written a beautiful history of the cost of living in America. He ties the development of the statistics, in particular at the Bureau of Labor Statistics, to the growth of the American bureaucratic administrative system. The CPI was thus set up to help rationalize the adjustment of benefit payments, but it was also used for wage negotiations in the private sector, in an attempt to tame labor conflicts through the use of "rational" tools. The CPI is thus nothing like an "objective statistic," Stapleford argues, but a quantifying device shaped by practical problems, bureaucratic conflicts (the merger of public statistical offices), economic theory (the shift from cardinal to ordinal utility), institutional changes and political agendas (the legitimation of wage cuts in 1933, the need to control war spending, its use in postwar macroeconomic stabilization debates). See also Stapleford's paper on the development of hedonic prices by Zvi Griliches and others. Spencer Banzhaf recounts economists' struggles to make quality-adjusted price indexes fair and accurate.

4. Histories of agricultural and environmental statistics also make for good reads. Emmanuel Didier relates how USDA reporters contributed to the making of US agricultural statistics, and Federico D'Onofrio has written his dissertation on how Italian economists collected agricultural data at the turn of the XXth century through inquiries, statistics and farm surveys. Spencer Banzhaf relates economists' struggles to value life, quantify recreational demand, and measure the value of environmental goods through contingent valuation. A sociological perspective on how to value nature is provided by Marion Fourcade.

On public statistics, see also Jean-Guy Prévost, Adam Tooze on Germany before World War II, and anthropologist Sally Merry's perspective. Zachary Karabell's Leading Indicators: A Short History of the Numbers that Rule Our World is intended for a broader audience.

 Shifting the focus away from public statistics

As I was gathering references for this post, I realized how much historians and sociologists of economics focus on the production of public data and its use in state management and discourse. I don't really buy the idea that governments alone are responsible for the rise in the production of economic data over the last centuries. Nor am I a priori willing to consider economists' growing reliance upon proprietary data produced by IT firms as "unprecedented." Several of the following references on private economic data collection were suggested by Elisabeth Berman, Dan Hirschman, and Will Thomas.

1. Economic data, insurance and the quantification of risk: see How Our Days Became Numbered: Risk and the Rise of the Statistical Individual by Dan Bouk (a history of life insurance and public policy in the XXth century; reviews here and here). For a perspective on the XIXth century, see Sharon Ann Murphy's Investing in Life. Jonathan Levy covers two centuries of financial risk management in Freaks of Fortune.

 


2. For histories of finance data, look at the performativity literature. I especially like this paper by Juan Pablo Pardo-Guerra on how computerization transformed the production of financial data.

3. A related literature deals with the sociology of scoring (for instance Fourcade and Healy's work here and here).

4. Equally relevant to understanding the making of economic facts is the history of business accounting. See Paul Miranti's work, for instance his book with Jonathan Barron Baskin. See also Bruce Carruthers and Wendy Espeland's work on double-entry accounting and economic rationality. Espeland also discusses the relation of accounting to corporate control with Hirsch here, and to accountability and law with Berit Vannebo here (their perspective is discussed by Robert Crum).

 


From physicists to engineers to meds to plumbers: Esther Duflo rediscovering the lost art of economics @ASSA2017

“The economist as plumber”

Yesterday, Esther Duflo gave the American Economic Association's Richard T. Ely lecture (edit: the full video of the lecture is now online). The gist of her talk was that economists should think of themselves more as plumbers who lay the pipes and fix the leaks. Economists should not merely be concerned with what policy to implement, she argued, but should also work out the details and practicalities of implementation. She gave many examples, most of them related to institutional design (though not to mechanism design). For instance, she showed that the transparency, and thus the efficiency, of a rice subsidy program could be improved by providing ID cards to eligible families; but how the authorities deliver the cards matters, she pointed out. Likewise, it is not just how many government workers you hire and how much you pay them that is important: how you recruit them, that is, whether you advertise career prospects or public service, matters too. She was especially concerned with identifying "leaking pipelines," that is, where corruption occurs. At first, it wasn't clear that the new perspective she was advocating was anything more than a repackaging of the idea that economists should pay more attention to institutions, large and small, especially those disregarded by policy makers because of "ideology, ignorance or inertia." That lack of concern was on display in the recent ruthless Indian demonetization, she emphasized.

But it quickly appeared that Duflo wasn't merely aiming at repackaging. She was calling for a more radical "mindset change." She wanted economists to reconceive economic agents, policy makers and bureaucrats as bounded "humans" embedded in wider power structures and cultures, and to realize that coming up with good ideas is not enough to improve their welfare. "Incentive architecture" is thus needed, and economists' expertise is especially relevant because it deals with behavioral, incentive and market equilibrium issues. The recent success of (some) "nudges" has given some salience to the benefits of crafting incentives carefully, for instance by fixing regulations to prevent firms from exploiting loopholes. Plumbing is also beneficial for economics as a science, she continued, as it helps generate counterfactuals by randomizing over entire markets. Plumbing also shines the spotlight on issues theorists had previously ignored, like how much the default scenario matters. Economics as plumbing requires a more pragmatic and experimental mindset, she concluded, as it requires economists to make decisions without full knowledge of the system to be tinkered with ("tinkering" was one of the keywords of the speech).


Scientists, engineers, meds: the shifting identity of economists

Though Duflo was very cautious in her talk, making her plumbing analogy a call for humility, her claims are more transformative than they sound. Ely lectures are, like presidential addresses, a primary vehicle for addressing the state of the discipline, for reflecting on economists' proper objects of study, methods and social role. Jacob Viner had started the series by reflecting on "The Economist in History" in 1962. These lectures are, in fact, often attempts to shape and reshape economists' identity. My take is that Duflo's talk is no exception, and that putting her attempt to alter economists' identity by comparing them to other professionals into (sketchy) historical perspective might highlight her goals.

These comparisons are tricky material for historians, because they have often been more than just metaphors. What economists have borrowed from other sciences, disciplines and practices includes images but also questions, objects of study, tools, and epistemological and social relevance criteria. In the XIXth century, political economy was considered a positive science, but also a branch of ethics and an art. "It is a further question whether or not we should also recognize, as included under political economy in the widest sense – but distinct from the positive science – (a) a branch of ethics which may be called the ethics of political economy, and which seeks to determine economic ideals; and (b) an art of political economy, which seeks to formulate economic precepts," John Neville Keynes wrote in 1890. Keynes argued that the art of economics entailed a different methodology from the positive science, and that failing to distinguish the two would impoverish economic expertise. "One hundred years later, he has turned out to be clairvoyant," David Colander diagnosed in a 1992 essay lamenting "The Lost Art of Economics."

The last 100 years can indeed be construed as an irresistible march toward making economics a science, notably by emulating physics. Philip Mirowski showed that this physics envy included borrowing methodological claims (a Popperian deductive approach, economics being concerned with identifying laws), metaphors ("energy," "body," "movement," "values") and mathematical models and tools (the law of energy conversion, then thermodynamics). The physics comparison was a rhetorical weapon, wielded by NBER's Wesley Mitchell in the 1940s during the discussions following Vannevar Bush's Science: The Endless Frontier report (Bush and his fellow physicists advocated federal funding for "sciences." Alas, the resulting National Science Foundation, established in 1950, did not cover any social science). But the pervasive reference to physics also reflected genuine emulation, as seen in Roger Backhouse's forthcoming biography of Paul Samuelson. As a young Chicago sophomore in the 1930s, Samuelson decided to start a diary to reflect on Frank Knight, Jacob Viner and his other teachers' ideas, Backhouse relates. Its first sentence set the tone of Samuelson's (hence the whole profession's) methodological vision:

“Science is essentially the establishing of Cause and Effect relationships. This knowledge can be utilized in controlling causes to produce desired effects. It is the realm of philosophy to decide what these objectives shall be, and that of science to achieve those decided upon.”

After World War Two, Mirowski argued in a follow-up volume, economists gradually shifted to an information science metaphor ("machine dreams"). It was however another comparison that gained currency in that period: the economist as engineer. Of course, some economists have been trained as engineers for more than a century. Those French economists-engineers trained at Polytechnique and in the Grands Corps d'Etat have, from Jules Dupuit to Marcel Boiteux, applied their tools to rationalize public management and to set prices for public goods and utilities. Tool exchanges between operational research and economics have been numerous in the postwar period, especially in those universities with a strong engineering tradition (MIT, Carnegie). Many of the economists who developed lab experiments and mechanism design, like Vernon Smith or Steve Rassenti, were trained as engineers. One of them, Al Roth, graduated from Stanford's Operational Research program before applying his game-theoretic tools to economic issues. Yet, he said, he felt he was really doing the same science, and he later wrote the famous paper that stabilized market designers' economic identity as engineers and scientists: "if we want market design to be better informed and more reliable in the future, we need to promote a scientific literature of design economics. Today this literature is in its infancy. My guess is that if we nurture it to maturity, its relations with current economics will be something like the relationship of engineering and physics, or of medicine and biology," he wrote.

The engineering metaphor was taken up by many other economists, including Bob Shiller, in his defense of his shared Nobel Prize: "economic phenomena do not have the same intrinsic fascination for us as the internal resonance of the atom […]. We judge economics by what it can produce. As such, economics is rather more like engineering than physics, more practical than spiritual," he explained, before claiming that engineering should have its Nobel too. That Shiller and Fama held opposed views of how financial markets work came under such fire that Raj Chetty also weighed in. Yes, economics is a science, he hammered, but one akin to medicine. "The kind of empirical work in economics might be compared to the "micro" advances in medicine (like research on therapies for heart disease) that have contributed enormously to increasing longevity and quality of life, even as the "macro" questions of the determinants of health remain contested," he explained.

These engineering and medicine comparisons served various purposes. In the wake of the financial crisis, Shiller and Chetty used them to argue that a discipline can yield scientific and useful knowledge in spite of its approximations, limits, and possible mistakes. But considering medicine as a repository of scientific methods to tap was also on Alan Krueger's and Esther Duflo's agenda. Krueger interestingly relates how he used to read the New England Journal of Medicine. In her TED talk laying out her randomized experiments methodology, Duflo was more straightforward: "these economics I'm proposing, it's like 20th century medicine. It's a slow, deliberative process of discovery. There is no miracle cure, but modern medicine is saving millions of lives every year, and we can do the same thing." And in a recent book, French economists Pierre Cahuc and André Zylberberg created a stir by branding as "negationism" all the economic work that did not rely on the kind of careful causal identification medicine proposed.

 

Rediscovering the “art” of economics?

In her Ely lecture, Duflo is thus moving away from medicine envy. To what extent her "economist-as-plumber" comparison is merely metaphorical is unclear, but she carried it throughout her talk, one rife with pipes, taps, leaks, flowcharts, and verbs like fixing and tinkering. But the key question is not so much about the nature of the comparison as about its purpose. How far does she want economists' identity to shift? At the beginning of her talk, she offered a reassuring sense of continuity. Scientists provide the general framework that guides the design of policies and markets, she clarified. Roth's engineer takes this framework and confronts it with real situations, but "he knows what the important features of the environment are and tries to design the machine to address them." "The plumber goes one step further," she continued: "she fits the machine in the real world, carefully watches out what happens and tinkers as needed."

But the plumbing epistemology cannot be one whereby a preexisting scientific framework is simply applied, her later arguments reveal. Though she is not explicit about breaking from the rationality assumption, plumbers consider economic agents as neither endowed with perfect information, foresight and computing abilities nor fully rational in their decisions. Though her talk leaves the science of economics intact because it is distinct from plumbing, the latter in fact has the ability to transform what the theorist focuses on and what phenomena he needs to explain. What she might implicitly suggest is that economists should be less deductive and more inductive. The economist-as-plumber cannot wait for scientific knowledge to be mature and complete, she eventually warned. There is a good deal of guessing, trial and error and tinkering that economists should be willing to accept. In arguing this, Duflo is exactly in line with Paul Romer's recent statement that economists should emulate those surgeons who cannot afford to wait for clean causal identification to make life-and-death decisions. "Delay is costly. Impatience is a virtue," he concluded. The plumber comparison also echoes Greg Mankiw's distinction between the macroeconomist-as-scientist and the macroeconomist-as-engineer. Duflo's Ely lecture might be a sign, then, that economists have recently rediscovered the lost art of economics.

Posted in Uncategorized | Tagged , , , , , , , | 8 Comments

Remembering Tony Atkinson as the architect of modern public economics

The sad news that Sir Anthony Atkinson has passed away was released today. Tributes and reminiscences will no doubt flow this week, focusing on his extraordinary humility and generosity, alongside his lifelong dedication to the study of inequality (whose essence is perfectly captured in Piketty's review of his last book). I would like to emphasize another aspect of his biography: his key role in the development of public economics from the 1960s onward. His contribution was intellectual (in particular his papers on optimal taxation), educational (he coauthored the most influential textbook of the 1980s and 1990s), and institutional (he was the founding editor of the Journal of Public Economics).

Atkinson's early interest in poverty led him to Cambridge at a time when the study of government intervention, then called public finance, was on the verge of a major overhaul. In 1966, as he received his BA, he flew to MIT, where he met two other key protagonists of the transformation of public economics: Joseph Stiglitz, then a graduate student, and Peter Diamond. The latter had just returned from Berkeley, then immersed in the application of duality techniques to microeconomic issues. In the public economics course he had set up with Cary Brown, Diamond was thus teaching duality and how to measure deadweight burden using expenditure functions. Atkinson also took Robert Solow's seminal growth theory course. Back in Cambridge, he pursued his interest in mathematical economics and growth theory, which he blended with a concern for development, the topic James Mirrlees was lecturing on. Frank Hahn was his director of studies. Atkinson would also cite Jan Graaff's Theoretical Welfare Economics (1957) and James Meade's Trade and Welfare (1955) as nurturing his early interest in welfare issues. These various influences led him to write a dissertation on the reform of social security, poverty and inequality.

In the Fall of 1967, Atkinson began teaching public economics with Stiglitz, who was then a research fellow at Cambridge. His first lecture opened straight with the two theorems of welfare economics, before going into externalities, indivisibilities, measurement of waste, optimal tax structure, government production, distribution and non-convexities. He combined theoretical exposition with figures, though the dearth of data on social security, taxes and government expenditures was a serious hindrance to the growing econometric research aimed at studying tax incidence and how households and firms reacted to taxation (it was only in the late 1970s, he remembered, that tax data became available for research). It was in these formative years that Diamond and Mirrlees circulated a model of differentiated commodity taxation building on previous work by Ramsey. Their results were immediately integrated and discussed by Atkinson in his lectures.

In 1971, he moved to the recently founded Essex University. He and Stiglitz began working on the interaction between direct and indirect taxation in the choice of an optimal tax structure. They also aimed at reformulating Diamond and Mirrlees's taxation efficiency results in a way more appealing to those British public finance economists trained in the Pigovian surplus tradition. Their most famous result, published in 1976, stated that, assuming the utility function is separable between labor and consumption and non-linear income taxation is possible, the government can rely solely on direct labor income taxation (they interpreted capital income tax as a tax on future consumption, one that would distort savings). Their conclusion that, under specific conditions, no indirect tax need be employed spawned decades of debate on whether and under which circumstances capital income tax should be raised.
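For readers who want the 1976 result in symbols, here is a minimal sketch in generic textbook notation (the notation is mine, not Atkinson and Stiglitz's, and it compresses a much subtler argument):

```latex
% Households share preferences that are weakly separable between consumption goods and labor,
\[
U^h = u\big(v(c^h_1,\dots,c^h_n),\, L^h\big),
\]
% so the composition of consumption depends only on after-tax income, not on labor supply itself.
% If the government can levy an arbitrary non-linear tax T(wL) on labor income, then the optimum
% can be decentralized with uniform commodity taxes, which can be normalized to zero:
\[
t_1 = t_2 = \dots = t_n = 0 .
\]
% Reading capital income taxation as a differential tax on future consumption, the same
% separability condition implies that taxing savings is unnecessary at the optimum.
```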

At the time Atkinson accepted Essex's offer, the science publishing business was booming, and economics had expanded so much that there was a growing need for specialization. North-Holland sounded him out about establishing a field journal. Atkinson agreed to become editor, and immediately asked Martin Feldstein, Johansen and Stiglitz to join him. They were to handle public expenditures and econometric studies of taxation, planning & fiscal policy, and the pure theory of taxation and public expenditure submissions, respectively. Masahiko Aoki, Guy Arvidsson, Cary Brown, James Buchanan, Peter Diamond, Serge-Christophe Kolm, Julius Margolis, P. Mieszkowski, James Mirrlees, Richard Musgrave, Paul Samuelson, Eytan Sheshinski, Ralph Turvey, among others, also agreed to serve on the board. The unusually high number of associate editors was probably a reflection of the strength and distinctive features of national traditions in the economic analysis of the state, as well as of Atkinson's characteristic desire to combine theoretical and applied perspectives. He was also eager not to cannibalize existing journals on fiscal policy (like the National Tax Journal) or on the regulation of public utilities (like the Bell Journal). He nevertheless sensed his new journal would cover "areas not traditionally included in public finance, and would be more theoretically oriented." He also wanted to cover "political party decision-making particularly when it is focused on economic decision or when it is based on economic models." The first issue of the Journal of Public Economics appeared in 1972, with contributions by Johansen, Buchanan and Goetz, Feldstein, Mieszkowski, King, Sandmo, Hellowell, Bliss, Ursula Hicks, and a first paper on the structure of indirect taxation by Atkinson and Stiglitz themselves. He also organized conferences in Essex, Siena and elsewhere, which led to special issues of the journal and further community building.

At Essex, Atkinson also set up a graduate course in public economics. Back then, the two books graduate students could sweat over were Musgrave's The Theory of Public Finance (1959) and Leif Johansen's Public Economics (1965). A.R. Prest's Public Finance was fashionable in the UK, while Richard Goode's The Personal Income Tax (1965) and Haveman and Margolis's Public Expenditures and Policy Analysis (1970) were largely used in the US. Musgrave and Johansen both devoted large parts of their books to macro stabilization policy, and neither included the new ways of modeling optimal taxation in a general equilibrium setting developed in the late 1960s. Atkinson and Stiglitz therefore decided to turn their lectures into a textbook. It took them almost a decade to do so, but the structure of the book changed little from their late 1960s course. Here is how Atkinson described their division of labor:

Joe wrote more on the theory of the firm, risk-taking and the taxation of investment; I drafted the chapters on general equilibrium. The chapters on optimal taxation grew out of our joint articles, but Joe led on the optimal provision of local public goods (a subject with which I still struggle). But we re-wrote each other’s work and drafts crossed and re-crossed the Atlantic. Until recently I kept on my office wall one page from Joe’s amendments to my chapter, as documentary evidence that mine was not the worst hand-writing in the world!

The introductory chapter of the Lectures on Public Economics, eventually published in 1980 by McGraw Hill, makes for a stunning read. The standard of exposition they set for the field has since changed so little that it is difficult to imagine how dramatically different most courses in public finance were in those years. Atkinson and Stiglitz presented the reader with the two theorems of welfare economics. They then listed all sorts of reasons why Pareto-efficient allocations of resources might not be reached: distributional issues, monopolies, missing markets, imperfect information, under-utilization of resources, externalities, public goods, merit wants. These "failures of the market" make room for government intervention, they concluded. The book was divided into two parts: a positive analysis of policy, with an overwhelming focus on taxation, and a normative analysis with, again, a focus on optimal taxation, and chapters on pricing and public goods. Macroeconomic stabilization was excluded from the book, and two middle chapters summarized recent research on public choice and Marxist theories of the state (voting, bureaucracies, power, interest groups) and on welfare economics.


Though considered very technical by several reviewers, the textbook became a must-read. By the mid-1980s, it had become the most-cited item in the huge body of literature related to public economics and taxation identified in the Claveau-Gingras bibliometric study of the Web of Science economics database, and it was still among the top-10 most-cited items throughout the 1990s. The original edition was reprinted last year. In their new introduction, Atkinson and Stiglitz explain that, should they rewrite the textbook today, they would make only a few additions, including global, behavioral and empirical perspectives, and a deeper coverage of inequality and redistribution.

.

In 2000, Atkinson began his tribute to James Meade by writing:

James Meade saw the role of the economist as that of helping design a better society and international order. He chose to study economics in the late 1920s because he believed it ought to be possible to end the plague of mass unemployment, and his last book in 1995 [note: the year of his death] was on the subject of Full Employment Regained?

He concluded with the following quote:

Above all, James had a positive vision for the future. He was, in his own words, 'an inveterate explorer of improvements in economic arrangements'… he wrote that 'I implore any of my fellow countrymen who read this book not to object: "It can't be done."' He was ultimately concerned with what could be done to make our world a better place.

Replace “unemployment” by “inequality an poverty,” and you have Tony Atkinson’s life. May our generation of economists live up to his expectations.

Note: the material for this post is taken from mail exchanges, interviews and personal archives Tony Atkinson was generous enough to provide.

Posted in Uncategorized | Tagged , | 5 Comments

What’s next in history of economics? A wish list

I just defended my habilitation, a rite de passage meant to evaluate an academic's ability to develop a research program, to mentor graduate students and to hold a professorship. It's a long process which begins with writing a thesis and ends with answering two hours of questions from a peer jury. Part of the discussion looks like an unbounded meditation on the intellectual challenges ahead, with questions on methodology and future topics, while the other part is about navigating the financial and institutional constraints besetting the field, constraints that are especially critical in the history of economics. The whole thing involves a good deal of reflexivity and an insane amount of red tape.

Below is the list of topics I wrote down when preparing the defense, those I wish to see historians of recent economics research on in the years to come. It is more a wish list than an actual research program. Some of these topics are difficult to approach absent adequate archival data. And as a historian of science, I know all too well that research programs never develop as planned, and are largely shaped by the data and coauthors found along the way. As it reflects my field of expertise, it's all postwar, mostly mainstream and mostly American.

  1. History of “foundational” papers and books

What needs attention here is the making, reception and dissemination of these foundational texts. I am especially interested in learning how these works fit into their authors' larger programs, whether they owed their success to their conclusions or to their modeling styles, and how far subsequent uses of these models diverged from their authors' original intentions.


Georges Akerlof’s Lemons paper (1970)
: in the past years, it has gained a special status. Its 25,000 citations makes it one of the most cited papers in the history of economics, yet its citation pattern requires further examination (WoS data). Its three rejections are often advanced to advocate perseverance, as well as to claim that thorough changes in science publishing are necessary to foster pathbreaking knowledge. A comparison with another outlier, Coase’s The Problem of Social Cost, might be worth considering. Yet Akerlof had not made his archives public yet.

[Figure: Web of Science citation counts for Akerlof's Lemons paper]

Robert Axelrod’s The Evolution of Cooperation (1984): game theory+ computers. Transformed biology, philosophy and economics among other sciences.

Thomas Schelling’s spatial segregation paper (1971): influenced the development of Agent-Based Modeling and economists’ view of how game-theoretic models should be built and for what purpose. Interesting methodological investigation and interview have been conducted by Emrat Aydinonat, but more is needed.

Leonid Hurwicz’s papers on information efficiency (1969) and mechanisms for resource allocation (1973): because mechanism design (see below)

.

2) History of fields (all of them, especially applied micro)

Most economists today work in applied fields, yet their history remains largely unknown, even after a 4-year research project designed to provide contexts and framing for their study. Environmental, labor, health, transportation, and international economics are covered to some extent; the history of development economics is the topic of the next HOPE conference. I have a specific interest in urban, public and agricultural economics and in finance. But we still lack the skills to provide full narrative arcs of how each of these fields developed and was transformed in the postwar period.

.

3) History of tools and approaches

Duality and applied micro: our ability to write the history of applied fields is hindered by the lack of historical perspective on some of the tools applied microeconomists use. There are histories of demand curves, utility functions and Cobb-Douglas functions, but little on the adoption of duality techniques, a key element in the development of postwar applied micro. This requires diving into what was researched and taught at Berkeley in the 50s and 60s, and understanding Gorman. The same goes for microeconometrics.
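For readers unfamiliar with what "duality techniques" covers, here is a schematic reminder in standard textbook notation (not taken from any particular historical source), which also connects back to the deadweight-burden calculations mentioned in the Atkinson post above:

```latex
% The expenditure function is the value of the dual of the consumer's problem,
\[
e(p,u) = \min_{x}\ \{\, p \cdot x \;:\; u(x) \ge u \,\},
\]
% and Shephard's lemma recovers Hicksian (compensated) demands by simple differentiation,
\[
h_i(p,u) = \frac{\partial e(p,u)}{\partial p_i},
\]
% which is what makes applied welfare measurement tractable: one common expression for the
% deadweight burden of a tax that moves prices from p^0 to p^1, holding utility at u^0 and
% raising revenue R, is
\[
DWL = e(p^1,u^0) - e(p^0,u^0) - R .
\]
```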

Mechanism design: an econ best-seller for policy-makers (the poster child for "economics-that-changes-the-world-and-saves-money-and-even-lives," including kidney exchange and the FCC auctions) and IT firms alike, and consequently re-shaping the internal and public image of economics. Some past and forthcoming histories by Phil Mirowski, Eddie Nik-Khah and Kyu Sang Lee, but nothing systematic.

 .

4) History of databases and software

-the Penn World Tables

-the Panel Study of Income Dynamics

Dynare: the debate on DSGE models is stalling, though a sociological layer has recently been added. While their origins have been extensively researched, their dissemination and various uses are less so. Studying the development and spread of Dynare would provide a "practitioners'" rather than "big-men-with-big-ideas" perspective.


.

5) History of economists

William S. Vickrey: understudied key protagonist of the development of postwar applied micro. Especially interesting given his broad interests: public economics, taxation, auctions, incentives, information, game theory, macro and more.

Peter Diamond: best remedy to the spreading notion that econ theory is dying. Diamond is the quintessential “applied theorist,” a term he often used to describe himself. Has reshaped the theoretical framework in various fields ranging from public economics (optimal taxation) to labor economics (search and matching). Probably one of the most influential economists of the 70s/80s

Julius Margolis, for the same reasons I studied Marschak

-Applied economists, in particular Irma Adelman, Zvi Griliches, Ted Schultz, Marc Nerlove and Nancy Ruggles. And no, this list is not "artificially" loaded with women. They were all over the place in empirical economics from the 40s to the 70s. This might even be a reason why the applied tradition in economics has largely been forgotten. The share of women studying applied econ may have diminished as the prestige of the field grew in the 1980s and 1990s. A similar trend has been documented by historians of computer science. One way to study this trend is to look at the history of the AEA's CSWEP.

– the inscrutable Kenneth Arrow. No justification needed here. What can be found in his archives is not enough to nail him down. The only option left, then, might be to construct a "shadow archive" by gathering the material found in the archives of his colleagues. But that would be costly.

Economists as editors: Bernard Haley, Al Rees, Hugo Sonnenschein, Orley Ashenfelter, and others. Because they had unequaled agency in shaping what counts as good and fashionable economics.


.

6) History of places

-the NBER under Feldstein

Northwestern in the 1970s: mechanism design + experimental economics

Minnesota in the 1980s: a single corridor where Prescott, Sargent and Sims researched alongside each other (the former is said to have hung a "don't regress, progress" sign on his door to tease the latter), interacted in seminars and sat on the dissertation committees of Lars Hansen, Larry Christiano and Martin Eichenbaum.

-the other Chicago (that of Harberger, Schultz, Griliches and Chow)

-any government bureau

Berkeley in the 1950s (see above)

(also economics at Microsoft, Goldman Sachs and Google, but I don’t think this is going to happen)

Posted in Uncategorized | 5 Comments

Is there really an empirical turn in economics?

Full paper here. Reposted from the INET blog. French translation here.

The idea that economics has recently gone through an empirical turn – that it went from theory to data – is all over the place. Economists have been trumpeting this transformation on their blogs in the last few years, more rarely qualifying it. It is now showing up in economists' publications, not only in articles by those who claim to have been the architects of a "credibility revolution," but in articles by others as well. This narrative is also finding its way into historical works, for instance Avner Offer and Gabriel Söderberg's new book on the history of the Nobel Prize in economics.

The structure of the argument is often the same. The figure below, taken from a paper by Dan Hamermesh, is used to argue that there has been a growth in empirical research and a decline of theory in the past decades.

[Figure from Hamermesh's paper: shares of theoretical and empirical articles in top journals over time]

A short explanation is then offered: this transformation has been enabled by computerization + new, more abundant and better data. On this basis, the “death of theory” or at least a “paradigm shift” is announced. The causality is straight, maybe a little too straight. My goal, here, is not to deny that a sea change in the way data are produced and treated by economists has been taking place. It is to argue that the transformation has been oversimplified and mischaracterized.

What is this transformation we are talking about? An applied rather than an empirical turn

What does Hamermesh’s figure show exactly? Not that there is more empirical work in economics today than 30 or 50 ago. It indicates that there is more empirical work in top-5 journals. That is, that empirical work has become more prestigious. Of course, the quantity of empirical work has probably boomed as well, if only because the amount of time needed to input data and run a regression has gone from a week in the 1950s, 8 hours in the 70s to a blink today. But more precise measurement of both the scope of the increase and an identification of the key period(s?) in which the trend accelerated are necessary. And this requires supplementing the bibliometric work done on databases like Econlit, WoS or Jstor with hand-scrapped data on other types of economic writings.

[Figure: Cobb and Douglas's 1927 regression]
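As a reminder of what that early exercise involved, the production function and its log-linear estimating form can be written as follows (modern notation, mine rather than Cobb and Douglas's):

```latex
% Cobb-Douglas production function and the regression they ran on indexes of U.S. manufacturing
% output, labor and capital (they additionally imposed constant returns, \beta = 1 - \alpha):
\[
Y_t = A\, L_t^{\alpha} K_t^{\beta}
\qquad\Longrightarrow\qquad
\ln Y_t = \ln A + \alpha \ln L_t + \beta \ln K_t + \varepsilon_t ,
\]
% a least-squares fit that required weeks of hand and mechanical computation in the 1920s
% and is a one-line command today.
```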

For economics has in fact nurtured a strong tradition of empirical work since at least the beginning of the XXth century. Much of it has been published outside the major academic journals, and has thus been forgotten (allowing some 21st-century economists to claim to be the first to have ever estimated a demand curve empirically). Whole fields, like agricultural economics, business cycles (remember Mitchell?), labor economics, national accounting, then input-output analysis from the 1930s to the 1960s, then cost-benefit policy evaluation from the 1960s onward, and finance throughout, have been built not only upon new tools and theories, but also upon large projects aimed at gathering, recording and making sense of data. But many of these databases and the associated research were either proprietary (when funded by finance firms, commercial businesses, insurance companies, the military or governmental agencies), or published in reports, books and outlets such as central bank journals and the Brookings Papers.

Neither does Hamermesh’s chart show that theory is dying. Making such a case requires tracing the growth of the number of empirical papers where a theoretical framework is altogether absent. Hamermesh and Jeff Biddle have recently done so. They highlight an interesting trend. In the 1970s, all microeconomic empirical papers exhibited a theoretical framework, but there has since been a limited but significant resurgence of a-theoretical works in the 2000s. But it hasn’t yet matched the proportion of a-theoretical papers published in the 1950s. All in all, it seems that 1) theory dominating empirical work has been the exception in the history of economics rather than the rule 2) there is currently a reequilibration of theoretical and empirical work. But an emancipation of the latter? Not yet. And 3) what is dying, rather, is exclusively theoretical papers. In the conclusion of Economic Rules, Dani Rodrik notes that

these days, it is virtually impossible to publish in top journals [in some fields] without including some serious empirical analysis […] The standards of the profession now require much greater attention to the quality of data, to causal inference from evidence and a variety of statistical pitfalls. All in all, this empirical turn has been good for the profession.

The death of theory-only papers was, in fact, pronounced decades ago, when John Pencavel revised the JEL codes in 1988 and decided to remove the "theory" category. He replaced it with micro, macro and tools categories, explaining that "good research in economics is a blend of theory and empirical work." And indeed, another word that has gained wide currency in the past decades is "applied theory." The term denotes that theoretical models are conceived in relation to specific issues (public, labor, urban, environmental), sometimes in relation to specific empirical validation techniques. It seems, thus, that economics has not really gone "from theory to data," but has rather experienced a profound redefinition of the relationship of theoretical to empirical work. And there is yet another aspect of this transformation. Back in the 1980s, the JEL classifiers did not merely merge theory with empirical categories. They also added policy work, on the assumption that most of what economists produced was ultimately policy oriented. This is why the transformation of economics in the last decades is better characterized as an "applied turn" than as an "empirical turn."

"Applied" has indeed become a recurring word in John Bates Clark medal citations. "Roland Fryer is an influential applied microeconomist," the 2015 citation begins. Matt Gentzkow (2014) is a leader "in the new generation of microeconomists applying economic methods." Raj Chetty (2013) is "arguably the best applied microeconomist of his generation." Amy Finkelstein (2012) is "one of the most accomplished applied microeconomists of her generation." And so forth. The citations of all laureates since Susan Athey ("an applied theorist") have made extensive use of the "applied" wording, and have made clear that the medal was awarded not merely for "empirical" work, new identification strategies or the like, but for path-breaking combinations of new theoretical insights with new empirical methodologies, with the aim of shedding light on policy issues. Even the 2016 citation for Yuliy Sannikov, the most theoretical medal in a long time, emphasizes that his work "had substantial impact on applied theory."

Is it really new in the history of economics?

The timing of the transformation is difficult to grasp. Some writers argue that the empirical turn began after the war; others see a watershed in the early 1980s, with the introduction of the PC. Others mention the mid-1990s, with the rise of quasi-experimental techniques and the seeds of the "credibility revolution," or the 2010s, with the boom in administrative and real-time recorded microeconomic business data, the interbreeding of econometrics with machine learning, and economists deserting academia to work at Google, Amazon and other IT firms.

Dating when economics became an applied science might be a meaningless task. The pre-war period is characterized by a lack of a pecking order between theory and applied work. The rise of a theoretical "core" in economics between the 1940s and the 1970s is undeniable, but it encountered fierce resistance in the profession. Well-known artifacts of this tension include the Measurement without Theory controversy and Oskar Morgenstern's attempt to make empirical work a compulsory criterion for being nominated as a fellow of the Econometric Society. And the development of new empirical techniques in micro (panel data, lab experiments, field experiments) has been slow yet constant.

Again, a useful proxy to track the transformation in the prestige hierarchy of the discipline is the John Bates Clark medal, as it signals what economists currently see as the most promising research agenda. The first 7 John Bates Clark medalists were perceived as so theoretically oriented that part of the profession requested the establishment of a second medal, named after Mitchell, to reward empirical, field and policy-oriented work. Roger Backhouse and I have shown that such contributions have been increasingly singled out from the mid-1960s onward. Citations for Zvi Griliches (1965), Marc Nerlove (1969), Dale Jorgenson (1971), Franklin Fisher (1973) and Martin Feldstein (1977) all emphasized contributions to empirical economics, and reinterpreted their work as "applied." Feldstein, Fisher and, later, Jerry Hausman (1985) were each viewed as an "applied econometrician." It was the mid-1990s medals – Lawrence Summers, David Card, Kevin Murphy – that emphasized empirical work more clearly. Summers' citation notes a "remarkable resurgence of empirical economics over the past decade [which has] restored the primacy of actual economies over abstract models in much of economic thinking." And as mentioned already, the last 8 medals have systematically emphasized efforts to weave together new theoretical insights, empirical techniques and policy thinking.

All in all, not only is "applied" a more appropriate label than "empirical," but "turn" might be a bit overdone a term for a transformation that seems made up of overlapping stages of various intensities and qualities. But what caused this re-equilibration and possible emancipation of applied work in the last decades? As befits economic stories, there are supply and demand factors.

 

Supply side explanations: new techniques, new data, computerization

A first explanation for the applied turn in economics is the rise of new and diverse techniques to confront models with data. Amidst a serious confidence crisis, the new macroeconometric techniques (VARs, Bayesian estimation, calibration) developed in the 1970s were spread alongside the new models they were supposed to estimate (by Sims, Kydland, Prescott, Sargent and others). The development of laboratory experiments contributed to the redefinition of the relationship between theory and data in microeconomics. Says Svorenčík (p15):

By creating data that were specifically produced to satisfy conditions set by theory in controlled environments that were capable of being reproduced and repeated, [experimentalists] sought […] to turn experimental data into a trustworthy partner of economic theory. This was in no sense a surrender of data to the needs of theory. The goal was to elevate data from their denigrated position, acquired in postwar economics, and put them on the same footing as theory.

The rise of quasi-experimental techniques, including natural and randomized controlled experiments, was also aimed at achieving a re-equilibration with (some would say an emancipation from) theory. Whether it actually enabled economists to reclaim inductive methods is fiercely debated. Other techniques blurred the demarcation between theory and applied work by constructing real-world economic objects rather than studying them. That was the case of mechanism design. Blurred frontiers also resulted from the growing reliance upon simulations such as agent-based modeling, in which algorithms stand for theories or applications, both or neither.

A second, related explanation is the "data revolution." Though the recent explosion of real-time, large-scale, multi-variable digital databases is mind-boggling and has the allure of a revolution, the availability of economic data has also evolved constantly since the Second World War. There is a large literature on the making of public statistics, such as national accounting or the cost-of-living indexes produced by the BLS, and new microeconomic surveys were started in the 1960s (the Panel Study of Income Dynamics) and the 1970s (the National Longitudinal Survey). Additionally, administrative databases were increasingly opened for research. The availability of tax data, for instance, transformed public economics. In his 1964 AEA presidential address, George Stigler was thus claiming:

The age of quantification is now full upon us. We are armed with a bulging arsenal of techniques of quantitative analysis, and of a power – as compared to untrained common sense – comparable to the displacement of archers by cannon […] The desire to measure economic phenomena is now in the ascendent […] It is a scientific revolution of the very first magnitude.

A decade later, technological advances allowed a redefinition of the information architecture of financial markets, and asset prices, as well as a range of business data (credit card information, etc.), could be recorded in real time. The development of digital markets eventually generated new large databases on a wide range of microeconomic variables.

 Rather than a revolution in the 80s, 90s or 2010s, the history of economics therefore seems one of constant adjustment to new types of data. The historical record belies Liran Einav and Jonathan Levin’s statement that “even 15 or 20 years ago, interesting and unstudied data sets were a scarce resource.” In a 1970 book on the state of economics edited by Nancy Ruggles, Dale Jorgenson explained that

the database for econometric research is expanding much more rapidly than econometric research itself. National accounts and interindustry transactions data are now available for a large number of countries. Survey data on all aspects of economic behavior are gradually becoming incorporated into regular economic reporting. Censuses of economic activity are becoming more frequent and more detailed. Financial data related to securities market are increasing in reliability, scope and availability.

And Guy Orcutt, the architect of economic simulation, explained that the issue of the day was that "the enormous body of data to work with" was "inappropriate" for scientific use, because the economist was not controlling data collection. In a very different qualitative and quantitative situation, they were making the same statements and issuing the same complaints as today's economists.

[Photo: IBM 1620 (1959)]

This dramatic improvement in data collection and storage has been enabled by the improvement in computer technology. Usually seen as the single most important factor behind the applied turn, the computer has affected much more than just economic data. It has enabled the implementation of validation techniques economists could only dream of in previous decades. But as with economic data, the end of history has been repeatedly pronounced: in the 1940s, Wassily Leontief predicted that the ENIAC could soon tell how much public work was needed to cure a depression. In the 1960s, econometrician Daniel Suits wrote that the IBM 1620 enabled the estimation of models of "indefinite size." In the 1970s, two RAND researchers explained that computers had provided a "bridge" between "formal theory" and "databases." And in the late 1980s, Jerome Friedman claimed that statisticians could substitute computer power for unverifiable assumptions. If a revolution is under way, then, it's the fifth in fifty years.

But the problem is not only with replacing "revolutions" with more continuous processes. The computer argument seems more deeply flawed. First, because the two most computer-intensive techniques of the 1970s, Computable General Equilibrium modeling and large-scale Keynesian macroeconometrics, were marginalized at the very moment they were finally getting the equipment needed to run their models quickly and with fewer restrictions. Fights erupted over the proper way to solve CGE models (estimation or calibration), Marianne Johnson and Charles Ballard explain, and the models were seen as esoteric "black boxes." Macroeconometric models were swept away from academia by the Lucas critique, and found refuge in the forecasting business. And what has become the most fashionable approach to empirical work three decades later, Randomized Controlled Trials, merely requires the kind of means-and-variances calculations that could have been performed on 1970s machines. Better computers are therefore neither sufficient nor necessary for an empirical approach to become dominant.
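To make the computational claim concrete, here is a minimal sketch of the core RCT calculation, a difference in means with its standard error; the data and names are made up for illustration:

```python
# A minimal sketch of the arithmetic behind a randomized controlled trial estimate:
# a difference in means and its standard error -- nothing a 1970s machine could not do.
from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Sample variance."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def rct_estimate(treated, control):
    """Average treatment effect estimate with its standard error."""
    ate = mean(treated) - mean(control)
    se = sqrt(variance(treated) / len(treated) + variance(control) / len(control))
    return ate, se

# Illustrative outcomes (e.g. test scores) for treated and control groups -- made-up numbers.
treated = [62, 71, 68, 75, 66, 70, 73, 69]
control = [60, 64, 58, 67, 61, 63, 65, 59]

ate, se = rct_estimate(treated, control)
print(f"estimated effect = {ate:.2f}, standard error = {se:.2f}")
```

Nothing in it goes beyond sums, squares and a square root, which is the sense in which 1970s hardware would have sufficed.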

Also flawed is the idea that the computer was meant to stimulate empirical work, and thus to weaken theory. In physics, evolutionary biology and linguistics, computers transformed theory as well as empirical work. This did not happen in economics, in spite of attempts to disseminate automated theorem proving, numerical methods and simulations. One explanation is that economists stuck with concepts of proof which required that results be analytically derived rather than approximated. With changes in epistemology, the computer could even have made any demarcation between theory and applied work irrelevant. Finally, while hardware improvements are largely exogenous to the discipline, software development is endogenous. The way computing affected economists' practices was therefore dependent on economists' scientific strategies.

The better integration of theoretical and empirical work oriented toward policy prescription and evaluation which characterized the "applied turn" was not merely driven by debates internal to the profession, however. It also largely came in response to the new demands and pressures that public and private clients were placing on economists.

 

Demand side explanations: new patrons, new policy regimes, new business demands

An easy explanation for the rise of applied economics is the troubled context of the 1970s: social agitation, the urban crisis, the ghettos, pollution, congestion, stagflation, the energy crisis and a looming environmental crisis, and the Civil Rights movement resulted in the rise of radicalism and neoliberalism alike, in student protests and in demands for more relevance in scientific education. Both economic education and research were brought to bear on real-world issues. But this raises the same "why is this time different?" kind of objection mentioned earlier. What about the Great Depression, World War II and the Cold War? These eras were pervaded by a similar sense of emergency, a similar demand for relevance. What was different, then, was the way economists' patrons and clients conceived the appropriate cures for social diseases.

Patrons of economic research have changed. In the 1940s and 1950s, the military and the Ford Foundation were the largest patrons. Both wanted applied research of the quantitative, formalized and interdisciplinary kind. Economists were however eager to secure distinct streams of money. They felt that being lumped together with the other social sciences was a problem. Suspicion toward the social sciences was as high among politicians (isn't there a systematic affinity with socialism?) as among natural scientists (social sciences can't produce laws). Accordingly, when the NSF was established in 1950, it had no social science division. By the 1980s, however, it had become a key player in the funding of economic research. As it rose to dominance, the NSF imposed policy benefits, later christened "broader impacts," as a criterion whereby research projects would be selected. This orientation was embodied in the Research Applied to National Needs office, which, in the wake of its creation in the 1970s, supported research on social indicators, data and evaluation methods for welfare programs. The requirement that the social benefits of economics research be emphasized in applications was furthered by Ronald Reagan's threat to slash the NSF budget for the social sciences by 75% in 1981, and the tension between pure research, applied research and policy benefits has since remained a permanent feature of its economics program.

[Photo: Heather Ross's test of the negative income tax]

As exemplified by the change in the NSF's strategy, science patrons' demands have sometimes been subordinated to changes in policy regimes. From Lyndon Johnson's to Reagan's, all governments pressured scientists to produce applied knowledge to help with the design and, this was new, the evaluation of economic policies. The policy orientation of the applied turn was apparent in Stigler's 1964 AEA presidential address, characteristically titled The Economist and the State: "our expanding theoretical and empirical studies will inevitably and irresistibly enter into the subject of public policy, and we shall develop a body of knowledge essential to intelligent policy formulation," he wrote. The Ford Foundation's interest in having the efficiency of some welfare policies and regulations evaluated fostered the generalization of cost-benefit analysis. It also pushed Heather Ross, then an MIT graduate student, to undertake, with Princeton's William Baumol and Albert Rees, a large randomized social experiment to test the negative income tax in New Jersey and Pennsylvania. The motivation for undertaking experiments throughout the 70s and 80s was not so much to emulate medical science as to allow the evaluation of policies on pilot projects. As described by Elizabeth Berman, Reagan's deregulatory movement furthered this "economicization" of public policy. The quest to emulate markets created a favorable reception for the mix of mechanism design and lab experiments economists had developed in the late 1970s. Some of its achievements, such as the FCC auctions or the kidney matching algorithm, have become the flagships of a science capable of yielding better living. These skills have equally been in demand by private firms, especially as the rise of digital markets involved the design of pricing mechanisms and the study of regulatory issues. Because the "applied turn" is as much the product of external pressures as of epistemological and technical evolutions, it is not always easy to disentangle rhetoric from genuine historical trends.

 

Note: this post relies on the research Roger Backhouse and I have been carrying out over the past three years. It is an attempt to unpack my interpretation of the data we have accumulated, as we are writing up a summary of our findings. Though it is heavily influenced by Roger's thinking, it reflects my own views only.

.

Posted in Uncategorized | Tagged , , , , , , , , | 5 Comments