That Donald Trump’s first presidential decisions included gagging the EPA, USDA, and NASA, asking his advisors to provide “alternative facts” on inauguration attendance, and questioning the unemployment rate is raising serious concerns among economists. Mark Thoma and former BLS statistician Brent Moulton, among others, fear that the new government may soon challenge the independence of public statistics agencies, cut off access to the data economists rely on, attempt to tweak those data, or simply ax whole censuses and trash data archives, Harper-style.
One reaction to this has been to put more, and better communicated, economic facts online. Such is the purpose of the Econofact website, launched by Tufts economist Michael Klein. “Facts are stubborn,” he writes, so he asked “top academic economists” to write memos “covering the topics of manufacturing, currency manipulation, the border wall, World Trade Organization rules, the trade deficit, and charter schools.” The purpose, he explains, is “to emphasize that you can choose your own opinions, but you cannot choose your own facts.” The move is in line with other attempts by scientists and academics, NASA, the National Parks, or the Merriam-Webster dictionary to uphold and reaffirm facts, in particular on climate change.
Looking at the website, though, I’m left wondering who the intended audience is, and whether this is the most effective way to engage a broad public. As Klein himself notes, citizens seem to crave “information,” but within the broader context of a growing distrust of scientific expertise, statistics, and facts. All sorts of expertise are affected, but that doesn’t mean the responses should be identical. Because in practice, if not in principle, each science has its own way of building “facts,” and citizens’ disbelief in climate, demographic, sociological, or economic facts may have different roots. It is not clear, for instance, that economic statistics are primarily rejected because of perceived manipulation and capture by political interests. What citizens dismiss, rather, is the aggregating process that the production of statistics entails, and economists’ habit of discussing averages rather than standard deviations, growth rather than distribution. Statistics historically sprang from scientists’ efforts to construct an “average man,” but people don’t want to be averaged anymore. They want to see themselves in the numbers.
It might be a good time, therefore, to acknowledge that economic statistics proceed from what economists intentionally or unconsciously choose to see, observe, measure, quantify, and communicate; to reflect on why domestic production has been excluded from GDP calculations and whether national accounts embody specific visions of government spending; to ponder the fact that income surveys and tax data were not coded and processed in a way that made “the 1%” visible until the 2000s because, until then, economists were concerned with poverty and the consequences of education inequality rather than with top income inequality; to think about the Boskin Commission’s 1996 recommendation to settle for a constant-utility price index to prevent inflation overstatement, and its consequences for the way economists measure the welfare derived from goods consumption (and productivity). And it’s not just that the observation, measurement, and quantification process underpinning “economic facts” has constantly been debated and challenged. It has also been politicized, even weaponized, by governments and profit or non-profit organizations alike. Economic data should be viewed as negotiated and renegotiated compromises rather than numbers set in stone. This doesn’t diminish their reliability, quite the contrary. They can be constructed, yet constructed better than any “alternative” rogue organizations have in store.
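To see what is at stake in the constant-utility debate, here is a minimal sketch, in standard textbook notation rather than anything drawn from the Boskin report itself, of the gap between a fixed-basket index and the constant-utility index the Commission advocated:

```latex
% Fixed-basket (Laspeyres) index: the period-0 basket q_0 is priced at both dates
P_L = \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}}

% Constant-utility (Konus) index: ratio of the minimum expenditures e(p,u)
% needed to reach the same utility level u_0 at the two sets of prices
P_K = \frac{e(p_t, u_0)}{e(p_0, u_0)}

% Because consumers substitute away from goods whose relative prices rise,
% e(p_t, u_0) \le \sum_i p_{i,t}\, q_{i,0}, hence P_K \le P_L:
% a fixed-basket index tends to overstate the rise in the cost of living.
```

The wedge between the two is the substitution bias the Commission argued was inflating measured inflation, on the order of a percentage point a year, and it illustrates how a seemingly technical choice of formula encodes a particular view of consumer welfare.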
The production of government statistics has evolved considerably over decades if not centuries. It results from longstanding theoretical and technical disputes as well as conflicting demands and uses, some very different across countries. Even more dramatic have been the changes in the production and uses of financial and business economic data. Below is a non-exhaustive list of books and articles offering overarching perspectives on economic (and social science) data as well as specific case studies.
Note: Some of these references primarily deal with quantification, others with observation or measurement; some with the making of economic data, others with the production of “facts” (i.e., selected, filtered, organized, and interpreted data).
General framing
- Trust in Numbers: The Pursuit of Objectivity in Science and Public Life by Ted Porter. Fast read, excellent overview. Porter explains that, contrary to the received view, the quantification of social facts was largely driven by administrative demands, by policy makers’ willingness to enforce a new kind of “mechanical objectivity.” “Quantification is a social technology,” he explains. For a longer view, see Mary Poovey’s A History of the Modern Fact, which tracks the development of systematic knowledge based on numerical representation back to XVIth-century double-entry bookkeeping.
- A collective volume on the history of observation in economics, edited by Harro Maas and Mary Morgan. They provide a broad historical overview in their introduction and insist on the importance of studying the space in which observation takes place, the status and technicalities of the instruments used, and the process whereby trust is built between economist-observers and data users.
- Marcel Boumans has spent a lifetime reflecting on quantification and measurement in economics. In his 2005 book How Economists Model the World into Numbers and an associated article, he defines economic models as “tools for measurement,” just as the thermometer is for the physical sciences (an idea borrowed from the Morgan-Morrison tradition; see also this review of the book by Kevin Hoover). His 2012 book likewise details historical examples of observation conducted outside the laboratory (i.e., when economic phenomena cannot be isolated from their environment). His purpose is to use history to frame epistemological evaluations of the economic knowledge produced “in the field.” The book discusses, among others, Morgenstern’s approach to data and Krantz, Suppes, Luce, and Tversky’s axiomatic theory of measurement.
- French sociologist Alain Desrosières pioneered the sociology of economic quantification through his magnum opus The Politics of Large Numbers: A History of Statistical Reasoning and countless articles. The gist of his comparative analysis of the statistical apparatuses developed in France, Germany, the US, and the UK is that statistics are shaped by and for government knowledge and power. His legacy lives on through the work of Isabelle Bruno, Florence Jany-Catrice, and Béatrice Touchelay, among others. They have recently edited a book on how recent public management has moved from large numbers to specific indicators and targets.
Economic facts: case studies
1. There is a huge literature on the history of national accounting and the debates surrounding the development of GDP statistics. See Dan Hirschman’s reading list as well as his dissertation on the topic, and the conference he co-organized last fall with Adam Leeds and Onur Özgöde. See also this post by Diane Coyle.
2. Dan Hirschman’s study of economic facts also comprises an analysis of stylized facts in social sciences, and a great account of the technical and contextual reasons why economists couldn’t see “the 1%” during most of the postwar period, then developed inequality statistics in the 2000s. It has been covered by Justin Fox here.
3. There are also histories of cost-of-living and price indexes. Historian Tom Stapleford has written a beautiful history of the cost of living in America. He ties the development of the statistics, in particular at the Bureau of Labor Statistics, to the growth of the American bureaucratic administrative system. The CPI was thus set up to help rationalize benefit-payment adjustments, but it was also used for wage negotiations in the private sector, in an attempt to tame labor conflicts through the use of “rational” tools. The CPI is thus nothing like an “objective statistic,” Stapleford argues, but a quantifying device shaped by practical problems, bureaucratic conflicts (the merger of public statistical offices), economic theory (the shift from cardinal to ordinal utility), institutional changes, and political agendas (the legitimation of wage cuts in 1933, the need to control for war spending, its use in postwar macroeconomic stabilization debates). See also Stapleford’s paper on the development of hedonic prices by Zvi Griliches and others. Spencer Banzhaf recounts economists’ struggles to make quality-adjusted price indexes fair and accurate.
4. Histories of agricultural and environmental statistics also make for good reads. Emmanuel Didier relates how USDA reporters contributed to the making of US agricultural statistics, and Federico D’Onofrio has written his dissertation on how Italian economists collected agricultural data at the turn of the XXth century through enquiries, statistics, and farm surveys. Spencer Banzhaf relates economists’ struggles to value life, quantify recreational demand, and measure the value of environmental goods through contingent valuation. A sociological perspective on how to value nature is provided by Marion Fourcade.
On public statistics, see also Jean-Guy Prévost, Adam Tooze on Germany before World War II, and anthropologist Sally Merry’s perspective. Zachary Karabell’s The Leading Indicators: A Short History of the Numbers That Rule Our World is intended for a larger audience.
Shifting the focus away from public statistics
As I was gathering references for this post, I realized how much the focus of historians and sociologists of economics is on the production of public data and their use in state management and discourse. I don’t really buy the idea that governments alone are responsible for the rise in the production of economic data over the last centuries. Nor am I a priori willing to consider economists’ growing reliance upon proprietary data produced by IT firms as “unprecedented.” Several of the following references on private economic data collection were suggested by Elizabeth Berman, Dan Hirschman, and Will Thomas.
1. Economic data, insurance, and the quantification of risk: see How Our Days Became Numbered: Risk and the Rise of the Statistical Individual by Dan Bouk (a history of life insurance and public policy in the XXth century; reviews here and here). For a perspective on the XIXth century, see Sharon Ann Murphy’s Investing in Life. Jonathan Levy covers two centuries of financial risk management in Freaks of Fortune.
2. For histories of financial data, look at the performativity literature. I especially like this paper by Juan Pablo Pardo-Guerra on how computerization transformed the production of financial data.
3. A related literature deals with the sociology of scoring (for instance, Fourcade and Healy’s work here and here).
4. Equally relevant to understanding the making of economic facts is the history of business accounting. See Paul Miranti’s work, for instance his book with Jonathan Barron Baskin. See also Bruce Carruthers and Wendy Espeland’s work on double-entry accounting and economic rationality. Espeland also discusses the relation of accounting to corporate control with Hirsch here, and to accountability and law with Berit Vannebo here (their perspective is discussed by Robert Crum).
Excellent list. To it I might add How Well Do ‘Facts’ Travel?, edited by Peter Howlett and Mary Morgan. Great chapter on human longevity: post-Enlightenment, the question was “Why do we no longer live as long as the sages did?”
Thank you very much. It is indeed a very useful reference, as it also tries to figure out whether false, unreliable, or overstated facts travel better than reliable ones. The interdisciplinary perspective is also very valuable.
I’m currently building a ‘Political Sociology of Quantification’ syllabus–lots of overlap with your list. We should get coffee one day 🙂