Written with Aurélien Saïdi
(draft version with more footnotes and full references here)
The 2018 Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel was awarded for “addressing some of our time’s most basic and pressing questions about how we create long-term sustained and sustainable economic growth.” It was shared by Yale’s William Nordhaus, for integrating negative externalities from greenhouse gas emissions into growth models, and New York University’s Paul Romer, “for integrating technological innovations into long-run macroeconomic analysis.” The press release concludes that their contributions are “methodological . . . Laureates do not deliver conclusive answers.” Yet the methods rewarded are very different in kind. Nordhaus was praised for his development of a quantitative “integrated assessment model” of how climate and economic growth affect each other, a model since largely used to run simulations. Romer was crowned for a ten-year effort to endogenize growth, culminating in a 1990 theory paper, “Endogenous Technological Change.”
“Romer’s work was motivated by the data on macroeconomic aggregates and a more comprehensive cross-country data set which had just become available (Summers and Heston, 1984),” the Nobel “scientific background” document reads. This statement is historically ambiguous, since such data did not exist when Romer, then a graduate student, decided to engage in a reconsideration of the sources of growth. It also tends to overshadow the primarily mathematical nature of Romer’s quest and achievement, one that the present paper strives to capture. His work stands as a reminder that non-empirical endeavors in economics are grounded in and fueled by economic reality, one that Romer’s subsequent career tried to transform. After unlocking the mathematics of growth, he went on to found an educational technology company, Aplia, which offered online homework products for college students. After selling it in 2007, he became an advocate of “charter cities.” Looking for institutional arrangements (“rule of law”) fostering growth, he suggested the development of economic regions whose governance would be outsourced to a more stable foreign nation. A controversial stint as chief economist of the World Bank followed, before he resumed his inquiry into how urban management can “improve the health, safety and mobility of their citizens” and “help traditional disenfranchised populations share in the benefits of rapid urbanization.” This involved attending the Burning Man festival to understand its urban planning model.
Romer is not just unusual in his career path, straddling intellectual, policy and advocacy endeavors, and in his public persona. He is also the only economist whose work was the subject of a thorough historical account years before it was recognized by a Nobel Prize. David Warsh provided a detailed account of the bustling intellectual and institutional milieu in which Romer labored, throughout the 1980s, to articulate a mathematical representation of the role of knowledge in the growth process. Drawing on the interviews, material and narrative assembled by Warsh, we thus begin by reconstructing the process whereby Romer came to write two path-breaking articles, each cited more than 27,000 times, which contributed to launching a large reinvestigation of the endogenous causes of growth in developing and developed countries. Because we interpret these papers as pathbreaking contributions to mathematical theory, we then relate Romer’s perception of his work to his recent controversial statements on the uses of mathematics in growth theory and macroeconomics. Romer holds strong views on how mathematics should be used in theorizing and can be abused, and we locate his disagreement with other economists, in particular Robert Lucas, in their respective beliefs about the right degree of correspondence between real-world objects, economic concepts and their mathematical representation.
“Providing a richer and more satisfying positive theory of growth”
A salient feature of Warsh’s account of Romer’s early student years is how unfashionable working on growth had become at the turn of the 1980s. Robert Solow, the architect of the reference model in which countries only escaped a stationary equilibrium (in which output per capita stalled) thanks to a mysterious exogenous “technological change” variable, had declared the field asleep. When, after graduating in mathematics, Romer took his first economics classes at MIT in 1977-1979, Solow was telling students that “anyone working inside economic theory [those] days knows in his or her bones that growth theory is not a promising pond for an enterprising theorist to fish in . . . I think growth theory is at least temporarily played out,” he added.
Halfway through his graduate training, Romer decided to move back to Chicago. During a transition stint in Canada, he was introduced to von Neumann’s model of growth, which he found at odds with the rise of private research labs, universities, and patents he was observing. Once he settled in Chicago, mathematical economist José Scheinkman agreed to supervise his dissertation and Robert Lucas to sit on his committee. As explained in the opening sentences of his 1983 dissertation, Romer’s ambition was to “provide a richer and more satisfying positive theory of growth than is possible in the new standard formulation.” This was primarily intended as a mathematical endeavor, aimed at providing (and solving) a generic theoretical framework: “since the kind of model is applicable in a wide variety of economic problems, the mathematics per se may be of more fundamental interest than the specific application to growth,” he wrote. Yet he also immediately acknowledged a tension between mathematics and facts. The “mathematical appeal” of the optimizing models of Frank Ramsey, Tjalling Koopmans and David Cass was “clear,” he wrote:[1] “the study of competitive equilibria can be reduced to the study of a familiar maximization problem.” This appeal “must surely explain their general acceptance in the economics profession,” he reflected (p. 2), “for they were inconsistent with two basic observations.” First, technological change was clearly “the result of actions taken by economic agents” (rather than a spontaneous and occasional improvement in the production technology). Second, he drew on per capita growth rates collected by Simon Kuznets to highlight that growth in Western countries had been accelerating over the 20th century. Romer wanted a mathematical model consistent with these observations.
The mechanisms that had been postulated to endogenize technical change and generate constant positive growth rates involved increasing returns to scale. Such a modeling strategy was difficult to handle mathematically, for it introduced non-convexities in the production set that ruled out standard optimization techniques. When Arrow first introduced learning-by-doing into growth models in 1962, he was able to bypass the problem through simplifying hypotheses. Intra-firm increasing returns to scale also created an economic puzzle, one well known since Adam Smith: they fostered concentration, so that perfect competition could not be preserved (the more a firm produces, the lower its unit cost, and thus the greater its profit at constant input and output prices). Conversations with Sherwin Rosen led Romer to read Allyn Young’s 1928 literary exposition of “economic growth driven by increasing returns resulting from specialization.” Without having read Alfred Marshall’s work, he modeled spillover effects internal to a sector but external to the firm, thereby avoiding the trend toward firm concentration and preserving a price-taking perfect competition setting.[2] That was the only way “to deal with the technical problem, to ensure the math came out right,” he later reflected. While the resulting decentralized equilibrium could be proved to exist, however, it was necessarily suboptimal, since firms do not take into account the positive externalities they exert on one another. This created space for government intervention to force agents to internalize the external effects and invest more intensively in the production of knowledge.
The spillover model Romer had conceived in his dissertation was soon highlighted by Robert Lucas. Invited to give the Marshall Lectures in Cambridge in 1985, Lucas chose to walk the audience through a menagerie of models which had something to say about countries’ differentiated growth rates. By this time, important new data had become available. Carrying on a project launched by Irving Kravis at the University of Pennsylvania in the 1960s, Alan Heston and Robert Summers collected GDP, consumer expenditures, capital formation, public expenditures and other data for more than 100 countries. The whole was made comparable through the development of purchasing-power parity indexes. What came to be known as the Penn World Tables was published in 1982 and updated regularly. These data documented at great length the lack of convergence between countries.[3] Lucas considered both capital accumulation and what he called, in the Chicago tradition of Theodore Schultz and Gary Becker, “human capital” accumulation, through either schooling or learning-by-doing. He outlined a two-sector growth model where human capital was used to produce (and accumulate) human capital according to a non-decreasing returns technology. He replaced Romer’s sectoral spillovers with the idea of a human capital externality. Like his former student, he obtained a suboptimal social equilibrium. But unlike Romer, he did not discuss possible public intervention.
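In stripped-down form (a standard textbook rendition of Lucas 1988, in conventional rather than his exact notation), the two sectors of the model can be written as

$$Y = A K^{\beta} (u\,h\,N)^{1-\beta}\, h_a^{\gamma}, \qquad \dot{h} = \delta\, h\, (1-u),$$

where $u$ is the share of time devoted to production and $1-u$ to schooling, $h$ is individual human capital and $h_a$ its economy-wide average, whose exponent $\gamma$ captures the external effect. Because $\dot{h}$ is linear in $h$, human capital accumulation exhibits non-decreasing returns and can sustain growth indefinitely; and because each agent takes $h_a$ as given, the decentralized equilibrium underinvests in schooling relative to the social optimum.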
By the time his model of endogenous growth with spillovers went to press, Romer had, however, turned to models of monopolistic competition. In doing so, he was connecting with longstanding debates that had been reignited by Arrow’s 1962 article. Harold Demsetz challenged Arrow’s ambition to draw relevant conclusions about the optimal allocation of resources for invention, and associated economic policy prescriptions, from a purely theoretical framework of perfect competition. He rather pushed for a monopolistic framework, one later developed by Partha Dasgupta and Joseph Stiglitz. Their 1980 article endogenized market structure and introduced R&D expenditure. It was only after he defended his thesis in 1983 and moved to Rochester that Romer took up these themes. There, he pursued extensive discussions with fellow assistant professor Robert Barro and general equilibrium theorist Lionel McKenzie. He read the work of Avinash Dixit, Stiglitz and Paul Krugman on specialization, and performed econometric work to offer some explanation of the hot puzzle of the day, the US productivity slowdown. Finally, he reflected on which characteristics of knowledge would make agents produce it and spur growth. In 1988, he presented a paper titled “Micro-foundations for Aggregate Technological Change,” providing a rationale for agents to pursue knowledge. This was an early version of the paper to which he later gave the simpler title, “Endogenous Technological Change.”
It was a paper on the pricing of ski lifts, written with Barro, that led Romer to reflect on Paul Samuelson’s paper on public vs. private goods and James Buchanan’s intermediate notion of “club goods,” and to refine what he believed were the crucial characteristics of knowledge: not indivisibility, as Arrow had previously emphasized, but a combination of non-rivalry and partial excludability. The latter, Romer claimed in his 1990 paper, explains why economic agents might choose to invest in the production of new ideas. He proposed a model in which profit-maximizing entrepreneurs hunted for new ideas because of the gains temporary patents would provide them. Romer thus made producing knowledge a profit-generating activity in a monopolistic competition framework. That those ideas were non-rival, that is, could be used by many agents at the same time without being depleted, created knowledge spillovers leading to sustained growth.
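The skeleton of the 1990 model (simplified here, but close to Romer’s own notation) makes the mechanism visible:

$$Y = H_Y^{\alpha} L^{\beta} \int_0^{A} x(i)^{1-\alpha-\beta}\, di, \qquad \dot{A} = \delta\, H_A\, A,$$

where final output $Y$ is produced with human capital $H_Y$, labor $L$ and a continuum of intermediate goods $x(i)$, and new designs $\dot{A}$ are produced by the human capital $H_A$ employed in research. Non-rivalry shows up in the whole stock of ideas $A$ entering every researcher’s technology at once; partial excludability shows up in the patents that let each design’s owner price its intermediate good monopolistically, generating the profits that reward research.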
Solving mathematical riddles or matching data?
Romer’s contribution was thus primarily a mathematical tour de force: transposing into a neoclassical dynamic general equilibrium framework both Young’s ideas about the specialization origins of growth and Marshall’s on increasing returns. He detailed his mathematical treatment of non-convexities and the associated non-conventional solutions (such as chattering equilibria or equilibria with jumps) in an article published in Econometrica. The original ambition and the mathematical accomplishment that gave rise to a “new growth economics” and spurred thousands of research articles were, however, not emphasized in most textbooks or by the Nobel committee. What was retained was the general idea of knowledge accumulation, and the conditions for sustained growth. The preservation of a general equilibrium framework was also important for Romer: “Remember my thesis, and how it was articulated, I had these general equilibrium ambitions, I was hoping people would pay attention to that, but they didn’t. On the other hand it was a little too abstract for the Solow types, the MIT types, who said, just give me the equation, don’t worry about the logic and assumptions. I don’t think either of those paths ultimately would have led to the clarification of what do we mean by an externality, as opposed to what do we mean by a non-rival good. That’s where the rigor and logic of General Equilibrium math really paid off,” he told Warsh.
At the same time, Romer insisted that mathematical modeling needed to be checked, ex ante and ex post, by empirical evidence. “I often draw a picture for my students of different levels,” he later explained. “The highest degree of abstraction is at the top, the closest contact to the world of our senses at the bottom. The theorists follow a trajectory within these bounds. You zoom up, spend some time and zoom back down again.” Such a process was echoed in the structure of his papers. From Kuznets’s data in his dissertation, he gradually came to introduce historical data on growth gathered by Angus Maddison or Heston and Summers, as well as histories of innovation and technological progress by Stanford economists Nathan Rosenberg, Moses Abramovitz and Paul David. Invited to present at the NBER macroeconomics conference in 1987, he wrote his first empirical defense of long-term economic growth mainly driven by increasing returns and spillover effects.

Nearly a decade later, an American Economic Association session on “New Growth Theory and Economic History: Match or Mismatch?” offered him the opportunity to articulate more strongly his vision of the interplay of theory and historical evidence. He faulted those economists who retained a price-taking competitive framework (especially at a time when it was becoming common to use imperfect competition in DSGE models). When they assume that “technology is the same in all countries and conclude that exogenous differences in saving and education cause all of the observed differences in levels of income and rates of growth,” they disregard the most elementary facts, he bemoaned. But he also rejected the proponents of “history without theory,” who believe that “these equations are so simplistic, and the world is so complicated.” He went on to offer a defense of formal methods, some geared toward the explanation of observed patterns: “What theories do is take all the available complicated information about the world and organize it into this kind of hierarchical structure . . . What growth theory must do is provide a good, simple split of the opportunities available in the physical world,” Romer explained.
Romer’s contribution to the 1996 session foreshadowed the attack he would launch on growth theory and, more broadly, macroeconomics during another AEA annual meeting, almost 20 years later. During a session on “reflections on new growth theory,” Romer bluntly accused Lucas, who had just presented on human capital and growth, of indulging in “mathiness.” The word echoed humorist Stephen Colbert’s remark that some statements have an air of “truth” in spite of being grounded in no evidence, one he called “truthiness.” Mathiness, Romer wrote in the published version of his talk, “uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.” What he targeted was Lucas’s assumption that every present and future productive technology is already in use at time zero, and the observationally equivalent interpretation he proposed. He charged other economists with similar “dishonest” practices, which seemed to include a mix of unrealistic assumptions, shaky interpretation of mathematical symbols, and mistakes in manipulating those symbols.
That most of them were associated with Minnesota and Chicago and used price-taking models reveals that what Romer was railing against was their lack of endorsement of his monopolistic competition framework, which he argued prevented economists from moving toward the “shared consensus” characteristic of a healthy science. Romer did not accuse Lucas of using questionable methods in order to reach specific policy prescriptions, but Lucas seems to have understood their exchange that way: “If anyone sees anything like politics in Romer’s JPE [1990] article, let me know,” he responded. “What I’m saying does not line up with familiar critiques about political ideology in economics,” Romer later clarified in a blog post. What he indicted was “academic politics” and “methodological dogma”: “[the people I criticize] are fighting to preserve a sense of academic group identity grounded in a common defense of this dogmatic position,” he wrote.
Why (and how) theorists make assumptions: “carving a system at the joints”?
The endogenous growth literature honored by the Nobel committee was underpinned by a shared methodology: models, growth theorists agreed, were built in response to patterns observed in the data that were inconsistent with the main conclusions of the standard model, mathematics being used to bridge the gap between facts and theories. Like Romer, Lucas opened his seminal “On the Mechanics of Economic Development” with a survey of the World Bank’s World Development Report (1983) and of Summers and Heston’s data, documenting sharp divergences in per-capita income across countries. He then explained that he was looking for a theory of economic development “to provide some kind of framework for organizing facts like these, for judging which represent opportunities and which necessities.” Romer acknowledged that, in those years, a consensus existed both on which observed patterns were problematic and on how to approach them: “both Robert Lucas (1988) and I cited the failure of cross-country convergence to motivate models of growth that drop the two central assumptions of the neoclassical model: that technological change is exogenous and that the same technological opportunities are available in all countries of the world.”
However, Romer’s mathiness attack shows that he and Lucas disagreed on how mathematics and the real world should interact in the process of developing theoretical assumptions. When he attacked Lucas for relying on an unrealistic assumption about the degree of technological knowledge possessed by the model’s agents, Lucas responded: “every theory contains assumptions that are not quite true. That’s what makes it theory.” Already in his 1988 article, Lucas had outlined what he meant by “theory”:
“an explicit dynamic system, something that can be put on a computer and run. This is what I mean by the ‘mechanics’ of economic development – the construction of a mechanical artificial world, populated by the interacting robots that economics typically studies, that is capable of exhibiting behavior the gross features of which resemble those of the actual world that I have just described.”
The quote summarized what he claimed in many other publications and speeches: that models are “artificial” worlds and abstractions, but ones that need to be good “imitations” of real facts and of “some of the main features of the economic behavior we observe in the world economy.” In a 1988 commencement address delivered at the University of Chicago, he explained that the task of economists was to look for “better and more instructive analogies.” Economists “are storytellers, operating much of the time in worlds of make believe,” he explained. “We do not find that the realm of imagination and ideas is an alternative to, or retreat from, practical reality. On the contrary, it’s the only way we have found to think seriously about reality.”
Lucas’s view on the relationship of assumptions to reality has been interpreted as “ambivalent.” Francesco Sergi points out that Lucas generally prioritized the internal consistency of theoretical assumptions, yet sometimes wrote that there must be some “analogy” between assumptions and reality if policy conclusions are to be drawn. Our hypothesis is rather that what matters for Lucas is not the analogy between assumptions and real behavior, but that between the situations a model generates and real ones. In a testimony before the Pontifical Council for Justice and Peace in 2011, Lucas acknowledged that the homo oeconomicus model describes a way “actual people never are,” but he considered the resulting “situation,” in which each agent is acting in a way that is individually rational yet collectively irrational, to be “common in actual society.” In a review of Elhanan Helpman and Paul Krugman’s Trade Policy and Market Structure (1989), he further justified the unrealism of assumptions as providing a tractable, unique and general model: “one is able to see which assumptions are essential to which results with a clarity that is just not possible through the study of special cases as they appear in journal articles.” If realism had to be traded for tractability, then so be it. Lucas viewed the kind of theoretical model that he, or Helpman and Krugman, produced as a first stage in a larger process whereby the model is subsequently tested against out-of-sample data. The model was then used to fashion more “specific models that seem to capture situations in particular industries, and thus they permit the exercise of judgment and the use of evidence to help determine which theoretically possible effects are small and which are critical.”
The empirical patterns on growth brought to the table in the 1980s thus equally infused Romer’s and Lucas’s research questions, and formed the benchmark against which their models of growth needed to be evaluated. But Lucas was willing to adopt assumptions which did not reflect economic agents’ observed behavior if they allowed him to devise a “mechanism” that replicated a wider range of phenomena, and therefore to isolate a possible common effect. He would come up with a mathematical expression first, then with stories about the underlying economic behavior. In contrast, Romer rather drew on the history of technological innovation to develop conceptual distinctions between ideas and things, as well as behavioral assumptions about what drives entrepreneurs.
Though Romer’s early contributions did not include any epistemological statement, he articulated such a framework in his 1996 contribution to the aforementioned AEA session on theory and economic history. Drawing on Richard Dawkins’s “hierarchical reductionism,” Romer argued that the task of the scientist is to describe real-world phenomena by distinguishing, classifying and combining their main structural elements. For instance, distinguishing between “ideas” and “things” was a better classification of growth inputs than public vs. private goods, he contended. The original phenomena can thus be progressively reduced to a conjunction of interacting atomic elements, Romer explained:
“Explanation operates on many levels that must be consistent with each other. What theories do is take all the available complicated information about the world and organize it into this kind of hierarchical structure. In building this structure, good theory indicates how to carve a system at the joints. At each level, theory breaks a system down into a simple collection of subsystems that interact in a meaningful way” (emphasis added).
In his mathiness paper, Romer further insisted that each theoretical element should remain empirically interpretable, i.e. analogous to an identifiable object in the real world, as it becomes encapsulated in mathematical symbols. He praised how Solow’s mathematical theory of growth “mapped the word ‘capital’ onto a variable in his mathematical equations, and onto both data from national income accounts and objects like machines or structures that someone could observe directly,” and how Gary Becker’s theory of wages likewise “gave the words ‘human capital’ the same precision and established the same two types of tight connection—between words and math and between theory and evidence.” Maintaining a “tight connection” between the data to be explained, the words used to denote abstract concepts such as “technology,” and the mathematical symbols and equations used to represent their relationships with one another was key, he concluded. He faulted economists such as Lucas for using words and mathematical assumptions which have no meaning and no precise counterpart in reality. He rejected hypotheses based on “immaterial entities or process, such as disembodied spirits” (to use the words of the philosopher of science he relied on, Mario Bunge). This is the case not only for words like “technology,” but also for “technological shocks,” which he believes Kydland and Prescott “might as well have called . . . gremlins or unicorns.”
The divide between Romer and Lucas, therefore, appears to be the degree of correspondence they believe should exist between real-world entities, the concepts expressed through words, and mathematical entities. What remains unclear in Romer’s guidelines is how the acceptable degree of convergence or divergence is defined. He questions the way macroeconomists use the word “technology” and the way technology is represented, but he does not object to other abstractions such as capital-labor substitution, firms (in the neoclassical sense of the term, i.e. without any social structure) or production functions. He even accused economists who openly challenged the existence of the kind of production functions Solow used, like Joan Robinson, of engaging in mathiness. As both the two-Cambridges capital controversy and Romer’s own endogenous growth theory show, however, all important economic abstractions and their associated mathematical representations are meant to be challenged by more specific, and therefore more empirically relevant, ones.
The question of the convergence between theoretical categories and real-world objects has, in fact, been a major fault line in the philosophy of knowledge. This is exemplified by the debates surrounding the line from Plato that Romer had borrowed. His idea that “good theory indicates how to carve a system at the joints” was a rendition of a famous passage from Phaedrus (265e):
“the second principle is that of cutting up each kind according to its species along its natural joints, and to try not to splinter any part, as a bad butcher might.”
The resulting phrase, “carving nature at its joints,” had generated centuries of debates on whether nature has “joints” separating “natural kinds” of things, so that science needs to proceed by isolating those. A key associated question has been whether scientific knowledge relies on discovering natural kinds, if they exist clearly delineated in nature, or on inventing them, whether by clarifying muddled natural distinctions or by engaging in more independent conceptualization. In the end, it seems that Romer and Lucas got caught in another ripple of this millennia-old debate: whether to become a butcher or a storyteller.
Notes
[1] The Cass-Koopmans model, based on Ramsey’s 1928 pioneering work, was an attempt to refine Solow’s 1956 exogenous growth model by replacing the Keynesian consumption function with optimizing behavior (an intertemporal consumption/saving tradeoff) in the investment/consumption plans of an infinitely lived household. It used mathematical programming (especially calculus of variations and optimal control), and was usually taught to students as an extension of Solow’s model in which consumption decisions are endogenized.
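In schematic form (a standard textbook statement, not the notation of any of the original papers), the household’s problem reads

$$\max_{\{c(t)\}} \int_0^{\infty} e^{-\rho t}\, u\big(c(t)\big)\, dt \quad \text{subject to} \quad \dot{k}(t) = f\big(k(t)\big) - c(t) - (n+\delta)\,k(t),$$

where $c$ is consumption per capita, $k$ capital per capita, $\rho$ the discount rate, $n$ the population growth rate and $\delta$ the depreciation rate. Solow’s exogenous saving rate is thereby replaced by an optimally chosen consumption path.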
[2] Sectoral spillovers (in the form of knowledge production) are taken as given by firms when searching for their optimal production decision, and are compatible with constant returns to scale (hence decreasing private returns to knowledge) at the firm level. At the aggregate or social level, however, they induce increasing returns to scale, since spillovers increase with production.
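A minimal sketch of the mechanism (our notation, not the exact specification of Romer’s 1986 article): let firm $i$ produce with

$$y_i = \bar{k}^{\,\eta}\, k_i^{\alpha}\, \ell_i^{\,1-\alpha}, \qquad 0<\alpha<1,\ \eta>0,$$

where $\bar{k}$ is the economy-wide stock of capital (or knowledge), which each firm takes as given. Returns to the private inputs $(k_i, \ell_i)$ are constant, so price-taking behavior is preserved; but in a symmetric equilibrium $\bar{k}$ moves with $k_i$, and the aggregate elasticity of output with respect to capital is $\alpha+\eta$: returns to scale are increasing at the social level.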
[3] In Solow’s 1956 model, countries with similar characteristics (relating to technology and demographic growth) but a lower state of “development” (more precisely, lower capital per capita) benefit from higher growth rates allowing them to catch up with the most advanced countries. This phenomenon, known as “absolute convergence,” is clearly rejected on empirical grounds. Subsequent models, and especially models of endogenous growth, aimed at resolving the discrepancy between theory and data, and explaining persistent growth gaps across countries.
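The convergence logic can be read off the standard Solow dynamics: with a fixed saving rate $s$, capital per capita evolves as

$$\frac{\dot{k}}{k} = s\,\frac{f(k)}{k} - (n+\delta),$$

and since diminishing returns make $f(k)/k$ decreasing in $k$, countries starting with less capital per capita grow faster, other parameters being equal. Absolute convergence follows; the Penn World Tables showed it does not hold across broad samples of countries.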