Heterogeneous agent macroeconomics has a long history, and it raises many questions

EDIT October 2022: Pedro Duarte and Aurélien Saïdi have now completed a full history of heterogeneous households in macroeconomics, available here

A cornerstone of thoughtful and lazy criticisms of mainstream macroeconomics alike is the idea that macroeconomists have spent 40 years just writing infinitely lived representative agent models (in which all agents behave the same way and live forever), and isn’t that a ridiculous assumption? To which mainstream macroeconomists invariably respond: “heterogeneous agent models are all over the place,” which is in turn invariably met with “yes, but this is a very recent development.” I myself long considered the development of heterogeneous agent macro a response to the 2008 crisis. Then I began working on the history of economics at Minnesota, and I realized that by the mid-1980s, heterogeneous agent models were already all over the place. Much of the current discussion on the-state-of-macro is premised on a flawed historical narrative. And acknowledging that representative agent models have long coexisted with heterogeneous agent models of various stripes raises a host of new questions for critics and proponents of mainstream macro.

Heterogeneous agent models: the Minnesota genealogy

In 1977, Cowles economist Truman Bewley was looking to microfound the permanent income hypothesis (which basically states that agents try to smooth their consumption over time by saving and dis-saving). He came up with a model in which agents are heterogeneous in the income fluctuations they face (back then you didn’t call these “shocks” yet) and, crucially, are not allowed to borrow. Though he subsequently moved from general equilibrium theory to survey-based investigation of why wages are sticky, his earlier work gave rise to a class of models named after him by Lars Ljungqvist and Tom Sargent. Their defining characteristic is that market incompleteness, in particular incomplete insurance, creates heterogeneity in the shocks that otherwise initially identical agents react to, so that their ex-post wealth follows a distribution that needs to be characterized. Major contributions to this literature included landmark works by former physics student Rao Aiyagari (1981 Minnesota PhD under Wallace) and Mark Huggett (1991 Minnesota PhD under Prescott) in the early 1990s.

Huggett wanted to understand why risk-free interest rates were on average lower than what calibrated representative-agent models predicted. He constructed a model in which households face idiosyncratic income endowment shocks that they can’t fully insure against because they face a borrowing constraint, and showed that this results in higher precautionary savings (so that they can later smooth consumption in spite of uncertainty) and thus a lower risk-free rate. “A low risk-free rate is needed to persuade agents not to accumulate large credit balances so that the credit market can clear,” he explained. Aiyagari intended to study how an economy with a large number of agents behaved, hoping to bridge the gap between representative-agent models and observed patterns in individual behavior. He also wanted to study how individual risk affects aggregate saving (little, he found). He wrote a production model in which agents differ in that they face uninsured idiosyncratic labor endowment shocks and trade assets among themselves. Since agents save more, the capital stock, the wage rate and labor productivity are all higher, he showed. He also used a Bewley model to show that if markets were incomplete, capital income could be taxed. This challenged the famous result independently reached by Kenneth Judd (in a model with two classes of agents, capitalists and workers) and Christophe Chamley (in a representative agent setting) that positive capital taxation is suboptimal.
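To fix ideas, here is a stylized statement of the household problem in this class of models (the notation is mine, not taken from any particular paper): each agent chooses consumption $c_t$ and asset holdings $a_{t+1}$ to solve

$$\max_{\{c_t,\,a_{t+1}\}} \ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{s.t.} \quad c_t + a_{t+1} = (1+r)\,a_t + y_t, \qquad a_{t+1} \ge -\phi,$$

where $y_t$ is an uninsurable idiosyncratic endowment or labor-income shock, $\phi$ is the borrowing limit (zero in the no-borrowing case), and, in Aiyagari’s production version, the interest rate and the wage are determined in general equilibrium by the aggregate capital stock that results from everyone’s precautionary saving.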

Minnesota macroeconomists in the 1980s were fond of another type of model in which heterogeneity bred restricted participation in markets, leading to a lack of insurance against extrinsic uncertainty (that is, shocks affecting nonfundamental variables). The heterogeneity was in age, and it was generated by the coexistence of at least two generations of otherwise identical agents in each one of an infinite succession of periods. Since each agent is born and later dies, participation in markets that opened before their birth (or will open after their death) is de facto ruled out. What came to be known as overlapping generations models were originally engineered by Maurice Allais in 1947 and Paul Samuelson in 1958. Samuelson’s consumption-loan model allowed him to investigate the determination of interest rates (he identified an autarkic equilibrium in which money has no value because agents consume their endowment, and another one in which money is used to delay consumption). His purpose was to “point up a fundamental and intrinsic deficiency in a free price system … no tendency to get you to positions on the [Pareto efficiency frontier] that are ethically optimal.” OLG was subsequently wedded to the examination of the role of money in the economy: it was used in the famous 1972 paper by Lucas, it was the original model in which Cass and Shell framed their sunspots (they argued that OLG was the “only dynamic disaggregated macroeconomic model”), and it was the core of a conference on microfounding the role of money organized by John Kareken and Neil Wallace at the Federal Reserve Bank of Minneapolis in 1978 (Bewley, Cass and Shell contributed to the resulting volume). Aiyagari and many others at Minnesota (half of Wallace’s PhD students) had spent years comparing the properties of infinitely-lived agent models and OLGs.

There were several reasons to bring heterogeneity into general equilibrium models. Some researchers wanted to study its consequences for the level and dynamics of prices and quantities; others were interested in understanding the effects of business cycles on the welfare of various types of consumers (something governments might want to offset by removing some of the risks agents faced, through social security for instance). The latter was the motive behind the dissertation Ayse Imrohoroglu completed at Minnesota in 1988 under Prescott. One of her papers pushed back against Lucas’s 1987 conclusion that fluctuations in aggregate consumption over the business cycle carried a negligible welfare cost. She wrote a model in which time-varying probabilities of finding a job create idiosyncratic income uncertainty that agents cannot completely offset because they face borrowing constraints. She concluded that in specific settings and for some parameters, the welfare cost of business cycles was significant ($128 per person, five times larger than in an economy with perfect insurance).

Per Krusell (1992 Minnesota PhD under Wallace) and Carnegie economist Tony Smith were also concerned with the consequences of heterogeneity for the business cycle and its welfare implications. Their agenda was to check whether a heterogeneous agent model fared better than a representative agent one when it came to replicating the behavior of macro aggregates. They used a production model in which households face idiosyncratic employment shocks and a borrowing restriction. These agents consequently hold different positions in the wealth distribution, with some of them ‘rich’ and some of them ‘poor.’ They also added an aggregate productivity shock and, in a spinoff of the model, differences in agents’ random discount factors (their degree of patience).

They found that when shocks are calibrated to mimic real GDP fluctuations, the resulting wealth distribution is close to the real-world one. They noted that the resulting level and dynamics of aggregates were not substantially different from what was obtained with a representative agent model, a result that was later attributed to their calibration choices. Furthermore, they explained that “the distribution of aggregate wealth was almost completely irrelevant for how the aggregates behave in the equilibrium” (because the shocks they chose were not that big, agents ended up insured well enough that their marginal propensity to save was largely independent of their actual wealth and income, except for the poorest, who don’t weigh much in aggregate wealth anyway; the borrowing constraint didn’t play a big role in the end).

In a follow-up survey, Krusell and Smith made it clear that their purpose was not “to provide a detailed assessment of the extent to which inequality influences the macroeconomy … [or] how inequality is determined.” It seems to me that back then, studying inequality in wealth, income and wages was not the main motive for developing these models (Aiyagari excepted). The growing amount of micro data produced, in particular through the US Panel Study of Income Dynamics initiated by Michigan economist Jim Morgan in the wake of Johnson’s War on Poverty, provided a new set of facts calibrators were challenged to replicate. These included a more disaggregated picture of the income and wealth distribution. Bewley models featured prominently in Ljungqvist and Sargent’s 2000 Recursive Macroeconomic Theory textbook, then, because they were needed to match the “ample evidence that individual households’ positions within the distribution of wealth move over time.” Macroeconomists’ motives for using heterogeneous agent models gradually shifted as they became more directly interested in the two-way relationship between inequality and aggregate fluctuations. Other types of heterogeneity were introduced: in the demographic structure, in the types of shocks agents face (fiscal and productivity shocks among others) and in their innate characteristics (the liquidity of their asset portfolios, their marginal propensity to consume, their health, their preferences, etc.).

Methodological innovations were much needed to solve these models, since aggregate equilibrium prices depended not just on exogenous variables but also on the entire wealth and income distribution of agents, which endogenously changes over time. This meant that solutions could not be derived analytically. If Krusell and Smith’s work proved so influential, it wasn’t merely because they proposed a new model, but also because they built on numerical methods to provide an iterative solution algorithm (based on their idea that the only thing agents need to make consumption decisions, and thus the only thing needed to compute the solution, is the mean of the wealth distribution, which determines future interest rates). The development of heterogeneous agent macro models from the 1990s onward therefore paired theoretical and computational investigations.
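For readers curious about what such an algorithm looks like, here is a minimal sketch in Python of the outer fixed-point loop of a Krusell-Smith-type computation. This is not their code: the household block is replaced by a deliberately crude saving rule (their actual algorithm solves a dynamic programming problem in which the forecast of aggregate capital enters households’ expectations of future prices), and the function names (prices, household_savings) and parameter values are mine, chosen for illustration only. The structure, however, is the one they introduced: guess a law of motion for the mean of the wealth distribution, simulate the economy under that guess, re-estimate the law of motion from the simulated aggregates, and iterate until the perceived and actual laws of motion coincide.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters (illustrative values, not Krusell and Smith's calibration)
alpha, delta = 0.36, 0.025            # capital share, depreciation
N, T, T_burn = 5000, 2000, 500        # households, simulated periods, burn-in
z_vals = np.array([0.99, 1.01])       # aggregate productivity in bad/good state
Pz = np.array([[0.9, 0.1],            # transition matrix of the aggregate state
               [0.1, 0.9]])

def prices(K, z):
    """Factor prices from an aggregate Cobb-Douglas technology."""
    r = alpha * z * K ** (alpha - 1) - delta
    w = (1 - alpha) * z * K ** alpha
    return r, w

def household_savings(a, y, r, w, K_forecast):
    """Crude stand-in for the household problem: save a share of cash on hand
    that rises with the return implied by the forecast of aggregate capital.
    The real algorithm solves a dynamic program at this step."""
    r_forecast, _ = prices(K_forecast, 1.0)
    share = np.clip(0.75 + r_forecast, 0.5, 0.95)
    cash = (1 + r) * a + w * y
    return np.maximum(share * cash, 0.0)   # no-borrowing constraint a' >= 0

# Perceived law of motion: log K' = b0[z] + b1[z] * log K (one pair per state)
b = np.array([[0.1, 0.9], [0.1, 0.9]])

for iteration in range(50):
    a = np.full(N, 10.0)                         # initial individual assets
    y = rng.choice([0.5, 1.0], size=N)           # idiosyncratic endowments
    z_idx, K_path, z_path = 0, [], []
    for t in range(T):
        K = a.mean()
        r, w = prices(K, z_vals[z_idx])
        K_fore = np.exp(np.clip(b[z_idx, 0] + b[z_idx, 1] * np.log(K), -10, 10))
        a = household_savings(a, y, r, w, K_fore)
        # redraw some idiosyncratic shocks (drawn independently of the
        # aggregate state here, a simplification relative to the original)
        flip = rng.random(N) < 0.3
        y = np.where(flip, rng.choice([0.5, 1.0], size=N), y)
        K_path.append(K)
        z_path.append(z_idx)
        z_idx = rng.choice(2, p=Pz[z_idx])
    # Re-estimate the law of motion, state by state, from simulated aggregates
    K_path, z_path = np.array(K_path), np.array(z_path)
    b_new = b.copy()
    for s_idx in range(2):
        mask = z_path[T_burn:-1] == s_idx
        x = np.log(K_path[T_burn:-1][mask])
        x_next = np.log(K_path[T_burn + 1:][mask])
        b_new[s_idx] = np.polynomial.polynomial.polyfit(x, x_next, 1)
    if np.max(np.abs(b_new - b)) < 1e-4:
        break
    b = 0.5 * b + 0.5 * b_new                    # damped update

print("Forecasting rule (intercept, slope) by aggregate state:\n", b)
```

With the full household problem solved properly, Krusell and Smith’s striking finding was that a forecasting rule based on the mean of the distribution alone predicts prices almost perfectly; this “approximate aggregation” result is what made such economies computable at all.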

Questions for critics, supporters and historians of mainstream macro

What I describe here is just a subset of the heterogeneous agent models written from the 1980s onward. I deal only with part of the Minnesota genealogy (excluding works by Diaz, Manuelli and others) and with household heterogeneity. The work done by other economists on precautionary savings and consumption, the catalyzing role played by Angus Deaton’s empirical work on aggregation and consumption, and the whole literature on firm heterogeneity are left aside. This is a rough account which probably contains analytical errors and whose underlying narrative will likely evolve with the historical evidence I gather. But sketchy as it is, it already raises a host of new questions, some of which I ask as a historian and others as a candid observer of current macro modeling debates.

First, I’m puzzled by how this line of research has been ignored by critics of mainstream macro and wiped out of the canonical history of macro told by its major protagonists. You might judge that this cluster of economists was anecdotal, but as I said, it is only a subset of those macroeconomists who have worked with heterogeneous agent models since the 1980s. They were located at the center of the field, working in or produced by the department of economics that critics and eulogists alike believe was setting the agenda for macro in those years: Minnesota. A further sign of its importance is that this approach was institutionalized in curricula, surveys and textbooks way before the 2010s. OLGs featured prominently in Sargent’s Minnesota course notes (published in 1987) as the preferred vehicle to model credit, money and government finance. As I said above, his 2000 textbook with Ljungqvist made ample space for Bewley models. Some computationally oriented textbooks even devoted half their space to heterogeneous agent models by the mid-2000s. José-Victor Rios-Rull, who completed his 1990 Minnesota PhD under Prescott on OLG models, immediately set out to teach the theory and computation of Bewley, OLG and other heterogeneous agent models.


Heterogeneous agent modeling was therefore part of the macro playbook already at the turn of the 2000s, that is, at the moment the state of macro was considered “good.” Contrary to what I had previously imagined, then, the most recent crop of models was not a reaction to the crisis. Rough citation patterns to Aiyagari from the Web of Science economics database, for instance, illustrate that the rise of heterogeneity in macro was a long, slow but steady trend. So why this invisibility? Was it because the general sense was that these models’ conclusions were not essentially different from those reached with representative agent models? Pre-crisis assessments of the literature differed. Lucas, for instance, wrote in 2003 that “[Krusell and Smith] discovered, realistically modeled household heterogeneity just does not matter very much. For individual behavior and welfare, of course, heterogeneity is everything.” But in a survey circulated three years later, Krusell and Smith themselves were more nuanced:

“The aggregate behavior of a model where the only friction … is the absence of insurance markets for idiosyncratic risk is almost identical to that of a representative-agent model … for other issues and other setups, the representative-agent model is not robust. Though we do not claim generality, we suspect that the addition of other frictions is key here. As an illustration we add a certain form of credit-market constraints to the setup with missing insurance markets. We show here that for asset pricing—in particular, the determination of the risk-free interest rate—it can make a big difference whether one uses a representative-agent model or not.”

Anecdotal evidence I’ve been collecting suggests that even more varied assessments were offered in 1990s and 2000s graduate macro courses, ranging from “these models are the future of macro” to “these models are useless as they don’t improve on representative agent models.” If so, why this variance? Tractability issues (authors had to invent new computational algorithms as they brought new types of heterogeneity into these models)? Confirmation bias (retaining only papers that emphasized similarities between the two types of models)? Ideology (rejection of heterogeneous agent models because they opened the door to active risk-offsetting policies)?

My other set of questions is predicated on the claim that acknowledging this modeling tradition should shift current debates on macroeconomic models. Faulting macroeconomists for sticking with infinitely lived representative agent models for 40 years is simply incorrect. Now, you might argue that the departures from this benchmark model I have described were insufficient. But then the question becomes: how much heterogeneity is enough? If you believe that the flourishing literature born out of these early efforts (HANK models, etc.) remains a minimal departure from the “standard” model no matter how far the approach is stretched to encompass new types of heterogeneity, why isn’t it enough? If a more radical break with these practices is needed, if a shift in the nature rather than the sophistication of models is unavoidable (for instance, a shift to agent-based models or non-microfounded models), why is that?

Another set of questions bears on how much “progress” there is in macro right now and whether it’s fast enough. The standard explanation invoked for the burgeoning heterogeneous agent literature is better computers plus better data. I have already speculated elsewhere on why I think this is a necessary but far from sufficient condition to transform economists’ practices (roughly, the effect of running models on faster computers to match them with more fine-grained data is conditional upon the profession’s acceptance of agreed standards of proof, agreed standards of conclusive empirical work, and shared software). But let’s assume this is the case. Then how can macroeconomists be confident that models will be improved enough to “see” the next crisis brewing? How much heterogeneity is this going to take? Right now you can have heterogeneity in a shock, in consumers’ characteristics and maybe in one firm characteristic. How long before tractable models with heterogeneity plus financial frictions plus non-rational expectations plus search plus monopolistic market structures can be developed? What if it takes 20 years and the next crisis is 5 years ahead? To put it graphically, how can I be confident that I won’t see a 2020 NYT cartoon picturing policy makers surrounded by a crowd of homeless, starving people, holding charts of plummeting economic variables, knocking at a door with an “academic macroeconomists” tag on it, the door closed, with a speech bubble that reads: “Come back in 10 years, we’re not ready!” All my questions, in the end, are about the right strategy to build macro engines versatile enough to pin down not just past crises but also brewing imbalances.

EDIT: I got many comments on the final sentence. It has been read as a defense of “forecasting the next crisis” as the criterion on which the quality of macroeconomic models should be judged. This is not what I meant, but I confess that my use of “see” and “brewing crises” is extremely vague. My views on how macroeconomic models should be evaluated are equally muddled. I think it all boils down to what you put under the label “forecasting.” I have already explained here why it is epistemologically inconsistent to hold economists responsible for bad unconditional forecasts. My tentative criterion is that the models macroeconomists use should allow them to track a variety of important phenomena: back in the 2000s, securitization and bubbles; in the 2010s, trade wars, climate, inequality (all of which macroeconomists take into account) and political factors like shrinking democracy and growing polarization (I don’t know whether macro models now feature this kind of variable). It’s not a positive historical statement on what quality criteria in macroeconomics were and are, just a lay opinion on the present situation.

A model is a device that allows economists to observe, and sometimes explain, the economy by zooming in on a small subset of phenomena they think are most relevant. The bulk of macroeconomic models published in the 2000s did not single out what was happening on financial markets as important for understanding the evolution of macroeconomic aggregates. Economists had positive and normative things to say about how financial markets work, but in a distinct field, finance. A few Rajans tracked the mortgage security market and reflected on its macroeconomic consequences, but my understanding is that macroeconomists as a group simply did not have a collective discourse on the macroeconomic consequences of securitization and risk exposure to bring to the public scene (happy to be disproved). Of course you can never know in advance which phenomena are going to turn out to be relevant. But you can build a set of observational devices that track a larger range of phenomena (this also includes statistics: you cannot “see” a new phenomenon, like the rise in the wealth share of the top 1%, unless you can measure it). Is what I’m trying to say with so many words here simply that macroeconomists should get better at conditional forecasting?


2 Comments

  1. I have enjoyed this blog, particularly its highlighting of the efforts of those who attempt to escape the neoclassical straitjacket of rational expectations, the market efficiency hypothesis and the supply-demand and IS-LM totems, in addition to imposing empirical microeconomics on macroeconomics through insisting on some microfoundations (Steve Keen, 2011). I understand that you, as an economic historian, have attempted to document efforts on one issue, the homogeneous agent, with a special interest in Minnesota. The dilemma of contemporary neoclassical economists is that giving up their untenable assumptions would deprive their theory of its comfortable convexity and linearity and make their models unsolvable without some numerical algorithm. Meanwhile, they insist that their theory helped bring about the Great Moderation (e.g., Lucas, Bernanke). As an economic historian, I suggest you try to look for those who admit the serious neoclassical faults in logic, philosophy and applied mathematics, in order to justify and inspire new thinking. The less known group of Islamic economists are facing a similar dilemma: whether to build their analysis on faulty neoclassical economics, or to ignore it completely. But then, ignoring it would mandate building up a new micro theory, which appears to be a rather challenging task.
    May I suggest a new research project that surveys those who tried to escape the mistakes of the orthodoxy and also succeeded in providing an alternative, e.g., the post-Great Depression theory of debt deflation by Irving Fisher and the works of Minsky and others.
    Best regards,

  2. Modern labour economics is based on the study of heterogeneity, in particular the study of gender. The reason for that is that men are boring: they join the workforce as teenagers, retire at 65 and soon after drop dead.

    There is tremendous variation in the behaviour of women regarding fertility, labour supply, part-time work, college major choice, work-life balance, commuting distance, work intensity, career goals and family goals, and the list goes on.
