Quartz #29—>The Complete Guide to Getting into an Economics PhD Program


Link to the Column on Quartz

Here is the full text of my 29th Quartz column, “The Complete Guide to Getting into an Economics PhD Program.” I am glad to now bring it home to supplysideliberal.com, and I expect Noah to post it on his blog Noahpinion as well. It was first published on August 16, 2013. Links to all my other columns can be found here.

To date, this is by far my most popular Quartz column. In addition to great interest in the topic, I attribute its popularity to Noah’s magic touch. Personally, I would rather read Noah’s blog than any other blog in cyberspace. That brilliant style shows through here; I think I managed not to spoil things too much in this column.

This column generated many reactions, two of which you can see as guest posts on supplysideliberal.com: 

  • Jeff Smith is my colleague at the University of Michigan. He amplifies many of the things we say; for a complete guide, be sure to see what Jeff has to say, too.
  • What Bruce Bartlett had to say is worth reading simply because of his interesting career path.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© August 16, 2013: Miles Kimball and Noah Smith, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.

Noah has agreed to give permission on the same terms as I do. 


Back in May, Noah wrote about the amazingly good deal that is the PhD in economics. Why? Because:

  1. You get a job.
  2. You get autonomy.
  3. You get intellectual fulfillment.
  4. The risk is low.
  5. Unlike an MBA, law, or medical degree, you don’t have to worry about paying the sticker price for an econ PhD: After the first year, most schools will give you teaching assistant positions that will pay for the next several years of graduate study, and some schools will take care of your tuition and expenses even in the first year. (See what is written at the end of this post, after the column proper, for more about the costs of graduate study and how an econ PhD’s future earnings make it worthwhile, even if you can’t get a full ride.)

Of course, such a good deal won’t last long now that the story is out, so you need to act fast! Since he wrote his post, Noah has received a large number of emails asking the obvious follow-up question: “How do I get into an econ PhD program?” And Miles has been asked the same thing many times by undergraduates and other students at the University of Michigan. So here, we present together our guide for how to break into the academic Elysium called Econ PhD Land:

(Note: This guide is mainly directed toward native English speakers, or those from countries whose graduate students are typically fluent in English, such as India and most European countries. Almost all highly-ranked graduate programs teach economics in English, and we find that students learn the subtle non-mathematical skills in economics better if English is second nature. If your nationality will make admissions committees wonder about your English skills, you can either get your bachelor’s degree at a—possibly foreign—college or university where almost all classes are taught in English, or you will have to compensate by being better on other dimensions. On the bright side, if you are a native English speaker, or from a country whose graduate students are typically fluent in English, you are already ahead in your quest to get into an economics PhD.)

Here is the not-very-surprising list of things that will help you get into a good econ PhD program:

  • good grades, especially in whatever math and economics classes you take,
  • a good score on the math GRE,
  • some math classes and a statistics class on your transcript,
  • research experience, and definitely at least one letter of recommendation from a researcher,
  • a demonstrable interest in the field of economics.

Chances are, if you’re asking for advice, you probably feel unprepared in one of two ways. Either you don’t have a sterling math background, or you have quantitative skills but are new to the field of econ. Fortunately, we have advice for both types of applicant.

If you’re weak in math…

Fortunately, if you’re weak in math, we have good news: Math is something you can learn. That may sound like a crazy claim to most Americans, who are raised to believe that math ability is in the genes. It may even sound like arrogance coming from two people who have never had to struggle with math. But we’ve both taught people math for many years, and we really believe that it’s true. Genes help a bit, but math is like a foreign language or a sport: effort will result in skill.

Here are the math classes you absolutely should take to get into a good econ program:

  • Linear algebra
  • Multivariable calculus
  • Statistics

Here are the classes you should take, but can probably get away with studying on your own:

  • Ordinary differential equations
  • Real analysis

Linear algebra (matrices, vectors, and all that) is something that you’ll use all the time in econ, especially when doing work on a computer. Multivariable calculus also will be used a lot. And stats of course is absolutely key to almost everything economists do. Differential equations are something you will use once in a while. And real analysis—by far the hardest subject of the five—is something that you will probably never use in real econ research, but which the economics field has decided to use as a sort of general intelligence signaling device.

If you took some math classes but didn’t do very well, don’t worry. Retake the classes. If you are worried about how that will look on your transcript, take the class the first time “off the books” at a different college (many community colleges have calculus classes) or online. Or if you have already gotten a bad grade, take it a second time off the books and then a third time for your transcript. If you work hard, every time you take the class you’ll do better. You will learn the math and be able to prove it by the grade you get. Not only will this help you get into an econ PhD program, once you get in, you’ll breeze through parts of grad school that would otherwise be agony.

Here’s another useful tip: Get a book and study math on your own before taking the corresponding class for a grade. Reading math on your own is something you’re going to have to get used to doing in grad school anyway (especially during your dissertation!), so it’s good to get used to it now. Beyond course-related books, you can either pick up a subject-specific book (Miles learned much of his math from studying books in the Schaum’s outline series), or get a “math for economists” book; regarding the latter, Miles recommends Mathematics for Economists by Simon and Blume, while Noah swears by Mathematical Methods and Models for Economists by de la Fuente. When you study on your own, the most important thing is to work through a bunch of problems. That will give you practice for test-taking, and will be more interesting than just reading through derivations.

This will take some time, of course. That’s OK. That’s what summer is for (right?). If you’re late in your college career, you can always take a fifth year, do a gap year, etc.

When you get to grad school, you will have to take an intensive math course called “math camp” that will take up a good part of your summer. For how to get through math camp itself, see this guide by Jérémie Cohen-Setton.

One more piece of advice for the math-challenged: Be a research assistant on something non-mathy. There are lots of economists doing relatively simple empirical work that requires only some basic statistics knowledge and the ability to use software like Stata. There are more and more experimental economists around, who are always looking for research assistants. Go find a prof and get involved! (If you are still in high school or otherwise haven’t yet chosen a college, you might want to choose one where some of the professors do experiments and so need research assistants—something that is easy to figure out by studying professors’ websites carefully, or by asking about it when you visit the college.)

If you’re new to econ…

If you’re a disillusioned physicist, a bored biostatistician, or a neuroscientist looking to escape that evil Principal Investigator, don’t worry: An econ background is not necessary. A lot of the best economists started out in other fields, while a lot of undergrad econ majors are headed for MBAs or jobs in banks. Econ PhD programs know this. They will probably not mind if you have never taken an econ class.

That said, you may still want to take an econ class, just to verify that you actually like the subject, to start thinking about econ, and to prepare yourself for the concepts you’ll encounter. If you feel like doing this, you can probably skip Econ 101 and 102, and head straight for an Intermediate Micro or Intermediate Macro class.

Another good thing is to read through an econ textbook. Although economics at the PhD level is mostly about the math and statistics and computer modeling (hopefully getting back to the real world somewhere along the way when you do your own research), you may also want to get the flavor of the less mathy parts of economics from one of the well-written lower-level textbooks (such as those by Paul Krugman and Robin Wells, Greg Mankiw, or Tyler Cowen and Alex Tabarrok) and maybe one at a bit higher level as well, such as David Weil’s excellent book on economic growth or Varian’s Intermediate Microeconomics.

Remember to take a statistics class, if you haven’t already. Some technical fields don’t require statistics, so you may have missed this one. But to econ PhD programs, this will be a gaping hole in your resume. Go take stats!

One more thing you can do is research with an economist. Fortunately, economists are generally extremely welcoming to undergrad RAs from outside econ, who often bring extra skills. You’ll get great experience working with data if you don’t have it already. It’ll help you come up with some research ideas to put in your application essays. And of course you’ll get another all-important letter of recommendation.

And now for…

General tips for everyone

Here is the most important tip for everyone: Don’t just apply to “top” schools. For some degrees—an MBA for example—people question whether it’s worthwhile to go to a non-top school. But for econ departments, there’s no question. Both Miles and Noah have marveled at the number of smart people working at non-top schools. That includes some well-known bloggers, by the way—Tyler Cowen teaches at George Mason University (ranked 64th), Mark Thoma teaches at the University of Oregon (ranked 56th), and Scott Sumner teaches at Bentley, for example. Additionally, a flood of new international students is expanding the supply of quality students. That means that the number of high-quality schools is increasing; tomorrow’s top 20 will be like today’s top 10, and tomorrow’s top 100 will be like today’s top 50.

Apply to schools outside of the top 20—any school in the top 100 is worth considering, especially if it is strong in areas you are interested in. If your classmates aren’t as elite as you would like, that just means that you will get more attention from the professors, who almost all came out of top programs themselves. When Noah said in his earlier post that econ PhD students are virtually guaranteed to get jobs in an econ-related field, that applied to schools far down in the ranking. Everyone participates in the legendary centrally managed econ job market. Very few people ever fall through the cracks.

Next—and this should go without saying—don’t be afraid to retake the GRE. If you want to get into a top 10 school, you probably need a perfect or near-perfect score on the math portion of the GRE. For schools lower down the rankings, a good GRE math score is still important. Fortunately, the GRE math section is relatively simple to study for—there are only a finite number of topics covered, and with a little work you can “overlearn” all of them, so you can do them even under time pressure and when you are nervous. In any case, you can keep retaking the test until you get a good score (especially if the early tries are practice tests from the GRE prep books and prep software), and then you’re OK!

Here’s one thing that may surprise you: Getting an econ master’s degree alone won’t help. Although master’s degrees in economics are common among international students who apply to econ PhD programs, American applicants do just fine without a master’s degree on their record. If you want that extra diploma, realize that once you are in a PhD program, you will get a master’s degree automatically after two years. And if you end up dropping out of the PhD program, that master’s degree will be worth more than a stand-alone master’s would. The one reason to get a master’s degree is if it can help you remedy a big deficiency in your record, say not having taken enough math or stats classes, not having taken any econ classes, or not having been able to get anyone whose name admissions committees would recognize to write you a letter of recommendation.

For getting into grad school, much more valuable than a master’s is a stint as a research assistant in the Federal Reserve System or at a think tank—though these days, such positions can often be as hard to get into as a PhD program!

Finally—and if you’re reading this, chances are you’re already doing this—read some econ blogs. (See Miles’s speculations about the future of the econ blogosphere here.) Econ blogs are no substitute for econ classes, but they’re a great complement. Blogs are good for picking up the lingo of academic economists, and learning to think like an economist. Don’t be afraid to write a blog either, even if no one ever reads it (you don’t have to be writing at the same level as Evan Soltas or Yichuan Wang); you can still put it on your CV, or just practice writing down your thoughts. And when you write your dissertation, and do research later on in your career, you are going to have to think for yourself outside the context of a class. One way to practice thinking critically is by critiquing others’ blog posts, at least in your head.

Anyway, if you want to have intellectual stimulation and good work-life balance, and a near-guarantee of a well-paying job in your field of interest, an econ PhD could be just the thing for you. Don’t be scared of the math and the jargon. We’d love to have you.


In case you are curious, let me say a little about the financial costs and benefits of an economics PhD. At Michigan and other top places, PhD students are fully funded. Here, that means that the first year’s tuition and costs are covered (including a stipend for your living expenses). In years 2 through 5, as long as you are in good standing in the program, the costs of a PhD are just the work you do as a teaching assistant. So there are no out-of-pocket costs as long as you finish within five years, which is tough but doable if you work hard to stay on track. Tuition is relatively low in years 6 and 7 if you can’t finish in 5 years. Plus, graduate students in economics who have had that much teaching experience often find they can make about as much money by tutoring struggling undergraduates as they could have by being a teaching assistant.

When a school can’t manage full funding, the first charge it typically adds is first-year tuition for the bottom half of the admitted pool, since a student can’t realistically teach in the first year, when the courses the grad students are taking are too heavy. That might add up to a one-time expense of $40,000 or so in tuition, plus living expenses.

On pay, the market price for a brand-new assistant professor at a top department seems to be at least $115,000 for 9 months, with the opportunity to earn more during the summer months. If you don’t quite make it to that level, University of Michigan PhDs I have asked seem to get at least an $80,000 starting salary, and Louis Johnston tweets that below-top liberal arts colleges pay a starting salary in the $55,000 to $60,000 range. But remember that all of these numbers are 9-month salaries that allow for the possibility (though not the regularity) of earning more in the summer. Government jobs tend to pay 12-month salaries that are about 12/9 of 9-month academic salaries at a comparable level.
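To make the 9-month versus 12-month comparison concrete, here is a quick back-of-the-envelope calculation. The salary figures are the ones quoted above; the 12/9 conversion is just the rule of thumb from this paragraph, not a guarantee of what any particular job pays:

```python
def twelve_month_equivalent(nine_month_salary: float) -> float:
    """Convert a 9-month academic salary to its rough 12-month equivalent,
    using the 12/9 rule of thumb for comparing academic and government pay."""
    return nine_month_salary * 12 / 9

# Top-department new assistant professor: $115,000 for 9 months
print(twelve_month_equivalent(115_000))  # about $153,333 on a 12-month basis

# Low end of the liberal-arts-college range: $55,000 for 9 months
print(twelve_month_equivalent(55_000))   # about $73,333 on a 12-month basis
```

In other words, a government job paying about $153,000 on a 12-month basis is roughly comparable to a $115,000 9-month academic salary, before counting any summer earnings on the academic side.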

There is definitely the possibility of being paid very well in academic economics, though not as well as the upside potential if you go to Wall Street. For example, with summer pay included, quite a few of the full economics professors at the University of Michigan make more than $250,000 a year. (Because we are at a state university, our salaries are public.)

The bottom line is that the financial returns are good enough that you should have no hesitation begging or borrowing to finance your Economics PhD. (Please don’t steal to finance it.)

What about the costs of the extra year it might take to study math the way we recommend? If you have been developing self-discipline like a champion, but are short on money and summers aren’t enough, you could spend a gap year right after high school just studying math, living in your parents’ house at very low cost; most colleges will let you defer admission for a year after they have let you in.    

Update: 

I liked this comment that Kevin C. Smith (an MD) sent to Quartz:

Great advice!
I almost flunked Grade 8 because my math was so bad [back in the day they would flunk you for that, at least in Alberta.]
I wound up heading for medicine. A friend who was a few years ahead of me warned: “You’ll never make it if you are not good at math!”
I hired a math tutor in August [before University started], and did every question at the end of every chapter in every one of my text books. I could call my tutor when I got stuck [God bless her, wherever she is in the world today!] Math got to be fun after a while [like being really good at solving puzzles.]
You might add to your list of suggestions: hire a tutor, and do all the questions in all your textbooks.
Long story short, I won the Gold Medal for Science, and have found that a really good grasp of math has helped my enjoyment of the world and of my work a lot.

Expert Performance and Deliberate Practice

I wanted to back up some of what I have been writing about deliberate practice with more academic references. It matters because the evidence indicates that human capital accumulation can be dramatically improved by getting to best practice about practicing skills. K. Anders Ericsson is one of the foremost academic experts about deliberate practice. Here is an excerpt from his “Expert Performance and Deliberate Practice”:

The recent advances in our understanding of the complex representations, knowledge and skills that mediate the superior performance of experts derive primarily from studies where experts are instructed to think aloud while completing representative tasks in their domains, such as chess, music, physics, sports and medicine (Chi, Glaser & Farr, 1988; Ericsson & Smith, 1991; Starkes & Allard, 1993). For appropriate challenging problems experts don’t just automatically extract patterns and retrieve their response directly from memory. Instead they select the relevant information and encode it in special representations in working memory that allow planning, evaluation and reasoning about alternative courses of action (Ericsson & Lehmann, 1996). Hence, the difference between experts and less skilled subjects is not merely a matter of the amount and complexity of the accumulated knowledge; it also reflects qualitative differences in the organization of knowledge and its representation (Chi, Glaser & Rees, 1982).  Experts’ knowledge is encoded around key domain-related concepts and solution procedures that allow rapid and reliable retrieval whenever stored information is relevant. Less skilled subjects’ knowledge, in contrast, is encoded using everyday concepts that make the retrieval of even their limited relevant knowledge difficult and unreliable. Furthermore, experts have acquired domain-specific memory skills that allow them to rely on long-term memory (Long-Term Working Memory, Ericsson & Kintsch, 1995) to dramatically expand the amount of information that can be kept accessible during planning and during reasoning about alternative courses of action.  The superior quality of the experts’ mental representations allow them to adapt rapidly to changing circumstances and anticipate future events in advance.  
The same acquired representations appear to be essential for experts’ ability to monitor and evaluate their own performance (Ericsson, 1996; Glaser, 1996) so they can keep improving their own performance by designing their own training and assimilating new knowledge.

Below are some references. You can find a lot more by googling “Ericsson deliberate practice.”

References:

Bolger, F., and G. Wright, 1992, ‘Reliability and validity in expert judgment.’ In *Expertise and Decision Support*, G. Wright and F. Bolger, eds. New York: Plenum, pp. 47-76.

Camerer, C. F., and E. J. Johnson, 1991,  ‘The process-performance paradox in expert judgment: How can the experts know so much and predict so badly?’ In *Towards a General Theory of Expertise: Prospects and Limits*, K. A. Ericsson and J. Smith, eds. Cambridge: Cambridge University Press, pp. 195-217.

Charness, N., R. Th. Krampe, and U. Mayr, 1996, ‘The role of practice and coaching in entrepreneurial skill domains: An international comparison of life-span chess skill acquisition.’ In *The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games*, K. A. Ericsson, ed. Mahwah, NJ: Erlbaum, pp. 51-80.

Chase, W. G., and H. A. Simon, 1973, ‘The mind’s eye in chess.’ In *Visual Information Processing*, W. G. Chase, ed. New York: Academic Press, pp. 215-281.

Chi, M. T. H., R. Glaser, and M. J. Farr, eds., 1988,  *The nature of expertise*. Hillsdale, NJ:  Erlbaum.

Chi, M. T. H., R. Glaser, and E. Rees, 1982, ‘Expertise in problem solving.’ In *Advances in the Psychology of Human Intelligence*, R. S. Sternberg, ed. Hillsdale, NJ: Erlbaum, Vol. 1, pp. 1-75.

Dawes, R. M., 1994, *House of Cards: Psychology and Psychotherapy Built on Myth*. New York: Free Press.

Djakow, Petrowski, and Rudik, 1927, *Psychologie des Schachspiels [Psychology of Chess]*. Berlin: Walter de Gruyter.

Doll, J., and U. Mayr, 1987,  ‘Intelligenz und Schachleistung - eine Untersuchung an Schachexperten.  [Intelligence and achievement in chess - a study of chess masters].’  *Psychologische Beiträge*, 29: 270-289.

de Groot, A., 1978,  *Thought and Choice in Chess*.  The Hague:  Mouton. (Original work published 1946).

Ericsson, K. A., 1996,  ‘The acquisition of expert performance: An introduction to some of the issues.’ In *The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games*, K. A. Ericsson, ed. Mahwah, NJ: Erlbaum, pp. 1-50.

Ericsson, K. A., and W. Kintsch, 1995,  ‘Long-term working memory.’ *Psychological Review*, 102: 211-245.

Ericsson, K. A., R. Th. Krampe, and C. Tesch-Römer, 1993,  ‘The role of deliberate practice in the acquisition of expert performance.’ *Psychological Review*, 100: 363-406.

Ericsson, K. A., and A. C. Lehmann, 1996,  ‘Expert and exceptional performance: Evidence on maximal adaptations on task constraints.’  *Annual Review of Psychology*, 47: 273-305.

Ericsson, K. A., and J. Smith, eds., 1991,  *Toward a General Theory of Expertise:  Prospects and Limits*.  Cambridge, England:  Cambridge University Press.

Glaser, R., 1996, ‘Changing the agency for learning: Acquiring expert performance.’ In *The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games*, K. A. Ericsson, ed. Mahwah, NJ: Erlbaum, pp. 303-311.

Hoffman, R. R. ed., 1992,  *The Psychology of Expertise: Cognitive Research and Empirical AI*. New York: Springer-Verlag.

Proctor, R. W., and A. Dutta, 1995, *Skill Acquisition and Human Performance*. Thousand Oaks, CA: Sage.

Richman, H. B., F. Gobet, J. J. Staszewski, and H. A. Simon, 1996, ‘Perceptual and memory processes in the acquisition of expert performance: The EPAM model.’ In *The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games*, K. A. Ericsson, ed. Mahwah, NJ: Erlbaum, pp. 167-187.

Simon, H. A., and W. G. Chase, 1973,  ‘Skill in chess.’  *American Scientist*, 61: 394-403.

Sloboda, J. A., J. W. Davidson, M. J. A.  Howe,  and D. G. Moore, 1996, ‘The role of practice in the development of performing musicians.’ *British Journal of Psychology*,  87: 287-309.

Starkes, J. L., and F. Allard,  eds., 1993, *Cognitive Issues in Motor Expertise*. Amsterdam: North Holland.

Starkes, J. L., J. Deakin, F. Allard, N. J. Hodges, and A. Hayes, 1996, ‘Deliberate practice in sports: What is it anyway?’ In *The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games*, K. A. Ericsson, ed. Mahwah, NJ: Erlbaum, pp. 81-106.

Taylor, I. A., 1975,  ‘A retrospective view of creativity investigation.’ In *Perspectives in creativity*, I. A. Taylor and J. W. Getzels, eds. Chicago, IL: Aldine Publishing Co, pp. 1-36.

VanLehn, K., 1996,  ‘Cognitive skill acquisition.’ *Annual Review of Psychology*, 47: 513-539.

*Webster’s Third New International Dictionary*, 1976. Springfield, MA: Merriam.

Evan Soltas: How Economics Can Save the Whales

This post first appeared on Bloomberg.com on August 22, 2013. Thanks to Evan for giving me permission to reprint it here. 


Strong consensus on policy still eludes most branches of economics. How can poor countries best achieve rapid sustainable growth, for instance? That’s probably the most important question in all of economics. Development economists have “very little clue,” according to Lane Kenworthy, a leading scholar in the field and professor at the University of Arizona. But there’s an interesting exception. Without attracting much notice, one branch of the discipline has made a lot of progress in devising policies that command consensus: environmental economics.

Of course, recommendations aren’t necessarily followed by policy makers. The U.S. is still far away from taxing carbon emissions, for instance, as just about every environmental economist would favor. But the field has some practical successes to boast of. At the top of the list is a program that rations the right to fish, known as “catch share.” It has proven shockingly successful in halting overfishing and ecological collapse – the point at which stocks can no longer replenish themselves.

A study of 11,135 fisheries showed that introducing catch share roughly halved the chance of collapse. The system caught on in the 1980s and 1990s after decades of other well-intentioned efforts failed. Economist H. Scott Gordon is usually credited with laying out the problem and the solution in 1954.

Modern environmental economists accuse their predecessors of forgetting about incentives. Catch-share schemes issue permits to individuals and groups to fish some portion of the grounds or keep some fraction of the total catch. If fishermen exceed their share, they can buy extra rights from others, pay a hefty fine or even lose their fishing rights, depending on the particular arrangement. The system works because it aligns the interests of individual fishermen with the sustainability of the entire fishery. Everybody rises and falls with the fate of the total catch, eliminating destructive rivalries among fishermen.

Environmental economists have lately turned their attention to Atlantic bluefin tuna and whales. The National Marine Fisheries Service has just proposed new regulations that would for the first time establish a catch-share program for the endangered and lucrative bluefin. And a group of economists is pushing for a new international agreement on whaling.

In both cases the problem is overfishing. The bluefin tuna population has dropped by a third in the Atlantic Ocean and by an incredible 96 percent in the Pacific. And whaling, which is supposedly subject to strict international rules that ban commercial fishing and regulate scientific work, is making a sad comeback. The total worldwide annual catch has risen more than fivefold over the last 20 years.

Ben Minteer, Leah Gerber, Christopher Costello and Steven Gaines have called for a new and properly regulated market in whales. Set a sustainable worldwide quota, they say, and allow fishermen, scientists and conservationists alike to bid for catch rights. Then watch the system that saved other fish species set whaling right.

The idea outrages many environmentalists. Putting a price on whales, they argue, moves even further away from conservationist principles than the current ban, however ineffective. They’re wrong. “The arguments that whales should not be hunted, whatever their merits, have not been winning where it counts – that is, as measured by the size of the whale population,” says economist Timothy Taylor, editor of the Journal of Economic Perspectives.

I’d go further. Whale auctions would attract green donations to reduce catches below the quota. Environmental groups might find that the system was a blessing in disguise. If Japan were forced to buy permits to support its “scientific research,” the biggest loophole in the current ban on commercial whaling would be closed. Japan will resist the idea – but if it were persuaded to comply, environmental economics could score one of its greatest triumphs.

Quartz #26—>The Government and the Mob

Link to the Column on Quartz

Here is the full text of my 26th Quartz column, “The US government’s spying is straight out of the mob’s playbook,” now brought home to supplysideliberal.com. It was first published on July 4, 2013. Links to all my other columns can be found here. My preferred title above better represents my broader theme: what governments need to do to foster economic growth.

I pitched this column to my editors as an Independence Day column. I am proud of our American experiment: attempting government of the people, by the people, and for the people. This column is about the principles behind that American experiment, from an economic perspective. 

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© July 4, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.


Reading Ben Zimmer’s “How to talk like Whitey Bulger: Mobster lingo gets its day in court” in the International Herald Tribune provided by the hotel during my recent stay in Tokyo reminded me of my litany of the basics the government must provide to make anything close to market efficiency possible:

  1. blocking theft,
  2. blocking deception, 
  3. blocking threats of violence.

Let me give two examples of what I have written in this vein. The first is from “So You Want to Save the World”:

If someone’s overall objective is evil or self-serving, the only way what they do will have a good effect on the world is if all their attempts to get their way by harming others are forestalled by careful social engineering. It is exactly such social engineering to prevent people from stealing, deceiving, or threatening violence that yields the good results from free markets that Adam Smith talks about in The Wealth of Nations—the book that got modern economics off the ground.

The second is from “Leveling Up: Making the Transition from Poor Country to Rich Country”:

The entry levels in the quest to become a rich country are the hardest.  The basic problem is that any government strong enough to stop people from stealing from each other, deceiving each other, and threatening each other with violence, is itself strong enough to steal, deceive, and threaten with violence.  Designing strong but limited government that will prevent theft, deceit, and threats of violence, without perpetrating theft, deceit, and threats of violence at a horrific level is quite a difficult trick that most countries throughout history have not managed to perform.

“How to talk like Whitey Bulger: Mobster lingo gets its day in court” describes the example I have in mind when I write about “threats of violence”:

Charging “rent” is extorting money from business owners under the threat of violence.

I have thought about whether I should include actual violence in the list, but decided that, with only a few exceptions, the motivations for violence boil down either to theft or to lending credibility to one’s threats of violence.

Deception covers a wide range of destructive activities. The idea that the free market requires tolerance of corporate deception is itself a big lie. Even routine secrets have a measure of deception to them, and as Sissela Bok demonstrates in her book Secrets: On the Ethics of Concealment and Revelation, the ethical justification for keeping secrets is much trickier than many people think.

Blackmail presents an interesting case that doesn’t quite fit my litany: the threat to reveal deception is used to distort the deceiver’s behavior. But there is an element of deception in such a revelation, since the selective revelation of one person’s secrets and not the secrets of others makes the person whose secret is revealed look much worse than if all secrets were revealed. I think I would fare very well if the day ever came that Jesus predicted when he said:

For there is nothing hidden that will not be disclosed, and nothing concealed that will not be known or brought out into the open. (Luke 8:17)

But I have no doubt that if someone revealed all of my secrets, while everyone else got to keep theirs, I could be made to look very bad.

The possibility that threats of selective revelation of secrets could be used by members of the government to blackmail others—or to deceive the public about the relative merits of different individuals—is the most serious concern raised by government spying. That is why I join Max Frankel in advocating that government spying be overseen not by judges in their spare time, but by a dedicated court whose judges can develop special expertise, with lawyers who have high-level security clearance given the task of representing the interests of those whose communications are being monitored, whether directly or indirectly. Frankel said it this way in his New York Times editorial “Where did our ‘inalienable rights’ go?”:

Despite the predilections of federal judges to defer to the executive branch, I think in the long run we have no choice but to entrust our freedom to them. But the secret world of intelligence demands its own special, permanent court, like the United States Tax Court, whose members are confirmed by the Senate for terms that allow them to become real experts in the subject. Such a court should inform the public about the nature of its cases and its record of approvals and denials. Most important, it should summon special attorneys to test the government’s secret evidence in every case, so that a full court hears a genuine adversarial debate before intruding on a citizen’s civil rights. That, too, might cost a little time in some crisis. There’s no escaping the fact that freedom is expensive.

If modern technology makes it harder to keep secrets in general, I think that is all to the good. People usually behave better when they believe that their actions could become known. (See for example this TED-Ed talk by Jeff Hancock, “The Future of Lying,” which reports evidence that people are more honest online than offline.) Those overthrowing tyrants may benefit from secrecy in putting together their revolutions, but tyrants need secrecy even more. So a general decline in the ability to keep secrets is likely to be a net plus even there.

Above, I pointed out the fundamental problem of political economy:

… any government strong enough to stop people from stealing from each other, deceiving each other, and threatening each other with violence, is itself strong enough to steal, deceive, and threaten with violence.

Although it pains me to say so, the literature on economic growth (see for example Pranab Bardhan’s Journal of Economic Literature survey article ”Corruption and Development: A Review of the Issues“) argues that centralized corruption by a strong but evil state can yield better economic outcomes than decentralized corruption by many local mob bosses or warlords. Nevertheless, I believe the elimination of tyrants and the progress of democracy throughout the world will be one of the most important contributors to human welfare in the coming decades. May those of us who enjoy the blessings of democracy be willing to make the sacrifices that could be necessary to help others enjoy that blessing. And may all nations add to democracy all of the other restraints on government necessary to make government our servant rather than our master.

Why I Write

Link to the essay on Pieria, which has an ever-expanding list of links to essays by other economics bloggers in the “Why I Write” series

On his Pieria website, Tomas Hirst has put together a series in which each Pieria expert answers the question “Why Do You Write?” Here is Tomas’s introduction:

In this series we aim to shed light on the motivations, inspirations and writing processes of some of the leading financial bloggers. Here, Miles Kimball, Professor of Economics and Survey Research at the University of Michigan and author of Confessions of a Supply-Side Liberal, explains why he writes.

Tomas did a masterful job of posing questions and editing the email responses I sent him into this essay, which first appeared on Pieria on July 17, 2013.


“A year ago, I started a blog, “Confessions of a Supply-Side Liberal” out of a mixture of raw ambition, desire for self-expression, duty, and hope. Sometimes duty keeps me up late at night, but it is raw ambition that makes me wake up too early in the morning so that I slip further and further behind in my sleep.”

The quote above comes from my post “So You Want to Save the World”, which is about ethics for bloggers, as is my series of John Stuart Mill quotations (easy to find in my Religion, Humanities and Science sub-blog) focusing on taking alternative views seriously (even if they are very wrong – for reasons JS Mill lays out carefully – and even more if they just might be right). Some of the more recent John Stuart Mill quotations also express why I think blogs are so important in the media landscape. 

Blogs (and Twitter) allow real discussion and debate about alternative views for the following reasons: 

  1. Blogs are one place that progressives and conservatives actually interact and wrestle with each other’s arguments.
  2. Blogs and Twitter and comments point out alternative perspectives one wouldn’t have thought of.
  3. The back-and-forth on blogs, Twitter and comments helps one clarify one’s own thinking.

Finally, in the HuffPost Live segment that I did with Umair Haque and others, I give a view of blogs as the place where people can really get the real deal, not dumbed down. One thing I mention is that breaking an argument down into the blog format of relatively short posts can give it much more impact and make it much more understandable than expecting someone to have the patience to sit down, read something very long, and understand it. Even if I do some posts that are long, they alternate with shorter ones on the same topics.

One thing that may be a serious mistake is that I find myself wanting to resist anything like Krugman’s labeling of the level of wonkishness of his posts. It is unrealistic, perhaps, but I hope people will try the harder posts without expecting too much of themselves, and get something from those posts even if they don’t fully understand them. Some of what is behind my attitude on this can be seen in today’s post on deliberate practice. Without trying to understand things that are a bit hard, how can people get smarter? If I have a range of difficulties across my posts, hopefully people can read both the ones that are easy for them and the ones that are a bit difficult, and let themselves off the hook for the ones that are over-the-top difficult. I am glad for the discipline of writing some of my pieces for a more general audience in Quartz. Although those columns are toned down relative to some of the posts on my blog, I am proud that they push the boundaries on difficulty level in pieces intended for a popular audience. I hope that even people who did not understand everything in the two columns I wrote with Yichuan Wang on what we found in the Reinhart and Rogoff dataset got a sense for what it means to be careful in analyzing data.

“I think I have a niche of daring to do relatively difficult posts. In addition to hoping that people will stretch, that accords with this strategy” - (from A Year in the Life of a Supply-Side Liberal)

My primary strategy for making the world a better place is not to influence politics in the short run, but to make my case on the merits for each issue to the cohort of young economists who will collectively have such a big influence on policy in decades to come, and to the economists who now staff government agencies (including the Federal Reserve System).

Collectively, I believe that economists are a group that can move the world. It is not altogether quixotic to hope to affect the ideas that will animate the thinking of the rising generation of economists. And, as a bonus, many others will enjoy hearing, and may be affected by, arguments directed at that group.

On the future of the economics blogosphere

The economics blogosphere is already vibrant, brimming with intellectual energy. The obvious next phase of its development is for more and more of the most academically respected economists to engage in blogging as the respectability of blogging grows in a virtuous cycle. Historically, the interesting thing is that this will constitute a full-scale revival of the literary economics that prevailed before the mathematization of economics in the early 20th century–this time, alongside mathematized economics. That emerging two-barrel, balanced approach to economics through both math and accessible writing in counterpoint is an important development that will make economics both more powerful at getting to the truth and more powerful as a social force.

Note: In the comments below, Isomorphisms mentions my idiosyncratic “Links I am Thinking About” aggregator blog. I have a link to it on my sidebar. I think many of my readers might find it of some interest.

David Byrne on Non-Monetary Motivations

As illustrated by arguments I make in my posts “Scott Adams’s Finest Hour: How to Tax the Rich” and “Copyright,” understanding the strength of non-monetary motivations for work is important for public policy. David Byrne gives a vivid description of non-monetary motivations in his line of work in this passage from his book How Music Works, pp. 203-204.

How important is getting one’s work out to the public? Should that even matter to a creative artist? Would I make music if no one were listening? If I were a hermit and lived on a mountaintop like a bearded guy in a cartoon, would I take the time to write a song? Many visual artists whose work I love–like Henry Darger, Gordon Carter, and James Castle–never shared their art. They worked ceaselessly and hoarded their creations, which were discovered only after they died or moved out of their apartments. Could I do that? Why would I? Don’t we want some validation, respect, feedback? Come to think of it, I might do it–in fact, I did, when I was in high school puttering around with those tape loops and splicing. I think those experiments were witnessed by exactly one friend. However, even an audience of one is not zero. 

Still, making music is its own reward. It feels good and can be a therapeutic outlet; maybe that’s why so many people work hard in music for no money or public recognition at all. In Ireland and elsewhere, amateurs play well-known songs in pubs, and their ambition doesn’t stretch beyond the door. They are getting recognition (or humiliation) within their village, though. 

In North America, families used to gather around the piano in the parlor. Any monetary remuneration that might have accrued from these “concerts” was secondary. To be honest, even tooling around with tapes in high school, I think I imagined that someone, somehow, might hear my music one day. Maybe not those particular experiments, but I imagined that they might be the baby steps that would allow my more mature expressions to come into being and eventually reach others. Could I have unconsciously had such a long-range plan? I have continued to make plenty of music, often with no clear goal in sight, but I guess somewhere in the back of my mind I believe that the aimless wandering down a meandering path will surely lead to some (well-deserved, in my mind) reward down the road. There’s a kind of unjustified faith involved here. 

Is the satisfaction that comes from public recognition–however small, however fleeting–a driving force for the creative act? I am going to assume that most of us who make music (or pursue other creative endeavors) do indeed dream that someday someone else will hear, see, or read what we’ve made.

For balance, I should point out that in the paragraph after this passage, David writes of monetary motivations as well:

Many of us who do seek validation dream that we will not only have that dialogue with our peers and the public, but that we might even be compensated for our creative efforts, which is another kind of validation. We’re not talking rich and famous; making a life with one’s work is enough. 

But notice that in David’s description, even the monetary motivation has two dimensions: enabling consumption and validation.

Shane Parrish on Deliberate Practice

In my introductory macroeconomics class, I recommend that my students read Daniel Coyle’s book The Talent Code: Greatness Isn’t Born. It’s Grown. Here’s How. (Daniel Coyle also has a website called The Talent Code.) There are two key messages of that book, important both for gaining skill in economics and for thinking about the economics of education and economic growth:

  1. Effort can bring skill to almost anyone.
  2. The kind of effort required is the difficult regimen of deliberate practice.

Geoff Colvin’s book Talent is Overrated has the same two messages. Shane Parrish, in his Farnam Street blog post “What is Deliberate Practice?”, ably pulls from Talent is Overrated a description of deliberate practice.

Shane begins with these two quotations from Talent is Overrated indicating the difference between deliberate practice and what most people think of when they think of practice:  

  • In field after field, when it comes to centrally important skills…parole officers predicting recidivism, college admissions officials judging applicants—people with lots of experience were no better at their jobs than those with less experience. 
  • Deliberate practice is hard. It hurts. But it works. More of it equals better performance and tons of it equals great performance.

Shane clarifies:

Most of what we consider practice is really just playing around — we’re in our comfort zone.

When you venture off to the golf range to hit a bucket of balls what you’re really doing is having fun. You’re not getting better. Understanding the difference between fun and deliberate practice unlocks the key to improving performance.

Shane then structures the rest of his post around these things that Geoff Colvin says about deliberate practice:

  1. It is activity designed specifically to improve performance, often with a teacher’s help;
  2. it can be repeated a lot; feedback on results is continuously available;
  3. it’s highly demanding mentally, whether the activity is purely intellectual, such as chess or business-related activities, or heavily physical, such as sports; it isn’t much fun.

1. Deliberate practice is designed to improve performance. Teachers can help in that design. As Geoff Colvin writes: 

  • In some fields, especially intellectual ones such as the arts, sciences, and business, people may eventually become skilled enough to design their own practice. But anyone who thinks they’ve outgrown the benefits of a teacher’s help should at least question that view. There’s a reason why the world’s best golfers still go to teachers.
  • A chess teacher is looking at the same boards as the student but can see that the student is consistently overlooking an important threat. A business coach is looking at the same situations as a manager but can see, for example, that the manager systematically fails to communicate his intentions clearly.

Shane comments:

Teachers, or coaches, see what you miss and make you aware of where you’re falling short.

With or without a teacher, great performers deconstruct elements of what they do into chunks they can practice. They get better at that aspect and move on to the next.

Noel Tichy, professor at the University of Michigan business school and the former chief of General Electric’s famous management development center at Crotonville, puts the concept of practice into three zones: the comfort zone, the learning zone, and the panic zone.

Most of the time we’re practicing we’re really doing activities in our comfort zone. This doesn’t help us improve because we can already do these activities easily. On the other hand, operating in the panic zone leaves us paralyzed as the activities are too difficult and we don’t know where to start. The only way to make progress is to operate in the learning zone, which are those activities that are just out of reach.

2. Deliberate practice can be repeated a lot, with appropriate feedback.

Shane gives these two quotations from Talent is Overrated:

  • Let us briefly illustrate the difference between work and deliberate practice. During a three hour baseball game, a batter may only get 5-15 pitches (perhaps one or two relevant to a particular weakness), whereas during optimal practice of the same duration, a batter working with a dedicated pitcher has several hundred batting opportunities, where this weakness can be systematically exploited. 
  • You can work on technique all you like, but if you can’t see the effects, two things will happen: You won’t get any better, and you’ll stop caring.

Shane points out that if results must be subjectively interpreted, it is valuable not to have to rely entirely on one’s own opinion to judge the results. A coach can provide such a second opinion. But sometimes all it takes is a friend with good judgment. 

3. Deliberate practice is highly demanding mentally and isn’t much fun. 

Shane writes

Doing things we know how to do is fun and does not require a lot of effort. Deliberate practice, however, is not fun. Breaking down a task you wish to master into its constituent parts and then working on those areas systematically requires a lot of effort.

Indeed, Geoff Colvin claims that it is hard to do deliberate practice for more than four or five hours a day, or for more than ninety minutes at a stretch. 

Deliberate practice can also be embarrassing. Shane quotes this claim from Susan Cain’s book Quiet: The Power of Introverts in a World That Can’t Stop Talking:

  • Deliberate Practice is best conducted alone for several reasons. It takes intense concentration, and other people can be distracting. It requires deep motivation, often self-generated. But most important, it involves working on the task that’s most challenging to you personally. Only when you’re alone, Ericsson told me, can you “go directly to the part that’s challenging to you. If you want to improve what you’re doing, you have to be the one who generates the move. Imagine a group class—you’re the one generating the move only a small percentage of the time.”

Presumably, a tutorial by a good coach is even better than doing deliberate practice alone.  But some people do manage deliberate practice alone. A wonderful example is Ben Franklin.

A detailed example of deliberate practice: Ben Franklin. I remember vividly from my own reading of Talent is Overrated this passage Shane quotes about Ben’s program for improving his writing:

  • First, he found examples of prose clearly superior to anything he could produce, a bound volume of the Spectator, the great English periodical written by Joseph Addison and Richard Steele. Any of us might have done something similar. But Franklin then embarked on a remarkable program that few of us would have ever thought of.
  • It began with his reading a Spectator article and marking brief notes on the meaning of each sentence; a few days later he would take up the notes and try to express the meaning of each sentence in his own words. When done, he compared his essay with the original, “discovered some of my faults, and corrected them.”
  • One of the faults he noticed was his poor vocabulary. What could he do about that? He realized that writing poetry required an extensive “stock of words” because he might need to express any given meaning in many different ways depending on the demands of rhyme or meter. So he would rewrite Spectator essays in verse. …
  • Franklin realized also that a key element of a good essay is its organization, so he developed a method to work on that. He would again make short notes on each sentence in an essay, but would write each note on a separate slip of paper. He would then mix up the notes and set them aside for weeks, until he had forgotten the essay. At that point he would try to put the notes in their correct order, attempt to write the essay, and then compare it with the original; again, he “discovered many faults and amended them.”

Other Readings. Shane recommends this New Yorker article by Dr. Atul Gawande. Many others have written online about deliberate practice, as googling the words “deliberate practice” indicates. One I stumbled across in my googling was Justine Musk’s excellent post “the secret to becoming a successful published writer: putting the deliberate in deliberate practice.”

A Plea: I would love to see more in the economics blogosphere about what deliberate practice looks like for gaining skill in economics.

Quartz #25—>Examining the Entrails: Is There Any Evidence for an Effect of Debt on Growth in the Reinhart and Rogoff Data?

Link to the Column on Quartz

Here is the full text of my 25th Quartz column, coauthored with Yichuan Wang, “Autopsy: Economists looked even closer at Reinhart and Rogoff’s data–and the results might surprise you.” It is now brought home to supplysideliberal.com (and soon to Yichuan’s Synthenomics). It was first published on June 12, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© June 12, 2013: Miles Kimball and Yichuan Wang, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.

(Yichuan has agreed to extend permission on the same terms that I do.)


In order to predict the future, the ancient Romans would often sacrifice an animal, open up its guts and look closely at its entrails. Since the discovery of an Excel spreadsheet error in Carmen Reinhart and Ken Rogoff’s analysis of debt and growth by University of Massachusetts at Amherst graduate student Thomas Herndon and his professors Michael Ash and Robert Pollin, many economists have taken a cue from the Romans with the Reinhart and Rogoff data to see if there is any hint of an effect of high levels of national debt on economic growth. The two of us gave our first take in analyzing the Reinhart and Rogoff data in our May 29, 2013 column. We wrote that “…we could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth.”

Our further analysis since then (here, and here), and University of Massachusetts at Amherst Professor Arindrajit Dube’s analysis since then and full release of his previous work (here, here, and here) in response to our column have only confirmed that view. (Links to other reactions to our earlier column can be found here.) Indeed, although we have found no shred of evidence for a negative effect of government debt on growth in the Reinhart and Rogoff data, the two of us have found at least a mirage of a positive effect of debt on growth, as shown in the graph above.

The point of the graph at the top is to find out whether the ratio of debt to GDP has any relationship to GDP growth, after isolating the part of GDP growth that can’t be predicted by past GDP growth alone. Let us give two examples of why it might be important to adjust for past growth rates when looking at the effect of debt on growth. First, if a country is run badly in other ways, it is likely to grow slowly whatever its level of debt. In order to see if debt makes things worse, it is crucial to adjust for the fact that it was growing slowly to begin with. Second, if a country is run well, it is likely to grow fast while it is in the “catch-up” phase of copying proven techniques from other countries. Then as it gets closer to the technological frontier, its growth will naturally slow down. If getting richer in this way also tends to lead, through typical political dynamics, to a larger welfare state with higher levels of debt, one would see high levels of debt during that later mature phase of slower growth. This is not debt causing slow growth, but economic development having two separate effects: the slowdown in growth as a country nears the technological frontier, and the development of a welfare state. Adjusting for past growth helps us adjust for how far along a country is in its growth trajectory.

In the graph, if “GDP Growth Relative to Par” is positive, it means GDP growth is higher in the next 10 years than would be predicted by past GDP growth alone. If “GDP Growth Relative to Par” is negative, it means GDP growth is lower in the next 10 years than would be predicted by past GDP growth. (Here, in accounting for the effect of past GDP growth, we use data on the most recent five past years individually, and the average growth rate over the period from 10 years in the past to five years in the past.) The thick red line shows that, overall, high debt is associated with GDP growth just a little higher than what one would guess from looking at the past record of GDP growth alone. The thick blue curve gives more detail by showing in a flexible way what levels of debt are associated with above par growth and what levels of debt are associated with below par growth. We generated it with standard scatterplot smoothing techniques. The thick blue curve shows that, in particular, GDP growth seems surprisingly high in the range from debt about 60% of GDP to debt about 120% of GDP. Higher and lower debt levels are associated with future growth that is somewhat lower than would be predicted by looking at past growth alone. Interestingly, debt at 90% of GDP, instead of being a cliff beyond which the growth performance looks much worse, looks like the top of a gently rounded hill. If one took the tiny bit of evidence here much, much more seriously than we do, it would suggest that debt below 90% of GDP is just as bad as debt above 90% of GDP, but that neither is very bad.
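For readers curious about the mechanics, here is a minimal sketch in Python of the “relative to par” construction. The data here are synthetic, not the actual Reinhart and Rogoff dataset, and a simple moving-average smoother stands in for the scatterplot smoother behind the thick blue curve; all variable names and numbers are our illustrative assumptions:

```python
# Sketch of "GDP growth relative to par" on synthetic data.
# (Illustrative only: the numbers are made up, not Reinhart-Rogoff data.)
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Predictors: growth in each of the 5 most recent years, plus the
# average growth rate from 10 years back to 5 years back (6 columns).
past = rng.normal(0.02, 0.02, size=(n, 6))
debt_ratio = rng.uniform(0.1, 1.5, n)                      # debt/GDP
future = past @ np.full(6, 0.1) + rng.normal(0, 0.01, n)   # next-10-year growth

# "Par" = growth predicted from past growth alone (OLS with an intercept).
X = np.column_stack([np.ones(n), past])
beta, *_ = np.linalg.lstsq(X, future, rcond=None)
relative_to_par = future - X @ beta                        # residual growth

# Smooth residual growth against the debt ratio, mimicking the
# flexible curve in the figure (moving average over sorted debt ratios).
order = np.argsort(debt_ratio)
window = 50
smooth = np.convolve(relative_to_par[order], np.ones(window) / window,
                     mode="valid")
```

The key property is that the residuals average to zero by construction, so any systematic bump in the smoothed curve over some debt range is growth unexplained by the past-growth predictors.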

Where does the evidence of above par growth in the range from 60% to 120% of GDP come from? Part of the answer is Ireland. In particular, all but one of the cases when GDP growth was more than 2.5% per year above what would be expected from looking at past growth occurred in a 10-year period after Ireland had a debt to GDP ratio in the range from 60% to 120% of GDP. It is well-known that Ireland has recently gotten into trouble because of its debt, but what does the overall picture of its growth performance over the last few decades look like? Here is a graph of Ireland’s per capita GDP from the Federal Reserve Bank of St. Louis database:

The consequences of debt have reversed some of Ireland’s previous growth, but it is still a growth success story, despite the high levels of debt it had in the 1980s and ’90s.

In addition to Ireland, a bit of the evidence for good growth performance following high levels of debt comes from Greece. As the graph below shows, Greece has had more impressive growth in the last two decades than many people realize, despite the hit it has taken recently because of its debt troubles.

We did a simple exercise to see if the bump up in the thick blue curve in the graph at the top is entirely due to Ireland’s and Greece’s growth that has been reversed recently because of their debt troubles.  To be sure that the bad consequences of Ireland’s and Greece’s debt for GDP in the last few years were accounted for when looking at the effect of debt on growth, we pretended that the recent declines in GDP had been spread out as a drag on growth over the period from 1990 to 2007 instead of happening in the last few years. Then we redid our analysis. Making this adjustment to the growth data is a simple, if ad hoc, way of trying to make sure that the consequences of Irish and Greek debt are not missed by the analysis.
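As a concrete illustration of this ad hoc adjustment, here is a small Python sketch with hypothetical numbers (an 18-year boom followed by a 5-year decline; the growth rates are invented, not actual Irish or Greek data). The recent decline is spread evenly across the boom years as an annual drag, so cumulative growth over the whole period is unchanged:

```python
# Hypothetical annual growth rates (log points): a boom in 1990-2007,
# then a crash in 2008-2012. These numbers are invented for illustration.
growth = {y: 0.05 for y in range(1990, 2008)}
growth.update({y: -0.03 for y in range(2008, 2013)})

decline = sum(g for g in growth.values() if g < 0)   # total recent decline
drag = decline / len(range(1990, 2008))              # even annual drag

# Move the decline back in time: dampen 1990-2007 growth, zero out the crash.
adjusted = {y: (g + drag) if y < 2008 else 0.0 for y, g in growth.items()}
```

Preserving the cumulative total means the country ends up at the same GDP level; the boom years simply look less stellar, which is exactly the check described above.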

Imagining slower growth earlier on to account for Ireland’s and Greece’s recent GDP declines makes the performance of Ireland and Greece in that period from 1990 to 2007 look less stellar. The key effect is on the thick blue curve estimating the effect of debt on growth. Looking closely at the graph below after adjusting Ireland’s and Greece’s growth rates, you can see that the bump up in the thick blue curve in the range where debt is between 60% and 120% of GDP has been cut down to size, but it is still there. So the bump cannot be attributed entirely to Ireland and Greece “stealing growth from the future” with their high levels of debt.

We want to stress that there is no real justification for making the adjustment for Ireland and Greece that we made except as a way of showing that the argument that Ireland and Greece had high growth in the 1990s and early 2000s, but now have had to pay the piper is not enough to turn the story about the effects of debt on growth around.

There are three broader points to make from this discussion of Ireland and Greece.

  • We still don’t recommend taking the upward bump in growth predicted by the thick blue curves in the 60% to 120% ranges for debt seriously.
  • The fact that looking at the experience of two countries in two decades can account for a good share of the bump up in the 60% to 120% range illustrates just how little there is to go on in the Reinhart and Rogoff data set. Our scatter plots with the thick blue curves give the impression of more evidence than there really is, because we have dots for growth from 1970 to 1980 and 1971 to 1981 and 1972 to 1982, and so on, even though these overlapping windows share most of their data. And there is no way to escape this kind of issue when the economic forces one is interested in have both short-run and long-run effects, and change as slowly over time as levels of national debt do. There are advanced statistical methods for correcting for such issues; the corrections almost always go in the direction of saying that there is less evidence in a set of data than it might seem. Even without being experts ourselves in making those statistical corrections, we feel reasonably confident in saying that the Reinhart and Rogoff data speak very softly about any positive or negative effect of debt on growth at all: barely a whisper.
  • The inclusion of Ireland and Greece, and the fact that the basic story survives after pretending their GDP declines were a drag on growth earlier, contradicts to some extent the claim of economics blogger and blog critic Paul Andrews in his post “None the Wiser After Reinhart, Rogoff, et al.” that Reinhart and Rogoff’s data focus on “20 or so of the most healthy economies the world has ever seen.” After adjusting for the hit their economies have taken recently, the inclusion of Ireland and Greece gives some perspective on the effects of debt on the growth of economies that have subsequently had problems paying for their debt. There could certainly be other economies whose growth is more vulnerable to debt than Ireland’s and Greece’s, but to us these seem like exactly the kinds of cases people have in mind when they argue that one should expect debt to have a negative effect on growth.

Understanding all of this matters because, as Mark Gongloff of the Huffington Post writes:

Reinhart and Rogoff’s 2010 paper, “Growth in a Time of Debt,” … has been used to justify austerity programs around the world. In that paper, and in many other papers, op-ed pieces and congressional testimony over the years, Reinhart and Rogoff have warned that high debt slows down growth, making it a huge problem to be dealt with immediately. The human costs of this error have been enormous.

Even though there are many effective ways to stimulate economies without adding much to their national debt, the primary remedies for sluggish economies that are actually on the table politically are those that do increase national debt, so it matters whether people think debt is damning or think debt is just debt.  It is painful enough that debt has to be paid back (with some combination of interest and principal), and high levels of debt may help cause debt crises like those we have seen for Ireland and Greece. But the bottom line from our examination of the entrails is that the omens and portents in the Reinhart and Rogoff data do not back up the argument that debt has a negative effect on economic growth.

What Would Economic Growth Look Like If We Properly Valued the Web?

I love Jeremy Warner’s essay “The UK internet boom that blows apart economic gloom” in the Telegraph. Jeremy makes several important points. One is that accounting for intangible investment, as the latest US GDP revisions do, can affect GDP figures. Here is a deeper issue Jeremy raises:

But even if these changes were to be incorporated, it still wouldn’t do justice to the growth in the digital economy. This is because much internet activity is free, and therefore immeasurable.

Take the traditional music industry, which used to involve finding, recording and marketing new acts, and then cleaning up through copyrighted CD sales.

For decades, the model worked well — at least for the record producers and a small elite of popular artists — and made a not insignificant contribution to GDP.

Then along came digital downloads, legal or otherwise. These have destroyed the old music company stranglehold on distribution, and in so doing made previously quite pricey music either far less expensive or completely free.

The pound value of music consumption has declined, and with it the music industry’s contribution to GDP, but the volume of music consumption has risen exponentially.

Much the same thing can be said about newspapers. The traditional business model has been badly undermined by the internet, but news demand and consumption has never been higher. If only we could persuade the blighters to pay, our industry would again be booming….

Prof Brynjolfsson believes that the correct way to measure all this… is via the time people spend immersed in it.

I know Erik Brynjolfsson from his Twitter feed as an excellent commentator on the digital economy.

The growing importance of free goods is one of the many reasons we need to go beyond GDP in accounting for well-being. Looking at the amount of time people spend online to infer value makes a lot of sense. But there are also more radical ways to go beyond GDP, as discussed in “Ori Heffetz: Quantifying Happiness” and “Judging the Nations: Wealth and Happiness are Not Enough.”

For the richest countries, at the technological frontier, human progress has shifted more and more into the realm of intangibles. Services are less tangible than goods; online activities are less tangible than face-to-face services; happiness, job satisfaction and meaning are less tangible than time spent hanging out in cyberspace. If we don’t develop good ways to account for the increasingly intangible dimensions of human progress, we will miss the main story going forward. 

Quartz #24—>After Crunching Reinhart and Rogoff's Data, We Found No Evidence High Debt Slows Growth

Link to the Column on Quartz

Here is the full text of my 24th Quartz column, coauthored with Yichuan Wang, “After crunching Reinhart and Rogoff’s data, we’ve concluded that high debt does not slow growth.” It is now brought home to supplysideliberal.com (and soon to Yichuan’s Synthenomics). It was first published on May 29, 2013. Links to all my other columns can be found here. In particular, don’t miss the follow-up column “Examining the Entrails: Is There Any Evidence for an Effect of Debt on Growth in the Reinhart and Rogoff Data?”

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© May 29, 2013: Miles Kimball and Yichuan Wang, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.

(Yichuan has agreed to extend permission on the same terms that I do.)

This column had a strong response. I have included the text of my companion column, with links to many of the responses after the text of the column itself. (For the comments attached to that companion post, you will still have to go to the original posting.) Other followup posts can be found in my “Short-Run Fiscal Policy” sub-blog.  


Leaving aside monetary policy, the textbook Keynesian remedy for recession is to increase government spending or cut taxes. The obvious problem with that is that higher government spending and lower taxes tend to put the government deeper in debt. So the announcement on April 15, 2013 by University of Massachusetts at Amherst economists Thomas Herndon, Michael Ash and Robert Pollin that Carmen Reinhart and Ken Rogoff had made a mistake in their analysis claiming that debt leads to lower economic growth has been big news. Remarkably for a story so wonkish, the tale of Reinhart and Rogoff’s errors even made it onto the Colbert Report. Six weeks later, discussions of Herndon, Ash and Pollin’s challenge to Reinhart and Rogoff continue in earnest in the economics blogosphere, in the Wall Street Journal, and in the New York Times.

In defending the main conclusions of their work, while conceding some errors, Reinhart and Rogoff point out that even after the errors are corrected, there is a substantial negative correlation between debt levels and economic growth. That is a fair description of what Herndon, Ash and Pollin find, as discussed in an earlier Quartz column, “An Economist’s Mea Culpa: I Relied on Reinhart and Rogoff.” But, as mentioned there, and as Reinhart and Rogoff point out in their response to Herndon, Ash and Pollin, there is a key remaining issue of what causes what. It is well known among economists that low growth leads to extra debt because tax revenues go down and spending goes up in a recession. But does debt also cause low growth, in a vicious cycle? That is the question.

We wanted to see for ourselves what Reinhart and Rogoff’s data could say about whether high national debt seems to cause low growth. In particular, we wanted to separate the effect of low growth in causing higher debt from any effect of higher debt in causing low growth. There is no way to do this perfectly. But we wanted to make the attempt. We had one key difference in our approach from many of the other analyses of Reinhart and Rogoff’s data: we decided to focus only on long-run effects. This is a way to avoid getting confused by the effects of business cycles such as the Great Recession that we are still recovering from. But one limitation of focusing on long-run effects is that it might leave out one of the more obvious problems with debt: the bond markets might at any time refuse to continue lending except at punitively high interest rates, causing debt crises like those faced by Greece, Ireland, and Cyprus, and to a lesser degree Spain and Italy. So far, debt crises like this have been rare for countries that have borrowed in their own currency, but they are a serious danger for countries that borrow in a foreign currency or share a currency with many other countries, as in the euro zone.

Here is what we did to focus on long-run effects: to avoid being confused by business-cycle effects, we looked at the relationship between national debt and growth in the period of time from five to 10 years later. In their paper “Debt Overhangs, Past and Present,” Carmen Reinhart and Ken Rogoff, along with Vincent Reinhart, emphasize that most episodes of high national debt last a long time. That means that if high debt really causes low growth in a slow, corrosive way, we should be able to see high debt now associated with low growth far into the future for the simple reason that high debt now tends to be associated with high debt for quite some time into the future.
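As a minimal sketch of this lag structure (illustrative code only, not what we actually ran; the data below are made up), one can pair each year’s debt-to-GDP ratio with average growth five to 10 years later:

```python
import numpy as np

def average_future_growth(growth, lead=5, window=5):
    """For each year t, average annual growth over years t+lead
    through t+lead+window-1 (i.e., 5 to 10 years later by default).
    Entries whose forward window runs past the sample end are NaN."""
    growth = np.asarray(growth, dtype=float)
    n = len(growth)
    out = np.full(n, np.nan)
    for t in range(n):
        if t + lead + window <= n:
            out[t] = growth[t + lead : t + lead + window].mean()
    return out

# 15 years of made-up annual growth rates, in percent
g = np.array([3.0, 2.5, 1.0, -0.5, 2.0, 3.5, 4.0, 2.0,
              1.5, 2.5, 3.0, 2.0, 1.0, 0.5, 2.0])
future_g = average_future_growth(g)
# future_g[t] can then be plotted against the debt/GDP ratio in year t
```

Because episodes of high debt persist, debt in year t is a reasonable stand-in for debt over the whole forward window.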

Here is the bottom line. Based on economic theory, it would be surprising indeed if high levels of national debt didn’t have at least some slow, corrosive negative effect on economic growth. And we still worry about the effects of debt. But the two of us could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth.

The graphs at the top show our first take at analyzing the Reinhart and Rogoff data. This first take seemed to indicate a large effect of low economic growth in the past in raising debt, combined with a smaller, but still very important, effect of high debt in lowering later economic growth. On the right panel of the graph above, you can see the strong downward slope indicating a strong correlation between low growth rates in the period from ten years ago to five years ago and more debt now, suggesting that low growth in the past causes high debt. On the left panel, you can see the mild downward slope indicating a weaker correlation between debt now and lower growth in the period from five to ten years later, suggesting that debt might have some negative effect on growth in the long run. In order to avoid overstating the amount of data available, these graphs have only one dot for each five-year period in the data set. If our further analysis had confirmed these results, we were prepared to argue that the evidence suggested a serious worry about the effects of debt on growth. But the story the graphs above seem to tell dissolves on closer examination.

Given the strong effect past low growth seemed to have on debt, we felt we needed to take the effect of past economic growth rates on debt into account more carefully when trying to tease out the effects in the other direction, of debt on later growth. Economists often use a technique called multiple regression analysis (based on “ordinary least squares”) to take into account the effect of one thing when looking at the effect of something else. Here we do something quite close to that, both in spirit and in the numbers it generates, but in a way that allows us to use graphs to show what is going on a little better.

The effects of low economic growth in the past may not all come from business cycle effects. It is possible that there are political effects as well, in which a slowly growing pie to be divided makes it harder for different political factions to agree, resulting in deficits. Low growth in the past may also be a sign that a government is incompetent or dysfunctional in some other way that also causes high debt. So the way we took into account the effects of economic growth in the past on debt—and the effects on debt of the level of government competence that past growth may signify—was to look at what level of debt could be predicted by knowing the rates of economic growth from the past year, and in the three-year periods from 10 to 7 years ago, 7 to 4 years ago and 4 to 1 years ago. The graph below, labeled “Prediction of Debt Based on Past Growth” shows that knowing these various economic growth rates over the past 10 years helps a lot in predicting how high the ratio of national debt to GDP will be on a year by year basis. (Doing things on a year by year basis gives the best prediction, but means the graph has five times as many dots as the other scatter plots.) The “Prediction of Debt Based on Past Growth” graph shows that some countries, at some times, have debt above what one would expect based on past growth and some countries have debt below what one would expect based on past growth. If higher debt causes lower growth, then national debt beyond what could be predicted by past economic growth should be bad for future growth.
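In code, this first step amounts to an ordinary least squares regression of debt on past growth rates, with the residual serving as “excess debt.” Here is a small self-contained sketch on synthetic data (our own illustration; the regressor layout and numbers are made up, not the actual Reinhart and Rogoff panel):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the panel: four past-growth regressors
# (e.g., last year, and 10-7, 7-4, 4-1 years ago) and debt/GDP.
n = 200
past_growth = rng.normal(2.0, 1.5, size=(n, 4))
debt = 60.0 - 5.0 * past_growth.sum(axis=1) + rng.normal(0.0, 10.0, size=n)

# Step 1: OLS of debt on past growth (with an intercept).
X = np.column_stack([np.ones(n), past_growth])
beta, *_ = np.linalg.lstsq(X, debt, rcond=None)
predicted_debt = X @ beta

# "Excess debt": debt beyond what past growth predicts.
excess_debt = debt - predicted_debt
```

By construction, OLS residuals average to zero and are uncorrelated with the regressors, so the excess-debt measure isolates the variation in debt that past growth cannot explain.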

Our next graph below, labeled “Relationship Between Future Growth and Excess Debt to GDP,” shows the relationship between a debt-to-GDP ratio beyond what would be predicted by past growth and economic growth 5 to 10 years later. Here there is no downward slope at all. In fact, there is a small upward slope. This was surprising enough that we asked others we knew to try our basic approach and see what they found. They bear no responsibility for our interpretation of the analysis here, but Owen Zidar, an economics graduate student at the University of California, Berkeley, and Daniel Weagley, a graduate student in finance at the University of Michigan, were generous enough to analyze the data from our angle, to alert us if they found we were dramatically off course and to suggest various ways to handle details. (In addition, Yu She, a student in the master’s program in applied economics at the University of Michigan, proofread our computer code.) We have no doubt that someone could use a slightly different data set or tweak the analysis enough to turn the small upward slope into a small downward slope. But the fact that we got a small upward slope so easily (on our first try with this approach of controlling for past growth more carefully) means that there is no robust evidence in the Reinhart and Rogoff data set for a negative long-run effect of debt on future growth once the effects of past growth on debt are taken into account. (We still get an upward slope when we do things on a year-by-year basis instead of looking at non-overlapping five-year growth periods.)
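The second step is then a regression of growth five to 10 years later on that excess-debt measure; the slope of that regression is what the scatter plot summarizes. A toy version under the null of no effect (synthetic data, illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Under the null that excess debt has no effect on later growth,
# the second-stage slope should come out near zero.
n = 200
excess_debt = rng.normal(0.0, 15.0, size=n)    # stand-in for step-1 residuals
future_growth = rng.normal(2.0, 1.0, size=n)   # generated independently of debt

# OLS slope and intercept of future growth on excess debt.
slope = (np.cov(excess_debt, future_growth)[0, 1]
         / np.var(excess_debt, ddof=1))
intercept = future_growth.mean() - slope * excess_debt.mean()
```

In the actual Reinhart and Rogoff panel, this second-stage slope came out slightly positive rather than negative.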

Daniel Weagley raised a very interesting issue: the very slight upward slope shown in the “Relationship Between Future Growth and Excess Debt to GDP” graph is composed of two different kinds of evidence. Times when countries in the data set, on average, have higher debt than would be predicted tend to be associated with higher growth in the period from five to 10 years later. But at any given time, countries whose debt is unexpectedly high, not only compared to their own past growth but also compared to the unexpected debt of other countries at that time, do indeed tend to have lower growth five to 10 years later. This is only speculation, but it is what one might expect if the main mechanism for long-run effects of debt on growth is more of the short-run effect we mentioned above: the danger that the “bond market vigilantes” will start demanding high interest rates. It is hard for the bond market vigilantes to take their money out of all government bonds everywhere in the world, so having debt that looks high compared to other countries at any given time might be what matters most.

Our view is that evidence from trends in the average level of debt around the world over time is just as instructive as the cross-national evidence from debt in one country being higher than in other countries at a given time. Our last graph (just above) shows what the evidence from trends in average levels over time looks like. High debt levels in the late 1940s and the 1950s were followed five to 10 years later by relatively high growth. Low debt levels in the 1960s and 1970s were followed five to 10 years later by relatively low growth. High debt levels in the 1980s and 1990s were followed five to 10 years later by relatively high growth. If anyone can come up with a good argument for why this evidence from trends in the average levels over time should be dismissed, then only the cross-national evidence about debt in one country compared to another would remain, which by itself makes debt look bad for growth. But we argue that there is not enough justification to say that special occurrences each year make the evidence from trends in the average levels over time worthless. (Technically, we don’t think it is appropriate to use “year fixed effects” to soak up and throw away evidence from those trends over time in the average level of debt around the world.)

We don’t want anyone to take away the message that high levels of national debt are a matter of no concern. As discussed in “Why Austerity Budgets Won’t Save Your Economy,” the big problem with debt is that the only ways to avoid paying it back or paying interest on it forever are national bankruptcy or hyper-inflation. And unless the borrowed money is spent in ways that foster economic growth in a big way, paying it back or paying interest on it forever will mean future pain in the form of higher taxes or lower spending.

There is very little evidence that spending borrowed money on conventional Keynesian stimulus—spent in the ways dictated by what has become normal politics in the US, Europe and Japan—(or the kinds of tax cuts typically proposed) can stimulate the economy enough to avoid having to raise taxes or cut spending in the future to pay the debt back. There are three main ways to use debt to increase growth enough to avoid having to raise taxes or cut spending later:

1. Spending on national investments that have a very high return, such as scientific research or fixing roads and bridges that have been sorely neglected.

2. Using government support to catalyze private borrowing by firms and households, such as government support for student loans, and temporary investment tax credits or Federal Lines of Credit to households used as a stimulus measure.

3. Issuing debt to create a sovereign wealth fund—that is, putting the money into the corporate stock and bond markets instead of spending it, as discussed in “Why the US needs its own sovereign wealth fund.” For anyone who thinks government debt is important as a form of collateral for private firms (see “How a US Sovereign Wealth Fund Can Alleviate a Scarcity of Safe Assets”), this is the way to get those benefits of debt, while earning more interest and dividends for tax payers than the extra debt costs. And a sovereign wealth fund (like breaking through the zero lower bound with electronic money) makes the tilt of governments toward short-term financing caused by current quantitative easing policies unnecessary.

But even if debt is used in ways that do require higher taxes or lower spending in the future, it may sometimes be worth it. If a country has its own currency, and borrows using appropriate long-term debt (so it only has to refinance a small fraction of the debt each year) the danger from bond market vigilantes can be kept to a minimum. And other than the danger from bond market vigilantes, we find no persuasive evidence from Reinhart and Rogoff’s data set to worry about anything but the higher future taxes or lower future spending needed to pay for that long-term debt. We look forward to further evidence and further thinking on the effects of debt. But our bottom line from this analysis, and the thinking we have been able to articulate above, is this: Done carefully, debt is not damning. Debt is just debt.


Companion Post

The title chosen by our editor is too strong, but not so much so that I objected to it; the title of this post is more accurate.

Yichuan only recently finished his first year at the University of Michigan. Yichuan’s blog is Synthenomics. You can see Yichuan on Twitter here. Let me say already that, from reading Yichuan’s blog and working with him on this column, I know enough to strongly recommend Yichuan for admission to any Ph.D. program in economics in the world. He should finish his bachelor’s degree first, though.

I genuinely went into our analysis expecting to find evidence that high debt does cause low growth, though of course, to a much smaller extent than low growth causes high debt. I was fully prepared to argue (first to Yichuan and then to the world) that even a statistically insignificant negative effect of debt on growth that was plausibly causal had to be taken seriously from a Bayesian perspective. Our analysis set out the minimal hurdles I felt had to be jumped over to convince me that there was some solid evidence that high debt causes low growth. A key jump was not completed. That shifted my views.

I hope others will try to replicate our findings. That should let me rest easier.

From a theoretical point of view, I am especially intrigued by the possibility that any effect on growth from refinancing difficulties might depend on a country’s debt to GDP ratio compared to that of other countries. What I find remarkable is that despite the likely negative effect of debt on growth from refinancing difficulties, we found no overall negative effect of debt on growth. It is as if there is some other, positive effect of debt on growth to the extent a country’s relative debt position stays the same. Besides the obvious, but uncommonly realized, possibility of very wisely deployed deficit spending, I can think of two intriguing mechanisms that could generate such an effect. First, from a supply-side point of view, lower tax rates now could make growth look higher now, perhaps at the expense of growth at some future date when taxes have to be raised to pay off the debt, with interest. Second, government debt increases the supply of liquid (and often relatively safe) assets in the economy that can serve as good collateral. Any such effect could be achieved without creating a need for higher future taxes or lower future spending by investing the money raised in corporate stocks and bonds through a sovereign wealth fund.

I have thought a little about why borrowing in a currency one can print unilaterally makes such a difference to the reactions of the bond market to debt. One might think that the danger of repudiating the implied real debt repayment promises by inflation would mean the risks to bondholders for debt in one’s own currency would be almost the same as for debt in a foreign currency or a shared currency like the euro. But it is one thing to fear actual disappointing real repayment spread over some time and another thing to have to fear that the fear of other bondholders will cause a sudden inability of a government to make the next payment at all.  

Note: Brad DeLong writes:

Miles Kimball and Yichuan Wang confirm Arin Dube: Guest Post: Reinhart/Rogoff and Growth in a Time Before Debt | Next New Deal:

As I tweeted,

  1. .@delong undersells our results. I would have read Arin Dube’s results alone as saying high debt *does* slow growth.

  2. *Of course* low growth causes debt in a big way. But we need to know if high debt causes low growth, too. No ev it does!

In tweeting this, I meant: if I were convinced Arin Dube’s left graph were causal, it would seem to suggest that higher debt causes low growth in a very important way, though of course not in as big a way as slow growth causes higher debt. If it were causal, the left graph suggests it is the first 30% on the debt-to-GDP ratio that has the biggest effect on growth, not any 90% threshold. Yichuan and I are saying that the seeming effect of the first 30% on the debt-to-GDP ratio could be due in important measure to the effect of growth on debt, plus some serial correlation in growth rates. The nonlinearity could come from the fact that it takes quite high growth rates to keep a country from having significant amounts of debt—as indicated by Arin Dube’s right graph, which is more likely to be primarily causal.

By the way, I should say that Yichuan and I had seen the Rortybomb piece on Arin Dube’s analysis, but we were not satisfied with it. Still, I want to give it credit as a starting place for Yichuan and me in our thinking.

Brad DeLong’s Reply: Thanks to Brad DeLong for posting the note above as part of his post “DeLong Smackdown Watch: Miles Kimball Says That Kimball and Wang is Much Stronger than Dube.”

Brad replies:

From my perspective, I tend to say that of course high debt causes low growth—if high debt makes people fearful, and leads to low equity valuations and high interest rates. The question is: what happens in the case of high debt when it comes accompanied by low interest rates and high equity values, whether on its own or via financial repression?

Thus I find Kimball and Wang’s results a little too strong on the high-debt-doesn’t-matter side for me to be entirely comfortable…

My Thoughts about What Brad Says in the Quote Just Above: As I noted above, my reaction to what Yichuan and I found is similar to Brad’s. There must be a negative effect of debt on growth through the bond vigilante channel, as Yichuan and I emphasize in our interpretation. For example, in our final paragraph, Yichuan and I write:

…other than the danger from bond market vigilantes, we find no persuasive evidence from Reinhart and Rogoff’s data set to worry about anything but the higher future taxes or lower future spending needed to pay for that long-term debt.

The surprise is the pattern that when countries around the world shifted toward higher debt than would be predicted by past growth, later growth turned out to be somewhat higher than after countries around the world shifted toward lower debt. It may be possible to explain why that evidence from trends in the average level of debt around the world over time should be dismissed, but if not, we should try to understand those time-series patterns. It is hard to get definitive answers from the relatively small amount of evidence in macroeconomic time series, or even macroeconomic panels across countries, but given the importance of the issues, I think it is worth pondering the meaning of what limited evidence there is from trends in the average level of debt around the world over time. That is particularly true since, in the current crisis, many people have recommended precisely the kind of worldwide increase in deficit spending—and therefore in debt levels—that this limited evidence speaks to.

I am perfectly comfortable with the idea that the evidence from trends in the average level of debt around the world over time is limited enough that theoretical reasoning which shifts our priors could overwhelm the signal from the data. But I want to see that theoretical reasoning. And I would like to get reactions to my theoretical speculations above, about (1) supply-side benefits of lower taxes that reverse in sign in the future when the debt is paid for and (2) liquidity effects of government debt (which may also have a price later because of financial-cycle dynamics).

Matt Yglesias’s Reaction: On MoneyBox, you can see Matthew Yglesias’s piece “After Running the Numbers Carefully There’s No Evidence that High Debt Levels Cause Slow Growth.” As I tweeted:

Don’t miss this excellent piece by @mattyglesias about my column with @yichuanw on debt and growth. Matt gets it.

In the preamble of my post bringing the full text of “An Economist’s Mea Culpa: I Relied on Reinhart and Rogoff” home to supplysideliberal.com, I write:

In terms of what Carmen Reinhart and Ken Rogoff should have done that they didn’t do, “Be very careful to double-check for mistakes” is obvious. But on consideration, I also felt dismayed that they didn’t do a bit more analysis on their data early on to make a rudimentary attempt to answer the question of causality. I wouldn’t have said it quite as strongly as Matthew Yglesias, but the sentiment is basically the same.    

Paul Krugman’s Reaction: On his blog, Paul Krugman characterized our findings this way:

There is pretty good evidence that the relationship is not, in fact, causal, that low growth mainly causes high debt rather than the other way around.

Kevin Drum’s Reaction: On the Mother Jones blog, Kevin Drum gives a good take on our findings in his post “Debt Doesn’t Cause Low Growth. Low Growth Causes Low Growth.” He notices that we are not fans of debt. I like his version of one of our graphs:

Mark Gongloff’s Reaction: On the Huffington Post, in “Reinhart and Rogoff’s Pro-Austerity Research Now Even More Thoroughly Debunked by Studies,” Mark Gongloff writes:

…University of Michigan economics professor Miles Kimball and University of Michigan undergraduate student Yichuan Wang write that they have crunched Reinhart and Rogoff’s data and found “not even a shred of evidence” that high debt levels lead to slower economic growth.

And a new paper by University of Massachusetts professor Arindrajit Dube finds evidence that Reinhart and Rogoff had the relationship between growth and debt backwards: Slow growth appears to cause higher debt, if anything….

This contradicts the conclusion of Reinhart and Rogoff’s 2010 paper, “Growth in a Time of Debt,” which has been used to justify austerity programs around the world. In that paper, and in many other papers, op-ed pieces and congressional testimony over the years, Reinhart And Rogoff have warned that high debt slows down growth, making it a huge problem to be dealt with immediately. The human costs of this error have been enormous….

At the same time, they have tried to distance themselves a bit from the chicken-and-egg problem of whether debt causes slow growth, or vice versa. “The frontier question for research is the issue of causality,” [Reinhart and Rogoff] said in their lengthy New York Times piece responding to Herndon. It looks like they should have thought a little harder about that frontier question three years ago.

There is an accompanying video by Zach Carter.

Paul Andrews Raises the Issue of Selection Bias: The most important response to our column that I have seen so far is Paul Andrews’s post "None the Wiser After Reinhart, Rogoff, et al.” This is the kind of response we were hoping for when we wrote “We look forward to further evidence and further thinking on the effects of debt.” Paul trenchantly points out the potential importance of selection bias: 

What has not been highlighted though is that the Reinhart and Rogoff correlation as it stands now is potentially massively understated. Why? Due to selection bias, and the lack of a proper treatment of the nastiest effects of high debt: debt defaults and currency crises.

The Reinhart and Rogoff correlation is potentially artificially low due to selection bias. The core of their study focuses on 20 or so of the most healthy economies the world has ever seen. A random sampling of all economies would produce a more realistic correlation. Even this would entail a significant selection bias as there is likely to be a high correlation between countries who default on their debt and countries who fail to keep proper statistics.

Furthermore Reinhart and Rogoff’s study does not contain adjustments for debt defaults or currency crises. Any examples of debt defaults just show in the data as reductions in debt. So, if a country ran up massive debt, couldn’t pay it back, and defaulted, no problem! Debt goes to a lower figure, and the ruinous effects of the run-up in debt are ignored. Any low growth ensuing from the default doesn’t look like it was caused by debt, because the debt no longer exists!

I think this issue needs to be taken very seriously. It would be a great public service for someone to put together the needed data set. 
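Paul’s selection-bias argument is easy to see in a toy Monte Carlo. The sketch below uses invented numbers, not the Reinhart-Rogoff data: it gives debt a true negative effect on growth, then drops the country-years that “default”–high debt combined with very low growth–the way defaulting countries can drop out of a sample of healthy economies with good statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # hypothetical country-year observations

debt = rng.uniform(0, 150, n)                      # debt/GDP ratio, percent
growth = 3.0 - 0.02 * debt + rng.normal(0, 2, n)   # assumed true negative effect

# Selection: the worst-hit high-debt observations "default" and vanish
# from the sample (or stop keeping proper statistics).
defaulted = (debt > 90) & (growth < 1.0)
kept = ~defaulted

full_corr = np.corrcoef(debt, growth)[0, 1]
kept_corr = np.corrcoef(debt[kept], growth[kept])[0, 1]

print(f"correlation, all country-years:     {full_corr:.3f}")
print(f"correlation, surviving sample only: {kept_corr:.3f}")
# The surviving-sample correlation is much closer to zero: dropping the
# default episodes understates the debt-growth relationship.
```

The thresholds (90% of GDP, 1% growth) are arbitrary; the qualitative point survives any choice that removes the worst high-debt outcomes from the sample.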

Note that Paul Andrews’s views are in line with our interpretation of our findings. Let me repeat our interpretation, with added emphasis:

other than the danger from bond market vigilantes, we find no persuasive evidence from Reinhart and Rogoff’s data set to worry about anything but the higher future taxes or lower future spending needed to pay for that long-term debt. 

Of course, it is disruptive to have a national bankruptcy. And national bankruptcies are more likely to happen at high levels of debt than low levels of debt (though other things matter as well, such as the efficiency of a nation’s tax system). And the fear by bondholders of a national bankruptcy can raise interest rates on government bonds in a way that can be very costly for a country. The key question for which the existing Reinhart and Rogoff data set is reasonably appropriate is the question of whether an advanced country has anything to fear from debt even if, for that particular country, no one ever seriously doubts that country will continue to pay on its debts.

Jonah Berger: Going Viral

Like many other readers, I was fascinated by Richard Dawkins’s introduction of the idea of a meme in his book The Selfish Gene.

Wikipedia gives a good discussion of memes:

A meme (/ˈmiːm/ meem)[1] is “an idea, behavior, or style that spreads from person to person within a culture.”[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα Greek pronunciation: [míːmɛːma] mīmēma, “imitated thing”, from μιμεῖσθαι mimeisthai, “to imitate”, from μῖμος mimos “mime”)[4] and it was coined by the British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][5] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catch-phrases, fashion, and the technology of building arches.[6]

Proponents theorize that memes may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influence a meme’s reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[7]

A field of study called memetics[8] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model.

Internet memes are a subset of memes in general. Wikipedia has a good discussion of this particular subset of memes as well:

An Internet meme may take the form of an image, hyperlink, video, picture, website, or hashtag. It may be just a word or phrase, including an intentional misspelling. These small movements tend to spread from person to person via social networks, blogs, direct email, or news sources. They may relate to various existing Internet cultures or subcultures, often created or spread on sites such as 4chan, Reddit and numerous others.

An Internet meme may stay the same or may evolve over time, by chance or through commentary, imitations, parody, or by incorporating news accounts about itself. Internet memes can evolve and spread extremely rapidly, sometimes reaching world-wide popularity within a few days. Internet memes usually are formed from some social interaction, pop culture reference, or situations people often find themselves in. Their rapid growth and impact have caught the attention of both researchers and industry.[3] Academically, researchers model how they evolve and predict which memes will survive and spread throughout the Web. Commercially, they are used in viral marketing where they are an inexpensive form of mass advertising.

But sometimes our image of an internet meme is too narrow. A tweet can easily become an internet meme if it is retweeted and modified. Thinking of bigger chunks of text, even a blog post sometimes both spreads in its original form and inspires other blog posts that can be considered mutated forms of the original blog post. And thinking just a bit smaller than a tweet, a link to a blog post can definitely be a meme, coevolving with different combinations of surrounding text recommending or denigrating what is at the link–sometimes just the surrounding text of a tweet and sometimes the surrounding text of an entire blog post that flags what is at the link. So those of us who care how many people read what we have to say have reason to be interested in the principles that determine when a tweet, a post, or a link will be contagious. In other words, what does it take to go viral?

Jonah Berger’s book Contagious gives answers based on research Jonah has done as a marketing professor at the Wharton School. Jonah identifies six dimensions of a message that make it more likely to spread. Here are my notes on what Jonah has to say about those six dimensions, for which Jonah gives the acronym STEPPS:

1. Social Currency: We share things that make us look good.

Jonah emphasizes three ways to make people want to share something in order to look good.

  • Inner Remarkability: making clear how remarkable something is. Two examples of remarkability are the Snapple facts on the inside of Snapple lids and the video series “Will It Blend?” showing Blendtec blenders grinding up just about anything, the more entertaining the better. Note how what is remarkable about the Blendtec blenders is brought out and dramatized in a non-obvious and entertaining way.  
  • Leverage Game Mechanics: Make a good game out of being a fan.  Here the allure of becoming the Foursquare mayor of some establishment is a great example. 
  • Make People Feel Like Insiders: Here, counterintuitively, creating a sense of scarcity, exclusivity, and the need for inside knowledge to access everything, can make something more attractive. Of course, if you can get away with the illusion of scarcity and exclusivity rather than the reality, more people can be brought on board.

2. Triggers: Top of mind, tip of tongue.

Here the key idea is to tie what you are trying to promote to some trigger that will happen often in someone’s environment.

  • Budweiser’s “Wassup” campaign might seem uninspired, but it tied Budweiser beer to what was a common greeting at the time among a key demographic of young males.  
  • The “Kitkat and Coffee” campaign tied Kitkat chocolate bars to a very frequent occurrence in many people’s days: drinking coffee.
  • The lines “Thinking about Dinner? Think About Boston Market” helped trigger thoughts of Boston Market at a time of day at which they hadn’t previously had as much business.  
  • The trigger can even be the communications of one’s adversary, as in the anti-smoking ads riffing off of the Marlboro Man commercials.

3. Emotion: When we care, we share.

The non-obvious finding here is that high-arousal emotions–regardless of whether they are positive or negative–encourage sharing more than low-arousal emotions such as contentment and sadness. Indeed, arousal is so important for sharing that experiments indicate even the physiological arousal induced by making people run in place can cause people to share an article more often.

To find the emotional core of an idea, so that emotional core can be highlighted, Jonah endorses the technique of asking why you think people are doing something, then asking “why is that important” three times. Of course, this could also be seen as a way to try to get at the underlying utility function: utility functions are implemented in important measure by emotions. 

Jonah recommends Google’s “Paris Love” campaign as an example of how to demonstrate that something seemingly prosaic, such as search, can connect to deeper concerns. 

4. Public: Built to show, built to grow.

Here I like the story of how Steve Jobs and his marketing expert Ken Segall decided that making the Apple logo on a laptop look right-side up to other people when the laptop is in use was more important than making it look right-side up to the user at the moment of figuring out which way to turn the laptop to open it up. Jonah points out how the way the color yellow made them stand out helped make Livestrong wristbands a thing in the days before Lance Armstrong was disgraced, and how the color white made iPod headphones more noticeable than black would have been. 

Jonah also makes interesting points about how talking about certain kinds of bad behavior, by making it seem as if everyone is doing it, can actually encourage that bad behavior. Think of Nancy Reagan’s “Just Say No” antidrug campaign. An alternative is to highlight the desired behavior instead.  

5. Practical Value: News you can use.

This dimension is fairly straightforward. But Jonah gives this interesting example of a video about how to shuck corn for corn on the cob that went viral in an older demographic where not many things go viral. He also points to the impulse to share information of presumed practical value as part of the reason it is so hard to eradicate the scientifically discredited idea that vaccines cause autism.

6. Stories: Information travels under the guise of idle chatter. 

Here, Jonah uses the example of the Trojan horse, which works well on many levels: the horse brought Greek warriors into Troy, and the story of the Trojan horse brings the idea “never trust your enemies, even if they seem friendly” deep into the soul. He points out just how much information is carried along by good stories.

But Jonah cautions that to make a story valuable, what you are trying to promote has to be integral to the story. Crashing the Olympics and doing a belly flop makes a good story, but the advertising on the break-in diver’s outfit was not central to the story and was soon forgotten. By contrast, for Panda brand Cheese, the Panda backing up the threat “Never say no to Panda” is a memorable part of the stories of Panda mayhem in the cheese commercials, and Dove products at least have an integral supporting role to play in Dove’s memorable Evolution commercial illustrating the extent to which makeup and photoshopping are behind salient images of beauty in our environment.   

Applied Memetics for the Economics Blogger

Here are a few thoughts about how to use Jonah’s insights in trying to make a mark in the blogosphere and tweetosphere.

1. Social Currency

Inner Remarkability: I find the effort to encapsulate the inner remarkability of each post or idea in a tweet an interesting intellectual challenge. One good way to practice this is a tip I learned from Bonnie Kavoussi: try to find the most interesting quotation from someone else’s post and put that quotation in your tweet. That will win you friends among the authors of the posts, earn you more Twitter followers (remember that the author of the post will have a strong urge to retweet if you are advertising his or her post well), and hone your skills for when you want to advertise your own posts on Twitter. 

Leverage Game Mechanics: In the blogosphere and on Twitter, we are associating with peers. Much of what they want is similar to what we want–to be noticed, to get our points across, to get new ideas. So helping them win their game is basically a matter of being a good friend or colleague. For example, championing people’s best work and being generous in giving credit will win points. 

Make People Feel Like Insiders: When writing for an online magazine (Quartz in my case), it feels as if I need to write as if the readers are reading me for the first time. By contrast, a blog is tailor-made to make readers feel like insiders. So it is valuable to have an independent blog alongside any writing I do for an online magazine.  

2. Triggers

A common piece of advice to young tenure-track assistant professors is to do enough of one thing to become known for that thing. This is consistent with Jonah’s advice about triggers. Having people think of you every time a particular topic comes up is a good way to make sure people think of you. That doesn’t mean you need to be a Johnny-one-note, but it does mean the danger of being seen as a Johnny-one-note is overrated. Remember that readers can easily get variety by diversifying their reading between you and other bloggers. So they will be fine even if your blog specializes to one particular niche, or a small set of niches.

On Twitter, one way to associate yourself with a particular trigger is to use a hashtag. In addition to the hashtag #ImmigrationTweetDay that Adam Ozimek, Noah Smith and I created for Immigration Tweet Day, I have made frequent use of the hashtag #emoney, and I created the hashtag #nakedausterity.  

3. Emotion

Economists often want to come across as cool and rational. But many of the most successful bloggers have quite a bit of emotion in their posts and tweets. I think Noah Smith’s blog Noahpinion is a good example of this. Noahpinion delivers humor, indignation, awe, and even the sense of anxiety that comes from watching him attack and wondering how the object of his attack will respond.  

One simple aid to getting an emotional kick that both Noah and I use is to put illustrations at the top of most of our blog posts. I think more blogs would benefit from putting well-chosen illustrations at the top of posts.    

4. Public

The secret to making a blog more public is simple: Twitter. Everything on Twitter is public, and every interaction with someone who has followers you don’t is a chance for someone new to realize you exist. Of course, you need to be saying something that will make people want to follow you once they notice that you exist.    

Facebook helps too. I post links to my blog posts on my Facebook wall and have friended many economists. 

Finally, the dueling blog posts in an online debate tend to attract attention.

5. Practical Value

In “Top 25 All-Time Posts and All 22 Quartz Columns in Order of Popularity, as of May 5, 2013,” I point out the two posts that are slowly and steadily gaining on posts that were faster out of the block:

I think the reason is practical value. Economists love to understand the economy, but they also have to teach school. They are glad for help and advice for that task.  

6. Stories

Let me make the following argument:

  • a large portion of our brains is devoted to trying to understand the people in our social network;
  • so the author of a blog is much more memorable than the blog itself, and
  • a memorable story about a blog is almost always coded in people’s brains as a memorable story about the author of the blog.  

Thus, to make a good story for your blog, it is important to “let people in.” That is, it pays off to let people get to know you. The challenge is then to let people get to know you without making them think you are so “full of yourself” that they flee in disgust. Economists as a rule have a surprisingly high tolerance for arrogance in others. But if you want non-economists to stick with you, you might want to inject some notes of humility into what you write.

One simple way to let people get to know you without seeming arrogant is to highlight a range of other people you think highly of. The set of people you think highly of is very revealing of who you are. (Of course, the set of people you criticize and attack is also very revealing of who you are, but not in the same way.)

Summary 

Jonah Berger’s book Contagious is one of the few books in my life where I got to the end and then immediately and eagerly went back to the beginning to read it all over again for the second time. (I can’t remember another one.) Of course, it is a relatively short book. But still, it took a combination of great stories, interesting research results, and practical value for me as a blogger to motivate me to read it twice in quick succession. I recommend it. And I would be interested in your thoughts about how to get a better chance of having blog posts and tweets go viral.         

Further Reading

Jonah recommends two other books with insights into what makes an idea successful:

  • Malcolm Gladwell’s The Tipping Point “is a fantastic read. But while it is filled with entertaining stories, the science has come a long way since it was released over a decade ago.”
  • Chip Heath and Dan Heath's Made to Stick: Why Some Ideas Survive and Others Die: “…although the Heaths’ book focuses on making ideas ‘stick’–getting people to remember them–it says less about how to make products and ideas spread, or getting people to pass them on.”

Quartz #23—>QE or Not QE: Even Economists Need Lessons in Quantitative Easing, Bernanke Style

Link to the Column on Quartz

Here is the full text of my 23rd Quartz column, “QE or Not QE: Even Economists need lessons in quantitative easing, Bernanke style,” now brought home to supplysideliberal.com. It was first published on May 14, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© May 14, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.


Martin Feldstein is an eminent economist. In addition to being a prolific researcher, he served as head of US president Ronald Reagan’s Council of Economic Advisors, and made the National Bureau of Economic Research (NBER) what it is today—an institution that Paul Krugman called “the old-boy network of economics made flesh.” (I am one of the many economists who belongs to the NBER.) But Feldstein was wrong when he wrote in the Wall Street Journal last week, “The time has come for the Fed to recognize that it cannot stimulate growth,” in an op-ed headlined “The Federal Reserve’s Policy Dead End: Quantitative easing hasn’t led to faster growth. A better recovery depends on the White House and Congress.”

“Quantitative easing” or “QE” is when a central bank buys long-term or risky assets instead of purchasing short-term safe assets. One possible spark for Feldstein’s tirade against quantitative easing was the Fed’s announcement on May 1 that it “is prepared to increase or reduce the pace of its purchases” of long-term government bonds and mortgage-backed securities depending on the economic situation. This contrasts with the Fed’s announcement on March 20, which had suggested only that the Fed would either keep the rate of purchases the same or scale them back, depending on circumstances. Philadelphia Fed Chief Charles Plosser described this as the Fed trying “to remind everybody” that it “has a dial that can move either way.”

So the Fed sounds more ready to turn to QE when needed than it did before.

Feldstein’s argument boils down to saying, “The Fed has done a lot of QE, but we are still hurting, economically. Therefore, QE has failed.” But here he misunderstands the way QE works. The special nature of QE means that the headline dollar figures for quantitative easing overstate how big a hammer any given program of QE is. Once one adjusts for the optical illusion that the headline dollar figures create for QE, there is no reason to think QE has a different effect than one should have expected. To explain why, let me lay out again the logic of one of the very first posts on my blog, “Trillions and Trillions: Getting Used to Balance Sheet Monetary Policy.” In that post I responded to Stephen Williamson, who misunderstood QE (or “balance sheet monetary policy,” as I call it there) in a way similar to Martin Feldstein.

To understand QE, it helps to focus on interest rates rather than quantities of assets purchased. Regular monetary policy operates by lowering safe short-term interest rates, and so pulling down the whole structure of interest rates: short-term, long-term, safe and risky. The trouble is that there is one safe interest rate that can’t be pulled down without a substantial reform to our monetary system: the zero interest rate on paper currency. (See “E-Money: How Paper Currency is Holding the US Recovery Back.”) There is no problem pulling other short-term safe interest rates (say on overnight loans between banks or on 3-month Treasury bills) down to that level of zero, but trying to lower other short-term safe rates below zero would just cause people to keep piles of paper currency to take advantage of the current government guarantee that you can get a zero interest rate on paper currency, which is higher than a negative interest rate.

As long as the zero interest rate on paper currency is left in place by the way we handle paper currency, the Fed’s inability to lower safe, short-term interest rates much below zero means that beyond a certain point it can’t use regular monetary policy to stimulate the economy any more. Once the Fed has hit the “zero lower bound,” it has to get more creative. What quantitative easing does is to compress—that is, squish down—the degree to which long-term and risky interest rates are higher than safe, short-term interest rates. The degree to which one interest rate is above another is called a “spread.” So what quantitative easing does is to squish down spreads. Since all interest rates matter for economic activity, if safe short-term interest rates stay at about zero, while long-term and risky interest rates get pushed down closer to zero, it will stimulate the economy. When firms and households borrow, the markets treat their debt as risky. And firms and households often want to borrow long term. So reducing risky and long-term interest rates makes it less expensive to borrow to buy equipment, hire coders to write software, build a factory, or build a house.

Some of the confusion around quantitative easing comes from the fact that in the kind of economic models that come most naturally to economists, in which everyone in sight is making perfect, deeply-insightful decisions given their situation, and financial traders can easily borrow as much as they want to, quantitative easing would have no effect. In those “frictionless” models, financial traders would just do the opposite of whatever the Fed does with quantitative easing, and cancel out all the effects. But it is important to understand that precisely because quantitative easing gets canceled out in the frictionless models, it has no important effects there at all: it doesn’t stimulate the economy, but it also has no side effects and no dangers. Any possible dangers of quantitative easing only occur in a world where quantitative easing actually works to stimulate the economy!

Now it should not surprise anyone that the world we live in does have frictions. People in financial markets do not always make perfect, deeply-insightful decisions: they often do nothing when they should have done something, and something when they should have done nothing. And financial traders cannot always borrow as much as they want, for as long as they want, to execute their bets against the Fed, as Berkeley professor and prominent economics blogger Brad DeLong explains entertainingly and effectively in “Moby Ben, or, the Washington Super-Whale: Hedge Fundies, the Federal Reserve, and Bernanke-Hatred.” But there is an important message in the way quantitative easing gets canceled out in frictionless economic models. Even in the real world, large doses of quantitative easing are needed to get the job done, since real-world financial traders do manage to counteract some of the effects of quantitative easing as they go about their normal business of trying to make good returns. And “large doses” means Fed purchases of long-term government bonds and mortgage-backed bonds that run into trillions and trillions of dollars. (As I discuss in “Why the US Needs Its Own Sovereign Wealth Fund,” quantitative easing would be more powerful if it involved buying corporate stocks and bonds instead of only long-term government bonds and mortgage-backed bonds.) It would have been a good idea for the Fed to do two or three times as much quantitative easing as it did early on in the recession, though there are currently enough signs of economic revival that it is unclear how much bigger the appropriate dosage is now.

Does QE work? Most academic and central bank analyses argue that it does. (See, for example, work by Arvind Krishnamurthy and Annette Vissing-Jorgensen of Northwestern University, and work by Signe Krogstrup, Samuel Reynard and Barbara Sutter of the Swiss National Bank.) But I am also impressed by the decline in the yen since people began to believe that Japan would undertake an aggressive new round of QE. One yen is an aluminum coin that can float on the surface tension of water. Since September, it has floated down from being worth 1.25 cents (US) to less than a penny now. Exchange rates respond to interest rates, so the large fall in the yen is a strong hint that QE is working for Japan, as I predicted it would when I advocated massive QE for Japan back in June 2012.

Sometimes friction is a negative thing—something that engineers fight with grease and ball bearings. But if you are walking on ice across a frozen river, the little bit of friction still there between your boots and the ice allows you to get to the other side. It takes a lot of doing, but quantitative easing uses what friction there is in financial markets to help get us past our economic troubles. The folks at the Fed are not perfect, but they know how quantitative easing works better than Martin Feldstein does. If we had to depend on the White House and Congress for economic recovery, we would be in deep, deep trouble. It is a good thing we have the Fed.

Pieria Debate on the UK Productivity Puzzle

Miles Kimball, Jonathan Portes, Frances Coppola and Tomas Hirst discuss the mysterious case of the UK’s falling productivity. This post first appeared on Pieria on May 24, 2013.

Miles Kimball: A big issue that the Bank of England is worried about is that the UK may not be far below the natural level of output at all. They’re very interested in the productivity puzzle and I’m hoping they’ll put out a prize for research into it one of these days.

Tomas Hirst: We’ve had some interesting discussions on Pieria about how we can explain the productivity puzzle – including how it might reflect miscalculations of output and growing problems in the UK labour market. 

Jonathan Portes: Do they really think that we’re not far below the natural level of output at the moment?

Miles Kimball: Well opinions differ. I think it’s safe to say there’s a very active debate on exactly that question.

Tomas Hirst: The minutes of the MPC’s most recent meeting suggest that there’s something of a schism opening up in the committee between those worrying about the risks of further QE purchases (who are currently in the majority) and those worrying about the continued weakness of output. Do you think it reflects this debate?

Miles Kimball: Pieria really ought to talk about this more. For many other economies it seems crystal clear to almost everybody with an ounce of sense that output is below the natural level but I don’t know if it’s true in the UK. It’s not even clear to me, I just don’t know. 

The broadest sphere of the debate should really be trying to get a hold of that productivity puzzle. In addition to measures that could add to aggregate demand for the UK I think a great deal of work needs to be done to assess whether it really is below the natural level of output or not.

Tomas Hirst: I think in the UK people have been too focused on headline figures of inflation and unemployment, for example. What people have missed is the fact that core inflation has been below target throughout the crisis, which might itself justify further stimulus.

Miles Kimball: Well remember that the new remit from the Treasury says that the MPC should look through government-administered prices.

Tomas Hirst: Yes, but could that change in mandate not be a response to this problem of growing doubts in the usefulness of headline figures?

Miles Kimball: What I’m saying is that the remit could suggest that the BoE is being asked to look more at core inflation. It’s actually a little bit of a mixed message as they’re being told that their target should remain linked to headline inflation but are being told to look through the headline numbers at what’s happening to core inflation. Pushing them towards core inflation is important. 

On the productivity puzzle, there are things that can be solved by expansion and things that can’t. In the recession the government is not as willing to let firms go bankrupt so you get a long tail of unproductive firms carrying on. If you convince everybody that you’ve got all the aggregate demand you want you can allow for more bankruptcies, which will mean some of the puzzle will automatically correct.

Frances Coppola: I’ve heard that argument a lot but I’m not 100% convinced. You’ve got to look through the recession to see what the long-term secular trend is.

Over the last few years we’ve seen a huge increase in self-employment and at the same time self-employed incomes have crashed. That can’t be to do simply with unproductive companies.

Jonathan Portes: It’s an aggregate demand problem.

Frances Coppola: Exactly!

Jonathan Portes: Actually it was part of David Blanchflower’s recent paper that discussed a growing number of people in the UK who want to work more hours and can’t get them. If you’re self-employed and you want to work more hours the only thing that is stopping you is a lack of demand.

Frances Coppola: Speaking from personal experience, as I am self-employed and have been for a long time in a business that requires specialist skills, things were fine until two years ago. Since then demand has collapsed. And it’s not just singing. I’ve never seen the situation out there this bad.

Further Reading

Part 1: Pieria debate on electronic money and negative interest rates 

How Can We Explain Britain’s Productivity Puzzle? – Pieria

Perverse incentives and productivity – Coppola Comment

Can Intangible Investment Explain The UK Productivity Puzzle – Professor Jonathan Haskel


Instrumental Tools for Debt and Growth

A Joint Post by Miles Kimball and Yichuan Wang

Yichuan (see photo above) and I talked through the analysis and ideas for this post together, but the words and the particulars of the graphs are all his. I find what he has done here very impressive. On his blog, where this post first appeared on June 4, 2013, the last two graphs are dynamic and show more information when you hover over what you are interested in. This post is a good complement to our analysis in our second joint Quartz column: “Autopsy: Economists looked even closer at Reinhart and Rogoff’s data–and the results might surprise you,” which pushes a little further along the lines we laid out in “For Sussing Out Whether Debt Affects Future Growth, the Key is Carefully Taking Into Account Past Growth.”


In a recent Quartz column, we found that high levels of debt do not appear to affect future rates of growth. In the Reinhart and Rogoff (henceforth RR) data set on debt and growth for a group of 20 advanced economies in the post WW-II period, high levels of debt to GDP did not predict lower levels of growth 5 to 10 years in the future. Notably, after controlling for various intervals of past growth, we found that there was a mild positive correlation between debt to GDP and future GDP growth.

In a companion post, we address some of the time window issues, with plots showing how adjusting for past growth can reverse any observed negative correlation between debt and future growth. In this post, we want to address the possibility that future growth can lead to high debt, and explain our use of instrumental variables to control for this possibility.

One major possibility for this relationship is that policy makers are forward looking, and base their decisions on whether to have high or low debt on their expectations of future events. For example, if policy makers know that a recession is coming, they may increase deficit spending to mitigate the upcoming negative shock to growth. Even if the debt increased growth relative to what it otherwise would have been, the data would show lower growth following high debt. On the other hand, perhaps expectations of high future growth make policy makers believe that the government can afford to increase debt right now. Even if debt had a negative effect on growth, the data would show a rapid rise in GDP growth following the increase in debt.

Apart from government tax and spending decisions informed by forecasts of future growth, there are other mechanical relationships between debt and growth that are not what one should be looking for when asking whether debt has a negative effect on growth. For example, a war can increase debt, but the ramp-up of the war makes growth high at the time, predictably lower once the ramp-up is done, and predictably lower still when the war winds down. So there is an increase in debt coupled with predictions for GDP growth different from non-war situations. None of this has to do with debt itself causing a different growth rate, so we would like to abstract from it.

To do so, we need to extract the part of the debt to GDP statistic that reflects whether the country runs a long-term high-debt policy, and to ignore the high debt that arises because of changes in expected future outcomes or because of relatively mechanical short-run aggregate demand effects of government purchases as a component of GDP. Econometrically, this approach is called instrumental variables: it involves using a set of variables, called instruments, that are uncorrelated with future outcomes to predict current debt.

Since we are considering future outcomes, a natural choice for instrument would be the lagged value of the debt to GDP ratio. As can be seen below, debt to GDP does not jump around very much. If debt is high today, it likely will also be high tomorrow. Thus lagged debt can predict future debt. Also, since economic growth is notoriously difficult to forecast, the lagged debt variable should no longer reflect expectations about future economic growth.   

By using lagged debt and growth as instruments, we isolate the part of current debt that reflects debt from a long term high debt policy, and not by short run forecasts or other mechanical pressures. We plot the resulting slopes on debt to GDP in the charts below, for both future growth in years 0-5 and for future years 5-10. For the raw data and computations, consult the public dropbox folder.
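For readers who want to see the mechanics, here is a minimal sketch of the two-stage least squares idea on synthetic data. The variable names and numbers below are illustrative, not from the RR data set (the actual computations are in the dropbox folder):

```python
import numpy as np

def two_stage_least_squares(y, x, z):
    """Minimal 2SLS with one endogenous regressor x and one instrument z."""
    # First stage: project the endogenous regressor on the instrument.
    Z = np.column_stack([np.ones_like(z), z])
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Second stage: regress the outcome on the fitted values.
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta[1]  # slope on instrumented debt

rng = np.random.default_rng(0)
n = 500
lagged_debt = rng.uniform(20, 150, n)            # debt/GDP at t-1: the instrument
debt = 0.9 * lagged_debt + rng.normal(0, 5, n)   # debt is persistent, so lagged debt predicts it
future_growth = 2.5 + 0.003 * debt + rng.normal(0, 1, n)

slope = two_stage_least_squares(future_growth, debt, lagged_debt)
```

The actual analysis uses multiple lags and growth controls, but the logic is the same: predetermined lagged debt purges the variation in current debt that merely reflects forecasts of the future.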

From these graphs, we can make some observations.

First, almost all the coefficients, across all the different lags and fixed effects, are positive. Since these estimates are small, we should not put too much weight on statistical significance. However, it should be noted that the plain results, OLS and IV, for both growth periods are statistically significant at at least the 95% confidence level, and the IV estimates for the 5-10 year period in particular are significant at the 99% confidence level.

The one negative estimate, the OLS estimate with country fixed effects, has a standard error twice as large in absolute size as the slope estimate itself. Moreover, country fixed effects are difficult to interpret because they pivot the analysis from looking at high-debt versus low-debt countries toward analyzing a country's indebtedness relative to its own long-run average.

These results are striking considering the robustness with which Reinhart and Rogoff present the argument that debt causes low growth in their 2012 JEP article. Instead of merely finding a weaker negative correlation after controlling for past growth, we find that the estimated relationship between current debt and future growth is weakly positive.

Second, when taking out year fixed effects, there is almost no effect of debt on future growth. Econometrically, year fixed effects take out the average debt level in every year, which leaves us analyzing whether being more heavily indebted relative to a country's peers in that year has an additional effect on growth. Because this component is consistently smaller than the regular IV coefficient, it suggests that, for the advanced countries in the sample, it is absolute, not relative, debt that matters.

This should be no surprise. As most recently articulated in RR’s open letter to Paul Krugman, much of the argument against high debt levels relies on a fear that a heavily indebted country becomes “suddenly unable to borrow from international capital markets because its public and/or private debts that are a contingent public liability are deemed unsustainable.” The credit crunch stifles growth and governments are forced to engage in self-destructive cutbacks just in order to pay the bills. At its core, this is a story about whether the government can pay back the liabilities. But whether or not liabilities are sustainable should depend on the absolute size of the liabilities, not just whether the liabilities are large relative to their peers.

Now, our conclusion is not without limitations. As Paul Andrews notes, the RR data set focuses on “20 or so of the most healthy economies the world has ever seen,” thus potentially introducing a high level of selection bias.

Additionally, we have restricted ourselves to the RR data set of advanced countries in the post-WW-II period. The 2012 Reinhart and Rogoff paper considered episodes of debt overhang going back to the 1800s, for which the results are likely very different. However, prewar government policies, such as the gold standard and the lack of independent monetary authorities, likely contributed to the pain of those debt crises. Thus our restricted timescale does not detract from the implication that debt has a limited effect on future growth in modern advanced economies.

In their New York Times response to Herndon et al., Reinhart and Rogoff “reiterate that the frontier question for research is the issue of causality.” And at this frontier, our Quartz column, Dube’s work on varying regression time frames, and these companion posts all suggest that causality from debt to growth is much smaller than previously thought.

Ori Heffetz: Quantifying Happiness

Link to the article on the Johnson Business School website at Cornell

Ori Heffetz is now my coauthor many times over. There is a bio of him at the link. Here is his guest post on some of our joint work on the economics of happiness. Ori asks

Should governments monitor citizens’ happiness and use that data to inform policy? Many say yes; the question is how.

Most of our students at Johnson may be too young to remember, but in 1988, for the first time in history, an a cappella song made it to the #1 spot on the Billboard Hot 100 chart. Many of our alumni, however, won’t forget the huge success of Bobby McFerrin’s “Don’t Worry, Be Happy.” The artist’s unparalleled singing abilities aside, the song became an instant hit thanks in large part to its simple message, which immediately resonated with everybody. After all, nobody wants to worry, and everybody wants to be happy.

But if everybody wants to be happy, shouldn’t governments be constantly monitoring the public’s level of happiness, assessing how different policies affect it, and perhaps even explicitly designing policies to improve national happiness (and reduce national worry)? Wouldn’t it make sense to add official happiness measures to the battery of indicators governments already closely track and tie policy to — such as GDP, the rate of unemployment, and the rate of inflation? 

Researchers increasingly think so. Some advocate conducting nation-wide “happiness” surveys (or “subjective well-being” (SWB) surveys, to use the academic term), and using the responses to construct indicators that would be tracked alongside GDP-like measures. Although these proposals are controversial among economists, policymakers have begun to embrace them. In the past two years alone, for example, the U.S. National Academy of Sciences’ Committee on National Statistics convened a series of meetings of a “Panel on Measuring Subjective Well-Being in a Policy-Relevant Framework”; the OECD, as part of its Better Life Initiative, has been holding conferences on “Measuring Well-Being for Development and Policy Making”; and the U.K. Office for National Statistics began including the following SWB questions in its Integrated Household Survey, a survey that reaches 200,000 Britons annually:

Overall, how satisfied are you with your life nowadays?

Overall, how happy did you feel yesterday?

Overall, how anxious did you feel yesterday?

Overall, to what extent do you feel the things you do in your life are worthwhile?

These and other efforts follow the French government’s creation, in 2008, of the now-famous Stiglitz Commission — officially, the “Commission on the Measurement of Economic Performance and Social Progress”— whose members included a few Nobel laureates, and whose 2009 report recommends the collection and publication of SWB data by national statistical agencies. No wonder Gross National Happiness, a concept conceived in Bhutan in the 1970’s, is back in the headlines. Can a few simple questions on a national survey, such as the British (Fab) Four above, be the basis of a reliable indicator of national wellbeing? Will the Bank of England soon tie its monetary policy to the “rate of happiness” (or to the “rate of anxiety”), making central banks that still tie their policies to traditional indicators such as the rate of unemployment seem outdated? 

Not so fast. While demand for SWB indicators is clearly on the rise — witness Ben Bernanke’s discussion of “the economics of happiness” in several speeches in recent years — efforts to construct and apply survey-based well-being indicators are still in their infancy. Among the most urgent still-unresolved practical questions are: Which SWB questions should governments ask? And how should responses to different questions be weighted relative to each other? The four questions above, for example, ask about life satisfaction, happiness, anxiety, and life being worthwhile. But does the public consider these the only — or even the most — important dimensions of well-being? And even if it does, how would people feel about — and will they support — a government policy that increases, say, both happiness and anxiety at the same time? 

These are the questions that my colleagues — Dan Benjamin and Nichole Szembrot here at Cornell, and Miles Kimball at the University of Michigan — and I address in our working paper, “Beyond Happiness and Satisfaction: Toward Well-Being Indices Based on Stated Preference” (2012). The idea behind our proposed method for answering the two questions — the “what to ask” question and the “how to weight different answers” question — is simple and democratic, and consists of two steps: first, gather a list, as long as you can, of potential SWB questions that governments could potentially include in their surveys; and second, let the public determine, through a special-purpose survey that we designed, the relative weights. 

To demonstrate our method, we followed these two steps. We began by compiling a list of 136 aspects of well-being, based on key factors proposed as important components of well-being in major works in philosophy, psychology, and economics. While far from exhaustive, our list represents, as far as we know, the most comprehensive compilation effort to date. It includes SWB measures widely used by economists (e.g., happiness and life satisfaction) as well as other measures, including those related to goals and achievements, freedoms, engagement, morality, self-expression, relationships, and the well-being of others. In addition, for comparison purposes, we included “objective” measures that are commonly used as indicators of well-being (e.g., GDP, unemployment, inflation). 

Next, we designed and conducted what economists call a stated preference (SP) survey to estimate the relative marginal utility of these 136 aspects of well-being. In plain English, that means we asked a few thousand survey respondents to state their preference between aspects from our list (e.g., if you had to choose, would you prefer slightly more love in your life or slightly more sense of control over your life?). With enough such questions, we could estimate the relative weight our respondents put on each of these aspects of life.
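For the curious, the estimation idea behind such pairwise choices can be sketched with a simple logit random-utility model on simulated responses. The aspects, weights, and sample below are invented for illustration; this is not the estimation code from the paper:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
true_weights = np.array([1.0, 0.6])  # invented marginal utilities for two aspects

# Each row encodes, for each aspect, the difference in the small boost
# offered by option A versus option B (+1 or -1).
n = 2000
diffs = rng.choice([-1.0, 1.0], size=(n, 2))
utility_diff = diffs @ true_weights
choose_a = (rng.random(n) < 1.0 / (1.0 + np.exp(-utility_diff))).astype(float)

def neg_log_lik(w):
    """Negative log-likelihood of a logit random-utility model."""
    p = 1.0 / (1.0 + np.exp(-(diffs @ w)))
    return -np.sum(choose_a * np.log(p) + (1.0 - choose_a) * np.log(1.0 - p))

w_hat = minimize(neg_log_lik, x0=np.zeros(2)).x
relative_weight = w_hat[1] / w_hat[0]  # weight on aspect 2 relative to aspect 1
```

With enough choices, the recovered ratio of weights approaches the true relative marginal utility, which is exactly the kind of relative weight the survey is designed to estimate.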

Among other things, we found that while commonly measured aspects of well-being such as happiness, life satisfaction, and health are indeed among those with the largest relative weight (or marginal utility), other aspects that are measured less commonly have relative marginal utilities that are at least as large. These include aspects related to family (well-being, happiness, and relationship quality); security (financial, physical, and with regard to life and the future in general); values (morality and meaning); and having options (freedom of choice, and resources). Using policy-choice questions in which respondents vote between two policies that differ in how they affect aspects of well-being for everyone in the nation — rather than state which of two options they prefer for themselves — we continued to find the patterns above and in addition found high marginal utilities for aspects related to political rights, morality of others, and compassion towards others, in particular the poor and others who struggle. We also explored differences across demographic-group and political-orientation subpopulations of our respondents. 

But these findings themselves are perhaps less important. After all, our sample was not representative, and we had to make practical compromises in our data collection and analysis that governments would not have to make. The main contribution of our work, we believe, lies in outlining a new method, and in demonstrating its feasibility. Our method for evaluating SWB questions and for determining their relative weight in a well-being index can now be discussed, criticized, and, as a result, improved on. The familiar conventional indicators such as GDP, inflation, and unemployment did not start in the refined state we know them today: they have been continually fine-tuned over many decades. We hope that our work will contribute to a similar process regarding a SWB-based index. 

Many practical obstacles still have to be overcome before standardized, systematic measurement and tracking of SWB for policymaking purposes becomes a reality. But if the endeavor is successful, then perhaps our children (who I doubt will have heard of Bobby McFerrin’s #1 hit) will at some point consider a DWBH index, a “Don’t Worry, Be Happy” index, as standard as GDP and other indicators.


Note: I (Miles) give my take on the same research in my column “Judging the Nations: Wealth and Happiness are Not Enough.”

For Sussing Out Whether Debt Affects Future Growth, the Key is Carefully Taking into Account Past Growth

A Joint Post by Miles Kimball and Yichuan Wang

We are very pleased with the response to our May 29, 2013 Quartz column, “After crunching Reinhart and Rogoff’s data, we concluded that high debt does not slow growth.” Miles gives links to some of the online reactions in his (more accurately titled) companion blog post the next day, “After Crunching Reinhart and Rogoff’s Data, We Found No Evidence That High Debt Slows Growth.” The one reaction that called for another full post was Arindrajit Dube’s post “Dube on Growth, Debt and Past Versus Future Windows.” Arindrajit suggests in that post that in his working paper “A Note on Debt, Growth and Causality,” he had actually explored the variations that the two of us focus on, but we want to argue here that we did one important thing that Arindrajit did not try in his working paper: controlling for ten years worth of data on past growth, as we did in our Quartz column. In this post, we argue that controlling for ten years worth of data on past growth is the key to getting positive slopes for the partial correlation between debt and future growth. We were surprised to find that controlling for ten years of past GDP growth makes the partial correlation between debt and near-future growth in future years 0 to 5 positive (as well as the partial correlation with further future growth in future years 5 to 10). The graph at the top shows our main message. Since this is a long post, let us give the bottom line here and return to it below:

The two of us could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth, either in the short run (the next five years) or in the long run (as indicated by growth from five to ten years later).

The most important proviso in this statement is the clause “in the Reinhart and Rogoff data.”

Yichuan has placed our programs in a public dropbox folder. Also, on Yichuan’s blog Synthenomics, we have an additional companion post, “Instrumental Tools for Debt and Growth,” showing that instrumenting the debt to GDP ratio by the past debt to GDP ratio, in order to isolate high-debt and low-debt policies from high or low debt caused by recent events, makes the relationship between debt and future growth more positive. (This is mainly due to evidence from movements of debt in tandem across countries over time rather than movements in debt that distinguish one country from another at a given time.)

Why it matters: Why does it matter whether the seeming effect of debt on future growth is a small positive number or a small negative number? Let us illustrate. Brad DeLong says (and Paul Krugman quotes Brad DeLong saying):

…an increase in debt from 50% of a year’s GDP to 150% is associated with a reduction in growth rates of 0.1%/year over the subsequent five years…

The first thing to say about this is that some of the estimates for going from 0 debt to a 50% debt to GDP ratio are bigger negative numbers. As Miles wrote in the companion post “After Crunching Reinhart and Rogoff’s Data, We Found No Evidence That High Debt Slows Growth”:

if I were convinced Arin Dube’s left graph were causal, the left graph seems to suggest that higher debt causes low growth in a very important way, though of course not in as big a way as slow growth causes higher debt. If it were causal, the left graph suggests it is the first 30% on the debt to GDP ratio that has the biggest effect on growth, not any 90% threshold.

The second thing to say is that reducing the growth rate by 0.1% per year adds up. After five years, GDP would be 0.5% lower. Since the extra debt in going from 50% to 150% is a year’s GDP, that is like a 0.5%-per-year addition to the interest on that extra debt, except that people throughout the economy experience the cost rather than the government alone. And if the effect on the path of GDP is permanent, that annual cost might not go away even when the debt is later repaid.
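The compounding arithmetic is easy to verify directly (the 3% baseline growth rate below is an illustrative assumption; only the 0.1-percentage-point reduction comes from the passage):

```python
# Five years of growth lowered by 0.1 percentage point per year,
# starting from an illustrative 3% baseline growth rate.
baseline_growth = 0.03
reduced_growth = baseline_growth - 0.001

gdp_baseline = (1 + baseline_growth) ** 5
gdp_reduced = (1 + reduced_growth) ** 5

# Fraction of GDP lost after five years: roughly 0.5%.
shortfall = 1 - gdp_reduced / gdp_baseline
```

The shortfall is close to 0.5% of GDP regardless of the baseline rate assumed, since five years of a 0.1-point drag compound to roughly five times 0.1%.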

So we think it matters whether the best evidence points to what looks like a small positive slope or what looks like a small negative slope. And given how important the issues are, the Bayesian updating from results that are statistically insignificant at conventional levels of significance can have substantial practical importance.

Ten years worth of past GDP growth data are significantly better at predicting future GDP growth than five years worth of past GDP growth data.

There is a wide range of growth rates in the data. Even within a given country, growth rates can be very different over the many decades of time represented in the Reinhart and Rogoff data. So it should not be surprising that it is helpful to use data on many years of past growth in order to predict future growth. Define time t as the year in which the debt/GDP ratio is measured. Then what we focus on is the difference between predicting future real GDP growth based on only the growth rates from t-5 to t-4, t-4 to t-3, t-3 to t-2, t-2 to t-1, and t-1 to t, and adding to those five most recent past annual growth rates the average growth rate from t-10 to t-5. The graph immediately below shows that there is, indeed, variation in the growth rate from t-10 to t-5 that can’t be predicted by the five most recent past annual growth rates of GDP.

The next graph shows that the average growth rate from t-10 to t-5 does, indeed, help in predicting the future growth rate of GDP from t to t+5:

Here, “Excess Growth from Past Years 10-5” just means growth in past years 10 to 5 beyond what one could have guessed from knowing the most recent past five annual growth rates. In the multiple regression of future growth from t to t+5 on past growth, the t-statistic on “deep past” growth from t-10 to t-5 is 3.75, and so meets a very high standard of statistical significance.  
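As a sketch of this regression design on synthetic data (the coefficients and sample size below are invented; the actual estimates come from the RR data in the dropbox folder):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
# Five most recent annual growth rates (t-5 to t) and the average
# growth rate over the "deep past" (t-10 to t-5).
recent = rng.normal(2.0, 1.5, (n, 5))
deep_past = rng.normal(2.0, 1.0, n)
# Synthetic future growth that mean-reverts toward the deep-past pace.
future = 0.5 + 0.1 * recent.sum(axis=1) + 0.4 * deep_past + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), recent, deep_past])
beta, *_ = np.linalg.lstsq(X, future, rcond=None)

# Conventional OLS t-statistic on the deep-past growth coefficient.
resid = future - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_deep_past = beta[-1] / np.sqrt(cov[-1, -1])
```

When the deep-past average genuinely carries extra predictive information, as in this simulation, its coefficient comes through with a large t-statistic, mirroring the 3.75 found in the actual data.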

One way to think of why growth from t-10 to t-5 might help in predicting future growth is that it might help indicate the pace of growth to which growth will tend to mean revert after short-run dynamics play themselves out. But one would expect that there is a limit to the extent to which more and more growth data from the past will help. We find that growth from t-10 to t-5 does not help much in predicting growth in the five-year period fifteen years later from t+5 to t+10, as can be seen in the following graph:

What would we have found if we had neglected to control for growth in past years 10 to 5? 

To illustrate the importance of carefully taking into account the predictive value of many past years of growth for future growth, let us show first what we would have gotten if we had only controlled for the most recent five annual growth rates of GDP.

Here we get a small downward slope. But we don’t believe this small downward slope is causal, since it doesn’t adequately control for all the things other than debt that make both past and future growth tend to be high, or that make both past and future growth tend to be low, and that, as a byproduct, also have an effect on debt.

Looking at further future growth in future years 5 to 10, we see a positive relationship between excess debt and further future GDP growth. 

THE MAIN EVENT: THE RELATIONSHIP BETWEEN DEBT AND FUTURE GROWTH AFTER CONTROLLING FOR TEN YEARS OF PAST GROWTH.

Someone might object that after controlling for a full ten years of past GDP growth (the most recent five years of annual growth, plus the average growth rate in past years 10 to 5), there wouldn’t be much independent variation in debt left with which to identify the effects of debt, but that is not so. The following graph shows that some country-years have higher debt than would be predicted by ten years of past growth and some have lower debt than would be predicted by ten years of past growth.

We call the difference between actual debt and what could have been predicted by ten years of past growth “excess debt.” (It is important to understand that this is only of interest as a statistical object.) As can be seen in the graph immediately below (identical to the graph at the top of the post), debt above what could have been predicted by ten years of past growth has a positive relationship to future growth in the five years after the year when the debt to GDP ratio is measured.
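The “excess debt” construction is just the residual from regressing debt on the ten years of past-growth variables; a minimal sketch on synthetic data (our own variable names and numbers, not the actual RR computations):

```python
import numpy as np

def residualize(y, controls):
    """Return the part of y not linearly predicted by the controls (plus a constant)."""
    X = np.column_stack([np.ones(len(y)), controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(2)
n = 300
# Six past-growth controls: five recent annual rates plus the deep-past average.
past_growth = rng.normal(2.0, 1.0, (n, 6))
debt = 90 - 5 * past_growth.mean(axis=1) + rng.normal(0, 10, n)

# "Excess debt": debt beyond what ten years of past growth would predict.
excess_debt = residualize(debt, past_growth)
```

By the Frisch-Waugh logic, the simple slope of future growth on this residual equals the multiple-regression coefficient on debt controlling for past growth, which is why the scatter plots against excess debt are informative.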

Looking further into the future, to average GDP growth in future years 5 to 10, the relationship between excess debt and further growth looks more strongly positive. 

Year Fixed Effects: How much of the evidence is from movements in average debt across all countries over time and how much is from movements of debt in one country relative to another? 

In our Quartz column “After crunching Reinhart and Rogoff’s data, we concluded that high debt does not slow growth,” we mentioned, but did not show, what happens when time fixed effects are included in order to isolate what part of the evidence depends on distinct movements in different countries as opposed to movement of debt in many different countries in tandem over time. Surprisingly, with the specification here, even with year fixed effects, we find a positive partial correlation between debt and future growth, for both GDP growth in future years 0 to 5 and GDP growth in future years 5 to 10. (See the two graphs immediately below.) These positive slopes are smaller, however, reflecting the subtraction of the evidence from movements of debt in many different countries in tandem over time.   
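Mechanically, year fixed effects amount to demeaning each variable within each year before computing the slope, so that only within-year (across-country) variation in debt is used. A sketch on synthetic data (all numbers illustrative):

```python
import numpy as np

def demean_by_group(x, groups):
    """Subtract each group's mean: the one-regressor equivalent of group fixed effects."""
    out = np.asarray(x, dtype=float).copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] -= out[mask].mean()
    return out

rng = np.random.default_rng(3)
years = np.repeat(np.arange(1960, 1980), 20)       # 20 "countries" per year
in_tandem = np.repeat(rng.normal(0, 20, 20), 20)   # debt moving together across countries
debt = 80 + in_tandem + rng.normal(0, 10, len(years))
growth = 2.0 + 0.01 * debt + rng.normal(0, 0.5, len(years))

# Slope with year fixed effects: uses only within-year variation in debt.
d = demean_by_group(debt, years)
g = demean_by_group(growth, years)
slope_fe = (d @ g) / (d @ d)
```

Demeaning removes the in-tandem component of debt, which is why the fixed-effects slope reflects only how one country's indebtedness relative to its peers in a given year relates to its subsequent growth.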

blog.supplysideliberal.com tumblr_inline_mnus8rqOEc1qz4rgp.png

The bottom line is that the only time we ever found a negative partial correlation between debt and future growth–that is, the only time we found a relationship between excess debt and future growth that would result in a negative coefficient in a multiple regression–was when we only controlled for five years of growth when looking at debt and near future growth in future years 0 to 5. When we control for a full ten years of past growth, we get a positive relationship between debt and future growth in both future growth windows and both with and without year fixed effects.

In our Quartz column “After crunching Reinhart and Rogoff’s data, we concluded that high debt does not slow growth,” we wrote

…the two of us could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth.

There, we meant, we could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth in the long run, as indicated by GDP growth from five to ten years later. Now let us amplify our statement to say, as we did at the top:

The two of us could not find even a shred of evidence in the Reinhart and Rogoff data for a negative effect of government debt on growth, either in the short run (the next five years) or in the long run (as indicated by growth from five to ten years later).

The most important proviso in this statement is the clause "in the Reinhart and Rogoff data." 

A key limitation of our analysis: the Reinhart-Rogoff data set may undersample troubled countries. 

In his post “None the Wiser After Reinhart, Rogoff, et al.,” Paul Andrews argues: 

What has not been highlighted though is that the Reinhart and Rogoff correlation as it stands now is potentially massively understated. Why? Due to selection bias, and the lack of a proper treatment of the nastiest effects of high debt: debt defaults and currency crises.

The Reinhart and Rogoff correlation is potentially artificially low due to selection bias. The core of their study focuses on 20 or so of the most healthy economies the world has ever seen. A random sampling of all economies would produce a more realistic correlation. Even this would entail a significant selection bias as there is likely to be a high correlation between countries who default on their debt and countries who fail to keep proper statistics.

Furthermore Reinhart and Rogoff’s study does not contain adjustments for debt defaults or currency crises. Any examples of debt defaults just show up in the data as reductions in debt. So, if a country ran up massive debt, couldn’t pay it back, and defaulted, no problem! Debt goes to a lower figure, and the ruinous effects of the run-up in debt are ignored. Any low growth ensuing from the default doesn’t look like it was caused by debt, because the debt no longer exists!

In the light of Paul Andrews’s critique, we want to make it clear that our analysis is about the claim we felt Carmen Reinhart and Ken Rogoff were making: that there might well be a negative effect of debt on growth even for countries that no one doubts will repay their debts. That is, the question we are trying to answer is whether there is a negative effect of debt on growth other than the obvious effect that national bankruptcy, or fears of national bankruptcy, would have.

Joshua Foer on Deliberate Practice

The idea of deliberate practice is one that I have been very eager to get my students to understand. I found a nice passage in Moonwalking with Einstein: The Art and Science of Remembering Everything, explaining deliberate practice. Here it is, from pages 169-175:

When people first learn to use a keyboard, they improve very quickly from sloppy single-finger pecking to careful two-handed typing, until eventually the fingers move so effortlessly across the keys that the whole process becomes unconscious and the fingers seem to take on a mind of their own. At this point, most people’s typing skills stop progressing. They reach a plateau. If you think about it, it’s a strange phenomenon. After all, we’ve always been told that practice makes perfect, and many people sit behind a keyboard for at least several hours a day in essence practicing their typing. Why don’t they just keep getting better and better?

In the 1960’s, the psychologists Paul Fitts and Michael Posner attempted to answer this question by describing the three stages that anyone goes through when acquiring a new skill. During the first phase, known as the “cognitive stage,” you’re intellectualizing the task and discovering new strategies to accomplish it more proficiently. During the second, “associative stage,” you’re concentrating less, making fewer major errors, and generally becoming more efficient. Finally you reach what Fitts called the “autonomous stage,” when you figure that you’ve gotten as good as you need to get at the task and you’re basically running on autopilot….

What separates the experts from the rest of us is that they tend to engage in a very directed, highly focused routine, which Ericsson has labeled “deliberate practice.” Having studied the best of the best in many different fields, he has found that top achievers tend to follow the same general pattern of development. They develop strategies for consciously keeping out of the autonomous stage while they practice by doing three things: focusing on their technique, staying goal-oriented, and getting constant and immediate feedback on their performance. 

Amateur musicians, for example, are more likely to spend their practice time playing music, whereas pros are more likely to work through tedious exercises or focus on specific, difficult parts of pieces. The best ice skaters spend more of their practice time trying jumps that they land less often, while lesser skaters work more on jumps they’ve already mastered. Deliberate practice, by its nature, must be hard….

The best way to get out of the autonomous stage and off the OK plateau, Ericsson has found, is to actually practice failing. One way to do that is to put yourself in the mind of someone far more competent at the task that you’re trying to master, and try to figure out how that person works through problems. Benjamin Franklin was apparently an early practitioner of this technique. In his autobiography, he describes how he used to read essays by the great thinkers and try to reconstruct the author’s arguments according to Franklin’s own logic. He’d then open up the essay and compare his reconstruction to the original words to see how his own chain of thinking stacked up against the master’s. The best chess players follow a similar strategy. They will often spend several hours a day replaying the games of grand masters one move at a time, trying to understand the expert’s thinking at each step. Indeed, the single best predictor of an individual’s chess skill is not the amount of chess he’s played against opponents, but rather the amount of time he’s spent sitting alone working through old games.

The secret to improving at a skill is to retain some degree of conscious control over it while practicing–to force oneself to stay out of autopilot. With typing, it’s relatively easy to get past the OK plateau. Psychologists have discovered that the most efficient method is to force yourself to type faster than feels comfortable, and to allow yourself to make mistakes. In one noted experiment, typists were repeatedly flashed words 10 to 15 percent faster than their fingers were able to translate them onto the keyboard. At first they weren’t able to keep up, but over a period of days they figured out the obstacles that were slowing them down, and overcame them, and then continued to type at the faster speed. By bringing typing out of the autonomous stage and back under their conscious control, they had conquered the OK plateau….

This, more than anything, is what differentiates the top memorizers from the second tier: they approach memorization like a science. They develop hypotheses about their limitations; they conduct experiments and track data. “It’s like you’re developing a piece of technology, or working on a scientific theory,” the two-time world champ Andi Bell once told me. “You have to analyze what you’re doing.”

Also see my post “Joshua Foer on Memory.”

Quartz #21—>Optimal Monetary Policy: Could the Next Big Idea Come from the Blogosphere?


Link to the Column on Quartz

Here is the full text of my 21st Quartz column, “This economic theory was born in the blogosphere and could save markets from collapse,” now brought home to supplysideliberal.com and given my preferred title. (I am now up to date in bringing home to supplysideliberal.com all of my columns that are past the 30-day exclusive I give Quartz by contract.)

Even before I started blogging, Noah Smith told me I should write a post about NGDP targeting. This is that post. And it is also the post on “Optimal Monetary Policy” that I have been promising for some time. It was first published on February 22, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© February 22, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.


The most important equation in economics

Much of the history of economics can be traced by the contents of its best-selling textbooks. In 1848, John Stuart Mill published the blockbuster economics textbook of the 19th century: Principles of Political Economy. A century later, in 1948, Paul Samuelson—the very first American Nobel laureate in economics, who more than anyone else made economics the mathematical subject it is today—popularized Keynesian economics in the best-selling economics textbook of all time, Economics: An Introductory Analysis. This past year, in my classroom, I taught from one of the two best and most popular introductory economics textbooks, Brief Principles of Macroeconomics authored by Greg Mankiw—chair of the economics department at Harvard, former chair of the president’s Council of Economic Advisors, and my graduate school advisor.

One constant in all of these textbooks is an equation as famous for economics as E=mc² is for physics—an equation suitable for an economist’s vanity license plate: MV=PY.

As E=mc² is the key to understanding nuclear weapons and nuclear power, the “equation of exchange” MV=PY is the key to understanding monetary policy. And for the first major school of economic thought born in the blogosphere, I know of no way to explain their views without invoking this equation. Nerdily charismatic, they call themselves “market monetarists,” but it is easier to identify them by their attribution of almost mystical powers to maintaining a steady growth rate of both sides of this equation. Let me try to explain why the equation MV=PY is so important.

One way to read MV=PY is: Velocity-adjusted money equals nominal GDP.

  • M is the money supply.

  • V is the “velocity” of money or how hard money works.

  • So M times V is velocity-adjusted money.

  • P is the price level: think of the consumer price index, though P would include the prices of other things as well, such as equipment bought by businesses.

  • Y is real GDP, the amount of goods and services produced by the economy that really matter for our material well-being.

  • But P times Y is GDP at current prices before adjusting for inflation. GDP before adjusting for inflation is called nominal GDP. PY, that is nominal GDP, can go up either because real GDP goes up (an increase in Y) or because prices go up (an increase in P).

So what the equation of exchange says is: if there is a lot of money in the economy and that money is working hard, then either the economy will have high real GDP (=Y) or high prices (P). On the other hand, if there is not enough money or money is not working very hard, then either real GDP will be low or prices will be low.
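The accounting in the equation of exchange can be checked in a few lines. This sketch uses made-up round numbers purely for illustration (they are not actual US data):

```python
# Toy illustration of the equation of exchange, MV = PY,
# using hypothetical numbers (not actual data).

M = 2.0   # money supply, in trillions of dollars
V = 8.0   # velocity: how many times a dollar turns over per year

nominal_gdp = M * V   # MV = PY, so velocity-adjusted money IS nominal GDP

P = 1.25  # price level (an index, like the CPI)
Y = nominal_gdp / P   # real GDP implied by the equation of exchange

print(nominal_gdp)    # 16.0 (trillions, at current prices)
print(Y)              # 12.8 (trillions, in base-year dollars)

# Nominal GDP (PY) can rise either because P rises (inflation)
# or because Y rises (real growth); the identity always balances:
assert abs(P * Y - M * V) < 1e-9
```

With these numbers, halving velocity while holding M fixed would halve nominal GDP, forcing some combination of lower prices and lower real output—which is exactly the erratic-swings worry discussed below.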

Milton Friedman, one of the dominant economists of the 20th century, didn’t write a best-selling economics textbook, but had an enormous influence on policy as a public intellectual. (To celebrate what would have been his 100th birthday last year, I annotated links to many of his best YouTube videos here on my blog. They are still well worth watching.) Friedman played a key role in the US’s switch from a draft to a volunteer military and was the intellectual mastermind behind the school choice movement. As an adviser to President Ronald Reagan, he was the gray eminence of Reaganomics. In monetary policy, Friedman proposed having the money supply (M) grow at a constant rate. Since he thought velocity (V) wouldn’t change much, Friedman was, in effect, advocating a constant growth rate of the velocity-adjusted money supply—and therefore a constant growth rate of nominal GDP. The trouble with this idea is that velocity turned out in later years not to be constant—both because it is affected by interest rates and because it is affected by innovations such as ATMs. So keeping the money supply (M) growing at a constant rate would cause erratic swings in the velocity-adjusted money supply (MV), and therefore in nominal GDP.

Enter: the market monetarists

So the spirit of Milton Friedman’s proposal is the idea of keeping velocity-adjusted money, and therefore nominal GDP, growing at a constant rate. In a movement that should make Milton Friedman proud (if he can get internet access in heaven), that is exactly what the “market monetarists” advocate. The importance market monetarists put on the idea of keeping nominal GDP growing at a constant rate is readily apparent from the frequency of the abbreviation NGDP for nominal GDP in some of the posts and tweets by Scott Sumner, David Beckworth, and Lars Christensen. One of the best ways to see the value of paying attention to nominal GDP is to look at a graph of nominal GDP over time in the US, using data from the Federal Reserve Bank of St. Louis.

In all the years since 1955, the most striking feature of the graph is the jog down in nominal GDP since the financial crisis in late 2008. Market monetarists take this jog down in GDP since the financial crisis as an indication that monetary policy has not been anywhere near stimulative enough in the aftermath of the financial crisis. In this, they are absolutely right. The reason the graph of nominal GDP shows the stance of monetary policy so well is that too-tight monetary policy drags down both prices (=P) and real GDP (=Y), which both contribute to nominal GDP (=PY) that is low relative to its trend. Conversely, too-loose monetary policy pushes up both prices and real GDP, which both contribute to nominal GDP that is high relative to its trend.

Here is a corresponding graph for the euro zone minus Germany from wunderkind and Wonkbook blogger Evan Soltas:

This graph for Europe focuses on recent years (the trend is shown by the dashed line) and indicates that, while it might have been more nearly okay for Germany, the European Central Bank’s monetary policy has been too tight for the rest of the euro zone.

The graphs show one of the big attractions of market monetarism: with graphs like these it is easy to get a handle on whether monetary policy has been too loose or too tight. Market monetarists go further to say that if the US Federal Reserve and other central banks committed to do whatever it takes to keep nominal GDP on track, then the financial markets listening to that commitment would react in a way that would help to make it happen. In Fed-speak, the market monetarists emphasize communication policy in the form of forward guidance on the track of nominal GDP the Fed or other central bank is aiming for. To know whether the financial markets are getting the message, market monetarists advocate the creation of assets that would provide a market prediction for nominal GDP much as TIPS (Treasury Inflation Protected Securities) provide a market prediction for inflation.

So far, I have emphasized the positive aspects of market monetarism, because I think market monetarism has, in fact, been an important force for good in our current economic troubles. When a crisis scares people into holding back on spending, the best remedy is monetary stimulus, and graphs of nominal GDP, interpreted as a market monetarist would, speak loudly for exactly the needed monetary stimulus.

Evaluating market monetarism

But now I want to step back and question whether market monetarism is the final answer for monetary policy. There are three things that matter for monetary policy: the temptation, the objective if a central bank can resist the temptation, and the toolkit.

The Temptation. The temptation for monetary policy is that, absent a concern about inflation, GDP is chronically too low for at least three reasons: imperfect competition, taxes, and labor market frictions. The trouble is that raising GDP beyond a certain point, called the natural level of GDP, does raise inflation. And not only does raising GDP beyond a certain point raise inflation, pushing GDP above the natural level for even a year or two raises the level of inflation permanently. The only way to get rid of that extra inflation is to push GDP below the natural level for a while. To put things starkly, after the above-natural GDP of the 1960s, we would still have the double-digit inflation of the 1970s if Americans hadn’t suffered through a big recession that put GDP below its natural level during Reagan’s first term in the early 1980s. We have low inflation today in large measure thanks to the suffering of Americans in the early 1980s.

The objective. “Sinning” by having GDP above the natural level is no fun if it has to be coupled with “repenting” by having GDP below the natural level to avoid having inflation forever higher. There are two reasons the combination of above-natural GDP one year and below-natural GDP another year is a bad deal. First, the pleasure from higher output and employment (to workers, to the taxman, and to firms) is not as big as the pain from lower employment. Second, higher output makes inflation go up more readily than lower output brings inflation back down. Put all this together, and the objective is clear: stay at the natural level of output to avoid the bad deals from any other combination of output in different years that keeps inflation from being higher in the end.

Now, let’s translate the objective of staying at the natural level of output into nominal GDP terms. (It is important in the discussion above that I am thinking of inflation primarily in terms of otherwise slow-to-adjust prices going up faster, rather than in terms of slow-to-adjust wages going up faster. My argument for doing that can be found here.) As long as the natural level of output is growing at a steady rate, keeping real GDP on that steady track will also keep inflation (the rate of increase of prices) steady. If both real GDP and prices are growing at a steady rate, then nominal GDP will be growing at a steady rate. So a steady growth rate of nominal GDP is exactly the right target as long as the natural level of output is growing steadily.
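The growth-rate arithmetic behind this step can be checked directly. With hypothetical steady rates (say 2% inflation and 3% real growth; these are illustrative numbers, not targets from the column), nominal GDP growth is exactly (1 + inflation)(1 + real growth) − 1, which is approximately their sum:

```python
# Hypothetical steady growth rates (illustrative only).
inflation = 0.02    # prices grow 2% per year
real_growth = 0.03  # natural level of output grows 3% per year

# Nominal GDP is PY, so its exact growth rate compounds the two:
ngdp_growth = (1 + inflation) * (1 + real_growth) - 1

print(round(ngdp_growth, 4))  # 0.0506, i.e. roughly 5% per year

# For small rates, the cross term (inflation * real_growth) is tiny,
# so NGDP growth is close to inflation + real growth:
assert abs(ngdp_growth - (inflation + real_growth)) < 0.001
```

The same arithmetic shows why the target should move with technology, as the next paragraph argues: if trend real growth rises from 3% to 4% with inflation held steady, the appropriate nominal GDP growth rate rises by roughly one percentage point too.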

But what if new technology makes the natural level of output go up faster, as the digital revolution did from at least 1995 to 2003? Then real GDP should be going up faster to keep inflation steady. And that means that nominal GDP should also be going up faster. Historically, the Fed has not handled its response to unexpected technology improvements very well, as I discuss in another column, but that doesn’t change the fact that the Fed should have had nominal GDP go up faster after unexpectedly large improvements in technology. (Because the Fed actually let nominal GDP go below trend after technology improvements—instead of above trend as it should have—many people ended up not being able to get jobs after technology improvements.)

By the same token, if technology improves more slowly than it normally does, then both real and nominal GDP should be on a lower track to keep inflation steady and avoid the bad deals from pushing inflation up and then having to bring it back down. Some people have claimed that our current economic slump is a reflection of technology growing more slowly, but careful measures of the behavior of technology and a growing body of research by economists show that is at most a small part of what has been going on since the financial crisis that hit in late 2008. Indeed, if all of the below-trend output we have seen in the last few years were due to more slowly improving technology, we would not have seen inflation fall the way it did after the financial crisis.

The toolkit. Even if I can bring my market monetarist friends around to adjusting the nominal GDP target for what is happening with technological progress, I differ from them in thinking that the tools currently at the Fed’s disposal plus clearly communicating a nominal GDP target are not enough to get the desired result. The argument goes as follows. Interest rates are the price of getting stuff—goods and services—now instead of later. If people are out of work, we want customers to buy stuff now by having low interest rates. Thinking about short-term interest rates like the usual federal funds rate target that the Fed uses, the timing of the low interest rates matters. If everyone knows we are going to have low short-term interest rates in 2016, then it encourages buying in the whole period between now and 2016 in preference to buying after 2016. But to get the economy out of the dumps, we really want people to buy right now, not spread out their purchases over 2013, 2014, and 2015. The lower we can push short-term interest rates, the more we can focus the extra spending on 2013, so that we can have full recovery by 2014, without overshooting and having too much spending in 2015. This is an issue that economist and New York Times columnist Paul Krugman alluded to recently in a column about Japanese monetary policy.

There is only one problem with pushing the short-term interest rate down far enough to focus extra spending right now when we need it most: the way we handle paper currency. The Fed doesn’t dare try to lower the interest rate it targets below zero for fear of causing people to store massive amounts of currency (which effectively earns a zero interest rate). Indeed, most economists, like the Fed, are so convinced that massive currency storage would block the interest rate from going more than a hair below zero that they talk regularly about a zero lower bound on interest rates. The solution is to treat paper currency as a different creature than electronic money in bank accounts, as I discuss in many other columns. (“What Paul Krugman got wrong about Italy’s economy” gives links to other columns on electronic money as well.) If instead of being on a par with electronic money in bank accounts, paper currency is allowed to depreciate in value when necessary, the Fed can lower the short-term interest rate as far as needed, even if that means it has to push the short-term interest rate below zero.

Keeping the economy on target

In the current economic doldrums, breaking through the zero lower bound with electronic money is the first step in ensuring that monetary policy can quickly get output back to its natural level. A better paper currency policy puts the ability to lower the Fed’s target interest rate back in the toolkit. That makes it possible for the Fed to get the timing of extra spending by firms and households right to meet a nominal GDP target—hopefully one that has been appropriately adjusted for the rate of technological progress.

Despite the differences I have with the market monetarists, I am impressed with what they have gotten right in clarifying the confusing and disheartening economic situation we have faced ever since the financial crisis triggered by the collapse of Lehman Brothers on September 15, 2008. If market monetarists had been at the helm of central banks around the world at that time, we might have avoided the worst of the worldwide Great Recession. If the Fed and other central banks learn from them, but take what the market monetarists say with a grain of salt, the Fed can not only pull us out of the lingering after-effects of the Great Recession more quickly, but also better avoid or better tame future recessions.