Jessica Tozer: Boldly Going into a Future Where All Men and Women are Created Equal

In my sermons “UU Visions” and “So You Want to Save the World,” I say that a vision of how things should be is the starting place for trying to get there. Star Trek, in addition to being entertainment, provides one such vision. The following is an excerpt from Jessica Tozer’s post “The Continuing Scientific Relevance of SciFi,” written for the Armed with Science blog.


By the time Star Trek aired its first episode in 1966, Gene Roddenberry, the creator of Star Trek, was already a seasoned military veteran…. He flew planes in World War II, totaling 89 missions until he was honorably discharged at the rank of captain in 1945. During that time he saw people of all types in the military, pulling together for the sake of the mission, patriotism and each other.  It was this social foundation upon which he built his future military premise.

“It speaks to some basic human needs that there is a tomorrow, that it’s not all going to be over in a big flash and a bomb, that the human race is improving, that we have things to be proud of as humans. No, ancient astronauts did not build the pyramids.  Human beings built them because they’re clever and they work hard. Star Trek is about those things.” – Gene Roddenberry

… Roddenberry believed that the future would have evolved as much in science and technology as it would in social reform (miniskirts and beehives notwithstanding).

“If man is to survive, he will have learned to take a delight in the essential differences between men and between cultures. He will learn that differences in ideas and attitudes are a delight, part of life’s exciting variety, not something to fear.”  — Gene Roddenberry

Nichelle Nichols, who played Lt. Uhura (in TOS), often recalls the story about the time she was thinking of quitting Star Trek to return to Broadway, and how it was Martin Luther King, Jr. who talked her out of it.  A fan of Star Trek, MLK Jr. mentioned to Nichelle that her show was one of the few he and his wife would allow their children to watch, and that she was a symbol for reform and change….

So she stayed. I mean, who could say no to that?

As a result, she would go on to film the episode “Plato’s Stepchildren”, the first example of a scripted inter-racial kiss between a white man and black woman on American television.

How’s that for social change?

It was a vision of successful racial integration.  Men, women of all races working together as equals….

Whoopi Goldberg asked to have her role as Guinan on Star Trek TNG.  She has been quoted as saying that she too, loved Star Trek as a kid, and that the show was the first indication that “black people make it to the future”.  Geordi is blind and he flies a spaceship.  Worf is an alien race that was once an enemy, serving proudly on the bridge of the Enterprise.  Data is an android.  I could go on and on.

John L. Davidson on Resolving the House Mystery: The Institutional Realities of House Construction

A Manufactured Home from Manufactured Home Source

John L. Davidson is a Missouri lawyer who has an interesting blog, The Law of Drones, UAVs, UASs, and sUASs, and he is a frequent correspondent. You can find him on Twitter here. John had this intriguing response to my storify post “A House Mystery: Why Does House Construction Go Up in Booms and Down in Recessions?” He was generous enough to agree to share this.


I have attached an article I wrote in 2005 for the South Carolina Bar which implicitly answers your questions. It has to do with banking law and loan-to-value (LTV) ratios.

Anyone in the housing or banking business could have given you an answer in 3 minutes. The answers collected in your series of tweets are wrong.

Understand that I have been representing homebuilders since 1980 and at one time represented one of the largest 10 private builders in the US.

While it may finally have changed with Dodd-Frank (a law too new to know how it is being applied), before the latest Depression, there was no “equity or capital” in home building.

While nominally there was bank lending, in substance what we actually had was merchant banking with banks using construction loans to builders to give the appearance of lending, as I explain in my article.

This was accomplished by a manipulation of the LTV ratios. If you knew what you were doing, and had a good appraisal, a builder only needed 25% of the cost of the raw ground and could borrow all the rest. And by using “presales,” etc. the builder didn’t even need 25%.
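
To make the arithmetic concrete, here is a small hypothetical calculation (the dollar figures are invented, chosen only to illustrate the mechanics described above): the builder’s cash is a fraction of the raw-ground cost alone, the loan funds everything else, and a generous appraisal keeps the reported loan-to-value ratio looking conservative even though the builder has very little at risk.

```python
# Hypothetical illustration of the LTV mechanics described above; all figures invented.
raw_ground = 200_000          # cost of the raw land
development = 300_000         # lot development (roads, utilities, etc.)
construction = 500_000        # building the home
appraised_value = 1_250_000   # a "good appraisal" of the finished project

builder_cash = 0.25 * raw_ground                          # 25% of the raw ground only
loan = raw_ground + development + construction - builder_cash

ltv = loan / appraised_value
equity_share = builder_cash / (raw_ground + development + construction)

print(f"Builder cash in:  ${builder_cash:,.0f}")
print(f"Bank loan:        ${loan:,.0f}")
print(f"Reported LTV:     {ltv:.0%}")      # looks like a conservatively margined loan
print(f"Builder equity:   {equity_share:.0%} of total project cost")
```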

In order to give the appearance of actually lending money, banks would monitor builders’ cash on hand and retained earnings. If a slowdown appeared likely, banks would demand this cash be used to make greater down payments on renewal (home LAC loans were all 12-month, renewed in September, based on sales in the spring and closings during the summer).

My very large client failed in 1990 under this system. It was ironic. Its tax year ended 8/31. It closed its tax year, making profits and paying income taxes. In September its lenders refused to renew loans unless all cash on hand was used to improve LTV ratios. The cash was paid, and on 10/1 the firm failed. Since all lots and homes were subject to the banks’ security interests, my client could not sell a home or lot and collect any money at a closing. The banks foreclosed and then found new builders, using the same plans, etc., to sell homes when things picked up.

In economic substance, the developer was merely an employee of the bank, albeit the highest paid employee. The loan documents also gave the bank complete control over cash, who was paid, when, etc.

The bank acted to call for more cash merely for appearance’s sake with its regulators.

The entire system had nothing to do with interest rates, wages, material prices, etc.  It was strictly a function of banks’ willingness to take risk. They could expand the inventory of homes or shrink it at any time. As they controlled all the lots—it takes 24 months, at best, to move from raw ground to finished lot and 30 months to a finished home—their actions controlled supply and prices.

Miles: Very interesting. But the question remains: why don’t the banks build more houses during a recession, when it is cheaper to build?

John: Now that we have the right question, I have three or four answers, which kind of blend together to explain.

1) Banks do not make their money on the cost of the house. By law, they don’t share in profits. Banks make money on fees for loan origination and on interest. Since they do not share in the upside potential of “build low, sell high,” why take the risk?

2) Banks have minimal capital, so when the economy dives and they have to take loan loss reserves, they don’t have capital to put into housing inventory.

3) Denial and appearances. Bankers do not think like merchant bankers. Lots of them think they are lenders. My God, look at denial elsewhere (even here in my one-person law firm ;<). And then you have what bankers think about all the time: What would the regulators think? If bankers had financed new home construction in 2009, they could have been charged with bank fraud.

I am very serious on this point. The bank fraud statute has been interpreted to make it a crime to make a “foolish” loan. Of course, truth, when it comes to whether a loan is good or bad, is like art: in the eye of the beholder. Read this case and consider whether, if you represented a bank, you would have told them to build homes in 2009.

US v. Ely, 142 F. 3d 1113

http://scholar.google.com/scholar_case?q=%22bank+fraud%22+directors+window&hl=en&as_sdt=4,72,73,78,79,80,86,88,93,114,129,134,135,141,142,143,149,151,156,258,259,260,261,310,311,321,322,323,324,373,374,383&case=16038856462378403756&scilh=0

In part the case says:

Reckless disregard equally satisfies the intent required under § 1344. See Willis v. United States, 87 F.3d 1004, 1007 (8th Cir.1996). What is charged in the indictment is not mere breach of the duty of a fiduciary to act honestly and prudently but a breach of that duty resulting in the reckless disposition of $2.7 million of Statebank funds. The defendants are adequately apprised of the charge of crimes committed in violation of § 1344(a).

We take the district court’s point that if the world price of oil had not fallen, all the troubles that befell the defendants might not have occurred. They might be today rich and respected citizens of Anchorage. They were unlucky in the extreme. Many financial irregularities come to light only in bad times. If the irregularities are criminal, as those charged here are portrayed as being, the defendants cannot excuse criminal conduct by the plea of bad luck.

4) No home equity of developers. I mentioned the LTV cash-down issue. Well, developers in fact never put any money down. Most of the time the “cash” part of the LTV comes from a guaranty on a home (with equity) and a second mortgage. When home sales drop, prices drop, available “equity” drops, and the capacity of banks to lend contracts. This was mentioned a lot by community banks post-2008.

So, there you have it. A very detailed explanation of how the real world works.

I would appreciate your letting me know if you see any oversights in my thinking.

Data on Top Income Shares

This post is reblogged from isomorphismes:

Incomes of the top .01%, 1915–2008 in France and United States

via @JWMason1

from the interactive The Top Incomes Database —

you can select countries such as Argentina, China, Indonesia, Ireland

and you can select upper quantiles like the lower half of the top percent; the .5%–.1%; top .1%; the top 10%–5%; and so on

and you can get income controls, price level indices, number of tax units, number of adults — the things you need to divide by in order to make apples-to-apples comparisons

Wooo, data!

Sticky Prices vs. Sticky Wages: A Debate Between Miles Kimball and Matthew Rognlie

Total Factor Productivity With and Without Utilization Adjustment, Constructed by John Fernald Using Techniques from Susanto Basu, John Fernald, and Miles Kimball’s paper “Are Technology Improvements Contractionary?”

I had a very interesting email discussion with Matthew Rognlie (who blogs at mattrognlie.com) about price rigidity versus wage rigidity, sparked by my storify post “Why the Nominal GDP Target Should Go Up about 1% after a 1% Improvement in Technology,” where the argument hinges on whether prices are sticky, or wages are sticky, or both. The two of us decided to share our discussion with you.


Matthew: 

I’m a grad student at MIT, and I’ve been enjoying your blog a great deal recently – it’s one of the only blogs I know for discussions of business cycle macro from someone with a really good grasp of modern work in the field. (Plus, I want to steal the “supply side liberal” label for myself.)

I was particularly interested to see your recent twitter discussion about price vs wage rigidity. My view is that there is extraordinarily strong evidence for nominal rigidities at the aggregate level – the most compelling being the old Mussa point about real exchange rate fluctuations under pegs vs. floating – but I am not so convinced that it comes from price rather than wage rigidity. In fact, recently I’ve been evolving toward the view that wage rigidity may be more important.

One of the difficulties in macro models of rigidity, I think, is that for reasons of analytical tractability most models tend to focus on one or two sources of rigidity, when in fact we have quite a few compelling candidates (the following categories are not precisely defined):

1. Nominal price rigidities: direct stickiness in nominal prices themselves, or perhaps (somewhat less plausibly in my view) stickiness in a nominal plan for prices, a la Mankiw and Reis.

2. Real price rigidities: various reasons why firms do not adjust prices so much in response to changes in marginal cost (possibly because there is some kind of strategic complementarity driven by the market structure, a la your 1995 aggregator), or why marginal costs themselves do not move much (aside from the obvious impact of wage rigidity, this could be due to the role of intermediates a la Basu).

3. Nominal wage rigidities: direct stickiness in nominal wages, either due to explicit contracts or implicit guarantees of wage stability, particularly in the downward direction. (Uncomfortable questions here about whether measured wages are really allocative, of course – I suspect their allocative role is surprisingly high.)

4. “Real” wage rigidities: either literal stickiness in inflation-adjusted wages a la Blanchard and Gali, or a set of frictions that prevent firms from adjusting wages as necessary, like the complexity of firms’ internal wage structure.

Anyway, my best guess is that all four of these are relevant, probably each to a substantial degree. Since these rigidities multiply, it’s easy to see how we could end up with a very high degree of aggregate nominal rigidity, to a degree that seems implausible when we’re scrutinizing a model with only one or two sources of rigidity.
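
As a stylized illustration of how the rigidities compound (the pass-through fractions below are hypothetical, chosen only to make the arithmetic visible): suppose wage-setting frictions pass only a fraction α of a nominal spending shock into marginal cost within the period, and price-setting frictions pass only a fraction β of any marginal-cost change into prices. Then prices absorb only the product αβ of the shock, and real output has to absorb the rest.

```latex
% Stylized one-period pass-through; \alpha and \beta are hypothetical fractions.
% Nominal spending: m = p + y (all in logs), so \Delta y = \Delta m - \Delta p.
\Delta w = \alpha\,\Delta m, \qquad
\Delta p = \beta\,\Delta w = \alpha\beta\,\Delta m, \qquad
\Delta y = (1 - \alpha\beta)\,\Delta m.
% Example: \alpha = \beta = 1/2 gives \Delta p = \Delta m / 4, so three quarters of the
% nominal shock falls on output, even though each rigidity alone passes half through.
```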

I’m beginning to think that nominal wage rigidity, however, has a disproportionately important role, especially during recessions. There are many reasons why I think this, and I can’t list them all here, but the one I think is particularly interesting is the existence of nominal asymmetry. A large output gap is extraordinarily effective at bringing inflation down from, say, 8% to 2%, but far less effective at bringing about a drop from 2% to -2%. Even Japan, the prototypical example of a country in a prolonged deflationary slump, never saw a sustained rate below -1%. To me, the rapidity of disinflation compared to deflation suggests strong asymmetries in the nature of rigidity – and by far the most plausible candidate for asymmetry is nominal wage rigidity.

Certainly there are some other possible explanations as well. Perhaps inflation rates are very strongly influenced by forward-looking expectations of central bank policy – and while it’s plausible that a committed central bank might be trying to disinflate, no one would ever expect a central bank to actively attempt large-scale deflation. Or perhaps the much quicker rate of disinflation is due to higher nominal flexibility when the rate of inflation is further away from 0. Such alternatives are plausible, but my intuition is that quantitatively, it is very tough to explain the observed asymmetry without recourse to some asymmetry in the rigidity itself.

Another reason I am skeptical of the 80s-90s shift toward exclusively price-side rigidities is that I think some of the commonly stated arguments are not quite right. You mention in the twitter dialogue, for instance, that price rigidities justify a procyclical price level, while wage rigidities would lead to a countercyclical price level. While this is true to some extent, the procyclicality induced by sticky prices is much stronger than the countercyclicality induced by sticky wages. Indeed, in a benchmark model where labor is the only factor of production and there are no real shocks, the real wage under sticky wages is acyclical: it’s just the MPL divided by the markup, and when prices are flexible and firms can freely hit the desired markup, this is unaffected by nominal shocks. Countercyclicality under sticky wages only emerges due to flexibly priced factors of production other than labor.
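
In symbols, the benchmark Matthew describes (assuming a linear production function, so the marginal product of labor is a constant A, with flexible prices set as a markup μ over marginal cost and a sticky nominal wage W) pins the real wage down independently of nominal shocks:

```latex
% Benchmark with labor the only input and constant MPL: Y = A N, so MPL = A.
% Flexible prices are re-set each period as a markup \mu over marginal cost W / MPL:
P = \mu\,\frac{W}{A}
\quad\Longrightarrow\quad
\frac{W}{P} = \frac{A}{\mu}.
% A nominal shock moves W and P in proportion, so W/P is unchanged: the real wage is
% acyclical. Countercyclicality appears only when other, flexibly priced inputs (or
% diminishing returns to labor) make marginal cost move with output.
```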

My sense is that the countercyclicality induced by sticky wages is so weak that, if one introduces moderately sticky prices and market structure amplifying those sticky prices (i.e. a high intermediate share), it is easy to come out with a mildly procyclical real wage. And, indeed, cutting out oil shocks I’d say that the real wage has been just mildly procyclical in the postwar era – so this checks out. (Huang, Liu, and Phaneuf’s 2004 AER is a nice reference that works through some of this more explicitly.)

Anyway, I would love to have some more dialogue with you about the price vs. wage stickiness issue. I’ve been spending a fair amount of time recently thinking about ways to empirically distinguish between the two sources of rigidity, and I’ve also been talking to some of my fellow grad students - who seem to have pretty strong opinions in favor of wage rigidity. Maybe you can set my generation straight!

Miles: 

My biggest objection to nominal wage rigidity is that observed wages are not allocative. It is hard to believe I would just give up on more labor input because the wage is high as opposed to asking my existing workers to work harder for the same pay, OR hire a worker at the high sticky wage level now (giving them a bigger piece of the pie of surplus from a match) and expecting them to understand that they might get a smaller piece of the pie of surplus from a match in the future. In other words, it makes no sense not to get more labor input just because you happen to have a high wage right now. I have no problem with wage rigidity when there is an actual union setting wages in the picture. But if the firm is a unilateral wage setter, and has a lot of influence over pace of work as well as wages, how can there be effective wage stickiness? 

In other words, I think what is really going on inside the firm/worker relationship needs to be spelled out before we too readily agree that there are sticky wages. Unfortunately, most of the models out there are either too rudimentary or too complex and focused on other issues to be of the help we would want in figuring out how effectively rigid wages are. I am just raising the skeptical point that if there is an allocative inefficiency from having the wrong amount of labor input, wouldn’t firms and workers together figure out some way around that? They have a long-term relationship in a way that few customer-supplier relationships can match.

A simpler prediction is that wages should look stickier the more conflict there is in the firm/worker relationship. Where firms and workers get along famously, there should be very little allocative inefficiency and therefore no allocative wage stickiness. Where firms and workers are at loggerheads, there could be a lot of effective wage stickiness.

One other point: one way in which nominal wage rigidity fails is that firms make workers contribute more for medical insurance. If you can cut benefits across the board in that way, and then have raises for some, you have loosened the downward nominal rigidity. Finally, don’t forget my point that the observation that technology improvements are contractionary can only work if there is substantial price stickiness. You can’t get that from wage stickiness alone. So that means price stickiness is a major factor in the economy–though there might also be wage stickiness.

My bottom line has been that if for tractability you have to choose between only price stickiness in a model and only wage stickiness, you are closer to reality with price stickiness. But if you can manage both and can deal with the micro issues of long-term labor relationships and variable effort, then it could be reasonable to have some wage stickiness too.

Matthew: 

Thanks so much for your quick and detailed response. I apologize for my tardiness - I was working on a response Thursday night, but then things around here got a little crazy and I dropped it for a while.

I agree that the key issue is whether nominal wages indeed play an allocative role. (After all, there is plenty of evidence showing that nominal wages themselves are remarkably sticky – this is uncontroversial enough that the key question is whether these payments are allocatively meaningful, or whether they’re just installments in a long-term labor relationship.) And I have to concede that surely, wages are not allocative on a day-by-day basis: if I’m expected to come to work and do a good job every day, I don’t really care that I’m paid $100 on Mondays and $200 on Tuesdays. There is a deeply important sense in which labor relationships differ from spot markets, with incentives provided through long-term bargains rather than explicit transactions.

But I don’t think that the implicit contract between firm and worker is really so thorough. Instead, there are profound commitment and information failures that keep labor relationships far short of the first best. Here’s the most important data point in my view: firms lay off many workers during deep recessions with minimal severance pay. Surely if firms and workers could agree to anything ex ante, they would agree to avoid this: layoff during a recession is a deep blow with massive costs to career, wallet, and psyche. If firms were truly insuring their workers, they would need to fork over much more than a few weeks’ (or months’) pay; except in the lowest tier of jobs, unemployment insurance is not nearly enough to recover from the financial calamity of joblessness.

So intellectually, I agree with your puzzlement that firms and workers would fail to reach an arrangement flexible enough to avert the inefficiencies of wage rigidity. That’s missing some pretty low-hanging fruit! But when the ultimate low-hanging fruit is “don’t cast out large chunks of your workforce onto a brutal job market with only token assistance,” and we’re missing even that, I have to conclude that there are deep inefficiencies in labor relationships that economists do not fully understand. My guess is that commitment problems lead the contractual wage to play a surprisingly large allocative role. In normal times, the continuation surplus from the worker-employer match is enough to efficiently respond to small shocks; but when the benefit from defaulting on the worker-employer arrangement is large enough, firms do not hesitate to do so. And at that point, the allocative price is the contract wage, not the shadow price in a long-term efficient bargain.

Note that there is imperfect commitment on both sides of the relationship. In your hypothetical situation where a firm is happy to hire more workers at the market wage, but its internal wages are rigid and high, one possible solution is to bring in new workers at the high wage with an understanding that they will give up more of the surplus in the future. But workers’ lack of commitment prevents this: in the future, when they’re supposed to receive a below-market wage, they’ll simply jump ship. 

This explains why firms are so reluctant to hire the long-term unemployed. To make up for the poor skills of an out-of-practice worker, they need to pay substantially less, but wage norms prevent them from doing so explicitly. (It’s totally conceivable to me that for the first 6 months, a long-term unemployed worker is only 50% as productive as an employed one. Firms might have some slack in setting entry wages, but most would never dream of paying worker A 50% as much as worker B for the same blue-collar job.) The obvious solution is to pay the new workers a decent salary coming in, under the tacit agreement that they’ll get less in the future to compensate their employers for rescuing them from unemployment. But again, these workers will simply renege on the agreement once they’re able – and this will be pretty easy for them, since their main obstacle on the job market was their joblessness, which has now been fixed.

I am just raising the skeptical point that if there is an allocative inefficiency from having the wrong amount of labor input, wouldn’t firms and workers together figure out some way around that? They have a long-term relationship in a way that few customer-supplier relationships can match.

I think that the comparison here to customer-supplier relationships is very interesting. I agree that at the retail level, customer-supplier pairs tend to be pretty fleeting – I do not have a long-term relationship with Walmart allowing us to pave over the inefficiencies resulting from sticky prices. Relationships higher on the input-output chart, on the other hand, often do last for long periods of time, possibly longer than most jobs. I don’t see why it should be any harder for Toyota to have an efficient long-term bargain with its suppliers than with its workers. And this is very problematic for the sticky price hypothesis, because stickiness at the retail level alone is just not enough. (As several pricing studies have documented, retail price stickiness and cyclicality have a strong negative correlation – many durable good prices are barely sticky at all, which is a huge problem given your results with Barsky and House.)

One other point: one way in which nominal wage rigidity fails is that firms make workers contribute more for medical insurance. If you can cut benefits across the board in that way, and then have raises for some, you have loosened the downward nominal rigidity.

This is a very interesting point, and I’ve heard several variations on it. (Health insurance premiums are the most important by far, but there are also 401(k) matches, etc.) This does indeed seem to be a way for firms to overcome, to a small extent, the norm against wage cuts. But I don’t think firms can get away with too much along this dimension – at most, they might manage to cut effective compensation by a few percentage points, and even this only if they’re in cyclical sectors. I am skeptical that this is enough to diminish the importance of nominal wage rigidity by very much, though of course it will become steadily more important as “fringe” benefits take up more and more of the compensation bundle.

Finally, don’t forget my point that the observation that technology improvements are contractionary can only work if there is substantial price stickiness. You can’t get that from wage stickiness alone. So that means price stickiness is a major factor in the economy–though there might also be wage stickiness.

I am a very, very, big admirer of your work on the purified residual with Basu and Fernald. I have to confess, though, that I give it a different interpretation. I have a strong prior that all “technology shocks” in the data, even when the Solow residual is carefully adjusted, are artifacts of the data – my experience doing empirical work tells me that there will always be residuals with no plausible structural interpretation. And from my admittedly amateurish understanding of technological change, I find it hard to believe that the stochastic process for productivity is really a random walk. Innovations diffuse much too slowly for that – instead, I’d model productivity as a two-dimensional stochastic process, where there are shocks to “technological knowledge”, but these shocks’ influence on productivity is spread out over a long period.

Bottom line: I don’t know what high-frequency variations in the purified Solow residual are really capturing, but whatever it is, I don’t think it has much to do with underlying technological progress. My skepticism owes a lot to the numbers themselves – I’m not sure what was happening in 2009 and 2010, but I didn’t see anything consistent with a huge technological boom in 2009 and then technological regress in 2010, as in the adjusted TFP series maintained by John Fernald. (One can go way back with this. Did TFP really decline in the year 2006? Did it decline for three consecutive quarters in 1996-97? Or for three consecutive quarters in 1994?)

Despite all this skepticism, though, I’m a huge fan of the work. But my interpretation of your results is “look, some meticulous and reasonable adjustments to TFP make the series look completely different, and give it completely different cyclical properties – so let’s be very careful drawing inferences from this stuff”, not “it turns out technology improvements are contractionary after all”. (Honestly, I think that meaningful high-frequency variation in TFP is basically something that Ed Prescott made up, so I’m not sure that “are technology shocks contractionary?” is even a well-posed question.) RBC had been cruising for far too long on basically spurious Solow residual estimates that ignored the overwhelming importance of factor utilization, and it was imperative that some smart macroeconomists do the legwork and show that this was untenable. I’m extremely glad you did, and I cite it whenever I get the chance. But I’m still not willing to treat the high-frequency shocks as structural, which is why I don’t view this as decisive in the sticky prices vs. wages debate.

A few years ago, I read an aside in Stiglitz’s Nobel autobiography that really shook me:

Economists spend enormous energy providing refined testing to their models. Economists often seem to forget that some of the most important theories in physics are either verified or refuted by a single observation, or a limited number of observations (e.g. Einstein’s theory of relativity, or the theory of black holes). 

I really think that this is true: we often do very complicated, nontransparent estimation and testing of models, when in reality one or two carefully selected stylized facts could be much more decisive. My view is that the existence of mass layoffs during recessions with minimal severance, while perhaps not quite decisive, is one of these very important stylized facts - it appears to be a very important predictive failure of the implicit contract model.

Miles: Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model from one with sticky wages at both margins.

Let me defend the Basu, Fernald and Kimball measurement of technology shocks. I agree that the blip up in John Fernald’s series [the graph at the top] in 2009 is an artifact, but that was also a very unusual time and should not signal a big problem with the series at other times. The blip hints that hours and effort requirements went different ways during that episode, despite the theory that says an optimizing firm should move hours and the effort requirements they impose on workers (and the workweek of capital) in synch with each other. A reasonable theoretical explanation is that firms at that juncture put a premium on liquid funds. Putting a precautionary premium on liquid funds, they reduced their head count even below what demand warranted, and in many cases made remaining workers work harder. This runs down worker goodwill, but in that crisis time, firms were willing to run down worker goodwill in order to protect their cash balances. The model treats firms as able to borrow and lend freely, and so omits any liquidity concerns on the part of firms; it therefore would not track that phenomenon.

On your theoretical doubt about the reasonableness of random walk technology, let me first say that a random walk for technology is much more plausible a priori than mean-reverting technology that implies that firms routinely backslide, as if they were forgetting technology. The random walk Susanto Basu, John Fernald and I find has very few negative technology shocks. At least at the annual level for the economy as a whole, technology shocks are mostly a matter of how much technology improves. (At the industry level, there are more negative technology shocks. To the extent these are not reflections of measurement error, we do not understand them very well.)  

In general, I would like to see much more work done to find the stories behind the technology shocks that Susanto Basu, John Fernald and I find in the data. Because we compute the technology shocks at the industry or sectoral level, it should be possible to investigate where the shocks come from. Finding the story behind particular sectoral technology shocks in our data would be a very worthy topic for undergraduate theses, for example. 

Let me talk about the gradual adoption of technology that you emphasize, given the little that we know now about economy-moving technology shocks. My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%. What this means is that the technology shock in our data will be after, and predictable by, news reports of the new technology. At the Bank of Japan and to John Fernald at the San Francisco Fed, I have advocated that central banks should band together to do the staff work necessary to identify and predict macroeconomic technology shocks in advance, by gathering data on that initial introduction and adoption up to 15%. Hobbled as they are by the zero lower bound, central banks around the world have bigger problems to worry about right now, but in more halcyon times, better prediction of macroeconomic technology shocks would be a major part of their job. (In my column about Market Monetarism, NGDP targeting and optimal monetary policy, I talk both about how to eliminate the zero lower bound on nominal interest rates, and about how monetary policy can and should be adjusted for technology shocks.)

Matthew: 

You said:

Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model from one with sticky wages at both margins.

The same problems of imperfect commitment exist on the worker side. How can the effective wage for a new worker be much lower than the contractual wage? Only if the worker promises to compensate the employer by working at a below-market wage in the future. But it’s hard to make the worker keep his end of the implicit bargain – once he has other options, he’ll demand a fair, non-history-dependent wage. (Perhaps out of loyalty to the firm for lifting him out of unemployment, he’ll be a little more pliable. Then again, he may be angry at having worse terms than his coworkers simply because he was unlucky enough to be hired during a recession.)

In general, my view of the employer-employee relationship is that it suffers from profound commitment and information failures. This is the only way to explain phenomena that couldn’t possibly be part of an efficient bargain - like layoffs in a depressed labor market. Most of the time, these failures are mitigated by the existence of surplus in the relationship between worker and firm. This surplus motivates both sides of the relationship to behave well in ways that can’t be codified in a formal contract. But when recession hits, at the contractual wage the surplus for the employer disappears, and it (inefficiently) terminates the relationship.

It’s similar for your hypothetical new worker. Suppose that he’s hired during a recession with the understanding that he’ll give up some of his future earnings. When the future arrives and prosperity returns, the worker won’t see any surplus from an ongoing relationship (other firms will compensate him fairly, without reference to the past), and he’ll terminate it. Any other outcome would be surprising. After all, apparently employers can’t commit to properly insure their workers against layoff, and if anything we’d expect implicit commitment to be easier for employers than workers. 

In practice, neither side can reliably keep costly implicit promises, which means that the allocative wage can’t be too different from the contractual one. Wage stickiness matters on both margins.


Before continuing the debate on TFP, I want to take a step back and discuss the implications for wage rigidity. Initially, you mentioned that the apparent contractionary effect of technology shocks is evidence for price rather than wage rigidity. I took this as given and disputed the validity of measured TFP instead. But after further reflection I think that the former inference is equally problematic - even if the TFP series and impulse responses are flawless, we shouldn’t be so quick to settle on price stickiness.

Let’s take a look at Figure 4 from Basu, Fernald, Kimball (2006). Here, we see that after a 1% technology shock, the GDP deflator falls by 1% and the nominal wage stays almost exactly constant. Superficially, this seems much more consistent with sticky wages than sticky prices. That’s not completely fair, because maybe the measured wage isn’t allocative, and depending on the monetary rule there might be reasons why the price level eventually has to fall. (More on that in a second.)

But there are other problems with the story. The putative reason why technology improvements are contractionary is that the nominal money supply does not immediately adjust to the new level of output, which temporarily forces output below its natural level. (This is where the difference between sticky prices and wages comes in; with sticky wages alone, prices would fall to offset the increase in productivity, and there would be no pressure on the money supply.) In equilibrium, however, this all occurs via the impact of monetary policy on the real interest rate. If the path of the real interest rate doesn’t increase, monetary policy can’t be producing a contractionary outcome - at least not in this case. Yet this doesn’t seem to be happening in Figure 4, where the real fed funds rate has a negative impulse response.

More broadly, I don’t see why technology improvements should be contractionary in any model, at least with a realistic specification of the monetary policy rule. While it’s true that they are contractionary under a money supply or nominal GDP rule, monetary policy during the sample period generally didn’t operate according to such rules. (A possible, brief Volcker exception notwithstanding.) Instead, it’s probably best characterized as following some kind of interest rate rule, perhaps a Taylor rule with inertia. And in that case, technology shocks aren’t contractionary at all.

To explore this further, I fired up Dynare and calculated impulse responses to technology improvements in a basic New Keynesian model, under various combinations of assumptions. (Results are here: http://www.mit.edu/~mrognlie/tech_shock_results.pdf)

 For monetary rules, I examined a basic Taylor rule, an inertial Taylor rule, and a money supply rule. In general, the shock was not contractionary for employment under either Taylor rule; this only happened for the money supply rule. In the case where a t=0 shock was anticipated at t=-1, there was generally a contraction in employment from t=-1 to t=0, which could conceivably produce the impulse responses in BFK. But this happened in a number of cases with wage rigidity too (albeit attenuated by the monetary reaction to a fall in inflation), so it’s not particularly strong evidence on the rigidity issue.
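
Matthew’s Dynare files are not included here, but to give a flavor of the kind of exercise he describes, here is a minimal Python sketch (an illustrative construction, not Matthew’s code, calibration, or results) of the textbook three-equation New Keynesian model under a simple, non-inertial Taylor rule, solved by the method of undetermined coefficients for an AR(1) technology shock:

```python
# Minimal sketch (not Matthew's Dynare code): impulse responses to an AR(1) technology
# shock in the textbook three-equation New Keynesian model with a simple Taylor rule,
# solved by the method of undetermined coefficients. Parameter values are illustrative.
import numpy as np

# Illustrative parameters (assumed, not taken from the correspondence)
beta, sigma, phi = 0.99, 1.0, 1.0      # discount factor, risk aversion, inverse Frisch
kappa = 0.1                            # slope of the New Keynesian Phillips curve
phi_pi, phi_x = 1.5, 0.125             # Taylor-rule coefficients on inflation and the gap
rho = 0.9                              # persistence of the technology process a_t

# Elasticity of the natural (flexible-price) level of output with respect to technology
psi_ya = (1 + phi) / (sigma + phi)
# Guess x_t = psi_x * a_t and pi_t = psi_pi * a_t, then solve IS + Phillips + Taylor.
denom = sigma * (1 - rho) + phi_x + kappa * (phi_pi - rho) / (1 - beta * rho)
psi_x = sigma * psi_ya * (rho - 1) / denom          # output-gap response
psi_pi = kappa * psi_x / (1 - beta * rho)           # inflation response

T = 20
a = rho ** np.arange(T)                # path of technology after a unit shock
x, ppi = psi_x * a, psi_pi * a         # output gap and inflation
y = psi_ya * a + x                     # output = natural output + gap
n = y - a                              # employment, with log-linear Y = A*N
i = phi_pi * ppi + phi_x * x           # nominal rate from the Taylor rule
r = i - rho * ppi                      # ex ante real rate (E_t pi_{t+1} = rho * pi_t)

for t in range(8):
    print(f"t={t}: a={a[t]:+.3f}  gap={x[t]:+.3f}  infl={ppi[t]:+.3f}  "
          f"empl={n[t]:+.3f}  real_rate={r[t]:+.3f}")
# Whether employment falls on impact depends on the calibration and the policy rule;
# richer cases (inertial rules, money rules, anticipated shocks, wage rigidity) need a
# full solver such as Dynare, which is what Matthew used.
```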

Furthermore, with an interest rate rule there was never a persistent decline in prices in response to the shock, except in the presence of wage rigidity. If we stipulate that the Fed followed an interest rate rule during the sample period, then the deflationary impact of a shock in Figure 4 is very powerful evidence for sticky wages.

All in all, it is difficult to reconcile the full set of impulse responses in BFK with any single model. But at the very least, the impulse responses provide just as much evidence for sticky wages as sticky prices. The only hint of sticky prices is the headline finding of a contraction – and the underlying story there is contradicted by the real interest rate decline in Figure 4.

[Administrative note: I’d like to mention the adjusted TFP series we discussed, but I’m not sure that we are using the same series. I was using the utilization-adjusted numbers from a spreadsheet on John Fernald’s website here: http://www.frbsf.org/csip/research/tfp/quarterly_tfp.xls It looks like this doesn’t actually implement all the corrections from your paper, so I don’t want to put too much emphasis on it. Notably, it looks like the utilization-adjusted TFP in his spreadsheet has just as frequent technological regress as regular TFP.]

My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%.

I found this suggestion intriguing. I’d long had a vague intuition that micro-level technology improvements could not possibly produce a TFP series as erratic as the one we see in practice. But I hadn’t given this issue – in particular, the relationship between the S-curve of adoption and TFP growth at the macro level - nearly the same thought as you.

Rather than try to communicate my muddled intuition (which no one, including me, has good reason to trust), I decided to write a simple model to flesh out the relationship between the diffusion of micro-level technology improvements and the time series properties of aggregate productivity. The results are available here: 

http://www.mit.edu/~mrognlie/tfp_micro_brief.pdf

I found that under fairly general assumptions, there is a remarkably straightforward connection between the pace of technology diffusion at the micro level and the autocorrelation of aggregate TFP growth. The autocorrelation implied by the model, however, turns out to be far higher than anything visible in the data.

In particular, using a logistic functional form, suppose we parameterize the diffusion curve such that it takes one year for a technology to go from 12% to 88% adoption. (Pretty fast!) Then the autocorrelation of TFP growth in consecutive quarters should be 0.91. At lags of two and three quarters, it should be 0.70 and 0.46. This contrasts markedly with the values in the actual data, which are near zero – regardless of whether we’re using standard TFP, adjusted TFP, labor productivity, etc.

With a slower – and in my view more realistic – pace of diffusion, the contrast between model and data becomes even more stark. Suppose now that it takes two years for a technology to go from 12% to 88%. Then the autocorrelation of growth at lags of 1, 2, and 3 quarters should be 0.98, 0.91, and 0.82. This is nothing like the data.

The underlying logic of the model is pretty straightforward. It says that if new technologies aren’t adopted instantaneously, but instead are spread smoothly over time, then aggregate TFP growth should inherit some of that smoothness. It shouldn’t be nearly uncorrelated from quarter to quarter, yet that is exactly what we see in practice.
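
As a rough cross-check of the mechanism, here is a short Python sketch (a simplification, not a reproduction of the model in the note linked above): under a stationary Poisson-arrival (“shot noise”) assumption, the autocorrelation of quarterly TFP growth depends only on the overlap of the quarterly increments of the adoption curve, so it can be computed directly from the logistic parameterization quoted in the text, independently of the arrival rate and size distribution of the underlying improvements.

```python
# Sketch of the autocorrelation of aggregate TFP growth when micro-level improvements
# diffuse along a logistic adoption curve. Assumes improvements arrive as a stationary
# Poisson process with i.i.d. sizes ("shot noise"); under that assumption the growth
# autocorrelation reduces to an overlap integral of quarterly increments of the curve.
# This is a simplification of the model in Matthew's note, not a reproduction of it.
import numpy as np

def logistic_cdf(t, s):
    """Adoption share at time t (in years) for a logistic diffusion with scale s."""
    return 1.0 / (1.0 + np.exp(-t / s))

def growth_autocorr(years_12_to_88, lag_quarters, quarter=0.25, step=0.001):
    # Scale chosen so adoption goes from 12% to 88% in the stated number of years.
    z = np.log(0.88 / 0.12)                 # logistic quantile spread between 12% and 88%
    s = years_12_to_88 / (2.0 * z)
    t = np.arange(-15.0, 15.0, step)        # fine grid (in years) over the diffusion window
    # Quarterly increment of the adoption curve: the share of an improvement that lands
    # in the quarter ending at t. This plays the role of the shot-noise kernel.
    h = logistic_cdf(t, s) - logistic_cdf(t - quarter, s)
    k = int(round(lag_quarters * quarter / step))
    num = np.sum(h[k:] * h[:-k]) if k > 0 else np.sum(h * h)
    return num / np.sum(h * h)

for years in (1.0, 2.0):
    acs = [growth_autocorr(years, lag) for lag in (1, 2, 3)]
    print(f"12%->88% in {years:.0f} year(s): quarterly growth autocorr at lags 1-3 =",
          ", ".join(f"{a:.2f}" for a in acs))
```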

It’s possible that the difference between model and data is caused by measurement error. But it would have to be quite severe measurement error, and it’s a suspicious coincidence that the negative correlation induced by measurement error would be exactly enough to change near-1 correlations to near-0!

Regardless, I think this casts some doubt on any interpretation of TFP as the aggregate reflection of micro-level technological progress. And it only strengthens my longstanding suspicion that short-run variability in TFP is dominated by the effects of specification error.

Glenn Ellison's New Book: Hard Math for Elementary School

Glenn Ellison, the microeconomic theorist at MIT, has written a new book for kids who love math. Here is what Susan Athey had to say about it on her Facebook page; she gave me permission to post it:

If your elementary school kids love math–this truly unique book is for you. There’s enough material in here to run a math club for two years, at least. It is really inspiring to see what someone with a deep love of math, an incredible gift for teaching, and years of experience coaching kids in math teams and working with his three brilliant daughters comes up with when he puts his mind to it! Thanks so much for sharing what you’ve created with the rest of us, Glenn Ellison! (And I can’t believe you managed to get this done on top of everything else you are doing!)

My Ec 10 teacher Mary O'Keeffe also gave a rave review of the book on her math circle blog.

Susan pointed out that Glenn also has a book for older kids, Hard Math for Middle School.

Allison Schrager: The Economic Case for the US to Legalize All Drugs

Here is a link to Allison Schrager’s well-written and thoughtful column in favor of drug legalization. My reflections on her column below are not intended to be read on their own, but only after you have read Allison’s column.

I agree with Allison that we need to legalize the production and sale of drugs in order to take revenue, and therefore power, away from criminal gangs. But I think it is important that we do whatever we can to drive down the usage of dangerous drugs consistent with taking the drug trade out of the hands of criminals:

  • Taxes on dangerous drugs as high as possible without encouraging large-scale smuggling;
  • Age limits on drug purchases as strict as is consistent with keeping the drug trade out of the hands of illegal gangs;
  • Free drug treatment, financed by those taxes;
  • Evidence-based public education campaigns against drug use, financed by those taxes;
  • Demonization in the media and in polite company of those who (now legally) sell dangerous drugs;
  • Mandatory, gruesome warnings like those we have for cigarettes;
  • Widespread mandatory drug testing and penalties for use of dangerous drugs–but not for drug possession;
  • Strict penalties for driving under the influence of drugs.

Notice that in order to keep the drug trade from going underground, prosecutors must not be allowed to use evidence that an individual purchased or possessed drugs as evidence that he or she used drugs. Evidence of use would have to come from some form of drug testing or from behavior.

Since drug use would still be illegal, social disapproval of drug use would still be encoded into law. But under this policy, any reemergence of illegal gangs selling drugs would be reason for a course correction liberalizing drug sales to an even greater degree.

Despite all the efforts I advocate above to discourage use of dangerous drugs, legalizing the production, sale and possession of drugs would have serious costs. Those costs have to be set against what I consider the even more serious costs of the drug war itself.

John Stuart Mill: A Remedy for the One-Sidedness of the Human Mind

I have often marvelled at how the subtle philosophy of great thinkers is reduced to a caricature by those who claim those thinkers as an inspiration. People are drawn to simplifications. And therein lies danger. John Stuart Mill writes about how that danger can be reduced by including in the intellectual ecosystem even those who are off-base in their judgments. The following is from On Liberty, Chapter II: “Of the Liberty of Thought and Discussion,” paragraphs 34 and 35: 

It still remains to speak of one of the principal causes which make diversity of opinion advantageous, and will continue to do so until mankind shall have entered a stage of intellectual advancement which at present seems at an incalculable distance. We have hitherto considered only two possibilities: that the received opinion may be false, and some other opinion, consequently, true; or that, the received opinion being true, a conflict with the opposite error is essential to a clear apprehension and deep feeling of its truth. But there is a commoner case than either of these; when the conflicting doctrines, instead of being one true and the other false, share the truth between them; and the nonconforming opinion is needed to supply the remainder of the truth, of which the received doctrine embodies only a part. Popular opinions, on subjects not palpable to sense, are often true, but seldom or never the whole truth. They are a part of the truth; sometimes a greater, sometimes a smaller part, but exaggerated, distorted, and disjoined from the truths by which they ought to be accompanied and limited. Heretical opinions, on the other hand, are generally some of these suppressed and neglected truths, bursting the bonds which kept them down, and either seeking reconciliation with the truth contained in the common opinion, or fronting it as enemies, and setting themselves up, with similar exclusiveness, as the whole truth. The latter case is hitherto the most frequent, as, in the human mind, one-sidedness has always been the rule, and many-sidedness the exception. Hence, even in revolutions of opinion, one part of the truth usually sets while another rises. Even progress, which ought to superadd, for the most part only substitutes, one partial and incomplete truth for another; improvement consisting chiefly in this, that the new fragment of truth is more wanted, more adapted to the needs of the time, than that which it displaces. Such being the partial character of prevailing opinions, even when resting on a true foundation, every opinion which embodies somewhat of the portion of truth which the common opinion omits, ought to be considered precious, with whatever amount of error and confusion that truth may be blended. No sober judge of human affairs will feel bound to be indignant because those who force on our notice truths which we should otherwise have overlooked, overlook some of those which we see. Rather, he will think that so long as popular truth is one-sided, it is more desirable than otherwise that unpopular truth should have one-sided asserters too; such being usually the most energetic, and the most likely to compel reluctant attention to the fragment of wisdom which they proclaim as if it were the whole.

Thus, in the eighteenth century, when nearly all the instructed, and all those of the uninstructed who were led by them, were lost in admiration of what is called civilization, and of the marvels of modern science, literature, and philosophy, and while greatly overrating the amount of unlikeness between the men of modern and those of ancient times, indulged the belief that the whole of the difference was in their own favour; with what a salutary shock did the paradoxes of Rousseau explode like bombshells in the midst, dislocating the compact mass of one-sided opinion, and forcing its elements to recombine in a better form and with additional ingredients. Not that the current opinions were on the whole farther from the truth than Rousseau’s were; on the contrary, they were nearer to it; they contained more of positive truth, and very much less of error. Nevertheless there lay in Rousseau’s doctrine, and has floated down the stream of opinion along with it, a considerable amount of exactly those truths which the popular opinion wanted; and these are the deposit which was left behind when the flood subsided. The superior worth of simplicity of life, the enervating and demoralizing effect of the trammels and hypocrisies of artificial society, are ideas which have never been entirely absent from cultivated minds since Rousseau wrote; and they will in time produce their due effect, though at present needing to be asserted as much as ever, and to be asserted by deeds, for words, on this subject, have nearly exhausted their power.

Jonah Berger: Going Viral

Like many other readers, I was fascinated by Richard Dawkins’s introduction of the idea of a meme in his book The Selfish Gene.

Wikipedia gives a good discussion of memes:

A meme (/ˈmiːm/ meem)[1] is “an idea, behavior, or style that spreads from person to person within a culture.”[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα, Greek pronunciation: [míːmɛːma] mīmēma, “imitated thing”, from μιμεῖσθαι mimeisthai, “to imitate”, from μῖμος mimos, “mime”)[4] and it was coined by the British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][5] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catch-phrases, fashion, and the technology of building arches.[6]

Proponents theorize that memes may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influence a meme’s reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[7]

A field of study called memetics[8] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model.

Internet memes are a subset of memes in general. Wikipedia has a good discussion of this particular subset of memes as well:

An Internet meme may take the form of an image, hyperlink, video, picture, website, or hashtag. It may be just a word or phrase, including an intentional misspelling. These small movements tend to spread from person to person via social networks, blogs, direct email, or news sources. They may relate to various existing Internet cultures or subcultures, often created or spread on sites such as 4chan, Reddit, and numerous others.

An Internet meme may stay the same or may evolve over time, by chance or through commentary, imitations, parody, or by incorporating news accounts about itself. Internet memes can evolve and spread extremely rapidly, sometimes reaching world-wide popularity within a few days. Internet memes usually are formed from some social interaction, pop culture reference, or situations people often find themselves in. Their rapid growth and impact has caught the attention of both researchers and industry.[3] Academically, researchers model how they evolve and predict which memes will survive and spread throughout the Web. Commercially, they are used in viral marketing where they are an inexpensive form of mass advertising.

But sometimes our image of an internet meme is too narrow. A tweet can easily become an internet meme if it is retweeted and modified. Thinking of bigger chunks of text, even a blog post sometimes both spreads in its original form and inspires other blog posts that can be considered mutated forms of the original blog post. And thinking just a bit smaller than a tweet, a link to a blog post can definitely be a meme, coevolving with different combinations of surrounding text recommending or denigrating what is at the link–sometimes just the surrounding text of a tweet and sometimes the surrounding text of an entire blog post that flags what is at the link. So those of us who care how many people read what we have to say have reason to be interested in the principles that determine when a tweet, a post, or a link will be contagious or not. In other words, what does it take to go viral?

Jonah Berger’s book Contagious gives answers based on research Jonah has done as a marketing professor at the Wharton School. Jonah identifies six dimensions of a message that make it more likely to spread. Here are my notes on what Jonah has to say about those six dimensions, for which Jonah gives the acronym STEPPS:

1. Social Currency: We share things that make us look good.

Jonah emphasizes three ways to make people want to share something in order to look good.

  • Inner Remarkability: making clear how remarkable something is. Two examples of remarkability are the Snapple facts on the inside of Snapple lids and the video series “Will It Blend?” showing Blendtec blenders grinding up just about anything, the more entertaining the better. Note how what is remarkable about the Blendtec blenders is brought out and dramatized in a non-obvious and entertaining way.
  • Leverage Game Mechanics: Make a good game out of being a fan.  Here the allure of becoming the Foursquare mayor of some establishment is a great example. 
  • Make People Feel Like Insiders: Here, counterintuitively, creating a sense of scarcity, exclusivity, and the need for inside knowledge can make something more attractive. Of course, if you can get away with the illusion of scarcity and exclusivity rather than the reality, more people can be brought on board.

2. Triggers: Top of mind, tip of tongue.

Here the key idea is to tie what you are trying to promote to some trigger that will happen often in someone’s environment.

  • Budweiser’s “Wassup” campaign might seem uninspired, but it tied Budweiser beer to what was a common greeting at the time among a key demographic of young males.  
  • The “Kitkat and Coffee” campaign tied Kitkat chocolate bars to a very frequent occurrence in many people’s days: drinking coffee.
  • The lines “Thinking about Dinner? Think About Boston Market” helped trigger thoughts of Boston Market at a time of day at which they hadn’t previously had as much business.  
  • The trigger can even be the communications of one’s adversary, as in the anti-smoking ads riffing off of the Marlboro Man commercials:

3. Emotion: When we care, we share.

The non-obvious finding here is that high arousal emotions–regardless of whether they are positive or negative–encourage sharing more than low arousal emotions such as contentment and sadness. Indeed, arousal is so important for sharing that experiments indicate even the physiological arousal induced by making people run in place can cause people to share an article more often.

To find the emotional core of an idea, so that emotional core can be highlighted, Jonah endorses the technique of asking why you think people are doing something, then asking “why is that important” three times. Of course, this could also be seen as a way to try to get at the underlying utility function: utility functions are implemented in important measure by emotions. 

Jonah recommends Google’s “Paris Love” campaign as an example of how to demonstrate that something seemingly prosaic, such as search, can connect to deeper concerns.

4. Public: Built to show, built to grow.

Here I like the story of how Steve Jobs and his marketing expert Ken Segall decided that making the Apple logo on a laptop look right-side up to other people when the laptop is in use was more important than making it look right-side up to the user at the moment of figuring out which way to turn the laptop to open it up. Jonah points out how the way the color yellow made them stand out helped make Livestrong wristbands a thing in the days before Lance Armstrong was disgraced, and how the color white made iPod headphones more noticeable than black would have.

Jonah also makes interesting points about how talking about certain kinds of bad behavior, by making it seem as if everyone is doing it, can actually encourage that bad behavior. Think of Nancy Reagan’s “Just Say No” antidrug campaign.

An alternative is to highlight the desired behavior instead.

5. Practical Value: News you can use.

This dimension is fairly straightforward. But Jonah gives this interesting example of a video about how to shuck corn for corn on the cob that went viral in an older demographic where not many things go viral. He also points to the impulse to share information of presumed practical value as part of the reason it is so hard to eradicate the scientifically discredited idea that vaccines cause autism.

6. Stories: Information travels under the guise of idle chatter. 

Here, Jonah uses the example of the Trojan horse, which works well on many levels: the horse brought Greek warriors into Troy, and the story of the Trojan horse brings the idea “never trust your enemies, even if they seem friendly” deep into the soul. He points out just how much information is carried along by good stories.

But Jonah cautions that to make a story valuable, what you are trying to promote has to be integral to the story. Crashing the Olympics and doing a belly flop makes a good story, but the advertising on the break-in diver’s outfit was not central to the story and was soon forgotten. By contrast, for Panda brand cheese, the Panda backing up the threat “Never say no to Panda” is a memorable part of the stories of Panda mayhem in the cheese commercials, and Dove products at least have an integral supporting role to play in Dove’s memorable Evolution commercial illustrating the extent to which makeup and photoshopping are behind salient images of beauty in our environment.

Applied Memetics for the Economics Blogger

Here are a few thoughts about how to use Jonah’s insights in trying to make a mark in the blogosphere and tweetosphere.

1. Social Currency

Inner Remarkability: I find the effort to encapsulate the inner remarkability of each post or idea in a tweet an interesting intellectual challenge. One good way to practice this is a tip I learned from Bonnie Kavoussi: try to find the most interesting quotation from someone else’s post and put that quotation in your tweet. That will win you friends among the authors of the posts, earn you more Twitter followers (remember that the author of the post will have a strong urge to retweet if you are advertising his or her post well), and hone your skills for when you want to advertise your own posts on Twitter.

Leverage Game Mechanics: In the blogosphere and on Twitter, we are associating with peers. Much of what they want is similar to what we want–to be noticed, to get our points across, to get new ideas. So helping them win their game is basically a matter of being a good friend or colleague. For example, championing people’s best work and being generous in giving credit will win points.

Make People Feel Like Insiders: When writing for an online magazine (Quartz in my case), I feel I need to write as if the readers are reading me for the first time. By contrast, a blog is tailor-made to make readers feel like insiders. So it is valuable to have an independent blog alongside any writing I do for an online magazine.

2. Triggers

A common piece of advice to young tenure-track assistant professors is to do enough of one thing to become known for that thing. This is consistent with Jonah’s advice about triggers. Having people think of you every time a particular topic comes up is a good way to make sure people think of you. That doesn’t mean you need to be a Johnny-one-note, but it does mean the danger of being seen as a Johnny-one-note is overrated. Remember that readers can easily get variety by diversifying their reading between you and other bloggers. So they will be fine even if your blog specializes in one particular niche, or a small set of niches.

On Twitter, one way to associate yourself with a particular trigger is to use a hashtag. In addition to the hashtag #ImmigrationTweetDay that Adam Ozimek, Noah Smith and I created for Immigration Tweet Day, I have made frequent use of the hashtag #emoney, and I created the hashtag #nakedausterity.  

3. Emotion

Economists often want to come across as cool and rational. But many of the most successful bloggers have quite a bit of emotion in their posts and tweets. I think Noah Smith’s blog Noahpinion is a good example of this. Noahpinion delivers humor, indignation, awe, and even the sense of anxiety that comes from watching him attack and wondering how the object of his attack will respond.  

One simple aid to getting an emotional kick that both Noah and I use is to put illustrations at the top of most of our blog posts. I think more blogs would benefit from putting well-chosen illustrations at the top of posts.    

4. Public

The secret to making a blog more public is simple: Twitter. Everything on Twitter is public, and every interaction with someone who has followers you don’t have is a chance for someone new to realize you exist. Of course, you need to be saying something that will make people want to follow you once they notice that you exist.

Facebook helps too. I post links to my blog posts on my Facebook wall and have friended many economists. 

Finally, the dueling blog posts in an online debate tend to attract attention.

5. Practical Value

In “Top 25 All-Time Posts and All 22 Quartz Columns in Order of Popularity, as of May 5, 2013,” I point out the two posts that are slowly and steadily gaining on posts that were faster out of the blocks:

I think the reason is practical value. Economists love to understand the economy, but they also have to teach school. They are glad for help and advice for that task.  

6. Stories

Let me make the following argument:

  • a large portion of our brains is devoted to trying to understand the people in our social network;
  • so the author of a blog is much more memorable than the blog itself, and
  • a memorable story about a blog is almost always coded in people’s brains as a memorable story about the author of the blog.  

Thus, to make a good story for your blog, it is important to “let people in.” That is, it pays off to let people get to know you. The challenge is then to let people get to know you without making them think you are so “full of yourself” that they flee in disgust. Economists as a rule have a surprisingly high tolerance for arrogance in others. But if you want non-economists to stick with you, you might want to inject some notes of humility into what you write.

One simple way to let people get to know you without seeming arrogant is to highlight a range of other people you think highly of. The set of people you think highly of is very revealing of who you are. (Of course, the set of people you criticize and attack is also very revealing of who you are, but not in the same way.)

Summary 

Jonah Berger’s book Contagious is one of the few books in my life where I got to the end and then immediately and eagerly went back to the beginning to read it all over again for the second time. (I can’t remember another one.) Of course, it is a relatively short book. But still, it took a combination of great stories, interesting research results, and practical value for me as a blogger to motivate me to read it twice in quick succession. I recommend it. And I would be interested in your thoughts about how to get a better chance of having blog posts and tweets go viral.         

Further Reading

Jonah recommends two other books with insights into what makes an idea successful:

  • Malcolm Gladwell’s The Tipping Point “is a fantastic read. But while it is filled with entertaining stories, the science has come a long way since it was released over a decade ago.”
  • Chip Heath and Dan Heath's Made to Stick: Why Some Ideas Survive and Others Die: “…although the Heaths’ book focuses on making ideas ‘stick’–getting people to remember them–it says less about how to make products and ideas spread, or getting people to pass them on.”

Quartz #23—>QE or Not QE: Even Economists Need Lessons in Quantitative Easing, Bernanke Style

Link to the Column on Quartz

Here is the full text of my 23rd Quartz column, “QE or Not QE: Even Economists need lessons in quantitative easing, Bernanke style,” now brought home to supplysideliberal.com. It was first published on May 14, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© May 14, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2014. All rights reserved.


Martin Feldstein is an eminent economist. In addition to being a prolific researcher, he served as head of US president Ronald Reagan’s Council of Economic Advisers, and made the National Bureau of Economic Research (NBER) what it is today—an institution that Paul Krugman called “the old-boy network of economics made flesh.” (I am one of the many economists who belong to the NBER.) But Feldstein was wrong when he wrote in the Wall Street Journal last week, “The time has come for the Fed to recognize that it cannot stimulate growth,” in an op-ed headlined “The Federal Reserve’s Policy Dead End: Quantitative easing hasn’t led to faster growth. A better recovery depends on the White House and Congress.”

“Quantitative easing” or “QE” is when a central bank buys long-term or risky assets instead of purchasing short-term safe assets. One possible spark for Feldstein’s tirade against quantitative easing was the Fed’s announcement on May 1 that it “is prepared to increase or reduce the pace of its purchases” of long-term government bonds and mortgage-backed securities depending on the economic situation. This contrasts with the Fed’s announcement on March 20, which had suggested only that the Fed would either keep the rate of purchases the same or scale them back, depending on circumstances. Philadelphia Fed Chief Charles Plosser described this as the Fed trying “to remind everybody” that it “has a dial that can move either way.”

So the Fed sounds more ready to turn to QE when needed than it did before.

Feldstein’s argument boils down to saying, “The Fed has done a lot of QE, but we are still hurting, economically. Therefore, QE has failed.” But here he misunderstands the way QE works. The special nature of QE means that the headline dollar figures for quantitative easing overstate how big a hammer any given program of QE is. Once one adjusts for the optical illusion that the headline dollar figures create for QE, there is no reason to think QE has a different effect than one should have expected. To explain why, let me lay out again the logic of one of the very first posts on my blog, “Trillions and Trillions: Getting Used to Balance Sheet Monetary Policy.” In that post I responded to Stephen Williamson, who misunderstood QE (or “balance sheet monetary policy,” as I call it there) in a way similar to Martin Feldstein.

To understand QE, it helps to focus on interest rates rather than quantities of assets purchased. Regular monetary policy operates by lowering safe short-term interest rates, and so pulling down the whole structure of interest rates: short-term, long-term, safe and risky. The trouble is that there is one safe interest rate that can’t be pulled down without a substantial reform to our monetary system: the zero interest rate on paper currency. (See “E-Money: How Paper Currency is Holding the US Recovery Back.”) There is no problem pulling other short-term safe interest rates (say on overnight loans between banks or on 3-month Treasury bills) down to that level of zero, but trying to lower other short-term safe rates below zero would just cause people to keep piles of paper currency to take advantage of the current government guarantee that you can get a zero interest rate on paper currency, which is higher than a negative interest rate.

As long as the zero interest rate on paper currency is left in place by the way we handle paper currency, the Fed’s inability to lower safe, short-term interest rates much below zero means that beyond a certain point it can’t use regular monetary policy to stimulate the economy any more. Once the Fed has hit the “zero lower bound,” it has to get more creative. What quantitative easing does is to compress—that is, squish down—the degree to which long-term and risky interest rates are higher than safe, short-term interest rates. The degree to which one interest rate is above another is called a “spread.” So what quantitative easing does is to squish down spreads. Since all interest rates matter for economic activity, if safe short-term interest rates stay at about zero, while long-term and risky interest rates get pushed down closer to zero, it will stimulate the economy. When firms and households borrow, the markets treat their debt as risky. And firms and households often want to borrow long term. So reducing risky and long-term interest rates makes it less expensive to borrow to buy equipment, hire coders to write software, build a factory, or build a house.
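
To make the spread logic concrete, here is a minimal numerical sketch. The rates and the size of the QE effect are illustrative assumptions, not estimates from the column or from any data:

```python
# Stylized illustration with made-up numbers: how QE can lower borrowing costs
# even when the safe short-term rate is pinned at the zero lower bound.

safe_short_rate = 0.000   # stuck near zero because paper currency pays 0%
term_spread     = 0.025   # extra yield demanded for lending long term (assumed)
risk_spread     = 0.020   # extra yield demanded for bearing default risk (assumed)

borrowing_rate_before = safe_short_rate + term_spread + risk_spread

# QE buys long-term and risky assets, squishing down the spreads.
qe_spread_compression = 0.010  # hypothetical combined reduction in the spreads

borrowing_rate_after = borrowing_rate_before - qe_spread_compression

print(f"borrowing rate before QE: {borrowing_rate_before:.1%}")  # 4.5%
print(f"borrowing rate after QE:  {borrowing_rate_after:.1%}")   # 3.5%
```

Even with the safe short rate unchanged at zero, compressing the spreads lowers the rate at which firms and households actually borrow, which is the margin that matters for equipment, software, factories, and houses.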

Some of the confusion around quantitative easing comes from the fact that in the kind of economic models that come most naturally to economists–models in which everyone in sight is making perfect, deeply-insightful decisions given their situation, and financial traders can easily borrow as much as they want to–quantitative easing would have no effect. In those “frictionless” models, financial traders would just do the opposite of whatever the Fed does with quantitative easing, and cancel out all the effects. But it is important to understand that in these frictionless models, because quantitative easing gets canceled out, it has no important effects at all: it doesn’t stimulate the economy, but by the same token it also has no side effects and no dangers. Any possible dangers of quantitative easing only occur in a world where quantitative easing actually works to stimulate the economy!

Now it should not surprise anyone that the world we live in does have frictions. People in financial markets do not always make perfect, deeply-insightful decisions: they often do nothing when they should have done something, and something when they should have done nothing. And financial traders cannot always borrow as much as they want, for as long as they want, to execute their bets against the Fed, as Berkeley professor and prominent economics blogger Brad DeLong explains entertainingly and effectively in “Moby Ben, or, the Washington Super-Whale: Hedge Fundies, the Federal Reserve, and Bernanke-Hatred.” But there is an important message in the way quantitative easing gets canceled out in frictionless economic models. Even in the real world, large doses of quantitative easing are needed to get the job done, since real-world financial traders do manage to counteract some of the effects of quantitative easing as they go about their normal business of trying to make good returns. And “large doses” means Fed purchases of long-term government bonds and mortgage-backed bonds that run into trillions and trillions of dollars. (As I discuss in “Why the US Needs Its Own Sovereign Wealth Fund,” quantitative easing would be more powerful if it involved buying corporate stocks and bonds instead of only long-term government bonds and mortgage-backed bonds.) It would have been a good idea for the Fed to do two or three times as much quantitative easing as it did early on in the recession, though there are currently enough signs of economic revival that it is unclear how much bigger the appropriate dosage is now.

Does QE work? Most academic and central bank analyses argue that it does. (See for example, work by Arvind Krishnamurthy and Annette Vissing-Jorgensen of Northwestern University, and work by Signe Krogstrup, Samuel Reynard and Barbara Sutter of the Swiss National Bank.) But I am also impressed by the decline in the yen since people began to believe that Japan would undertake an aggressive new round of QE. One yen is an aluminum coin that can float on the surface tension of water. Since September, it has floated down from being worth 1.25 cents (US) to less than a penny now. Exchange rates respond to interest rates, so the large fall in the yen is a strong hint that QE is working for Japan, as I predicted it would when I advocated massive QE for Japan back in June 2012.

Sometimes friction is a negative thing—something that engineers fight with grease and ball bearings. But if you are walking on ice across a frozen river, the little bit of friction still there between your boots and the ice allows you to get to the other side. It takes a lot of doing, but quantitative easing uses what friction there is in financial markets to help get us past our economic troubles. The folks at the Fed are not perfect, but they know how quantitative easing works better than Martin Feldstein does. If we had to depend on the White House and Congress for economic recovery, we would be in deep, deep trouble. It is a good thing we have the Fed.

Electronic Money: The Powerpoint File

UPDATE January 10, 2018: My presentation "Breaking Through the Zero Lower Bound" has evolved into a pair of presentations "21 Misconceptions about Eliminating the Zero Lower Bound (or Any Effective Lower Bound on Interest Rates)" and "Implementing Deep Negative Interest Rates: A Guide." The links are the latest versions of the presentations, as I gave them at Boston University on November 16, 2018. 

Videos: 

For more on this topic (including a 5-minute interview), see my bibliographical post “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide.”

Other than presentations at the University of Michigan, where I worked for 29 years, and the University of Colorado Boulder, where I am now, here is a list of the places where I have given or am scheduled to give these presentations or closely related presentations:

  • Bank of England, May 20, 2013

  • Bank of Japan, June 18, 2013

  • Keio University, June 21, 2013

  • Japan’s Ministry of Finance, June 24, 2013

  • University of Copenhagen, September 5, 2013

  • National Bank of Denmark, September 6, 2013

  • Ecole Polytechnique (Paris), September 10, 2013

  • Paris School of Economics, September 12, 2013

  • Banque de France, September 13, 2013

  • Federal Reserve Board, November 1, 2013

  • US Treasury, May 19, 2014

  • European Central Bank, July 7, 2014

  • Bundesbank, July 8, 2014

  • Bank of Italy, July 11, 2014

  • Swiss National Bank, July 15, 2014

  • Society for the Advancement of Economic Theory Conference in Tokyo, August 20, 2014

  • Princeton University, October 13, 2014

  • Federal Reserve Bank of New York, October 15, 2014

  • New York University, October 17, 2014

  • European University Institute (Florence), October 29, 2014

  • Qatar Central Bank and Texas A&M University at Qatar joint seminar, November 17, 2014

  • International Monetary Fund, May 4, 2015

  • London conference on “Removing the Zero Lower Bound on Interest Rates” sponsored by the Imperial College Business School, the Brevan Howard Centre for Financial Analysis, the Centre for Economic Policy Research (CEPR) and the Swiss National Bank, panel on Economics, Financial, Legal and Practical Issues, May 18, 2015

  • Bank of England: Keynote Address for “Chief Economists’ Workshop– The Future of Money,” May 19, 2015

  • Bank of Finland, May 20, 2015

  • Sveriges Riksbank, May 21, 2015

  • Uppsala University, May 25, 2015

  • Norges Bank, May 28, 2015

  • Bank of Canada, June 11, 2015

  • Reserve Bank of New Zealand, July 22, 2015

  • New Zealand Treasury, August 5, 2015

  • Lake Forest University, September 1, 2015

  • Federal Reserve Bank of Chicago, September 3, 2015

  • American Economic Association Meetings, San Francisco, January 4, 2016

  • IMF, European Section, June 3, 2016

  • Brookings Institution, Hutchins Center Conference, June 6, 2016

  • St. Louis Fed Conference, September 23, 2016

  • Bank of Japan, September 27, 2016

  • Bank of Thailand, September 29, 2016

  • Bank Indonesia, October 3, 2016

  • Chulalongkorn University, Bangkok, October 5, 2016

  • Bank of Korea, October 6, 2016

  • Bank of Japan, October 7, 2016

  • Minneapolis Fed Conference, October 18-19, 2016

  • Sveriges Riksbank (Stockholm), October 31-November 1, 2016

  • Austrian National Bank, November 2-4, 2016

  • Bank of Israel, November 6-7, 2016

  • Brussels Conference on “What is the impact of negative interest rates on Europe’s financial system? How do we get back?” sponsored by the European Capital Markets and Institute (ECMI), the Centre for European Policy Studies (CEPS) and the Brevan Howard Centre for Financial Analysis, November 9, 2016

  • Czech National Bank, November 10-11, 2016

  • European Central Bank, November 14-16, 2016

  • Bank of International Settlements, November 17, 2016

  • Swiss National Bank, November 18, 2016

  • Kansas City Fed, December 20, 2016

  • Bank of Canada/Central Bank Research Association Conference, July 20, 2017

  • Denver Association of Business Economists, August 16, 2017

  • De Nederlandsche Bank (Amsterdam), September 28, 2017

  • Bruegel Conference (Brussels), October 2, 2017

  • Bank of Spain, October 3-5, 2017

  • Bank of Portugal, October 9-10, 2017

  • Harvard University, November 12-13, 2018

  • Brown University, November 14, 2018

  • MIT, November 15, 2018

  • Boston University, November 16, 2018

  • Bundesbank, October 28, 2020 (virtual)

  • Reserve Bank of Australia, November 16, 2020 (virtual)

  • Shandong University, April 15 and September 22, 2024 (virtual)

(If you want to know more about the personal side of these trips, see my post “Electronic Money: The Travelogue.”)

Below is what I had to say in two early entries in this post that went beyond simply giving the itinerary of my worldwide, multi-year “Breaking Through the Zero Lower Bound” tour:

June 17, 2013: I went to the Bank of England to talk about how to eliminate the zero lower bound back in May. Tomorrow I will give this presentation (download) at the Bank of Japan:

I think my online readers will also find it interesting. It includes arguments that I have not made online yet in any detail.

The associated paper is very preliminary (in particular, it has very long quotations about the history of thought that need to be cut down to size, and needs to be revised along the lines of the Powerpoint file), but here is the current draft of the paper “Breaking Through the Zero Lower Bound” (download).

Update, June 29, 2013: My electronic money presentations on June 18 at the Bank of Japan, June 21 at Keio University, and June 24 at Japan’s Ministry of Finance were well-received. The fact of those seminars makes the part I have italicized of the International Herald Tribune’s summary of Leika Kihara’s (gated) article “Japan policy appears set, like it or not” false:

The central bank is said to have no new stimulus plan in the works, nor is it pondering alternative measures.

Though I argue in my presentation that an electronic yen policy is superior to the massive quantitative easing that I advocated for Japan on June 29, 2012, because an electronic yen allows monetary policy to steer the economy without inflation, some version of an electronic yen is also the plausible fall-back policy if massive quantitative easing does not work.

For the record, the type of quantitative easing I advocated for Japan involved massive purchases of corporate stocks and bonds–“assets chosen to have nominal interest rates as far as possible above zero.” Purchases of corporate stocks and bonds should be much more powerful than purchases of Japanese government bonds. Though the Bank of Japan has the legal authority to purchase corporate stocks and bonds, there is a concern (perhaps misplaced) about the possible consequences of the risk for the Bank of Japan’s net worth. An alternative would be for Japan to push further in increasing the risky asset holdings in the Government Pension Investment Fund. That would be in line with what I write in my column “Why the US Needs Its Own Sovereign Wealth Fund.”

My Father's Trash Can

My father, Edward Lawrence Kimball, is 82 years old to my 52. To honor him this Father’s Day, I wanted to give you an example of his wry sense of humor. (I warned him a while back that this was coming, so he won’t be totally surprised.)

In her February 2, 2012 post “Move Over Harvard: BYU Law Has Got Memorial Trash Cans,” in the online magazine Above the Law, Staci Zaretsky reports receiving an email saying:

While other law schools memorialize their noteworthy alumni with their name on a moot court room or on a co-curricular competition, BYU has stooped to a new low and now memorializes its alumni on trash cans.

Staci then continues:

The trash can isn’t dedicated to an alumnus, but rather, a professor emeritus of the law school. Professor Edward L. Kimball, who retired in 1995, used to teach criminal law, and was one of the original members of the BYU Law faculty. Here’s how the law school has chosen to honor Professor Kimball… [See the illustration above.]

The plaque on the Little Garbage Pail That Could reads: “The Edward L. Kimball Memorial Trash Can.” How freaking insulting. Professor Kimball is 82 years old, and according to his list of publications, he seems to be the master of all things Mormon. And all you’re going to give him is a trash can?

It took until the next day for Staci to figure out what was going on. She got this response from Brigham Young University’s J. Reuben Clark Law School: 

Professor Kimball was noted for two things: First, he had a dry sense of humor; and second, he did not take himself too seriously.

When he and his wife, Bee, gave a generous gift to the law school, the development officer indicated that there would be a plaque honoring them on the wall near the Moot Court Room. Professor Kimball objected and indicated that he would prefer to have a large, gold trash can placed in the foyer of the law school with a very small plaque stating: The Edward L. Kimball Memorial Trash Can.

Professor and Mrs. Kimball hoped that the “trash can” would bring a smile to students or visitors who read the plaque.

Pieria Debate on the UK Productivity Puzzle

Miles Kimball, Jonathan Portes, Frances Coppola and Tomas Hirst discuss the mysterious case of the UK’s falling productivity. This post first appeared on Pieria on May 24, 2013.

Miles Kimball: A big issue that the Bank of England is worried about is that the UK may not be far below the natural level of output at all. They’re very interested in the productivity puzzle and I’m hoping they’ll put out a prize for research into it one of these days.

Tomas Hirst: We’ve had some interesting discussions on Pieria about how we can explain the productivity puzzle – including how it might reflect miscalculations of output and growing problems in the UK labour market. 

Jonathan Portes: Do they really think that we’re not far below the natural level of output at the moment?

Miles Kimball: Well opinions differ. I think it’s safe to say there’s a very active debate on exactly that question.

Tomas Hirst: The minutes of the MPC’s most recent meeting suggest that there’s something of a schism opening up in the committee between those worrying about the risks of further QE purchases (who are currently in the majority) and those worrying about the continued weakness of output. Do you think it reflects this debate?

Miles Kimball: Pieria really ought to talk about this more. For many other economies it seems crystal clear to almost everybody with an ounce of sense that output is below the natural level but I don’t know if it’s true in the UK. It’s not even clear to me, I just don’t know. 

The broadest sphere of the debate should really be trying to get a hold of that productivity puzzle. In addition to measures that could add to aggregate demand for the UK I think a great deal of work needs to be done to assess whether it really is below the natural level of output or not.

Tomas Hirst: I think in the UK people have been too focused on headline figures of inflation and unemployment, for example. What people have missed is the fact that core inflation has been below target throughout the crisis, which might itself justify further stimulus.

Miles Kimball: Well remember that the new remit from the Treasury says that the MPC should look through government-administered prices.

Tomas Hirst: Yes, but could that change in mandate not be a response to this problem of growing doubts in the usefulness of headline figures?

Miles Kimball: What I’m saying is that the remit could suggest that the BoE is being asked to look more at core inflation. It’s actually a little bit of a mixed message as they’re being told that their target should remain linked to headline inflation but are being told to look through the headline numbers at what’s happening to core inflation. Pushing them towards core inflation is important. 

On the productivity puzzle, there are things that can be solved by expansion and things that can’t. In the recession the government is not as willing to let firms go bankrupt so you get a long tail of unproductive firms carrying on. If you convince everybody that you’ve got all the aggregate demand you want you can allow for more bankruptcies, which will mean some of the puzzle will automatically correct.

Frances Coppola: I’ve heard that argument a lot but I’m not 100% convinced. You’ve got to look through the recession to see what the long-term secular trend is.

Over the last few years we’ve seen a huge increase in self-employment and at the same time self-employed incomes have crashed. That can’t be to do simply with unproductive companies.

Jonathan Portes: It’s an aggregate demand problem.

Frances Coppola: Exactly!

Jonathan Portes: Actually it was part of David Blanchflower’s recent paper that discussed a growing number of people in the UK who want to work more hours and can’t get them. If you’re self-employed and you want to work more hours the only thing that is stopping you is a lack of demand.

Frances Coppola: Speaking from personal experience, as I am self-employed and have been for a long time in a business that requires specialist skills, things were fine until two years ago. Since then demand has collapsed. And it’s not just singing. I’ve never seen the situation out there this bad.

Further Reading

Part 1: Pieria debate on electronic money and negative interest rates 

How Can We Explain Britain’s Productivity Puzzle? – Pieria

Perverse incentives and productivity – Coppola Comment

Can Intangible Investment Explain The UK Productivity Puzzle – Professor Jonathan Haskel


Instrumental Tools for Debt and Growth

A Joint Post by Miles Kimball and Yichuan Wang

Yichuan (see photo above) and I talked through the analysis and ideas for this post together, but the words and the particulars of the graphs are all his. I find what he has done here very impressive. On his blog, where this post first appeared on June 4, 2013, the last two graphs are dynamic and show more information when you hover over what you are interested in. This post is a good complement to our analysis in our second joint Quartz column: “Autopsy: Economists looked even closer at Reinhart and Rogoff’s data–and the results might surprise you,” which pushes a little further along the lines we laid out in “For Sussing Out Whether Debt Affects Future Growth, the Key is Carefully Taking Into Account Past Growth.”


In a recent Quartz column, we found that high levels of debt do not appear to affect future rates of growth. In the Reinhart and Rogoff (henceforth RR) data set on debt and growth for a group of 20 advanced economies in the post WW-II period, high levels of debt to GDP did not predict lower levels of growth 5 to 10 years in the future. Notably, after controlling for various intervals of past growth, we found that there was a mild positive correlation between debt to GDP and future GDP growth.

In a companion post, we address some of the time window issues with plots showing how adjusting for past growth can reverse any observed negative correlation between debt and future growth. In this post, we want to address the possibility that future growth can lead to high debt, and explain our use of instrumental variables to control for this possibility.

One major possibility for this relationship is that policy makers are forward looking, and base their decisions about whether to have high or low debt on their expectations of future events. For example, if policy makers know that a recession is coming, they may increase deficit spending to mitigate the upcoming negative shock to growth. Even though debt may have increased growth, this would have been observed as lower growth following high debt. On the other hand, perhaps expectations of high future growth make policy makers believe that the government can afford to increase debt right now. Even if debt had a negative effect on growth, the data would show a rapid rise in GDP growth following the increase in debt.

Apart from government tax and spending decisions informed by forecasts of future growth, there are other mechanical relationships between debt and growth that are not what one should be looking for when asking whether debt has a negative effect on growth. For example, a war can increase debt, but the ramp-up of the war makes growth high at first, predictably lower after the ramp-up is done, and predictably lower still when the war winds down. So there is an increase in debt coupled with predictions for GDP growth different from non-war situations. None of this has to do with debt itself causing a different growth rate, so we would like to abstract from it.

To do so, we need to extract the part of the debt to GDP statistic that is based on whether the country runs a long term high debt policy, and to ignore the high debt that arises because of changes in expected future outcomes or because of relatively mechanical short-run aggregate demand effects of government purchases as a component of GDP. Econometrically, this approach is called instrumental variables, and would involve using a set of variables, called instruments, that are uncorrelated with future outcomes to predict current debt.

Since we are considering future outcomes, a natural choice for instrument would be the lagged value of the debt to GDP ratio. As can be seen below, debt to GDP does not jump around very much. If debt is high today, it likely will also be high tomorrow. Thus lagged debt can predict future debt. Also, since economic growth is notoriously difficult to forecast, the lagged debt variable should no longer reflect expectations about future economic growth.   

By using lagged debt and growth as instruments, we isolate the part of current debt that reflects a long-term high-debt policy, rather than debt driven by short-run forecasts or other mechanical pressures. We plot the resulting slopes on debt to GDP in the charts below, both for future growth in years 0-5 and for future growth in years 5-10. For the raw data and computations, consult the public dropbox folder.
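
For readers who want to see the mechanics, here is a minimal two-stage least squares sketch along these lines. It is an illustration only, not our actual code: the file name and column names (debt_gdp, debt_gdp_lag, growth_lag, growth_next5) are hypothetical placeholders, and the real computations are in the dropbox folder linked above.

```python
# Minimal 2SLS sketch: instrument current debt/GDP with lagged debt/GDP and
# lagged growth, then ask whether instrumented debt predicts future growth.
# File and column names are placeholders for illustration.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("debt_growth_panel.csv")  # hypothetical country-year panel
df = df.dropna()  # drop rows where the lags or future growth are missing

# First stage: predict current debt/GDP from its own lag and lagged growth.
X1 = sm.add_constant(df[["debt_gdp_lag", "growth_lag"]])
first_stage = sm.OLS(df["debt_gdp"], X1).fit()
df["debt_gdp_hat"] = first_stage.fittedvalues

# Second stage: regress growth over the next five years on instrumented debt.
X2 = sm.add_constant(df[["debt_gdp_hat"]])
second_stage = sm.OLS(df["growth_next5"], X2).fit()

# Note: hand-rolled 2SLS gives the right coefficient, but its standard errors
# ignore the first stage; a packaged IV estimator corrects them.
print(second_stage.params["debt_gdp_hat"])
```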

From these graphs, we can make some observations.

First, almost all the coefficients, across all the different lags and fixed effects, are positive. Since these results are small, we should not put too much weight on statistical significance. However, it should be noted that the plain results, OLS and IV, for both growth periods are all statistically significant at at least the 95% confidence level, and the IV estimates for the 5-10 year period in particular are significant at the 99% confidence level.

The one negative estimate, the OLS estimate with country fixed effects, has a standard error with absolute size twice as large as the actual slope estimate. Moreover, country fixed effects are difficult to interpret because they pivot the analysis from looking at high debt versus low debt countries towards analyzing a country’s indebtedness relative to its long run average.

These results are striking considering the robustness with which Reinhart and Rogoff present the argument that debt causes low growth in their 2012 JEP article. Yet after controlling for past growth, instead of finding a weaker negative correlation, we find that the estimated relationship between current debt and future growth is weakly positive.

Second, when taking out year fixed effects, there is almost no effect of debt on future growth. Econometrically, year fixed effects take out the average debt level in every year, which leaves us analyzing whether being more heavily indebted relative to a country’s peers in that year has an additional effect on growth. Because this component is consistently smaller than the regular IV coefficient, this suggests that, for the advanced countries in the sample, it’s absolute, not relative, debt that matters.
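
As a rough sketch of what taking out year fixed effects amounts to in practice (again with a hypothetical file and hypothetical column names, not our actual ones), one can subtract each year’s cross-country average before running the regression, so that only a country’s position relative to its peers in that year remains:

```python
# Illustrative demeaning-by-year, one simple way to absorb year fixed effects.
# File and column names are placeholders for illustration.
import pandas as pd

df = pd.read_csv("debt_growth_panel.csv")  # hypothetical country-year panel

# Subtract each year's average across countries, leaving only the deviation of
# a country's debt and future growth from its peers in that same year.
for col in ["debt_gdp", "growth_next5"]:
    df[col + "_dev"] = df[col] - df.groupby("year")[col].transform("mean")

# Regressing growth_next5_dev on debt_gdp_dev then asks whether being more
# indebted than one's peers in a given year predicts different future growth.
```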

This should be no surprise. As most recently articulated in RR’s open letter to Paul Krugman, much of the argument against high debt levels relies on a fear that a heavily indebted country becomes “suddenly unable to borrow from international capital markets because its public and/or private debts that are a contingent public liability are deemed unsustainable.” The credit crunch stifles growth and governments are forced to engage in self-destructive cutbacks just in order to pay the bills. At its core, this is a story about whether the government can pay back its liabilities. But whether or not liabilities are sustainable should depend on the absolute size of the liabilities, not just on whether the liabilities are large relative to those of a country’s peers.

Now, our conclusion is not without limitations. As Paul Andrew notes, the RR data set used focuses on “20 or so of the most healthy economies the world has ever seen,” thus potentially adding a high level of selection bias.

Additionally, we have restricted ourselves to the RR data set of advanced countries in the post WW-II period. The 2012 Reinhart and Rogoff paper considered episodes of debt overhang going back to the 1800s, and the results for that longer period are likely very different. However, it is likely that prewar government policies, such as the gold standard and the lack of independent monetary authorities, contributed to the pain of debt crises. Thus our timescale does not detract from the implication that debt has a limited effect on future growth in modern advanced economies.

In their New York Times response to Herndon et al., Reinhart and Rogoff “reiterate that the frontier question for research is the issue of causality.” And at this frontier, our Quartz column, Dube’s work on varying regression time frames, and these companion posts all suggest that causality from debt to growth is much smaller than previously thought.

David Blanchflower: Mark Carney Has a Major Task Ahead at the Bank of England

In this article in the Independent, David Blanchflower compares the task facing former head of the Bank of Canada Mark Carney as head of the Bank of England to the task facing former University of Michigan Provost Phil Hanlon as head of Dartmouth. For the Record, from what I saw, I thought Phil Hanlon did a great job at the University of Michigan as Budget Associate Dean of the College of Literature Science and Arts, as Budget Associate Provost of the university, and finally as Provost. The University of Michigan weathered tough financial times well, which was only possible because our leaders were good at distinguishing fat from muscle.