My bottom line has been that if for tractability you have to choose between only price stickiness in a model and only wage stickiness, you are closer to reality with price stickiness. But if you can manage both and can deal with the micro issues of long-term labor relationships and variable effort, then it could be reasonable to have some wage stickiness too.
I agree that the key issue is whether nominal wages indeed play an allocative role. (After all, there is plenty of evidence showing that the nominal wages themselves are remarkably sticky — this is uncontroversial enough that the key question is whether these payments are meaningful, or whether they’re installments in a long-term labor relationship.) And I have to concede that surely, wages are not allocative on a day-by-day basis: if I’m expected to come to work and do a good job every day, I don’t really care that I’m paid $100 on Mondays and $200 on Tuesdays. There is a deeply important sense in which labor relationships differ from spot markets, with incentives provided through long-term bargains rather than explicit transactions.
But I don’t think that the implicit contract between firm and worker is really so thorough. Instead, there are profound commitment and information failures that keep labor relationships far short of the first best. Here’s the most important data point in my view: firms lay off many workers during deep recessions with minimal severance pay. Surely if firms and workers could agree to anything ex ante, they would agree to avoid this: layoff during a recession is a deep blow with massive costs to career, wallet, and psyche. If firms were truly insuring their workers, they would need to fork over much more than a few weeks’ (or months’) pay; except in the lowest tier of jobs, unemployment insurance is not nearly enough to recover from the financial calamity of joblessness.
So intellectually, I agree with your puzzlement that firms and workers would fail to reach an arrangement flexible enough to avert the inefficiencies of wage rigidity. That’s missing some pretty low-hanging fruit! But when the ultimate low-hanging fruit is “don’t cast out large chunks of your workforce onto a brutal job market with only token assistance”, and we’re missing even that, I have to conclude that there are deep inefficiencies in labor relationships that economists do not fully understand. My guess is that commitment problems lead the contractual wage to play a surprisingly large allocative role. In normal times, the continuation surplus from the worker-employer match is enough to efficiently respond to small shocks; but when the benefit from defaulting on the worker-employer arrangement is large enough, firms do not hesitate to do so. And at that point, the allocative price is the contract wage, not the shadow price in a long-term efficient bargain.
Note that there is imperfect commitment on both sides of the relationship. In your hypothetical situation where a firm is happy to hire more workers at the market wage, but its internal wages are rigid and high, one possible solution is to bring in new workers at the high wage with an understanding that they will give up more of the surplus in the future. But workers’ lack of commitment prevents this: in the future, when they’re supposed to receive a below-market wage, they’ll simply jump ship.
This explains why firms are so reluctant to hire the long-term unemployed. To make up for the poor skills of an out-of-practice worker, they need to pay substantially less, but wage norms prevent them from doing so explicitly. (It’s totally conceivable to me that for the first 6 months, a long-term unemployed worker is only 50% as productive as an employed one. Firms might have some slack in setting entry wages, but most would never dream of paying worker A 50% as much as worker B for the same blue-collar job.) The obvious solution is to pay the new workers a decent salary coming in, under the tacit agreement that they’ll get less in the future to compensate their employers for rescuing them from unemployment. But again, these workers will simply renege on the agreement once they’re able — and this will be pretty easy for them, since their main obstacle on the job market was their joblessness, which has now been fixed.
I am just raising the skeptical point that if there is an allocative inefficiency from having the wrong amount of labor input, wouldn’t firms and workers together figure out some way around that? They have a long-term relationship in a way that few customer-supplier relationships can match.
I think that the comparison here to customer-supplier relationships is very interesting. I agree that at the retail level, customer-supplier pairs tend to be pretty fleeting — I do not have a long-term relationship with Walmart allowing us to pave over the inefficiencies resulting from sticky prices. Relationships higher on the input-output chart, on the other hand, often do last for long periods of time, possibly longer than most jobs. I don’t see why it should be any harder for Toyota to have an efficient long-term bargain with its suppliers than with its workers. And this is very problematic for the sticky price hypothesis, because stickiness at the retail level alone is just not enough. (As several pricing studies have documented, retail price stickiness and cyclicality have a strong negative correlation — many durable good prices are barely sticky at all, which is a huge problem given your results with Barsky and House.)
One other point: one way in which nominal wage rigidity fails is that firms make workers contribute more for medical insurance. If you can cut benefits across the board in that way, and then have raises for some, you have loosened the downward nominal rigidity.
This is a very interesting point, and I’ve heard several variations on it. (Health insurance premiums are the most important by far, but there are also 401(k) matches, etc.) This does indeed seem to be a way for firms to overcome, to a small extent, the norm against wage cuts. But I don’t think firms can get away with too much along this dimension — at most, they might manage to cut effective compensation by a few percentage points, and even this only if they’re in cyclical sectors. I am skeptical that this is enough to diminish the importance of nominal wage rigidity by very much, though of course it will become steadily more important as “fringe” benefits take up more and more of the compensation bundle.
Finally, don’t forget my point that the observation that technology improvements are contractionary can only work if there is substantial price stickiness. You can’t get that from wage stickiness alone. So that means price stickiness is a major factor in the economy—though there might also be wage stickiness.
I am a very, very, big admirer of your work on the purified residual with Basu and Fernald. I have to confess, though, that I give it a different interpretation. I have a strong prior that all “technology shocks” in the data, even when the Solow residual is carefully adjusted, are artifacts of the data — my experience doing empirical work tells me that there will always be residuals with no plausible structural interpretation. And from my admittedly amateurish understanding of technological change, I find it hard to believe that the stochastic process for productivity is really a random walk. Innovations diffuse much too slowly for that — instead, I’d model productivity as a two-dimensional stochastic process, where there are shocks to “technological knowledge”, but these shocks’ influence on productivity is spread out over a long period.
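One minimal way to formalize this two-dimensional view (the notation here is illustrative, not anything from the original work): let technological knowledge follow a random walk, and let measured productivity respond to each knowledge shock only gradually, through distributed-lag weights that trace out an adoption curve.

```latex
% Knowledge K_t follows a random walk; its shocks \varepsilon_t hit
% measured productivity A_t only gradually, through adoption weights
% W_j that rise from 0 toward 1 (an S-curve in the lag j):
K_t = K_{t-1} + \varepsilon_t,
\qquad
A_t = \sum_{j \ge 0} W_j \, \varepsilon_{t-j},
\qquad
0 \le W_0 \le W_1 \le \dots \,, \quad W_j \to 1 .
% Productivity growth then loads on the adoption density
% w_j = W_j - W_{j-1}:
\Delta A_t = \sum_{j \ge 0} w_j \, \varepsilon_{t-j} .
```

In the long run $A_t$ inherits the full random-walk shock to knowledge, but its short-run dynamics are governed by the shape of the adoption density $w_j$, which is exactly where the slow diffusion enters.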
Bottom line: I don’t know what high-frequency variations in the purified Solow residual are really capturing, but whatever it is, I don’t think it has much to do with underlying technological progress. My skepticism owes a lot to the numbers themselves — I’m not sure what was happening in 2009 and 2010, but I didn’t see anything consistent with a huge technological boom in 2009 and then technological regress in 2010, as in the adjusted TFP series maintained by John Fernald. (One can go way back with this. Did TFP really decline in the year 2006? Did it decline for three consecutive quarters in 1996-97? Or for three consecutive quarters in 1994?)
Despite all this skepticism, though, I’m a huge fan of the work. But my interpretation of your results is “look, some meticulous and reasonable adjustments to TFP make the series look completely different, and give it completely different cyclical properties — so let’s be very careful drawing inferences from this stuff”, not “it turns out technology improvements are contractionary after all”. (Honestly, I think that meaningful high-frequency variation in TFP is basically something that Ed Prescott made up, so I’m not sure that “are technology shocks contractionary?” is even a well-posed question.) RBC had been cruising for far too long on basically spurious Solow residual estimates that ignored the overwhelming importance of factor utilization, and it was imperative that some smart macroeconomists do the legwork and show that this was untenable. I’m extremely glad you did, and I cite it whenever I get the chance. But I’m still not willing to treat the high-frequency shocks as structural, which is why I don’t view this as decisive in the sticky prices vs. wages debate.
Economists spend enormous energy providing refined tests of their models. They often seem to forget that some of the most important theories in physics are verified or refuted by a single observation, or a limited number of observations (e.g., Einstein’s theory of relativity, or the theory of black holes).
I really think that this is true: we often do very complicated, nontransparent estimation and testing of models, when in reality one or two carefully selected stylized facts could be much more decisive. My view is that the existence of mass layoffs during recessions with minimal severance, while perhaps not quite decisive, is one of these very important stylized facts - it appears to be a very important predictive failure of the implicit contract model.
Miles: Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model than one with sticky wages at both margins.
Let me defend the Basu, Fernald, Kimball measurement of technology shocks. I agree that the blip up in John Fernald’s series [the graph at the top] in 2009 is an artifact, but that was also a very unusual time and should not signal a big problem with the series at other times. The blip hints that hours and effort requirements went in different directions during that episode, despite the theory that says an optimizing firm should move hours, the effort requirements it imposes on workers, and the workweek of capital in sync with one another. A reasonable theoretical explanation is that firms at that juncture put a precautionary premium on liquid funds: they reduced their head count even below what demand warranted and, in many cases, made the remaining workers work harder. This runs down worker goodwill, but in that crisis, firms were willing to run down worker goodwill in order to protect their cash balances. The model treats firms as able to borrow and lend freely, and so omits any liquidity concerns on the part of firms; it would not track that phenomenon.
On your theoretical doubt about the reasonableness of random walk technology, let me first say that a random walk for technology is much more plausible a priori than mean-reverting technology that implies that firms routinely backslide, as if they were forgetting technology. The random walk Susanto Basu, John Fernald and I find has very few negative technology shocks. At least at the annual level for the economy as a whole, technology shocks are mostly a matter of how much technology improves. (At the industry level, there are more negative technology shocks. To the extent these are not reflections of measurement error, we do not understand them very well.)
In general, I would like to see much more work done to find the stories behind the technology shocks that Susanto Basu, John Fernald and I find in the data. Because we compute the technology shocks at the industry or sectoral level, it should be possible to investigate where the shocks come from. Finding the story behind particular sectoral technology shocks in our data would be a very worthy topic for undergraduate theses, for example.
Let me talk about the gradual adoption of technology that you emphasize, given the little that we know now about economy-moving technology shocks. My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%. What this means is that the technology shock in our data will come after, and be predictable by, news reports of the new technology. At the Bank of Japan and to John Fernald at the San Francisco Fed, I have advocated that central banks should band together to do the staff work necessary to identify and predict macroeconomic technology shocks in advance, by gathering data on that initial introduction and adoption up to 15%. Hobbled as they are by the zero lower bound, central banks around the world have bigger problems to worry about right now, but in more halcyon times, better prediction of macroeconomic technology shocks would be a major part of their job. (In my column about Market Monetarism, NGDP targeting and optimal monetary policy, I talk both about how to eliminate the zero lower bound on nominal interest rates, and about how monetary policy can and should be adjusted for technology shocks.)
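A back-of-the-envelope sketch shows the timing this story implies. Calibrating a logistic adoption curve so that the 15%-to-85% run takes exactly one year (the calibration is illustrative, not estimated from data), we can ask how long the hard-to-see tails take:

```python
import math

def logistic_span(a, b, k):
    """Quarters for logistic adoption F(t) = 1/(1+exp(-k*t))
    to move from share a to share b."""
    logit = lambda p: math.log(p / (1.0 - p))
    return (logit(b) - logit(a)) / k

# Calibrate k so that 15% -> 85% adoption takes one year (4 quarters).
k = logistic_span(0.15, 0.85, 1.0) / 4.0

# Under a pure logistic, how long do the tails take?
early = logistic_span(0.001, 0.15, k)   # initial spread, 0.1% -> 15%
late = logistic_span(0.85, 0.999, k)    # final mop-up, 85% -> 99.9%
print(round(early, 1), round(late, 1))  # each tail takes roughly 6 quarters
```

Even a pure logistic puts about a year and a half in each tail, and real adoption curves probably have fatter tails than that; either way, there is a meaningful pre-shock window in which the staff work described above could detect the technology before it shows up in TFP.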
Matthew: You said:
Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model than one with sticky wages at both margins.
The same problems of imperfect commitment exist on the worker side. How can the effective wage for a new worker be much lower than the contractual wage? Only if the worker promises to compensate the employer by working at a below-market wage in the future. But it’s hard to make the worker keep his end of the implicit bargain — once he has other options, he’ll demand a fair, non-history-dependent wage. (Perhaps out of loyalty to the firm for lifting him out of unemployment, he’ll be a little more pliable. Then again, he may be angry at having worse terms than his coworkers simply because he was unlucky enough to be hired during a recession.)
In general, my view of the employer-employee relationship is that it suffers from profound commitment and information failures. This is the only way to explain phenomena that couldn’t possibly be part of an efficient bargain - like layoffs in a depressed labor market. Most of the time, these failures are mitigated by the existence of surplus in the relationship between worker and firm. This surplus motivates both sides of the relationship to behave well in ways that can’t be codified in a formal contract. But when recession hits, at the contractual wage the surplus for the employer disappears, and it (inefficiently) terminates the relationship.
It’s similar for your hypothetical new worker. Suppose that he’s hired during a recession with the understanding that he’ll give up some of his future earnings. When the future arrives and prosperity returns, the worker won’t see any surplus from an ongoing relationship (other firms will compensate him fairly, without reference to the past), and he’ll terminate it. Any other outcome would be surprising. After all, apparently employers can’t commit to properly insure their workers against layoff, and if anything we’d expect implicit commitment to be easier for employers than workers.
In practice, neither side can reliably keep costly implicit promises, which means that the allocative wage can’t be too different from the contractual one. Wage stickiness matters on both margins.
Before continuing the debate on TFP, I want to take a step back and discuss the implications for wage rigidity. Initially, you mentioned that the apparent contractionary effect of technology shocks is evidence for price rather than wage rigidity. I took this as given and disputed the validity of measured TFP instead. But after further reflection I think that the former inference is equally problematic - even if the TFP series and impulse responses are flawless, we shouldn’t be so quick to settle on price stickiness.
Let’s take a look at Figure 4 from Basu, Fernald, Kimball (2006). Here, we see that after a 1% technology shock, the GDP deflator falls by 1% and the nominal wage stays almost exactly constant. Superficially, this seems much more consistent with sticky wages than sticky prices. That’s not completely fair, because maybe the measured wage isn’t allocative, and depending on the monetary rule there might be reasons why the price level eventually has to fall. (More on that in a second.)
But there are other problems with the story. The putative reason why technology improvements are contractionary is that the nominal money supply does not immediately adjust to the new level of output, which temporarily forces output below its natural level. (This is where the difference between sticky prices and wages comes in; with sticky wages alone, prices would fall to offset the increase in productivity, and there would be no pressure on the money supply.) In equilibrium, however, this all occurs via the impact of monetary policy on the real interest rate. If the path of the real interest rate doesn’t increase, monetary policy can’t be producing a contractionary outcome - at least not in this case. Yet this doesn’t seem to be happening in Figure 4, where the real fed funds rate has a negative impulse response.
More broadly, I don’t see why technology improvements should be contractionary in any model, at least with a realistic specification of the monetary policy rule. While it’s true that they are contractionary under a money supply or nominal GDP rule, monetary policy during the sample period generally didn’t operate according to such rules. (A possible, brief Volcker exception notwithstanding.) Instead, it’s probably best characterized as following some kind of interest rate rule, perhaps a Taylor rule with inertia. And in that case, technology shocks aren’t contractionary at all.
For monetary rules, I examined a basic Taylor rule, an inertial Taylor rule, and a money supply rule. In general, the shock was not contractionary for employment under either Taylor rule; this only happened for the money supply rule. In the case where a t=0 shock was anticipated at t=-1, there was generally a contraction in employment from t=-1 to t=0, which could conceivably produce the impulse responses in BFK. But this happened in a number of cases with wage rigidity too (albeit attenuated by the monetary reaction to a fall in inflation), so it’s not particularly strong evidence on the rigidity issue.
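For concreteness, the two interest-rate rules have the standard textbook form (the coefficients below are common illustrative values, not the exact ones used in my experiments):

```python
def taylor(pi, y, r_star=2.0, pi_star=2.0, phi_pi=0.5, phi_y=0.5):
    """Basic Taylor rule: nominal rate = neutral real rate + inflation,
    plus responses to the inflation gap and the output gap (in percent)."""
    return r_star + pi + phi_pi * (pi - pi_star) + phi_y * y

def inertial_taylor(i_prev, pi, y, rho=0.8, **kwargs):
    """Inertial version: partial adjustment of last period's rate
    toward the basic-rule target."""
    return rho * i_prev + (1.0 - rho) * taylor(pi, y, **kwargs)

# At target inflation (2%) and a zero output gap, both rules settle
# at the neutral nominal rate of 4%:
print(taylor(2.0, 0.0))                # 4.0
print(inertial_taylor(4.0, 2.0, 0.0))  # 4.0
```

A money supply rule, by contrast, fixes a path for the money stock and lets the interest rate clear the money market, which is exactly why it can make a positive technology shock contractionary while the interest-rate rules do not.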
Furthermore, with an interest rate rule there was never a persistent decline in prices in response to the shock, except in the presence of wage rigidity. If we stipulate that the Fed followed an interest rate rule during the sample period, then the deflationary impact of a shock in Figure 4 is very powerful evidence for sticky wages.
All in all, it is difficult to reconcile the full set of impulse responses in BFK with any single model. But at the very least, the impulse responses provide just as much evidence for sticky wages as sticky prices. The only hint of sticky prices is the headline finding of a contraction — and the underlying story there is contradicted by the real interest rate decline in Figure 4.
[Administrative note: I’d like to mention the adjusted TFP series we discussed, but I’m not sure that we are using the same series. I was using the utilization-adjusted numbers from a spreadsheet on John Fernald’s website here: http://www.frbsf.org/csip/research/tfp/quarterly_tfp.xls
It looks like this doesn’t actually implement all the corrections from your paper, so I don’t want to put too much emphasis on it. Notably, it looks like the utilization-adjusted TFP in his spreadsheet has just as frequent technological regress as regular TFP.]
My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%.
I found this suggestion intriguing. I’d long had a vague intuition that micro-level technology improvements could not possibly produce a TFP series as erratic as the one we see in practice. But I hadn’t given this issue — in particular, the relationship between the S-curve of adoption and TFP growth at the macro level — nearly as much thought as you have.
Rather than try to communicate my muddled intuition (which no one, including me, has good reason to trust), I decided to write a simple model to flesh out the relationship between the diffusion of micro-level technology improvements and the time series properties of aggregate productivity. The results are available here: http://www.mit.edu/~mrognlie/tfp_micro_brief.pdf
I found that under fairly general assumptions, there is a remarkably straightforward connection between the pace of technology diffusion at the micro level and the autocorrelation of aggregate TFP growth. The autocorrelation implied by the model, however, turns out to be far higher than anything visible in the data.
In particular, using a logistic functional form, suppose we parameterize the diffusion curve such that it takes one year for a technology to go from 12% to 88% adoption. (Pretty fast!) Then the autocorrelation of TFP growth in consecutive quarters should be 0.91. At lags of two and three quarters, it should be 0.70 and 0.46. This contrasts markedly with the values in the actual data, which are near zero — regardless of whether we’re using standard TFP, adjusted TFP, labor productivity, etc.
With a slower — and in my view more realistic — pace of diffusion, the contrast between model and data becomes even more stark. Suppose now that it takes two years for a technology to go from 12% to 88%. Then the autocorrelation of growth at lags of 1, 2, and 3 quarters should be 0.98, 0.91, and 0.82. This is nothing like the data.
The underlying logic of the model is pretty straightforward. It says that if new technologies aren’t adopted instantaneously, but instead spread smoothly over time, then aggregate TFP growth should inherit some of that smoothness. It shouldn’t be nearly uncorrelated from quarter to quarter — yet that is exactly what we see in practice.
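For readers who want the flavor of the calculation without opening the linked note, here is a sketch of one way to get numbers like these (a reconstruction of mine, assuming Poisson innovation arrivals and the logistic adoption curve; the note itself may differ in details). With Poisson arrivals, aggregate TFP growth is shot noise whose kernel is the one-quarter adoption increment, so its autocorrelation at lag h is the normalized overlap of that kernel with itself:

```python
import numpy as np

def diffusion_autocorr(years_12_to_88, lags=(1, 2, 3), dt=0.01):
    """Autocorrelation of aggregate TFP growth when micro technologies
    diffuse along a logistic F(t) = 1/(1+exp(-k*t)) and arrive as an
    (assumed) Poisson stream.  For shot noise, the correlation at lag h
    is  integral d(u)*d(u+h) du / integral d(u)^2 du,  where
    d(u) = F(u) - F(u-1) is the one-quarter adoption increment."""
    quarters = 4.0 * years_12_to_88
    k = 2.0 * np.log(0.88 / 0.12) / quarters  # logit(.88)-logit(.12) over the window
    t = np.arange(-60.0, 60.0, dt)            # grid in quarters; tails are negligible
    F = lambda x: 1.0 / (1.0 + np.exp(-k * x))
    d = F(t) - F(t - 1.0)
    out = []
    for h in lags:
        s = int(round(h / dt))
        out.append(float(np.sum(d[s:] * d[:-s]) / np.sum(d * d)))
    return out

print([round(c, 2) for c in diffusion_autocorr(1.0)])  # high at lags 1-3
print([round(c, 2) for c in diffusion_autocorr(2.0)])  # higher still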
It’s possible that the difference between model and data is caused by measurement error. But it would have to be quite severe measurement error, and it’s a suspicious coincidence that the negative correlation induced by measurement error would be exactly enough to change near-1 correlations to near-0!
Regardless, I think this casts some doubt on any interpretation of TFP as the aggregate reflection of micro-level technological progress. And it only strengthens my longstanding suspicion that short-run variability in TFP is dominated by the effects of specification error.