I had a very interesting email discussion with Matthew Rognlie (who blogs at mattrognlie.com) about price rigidity versus wage rigidity, sparked by my storify post “Why the Nominal GDP Target Should Go Up about 1% after a 1% Improvement in Technology,” where the argument hinges on whether prices are sticky, or wages are sticky, or both. The two of us decided to share our discussion with you.
I’m a grad student at MIT, and I’ve been enjoying your blog a great deal recently – it’s one of the only blogs I know for discussions of business cycle macro from someone with a really good grasp of modern work in the field. (Plus, I want to steal the “supply side liberal” label for myself.)
I was particularly interested to see your recent twitter discussion about price vs wage rigidity. My view is that there is extraordinarily strong evidence for nominal rigidities at the aggregate level – the most compelling being the old Mussa point about real exchange rate fluctuations under pegs vs. floating – but I am not so convinced that it comes from price rather than wage rigidity. In fact, recently I’ve been evolving toward the view that wage rigidity may be more important.
One of the difficulties in macro models of rigidity, I think, is that for reasons of analytical tractability most models tend to focus on one or two sources of rigidity, when in fact we have quite a few compelling candidates (the following categories are not precisely defined):
1. Nominal price rigidities: direct stickiness in nominal prices themselves, or perhaps (somewhat less plausibly in my view) stickiness in a nominal plan for prices, a la Mankiw and Reis.
2. Real price rigidities: various reasons why firms do not adjust prices so much in response to changes in marginal cost (possibly because there is some kind of strategic complementarity driven by the market structure, a la your 1995 aggregator), or why marginal costs themselves do not move much (aside from the obvious impact of wage rigidity, this could be due to the role of intermediates a la Basu).
3. Nominal wage rigidities: direct stickiness in nominal wages, either due to explicit contracts or implicit guarantees of wage stability, particularly in the downward direction. (Uncomfortable questions here about whether measured wages are really allocative, of course – I suspect their allocative role is surprisingly high.)
4. “Real” wage rigidities: either literal stickiness in inflation-adjusted wages a la Blanchard and Gali, or a set of frictions that prevent firms from adjusting wages as necessary, like the complexity of firms’ internal wage structure.
Anyway, my best guess is that all four of these are relevant, probably each to a substantial degree. Since these rigidities multiply, it’s easy to see how we could end up with a very high degree of aggregate nominal rigidity, to a degree that seems implausible when we’re scrutinizing a model with only one or two sources of rigidity.
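The claim that rigidities multiply can be given a concrete, if stylized, form using the textbook Calvo Phillips curve. This is my own illustration, not something from the exchange: the slope of the New Keynesian Phillips curve is roughly kappa = (1 − θ)(1 − βθ)/θ · λ, where θ is the probability a price stays fixed in a given quarter (nominal rigidity) and λ < 1 indexes real rigidity. Because the two factors enter multiplicatively, modest amounts of each deliver a very flat curve together:

```python
# Slope of the textbook Calvo Phillips curve: kappa = (1-theta)(1-beta*theta)/theta * lam
#   theta: probability a price stays fixed each quarter (nominal rigidity)
#   lam:   index of real rigidity (1 = none; smaller = stronger strategic complementarity)
# Parameter values are illustrative only, not taken from the discussion above.
def phillips_slope(theta, lam, beta=0.99):
    return (1 - theta) * (1 - beta * theta) / theta * lam

nominal_only = phillips_slope(theta=0.75, lam=1.0)   # sticky prices alone
combined     = phillips_slope(theta=0.75, lam=0.15)  # sticky prices plus real rigidity
print(f"nominal only: {nominal_only:.4f}, combined: {combined:.4f}")
```

With these hypothetical numbers, adding real rigidity flattens the Phillips curve by a factor of about seven, so a degree of aggregate non-neutrality that looks implausible from one friction alone becomes easy to generate from two moderate ones.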
I’m beginning to think that nominal wage rigidity, however, has a disproportionately important role, especially during recessions. There are many reasons why I think this, and I can’t list them all here, but the one I think is particularly interesting is the existence of nominal asymmetry. A large output gap is extraordinarily effective at bringing inflation down from, say, 8% to 2%, but far less effective at bringing about a drop from 2% to -2%. Even Japan, the prototypical example of a country in a prolonged deflationary slump, never saw a sustained rate below -1%. To me, the rapidity of disinflation compared to deflation suggests strong asymmetries in the nature of rigidity – and by far the most plausible candidate for asymmetry is nominal wage rigidity.
Certainly there are some other possible explanations as well. Perhaps inflation rates are very strongly influenced by forward-looking expectations of central bank policy – and while it’s plausible that a committed central bank might be trying to disinflate, no one would ever expect a central bank to actively attempt large-scale deflation. Or perhaps the much quicker rate of disinflation is due to higher nominal flexibility when the rate of inflation is further away from 0. Such alternatives are plausible, but my intuition is that quantitatively, it is very tough to explain the observed asymmetry without recourse to some asymmetry in the rigidity itself.
Another reason I am skeptical of the 80s-90s shift toward exclusively price-side rigidities is that I think some of the commonly stated arguments are not quite right. You mention in the twitter dialogue, for instance, that price rigidities justify a procyclical price level, while wage rigidities would lead to a countercyclical price level. While this is true to some extent, the procyclicality induced by sticky prices is much stronger than the countercyclicality induced by sticky wages. Indeed, in a benchmark model where labor is the only factor of production and there are no real shocks, the real wage under sticky wages is acyclical: it’s just the MPL divided by the markup, and when prices are flexible and firms can freely hit the desired markup, this is unaffected by nominal shocks. Countercyclicality under sticky wages only emerges due to flexibly priced factors of production other than labor.
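The acyclicality claim in the benchmark can be written out in two lines. This is a sketch in standard notation (gross desired markup \mu, marginal product of labor MPL_t; the symbols are my additions, not Matthew's):

```latex
% Flexible-price firms set price as a markup over marginal cost W_t / MPL_t:
P_t = \mu \, \frac{W_t}{MPL_t}
\quad\Longrightarrow\quad
\frac{W_t}{P_t} = \frac{MPL_t}{\mu}.
% With labor the only factor, Y_t = A N_t and no real shocks, MPL_t = A is
% constant, so the real wage is invariant to nominal shocks: acyclical,
% not countercyclical.
```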
My sense is that the countercyclicality induced by sticky wages is so weak that, if one introduces moderately sticky prices and market structure amplifying those sticky prices (i.e. a high intermediate share), it is easy to come out with a mildly procyclical real wage. And, indeed, cutting out oil shocks I’d say that the real wage has been just mildly procyclical in the postwar era – so this checks out. (Huang, Liu, and Phaneuf’s 2004 AER is a nice reference that works through some of this more explicitly.)
Anyway, I would love to have some more dialogue with you about the price vs. wage stickiness issue. I’ve been spending a fair amount of time recently thinking about ways to empirically distinguish between the two sources of rigidity, and I’ve also been talking to some of my fellow grad students - who seem to have pretty strong opinions in favor of wage rigidity. Maybe you can set my generation straight!
My biggest objection to nominal wage rigidity is that observed wages are not allocative. It is hard to believe a firm would simply give up on more labor input because the wage is high, rather than ask its existing workers to work harder for the same pay, or hire a worker at the high sticky wage now (giving that worker a bigger piece of the pie of surplus from the match) with the understanding that the worker might get a smaller piece of that surplus in the future. In other words, it makes no sense to forgo labor input just because the wage happens to be high right now. I have no problem with wage rigidity when there is an actual union setting wages in the picture. But if the firm is a unilateral wage setter, and has a lot of influence over the pace of work as well as wages, how can there be effective wage stickiness?
In other words, I think we need to spell out what is really going on inside the firm/worker relationship before we too readily agree that there are sticky wages. Unfortunately, most of the models out there are either too rudimentary, or too complex and focused on other issues, to be of the help we would want in figuring out how effectively rigid wages are. I am just raising the skeptical point that if there is an allocative inefficiency from having the wrong amount of labor input, wouldn’t firms and workers together figure out some way around that? They have a long-term relationship in a way that few customer-supplier relationships can match.
A simpler prediction is that wages should look stickier the more conflict there is in the firm/worker relationship. Where firms and workers get along famously, there should be very little allocative inefficiency and therefore no allocative wage stickiness. Where firms and workers are at loggerheads, there could be a lot of effective wage stickiness.
One other point: one way in which nominal wage rigidity fails is that firms make workers contribute more for medical insurance. If you can cut benefits across the board in that way, and then have raises for some, you have loosened the downward nominal rigidity. Finally, don’t forget my point that the observation that technology improvements are contractionary can only work if there is substantial price stickiness. You can’t get that from wage stickiness alone. So that means price stickiness is a major factor in the economy–though there might also be wage stickiness.
My bottom line has been that if for tractability you have to choose between only price stickiness in a model and only wage stickiness, you are closer to reality with price stickiness. But if you can manage both and can deal with the micro issues of long-term labor relationships and variable effort, then it could be reasonable to have some wage stickiness too.
Thanks so much for your quick and detailed response. I apologize for my tardiness - I was working on a response Thursday night, but then things around here got a little crazy and I dropped it for a while.
I agree that the key issue is whether nominal wages indeed play an allocative role. (After all, there is plenty of evidence showing that nominal wages themselves are remarkably sticky – this is uncontroversial enough that the key question is whether these payments are meaningful, or whether they’re installments in a long-term labor relationship.) And I have to concede that surely, wages are not allocative on a day-by-day basis: if I’m expected to come to work and do a good job every day, I don’t really care that I’m paid $100 on Mondays and $200 on Tuesdays. There is a deeply important sense in which labor relationships differ from spot markets, with incentives provided through long-term bargains rather than explicit transactions.
But I don’t think that the implicit contract between firm and worker is really so thorough. Instead, there are profound commitment and information failures that keep labor relationships far short of the first best. Here’s the most important data point in my view: firms lay off many workers during deep recessions with minimal severance pay. Surely if firms and workers could agree to anything ex ante, they would agree to avoid this: layoff during a recession is a deep blow with massive costs to career, wallet, and psyche. If firms were truly insuring their workers, they would need to fork over much more than a few weeks’ (or months’) pay; except in the lowest tier of jobs, unemployment insurance is not nearly enough to recover from the financial calamity of joblessness.
So intellectually, I agree with your puzzlement that firms and workers would fail to reach an arrangement flexible enough to avert the inefficiencies of wage rigidity. That’s missing some pretty low-hanging fruit! But when the ultimate low-hanging fruit is “don’t cast out large chunks of your workforce onto a brutal job market with only token assistance”, and we’re missing even that, I have to conclude that there are deep inefficiencies in labor relationships that economists do not fully understand. My guess is that commitment problems lead the contractual wage to play a surprisingly large allocative role. In normal times, the continuation surplus from the worker-employer match is enough to efficiently respond to small shocks; but when the benefit from defaulting on the worker-employer arrangement is large enough, firms do not hesitate to do so. And at that point, the allocative price is the contract wage, not the shadow price in a long-term efficient bargain.
Note that there is imperfect commitment on both sides of the relationship. In your hypothetical situation where a firm is happy to hire more workers at the market wage, but its internal wages are rigid and high, one possible solution is to bring in new workers at the high wage with an understanding that they will give up more of the surplus in the future. But workers’ lack of commitment prevents this: in the future, when they’re supposed to receive a below-market wage, they’ll simply jump ship.
This explains why firms are so reluctant to hire the long-term unemployed. To make up for the poor skills of an out-of-practice worker, they need to pay substantially less, but wage norms prevent them from doing so explicitly. (It’s totally conceivable to me that for the first 6 months, a long-term unemployed worker is only 50% as productive as an employed one. Firms might have some slack in setting entry wages, but most would never dream of paying worker A 50% as much as worker B for the same blue-collar job.) The obvious solution is to pay the new workers a decent salary coming in, under the tacit agreement that they’ll get less in the future to compensate their employers for rescuing them from unemployment. But again, these workers will simply renege on the agreement once they’re able – and this will be pretty easy for them, since their main obstacle on the job market was their joblessness, which has now been fixed.
I am just raising the skeptical point that if there is an allocative inefficiency from having the wrong amount of labor input, wouldn’t firms and workers together figure out some way around that? They have a long-term relationship in a way that few customer-supplier relationships can match.
I think that the comparison here to customer-supplier relationships is very interesting. I agree that at the retail level, customer-supplier pairs tend to be pretty fleeting – I do not have a long-term relationship with Walmart allowing us to pave over the inefficiencies resulting from sticky prices. Relationships higher on the input-output table, on the other hand, often do last for long periods of time, possibly longer than most jobs. I don’t see why it should be any harder for Toyota to have an efficient long-term bargain with its suppliers than with its workers. And this is very problematic for the sticky price hypothesis, because stickiness at the retail level alone is just not enough. (As several pricing studies have documented, retail price stickiness and cyclicality have a strong negative correlation – many durable good prices are barely sticky at all, which is a huge problem given your results with Barsky and House.)
One other point: one way in which nominal wage rigidity fails is that firms make workers contribute more for medical insurance. If you can cut benefits across the board in that way, and then have raises for some, you have loosened the downward nominal rigidity.
This is a very interesting point, and I’ve heard several variations on it. (Health insurance premiums are the most important by far, but there are also 401(k) matches, etc.) This does indeed seem to be a way for firms to overcome, to a small extent, the norm against wage cuts. But I don’t think firms can get away with too much along this dimension – at most, they might manage to cut effective compensation by a few percentage points, and even this only if they’re in cyclical sectors. I am skeptical that this is enough to diminish the importance of nominal wage rigidity by very much, though of course it will become steadily more important as “fringe” benefits take up more and more of the compensation bundle.
Finally, don’t forget my point that the observation that technology improvements are contractionary can only work if there is substantial price stickiness. You can’t get that from wage stickiness alone. So that means price stickiness is a major factor in the economy–though there might also be wage stickiness.
I am a very, very big admirer of your work on the purified residual with Basu and Fernald. I have to confess, though, that I give it a different interpretation. I have a strong prior that all “technology shocks” in the data, even when the Solow residual is carefully adjusted, are artifacts of the data – my experience doing empirical work tells me that there will always be residuals with no plausible structural interpretation. And from my admittedly amateurish understanding of technological change, I find it hard to believe that the stochastic process for productivity is really a random walk. Innovations diffuse much too slowly for that – instead, I’d model productivity as a two-dimensional stochastic process, where there are shocks to “technological knowledge”, but these shocks’ influence on productivity is spread out over a long period.
Bottom line: I don’t know what high-frequency variations in the purified Solow residual are really capturing, but whatever it is, I don’t think it has much to do with underlying technological progress. My skepticism owes a lot to the numbers themselves – I’m not sure what was happening in 2009 and 2010, but I didn’t see anything consistent with a huge technological boom in 2009 and then technological regress in 2010, as in the adjusted TFP series maintained by John Fernald. (One can go way back with this. Did TFP really decline in the year 2006? Did it decline for three consecutive quarters in 1996-97? Or for three consecutive quarters in 1994?)
Despite all this skepticism, though, I’m a huge fan of the work. But my interpretation of your results is “look, some meticulous and reasonable adjustments to TFP make the series look completely different, and give it completely different cyclical properties – so let’s be very careful drawing inferences from this stuff”, not “it turns out technology improvements are contractionary after all”. (Honestly, I think that meaningful high-frequency variation in TFP is basically something that Ed Prescott made up, so I’m not sure that “are technology shocks contractionary?” is even a well-posed question.) RBC had been cruising for far too long on basically spurious Solow residual estimates that ignored the overwhelming importance of factor utilization, and it was imperative that some smart macroeconomists do the legwork and show that this was untenable. I’m extremely glad you did, and I cite it whenever I get the chance. But I’m still not willing to treat the high-frequency shocks as structural, which is why I don’t view this as decisive in the sticky prices vs. wages debate.
A few years ago, I read an aside in Stiglitz’s Nobel autobiography that really shook me:
Economists spend enormous energy providing refined testing to their models. Economists often seem to forget that some of the most important theories in physics are either verified or refuted by a single observation, or a limited number of observations (e.g. Einstein’s theory of relativity, or the theory of black holes).
I really think that this is true: we often do very complicated, nontransparent estimation and testing of models, when in reality one or two carefully selected stylized facts could be much more decisive. My view is that the existence of mass layoffs during recessions with minimal severance, while perhaps not quite decisive, is one of these very important stylized facts - it appears to be a very important predictive failure of the implicit contract model.
Miles: Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model than one with sticky wages at both margins.
Let me defend the Basu, Fernald, and Kimball measurement of technology shocks. I agree that the blip up in John Fernald’s series [the graph at the top] in 2009 is an artifact, but that was also a very unusual time and should not signal a big problem with the series at other times. The blip hints that hours and effort requirements went different ways during that episode, despite the theory that says an optimizing firm should move hours, the effort requirements it imposes on workers, and the workweek of capital in sync with one another. A reasonable theoretical explanation is that firms at that juncture put a precautionary premium on liquid funds: they reduced their head count even below what demand warranted, and in many cases made the remaining workers work harder. This runs down worker good will, but in that crisis, firms were willing to run down worker good will in order to protect their cash balances. The model treats firms as able to borrow and lend freely, and so omits any liquidity concerns on the part of firms; it would not track that phenomenon.
On your theoretical doubt about the reasonableness of random walk technology, let me first say that a random walk for technology is much more plausible a priori than mean-reverting technology that implies that firms routinely backslide, as if they were forgetting technology. The random walk Susanto Basu, John Fernald and I find has very few negative technology shocks. At least at the annual level for the economy as a whole, technology shocks are mostly a matter of how much technology improves. (At the industry level, there are more negative technology shocks. To the extent these are not reflections of measurement error, we do not understand them very well.)
In general, I would like to see much more work done to find the stories behind the technology shocks that Susanto Basu, John Fernald and I find in the data. Because we compute the technology shocks at the industry or sectoral level, it should be possible to investigate where the shocks come from. Finding the story behind particular sectoral technology shocks in our data would be a very worthy topic for undergraduate theses, for example.
Let me talk about the gradual adoption of technology that you emphasize, given the little that we know now about economy-moving technology shocks. My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%. What this means is that the technology shock in our data will be after, and predictable by, news reports of the new technology. At the Bank of Japan and to John Fernald at the San Francisco Fed, I have advocated that central banks should band together to do the staff work necessary to identify and predict macroeconomic technology shocks in advance, by gathering data on that initial introduction and adoption up to 15%. Hobbled as they are by the zero lower bound, central banks around the world have bigger problems to worry about right now, but in more halcyon times, better prediction of macroeconomic technology shocks would be a major part of their job. (In my column about Market Monetarism, NGDP targeting and optimal monetary policy, I talk both about how to eliminate the zero lower bound on nominal interest rates, and about how monetary policy can and should be adjusted for technology shocks.)
Your point about the contractual wage being allocative for the layoff decision is well taken. But reduced hiring is at least as big a part of what makes the labor market what it is in recessions, and the contractual wage is not allocative at the hiring margin: those hired are just beginning an extended employment relationship. A model with sticky wages at the layoff margin but effectively flexible wages at the hiring margin would be a very different model than one with sticky wages at both margins.
The same problems of imperfect commitment exist on the worker side. How can the effective wage for a new worker be much lower than the contractual wage? Only if the worker promises to compensate the employer by working at a below-market wage in the future. But it’s hard to make the worker keep his end of the implicit bargain – once he has other options, he’ll demand a fair, non-history-dependent wage. (Perhaps out of loyalty to the firm for lifting him out of unemployment, he’ll be a little more pliable. Then again, he may be angry at having worse terms than his coworkers simply because he was unlucky enough to be hired during a recession.)
In general, my view of the employer-employee relationship is that it suffers from profound commitment and information failures. This is the only way to explain phenomena that couldn’t possibly be part of an efficient bargain - like layoffs in a depressed labor market. Most of the time, these failures are mitigated by the existence of surplus in the relationship between worker and firm. This surplus motivates both sides of the relationship to behave well in ways that can’t be codified in a formal contract. But when recession hits, at the contractual wage the surplus for the employer disappears, and it (inefficiently) terminates the relationship.
It’s similar for your hypothetical new worker. Suppose that he’s hired during a recession with the understanding that he’ll give up some of his future earnings. When the future arrives and prosperity returns, the worker won’t see any surplus from an ongoing relationship (other firms will compensate him fairly, without reference to the past), and he’ll terminate it. Any other outcome would be surprising. After all, apparently employers can’t commit to properly insure their workers against layoff, and if anything we’d expect implicit commitment to be easier for employers than workers.
In practice, neither side can reliably keep costly implicit promises, which means that the allocative wage can’t be too different from the contractual one. Wage stickiness matters on both margins.
Before continuing the debate on TFP, I want to take a step back and discuss the implications for wage rigidity. Initially, you mentioned that the apparent contractionary effect of technology shocks is evidence for price rather than wage rigidity. I took this as given and disputed the validity of measured TFP instead. But after further reflection I think that the former inference is equally problematic - even if the TFP series and impulse responses are flawless, we shouldn’t be so quick to settle on price stickiness.
Let’s take a look at Figure 4 from Basu, Fernald, Kimball (2006). Here, we see that after a 1% technology shock, the GDP deflator falls by 1% and the nominal wage stays almost exactly constant. Superficially, this seems much more consistent with sticky wages than sticky prices. That’s not completely fair, because maybe the measured wage isn’t allocative, and depending on the monetary rule there might be reasons why the price level eventually has to fall. (More on that in a second.)
But there are other problems with the story. The putative reason why technology improvements are contractionary is that the nominal money supply does not immediately adjust to the new level of output, which temporarily forces output below its natural level. (This is where the difference between sticky prices and wages comes in; with sticky wages alone, prices would fall to offset the increase in productivity, and there would be no pressure on the money supply.) In equilibrium, however, this all occurs via the impact of monetary policy on the real interest rate. If the path of the real interest rate doesn’t increase, monetary policy can’t be producing a contractionary outcome - at least not in this case. Yet this doesn’t seem to be happening in Figure 4, where the real fed funds rate has a negative impulse response.
More broadly, I don’t see why technology improvements should be contractionary in any model, at least with a realistic specification of the monetary policy rule. While it’s true that they are contractionary under a money supply or nominal GDP rule, monetary policy during the sample period generally didn’t operate according to such rules. (A possible, brief Volcker exception notwithstanding.) Instead, it’s probably best characterized as following some kind of interest rate rule, perhaps a Taylor rule with inertia. And in that case, technology shocks aren’t contractionary at all.
To explore this further, I fired up Dynare and calculated impulse responses to technology improvements in a basic New Keynesian model, under various combinations of assumptions. (Results are here: http://www.mit.edu/~mrognlie/tech_shock_results.pdf)
For monetary rules, I examined a basic Taylor rule, an inertial Taylor rule, and a money supply rule. In general, the shock was not contractionary for employment under either Taylor rule; this only happened for the money supply rule. In the case where a t=0 shock was anticipated at t=-1, there was generally a contraction in employment from t=-1 to t=0, which could conceivably produce the impulse responses in BFK. But this happened in a number of cases with wage rigidity too (albeit attenuated by the monetary reaction to a fall in inflation), so it’s not particularly strong evidence on the rigidity issue.
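For readers who want the flavor of this exercise without Dynare, the simple (non-inertial) Taylor-rule case can be solved by the method of undetermined coefficients. The sketch below is my own minimal version of the textbook three-equation sticky-price model with illustrative parameters – not Matthew's calibration, and without the inertial-rule, money-rule, anticipated-shock, or sticky-wage variants he examined – so the responses here should not be read as settling anything:

```python
# Textbook three-equation New Keynesian model with an AR(1) technology shock a_t
# and a simple Taylor rule.  Guess x_t = x_a * a_t, pi_t = pi_a * a_t and solve.
# All parameter values are illustrative, not from the discussion above.
beta, sigma, phi = 0.99, 1.0, 1.0   # discount factor, risk aversion, inverse Frisch
kappa = 0.1275                      # Phillips-curve slope
phi_pi, phi_x = 1.5, 0.125          # Taylor-rule coefficients on inflation and the gap
rho = 0.9                           # persistence of the technology shock

psi = (1 + phi) / (sigma + phi)     # elasticity of natural output w.r.t. technology
rn_a = -sigma * psi * (1 - rho)     # natural-rate loading on a_t (falls after an improvement)

# NKPC gives pi_a = slope * x_a; substituting into the IS curve yields
# x_a * [sigma*(1-rho) + phi_x + (phi_pi - rho)*slope] = rn_a.
slope = kappa / (1 - beta * rho)
x_a = rn_a / (sigma * (1 - rho) + phi_x + (phi_pi - rho) * slope)
pi_a = slope * x_a
n_a = x_a + (psi - 1)               # employment loading: n_t = x_t + (psi - 1) * a_t

print(f"output gap: {x_a:+.3f}, inflation: {pi_a:+.3f}, employment: {n_a:+.3f} per 1% shock")
```

With these particular numbers, inflation falls persistently while the employment response is an order of magnitude smaller than the shock; small changes to the rule coefficients, the shock persistence, or sigma can push the employment response toward zero or positive, which is exactly why the choice of monetary rule matters so much for this debate.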
Furthermore, with an interest rate rule there was never a persistent decline in prices in response to the shock, except in the presence of wage rigidity. If we stipulate that the Fed followed an interest rate rule during the sample period, then the deflationary impact of a shock in Figure 4 is very powerful evidence for sticky wages.
All in all, it is difficult to reconcile the full set of impulse responses in BFK with any single model. But at the very least, the impulse responses provide just as much evidence for sticky wages as sticky prices. The only hint of sticky prices is the headline finding of a contraction – and the underlying story there is contradicted by the real interest rate decline in Figure 4.
[Administrative note: I’d like to mention the adjusted TFP series we discussed, but I’m not sure that we are using the same series. I was using the utilization-adjusted numbers from a spreadsheet on John Fernald’s website (http://www.frbsf.org/csip/research/tfp/quarterly_tfp.xls). It looks like this doesn’t actually implement all the corrections from your paper, so I don’t want to put too much emphasis on it. Notably, it looks like the utilization-adjusted TFP in his spreadsheet has just as frequent technological regress as regular TFP.]
My view has been that technology shocks big enough to move the economy as a whole are a reflection of the steep part of the S-curve for technology adoption. The new technology is actually starting to spread long before we see it in the data. Then, there is a year when it goes from 15% adoption to 85% adoption, say, and that is the year we see the technology shock in the sectoral data, which then gets aggregated up to a macroeconomic technology shock. The standard errors are just too big to see clearly the gradual movement from 0 to 15% over several years or from 85% to almost 100% in several more years, but we can see the change in one year from 15% to 85%.
I found this suggestion intriguing. I’d long had a vague intuition that micro-level technology improvements could not possibly produce a TFP series as erratic as the one we see in practice. But I hadn’t given this issue – in particular, the relationship between the S-curve of adoption and TFP growth at the macro level - nearly the same thought as you.
Rather than try to communicate my muddled intuition (which no one, including me, has good reason to trust), I decided to write a simple model to flesh out the relationship between the diffusion of micro-level technology improvements and the time series properties of aggregate productivity. The results are available here:
I found that under fairly general assumptions, there is a remarkably straightforward connection between the pace of technology diffusion at the micro level and the autocorrelation of aggregate TFP growth. The autocorrelation implied by the model, however, turns out to be far higher than anything visible in the data.
In particular, using a logistic functional form, suppose we parameterize the diffusion curve such that it takes one year for a technology to go from 12% to 88% adoption. (Pretty fast!) Then the autocorrelation of TFP growth in consecutive quarters should be 0.91. At lags of two and three quarters, it should be 0.70 and 0.46. This contrasts markedly with the values in the actual data, which are near zero – regardless of whether we’re using standard TFP, adjusted TFP, labor productivity, etc.
With a slower – and in my view more realistic – pace of diffusion, the contrast between model and data becomes even more stark. Suppose now that it takes two years for a technology to go from 12% to 88%. Then the autocorrelation of growth at lags of 1, 2, and 3 quarters should be 0.98, 0.91, and 0.82. This is nothing like the data.
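Autocorrelations of this kind can be reproduced with a few lines of numerical integration. The snippet below is my reconstruction of the sort of calculation involved, under assumptions I am guessing at (innovations arrive as a stationary Poisson process with i.i.d. sizes, each diffusing along a logistic curve; by Campbell's theorem the autocorrelation of quarterly aggregate TFP growth then reduces to an overlap integral of quarterly diffusion increments). Matthew's actual model may differ in its details:

```python
import numpy as np

def diffusion_rate(years_12_to_88):
    # Logistic rate k such that adoption rises from 12% to 88% over the given span
    return 2.0 * np.log(0.88 / 0.12) / years_12_to_88

def tfp_growth_autocorr(k, lags=(1, 2, 3), quarter=0.25, horizon=20.0, n=400001):
    # s: innovation arrival time relative to the start of quarter 0
    s, ds = np.linspace(-horizon, horizon, n, retstep=True)
    cdf = lambda t: 1.0 / (1.0 + np.exp(-k * t))          # logistic adoption curve
    # TFP growth contributed to quarter j by an innovation arriving at time s
    inc = lambda j: cdf((j + 1) * quarter - s) - cdf(j * quarter - s)
    d0 = inc(0)
    c0 = np.sum(d0 * d0) * ds                              # lag-0 overlap integral
    return {j: np.sum(d0 * inc(j)) * ds / c0 for j in lags}

fast = tfp_growth_autocorr(diffusion_rate(1.0))  # 12% -> 88% in one year
slow = tfp_growth_autocorr(diffusion_rate(2.0))  # 12% -> 88% in two years
print("one-year diffusion:", {j: round(r, 2) for j, r in fast.items()})
print("two-year diffusion:", {j: round(r, 2) for j, r in slow.items()})
```

Under these assumptions the one-year case gives autocorrelations of roughly 0.91, 0.70, and 0.46 at lags of one, two, and three quarters, and the two-year case roughly 0.98, 0.91, and 0.82 – matching the figures quoted above, and nothing like the near-zero values in the data.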
The underlying logic of the model is pretty straightforward. It says that if new technologies aren’t adopted instantaneously, but instead spread smoothly over time, then aggregate TFP growth should inherit some of that smoothness. It shouldn’t be nearly uncorrelated from quarter to quarter – yet that is exactly what we see in practice.
It’s possible that the difference between model and data is caused by measurement error. But it would have to be quite severe measurement error, and it’s a suspicious coincidence that the negative correlation induced by measurement error would be exactly enough to change near-1 correlations to near-0!
Regardless, I think this casts some doubt on any interpretation of TFP as the aggregate reflection of micro-level technological progress. And it only strengthens my longstanding suspicion that short-run variability in TFP is dominated by the effects of specification error.