Must All Economics Papers Be Doorstoppers?
Economics papers have gotten bigger. The world outside the orbit of economists has noticed. The Wall Street Journal devoted 967 words to this topic in Ben Leubsdorf's July 23, 2018 article. He writes:
The average length of a published economics paper has more than tripled over the past four decades, and some academics are sick of wading through them. ...
Between 1970 and 2017, the average length of papers published in five top-ranked economics journals swelled from 16 pages to 50 pages, according to an analysis by University of California, Berkeley economists Stefano DellaVigna and David Card.
[Graph: average page length of papers in five top-ranked economics journals, 1970–2017, showing the dramatic increase in length.]
The question is whether the extra length is fat or muscle. Giving readers everything they need to be convinced that a claimed empirical result correctly represents the real world can take a lot. The existence of some careful—and carefully described—empirical work in economics is an important part of what gives economists the prestige they have with journalists, businesspeople and policymakers.
But sometimes the length of a paper can deter other economists from actually reading it, so that very few people end up knowing whether the paper is on track or not. If the editor and referees who approved the paper miss something, or let the paper through because of methodological bias despite serious flaws, a falsehood can gain currency.
When I used to have graduate students present papers from the literature in class, I was always dismayed that they believed the abstract! Of papers I have read since I received my PhD, more than half the time, the abstract badly misrepresents what the paper really demonstrates. If smart graduate students who have actually read a paper believe abstracts way too much, it is a slam dunk to guess that economists will believe abstracts too much when they haven't read a paper—unless of course the abstract claims a result that goes against their prejudices.
If most important papers are so long that almost no one really reads them, the conversation among economists becomes impoverished. A nice example that came up in Ben Leubsdorf's interview of Anil Kashyap is Anne Case and Angus Deaton's paper "Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century":
In 2015, Princeton University economists Anne Case and Angus Deaton published research on rising death rates for middle-aged white men. Their six pages in the Proceedings of the National Academy of Sciences set off a national debate over possible links between mortality and economic distress, and “there was a lot of discussion about whether a paper like that, sent to a standard economics journal, would have had a chance to get published," said University of Chicago economist Anil Kashyap.
The way in which Anne and Angus's paper enriched not only the conversation among economists but the conversation in the country in general is nicely captured by Jeff Guo's April 6, 2017 interview of them, published in the Washington Post: "‘How dare you work on whites’: Professors under fire for research on white mortality."
Is there any place for shorter papers in economics? The American Economic Association has officially decided "Yes" in the face of what has been an all-too-pervasive answer "No" in the top journals:
The AEA announced last year it would launch a journal dedicated to publishing only concise papers, at least by economists’ standards—nothing longer than 6,000 words, or about 15 double-spaced pages. ...
“Certainly not all papers should be short,” said MIT economist Amy Finkelstein, founding editor of what’s being called American Economic Review: Insights. “But on the other hand, not all papers should be long.”
She noted that seminal 1950s papers by Paul Samuelson and John Nash took only a few pages to convey findings on public goods and game theory; both men later won the Nobel Prize in economics. Some journals today seem wary of publishing such quick reads.
“If you want to publish a paper in a top journal, even if you think you have one key insight that can be conveyed succinctly, the referees are not going to take it,” Ms. Finkelstein said.
One part of Ben Leubsdorf's reporting was incomplete. He writes:
Ms. Finkelstein said the new journal is on track to have more than 600 submissions for its first year.
There is no clue here how 600+ submissions compares to other American Economic Association journals. Any American Economic Association journal is likely to get a lot of submissions. (Update: Claudia Sahm tweets: "and on @mileskimball point about the new AER Insights “There is no clue here how 600+ submissions compares to other American Economic Association journals.” blog.supplysideliberal.com/post/2018/7/25… That’s nothing for the AER, 1500+ submitted papers in 2013." Go to the tweet itself for a relevant graph.)
Conclusion: If economists were not deterred from reading papers by their length, the cost of long papers could conceivably be only in reduced leisure for economists. But I'll bet the elasticity of reading a paper with respect to its length is substantial. When I cross-post a blog post to Medium, as I do occasionally, the statistics Medium gives on "reads" as well as "pageviews" indicate that the shorter a post, the more likely someone is to get to the end!
If fewer economists read a paper, that means fewer economists evaluate its claims. I mentioned above the possibility that false claims creep into conventional wisdom as a result. Another place where economists' opting out of reading papers is corrosive is the tenure and promotion process. If tenure and promotion committees don't read a candidate's papers, and only do bean-counting, they are outsourcing judgment to journal editors and referees.
Journal editors and referees seem like a dangerously small set of people to be the only evaluators of a paper. Optimistically, one out of every ten or so citations to a paper might represent another serious evaluator, but many papers never get that many citations. (Many citations are defensive, in case the one cited might become a referee. Other citations are by people who believe abstracts relatively uncritically.)
I don't mind papers that are wrong getting published. But I do mind papers that are wrong being believed. I want to have enough people actually read papers that the economics profession as a whole learns what to believe and what not to believe, and learns what is important and what is not important. If papers being 50+ pages long keeps economists from reading them, that is a problem.
One thing that could help a lot is if we had a way to collect data on how many people actually read a paper. Technology may make this easier. What if downloaded papers had a link at the end of the PDF file that could be clicked to say "I read this paper"? (Taking those who clicked the link to a page encouraging them to offer anonymous comments could also be valuable.) There could be authentication identifying the particular reader in order to avoid cheating, combined with an ironclad promise of confidentiality about who clicked that they had read a paper, to keep people from trying to look good by claiming to have read things they hadn't. (There would still be an incentive to help friends by clicking that link, but hopefully some fraction of those friends would feel guilty enough about not reading when they said they did to generate some additional readers.) This would provide crucial data. It could be a better indication of the importance of a paper than citations. And it would communicate to economists the important service they are doing for the profession when they read a paper.
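The read-tracking mechanism described above can be sketched in a few lines of code. Everything here is hypothetical—the secret key, the token scheme, the in-memory store—but it illustrates one way to count authenticated, unique readers while honoring the confidentiality promise: the server records only a keyed hash of the reader's identity, so duplicate clicks by the same reader don't inflate the count, and no one without the key can recover who clicked.

```python
import hmac
import hashlib

# Hypothetical sketch of the anonymous "I read this paper" scheme.
# Readers are authenticated upstream; the server stores only a keyed
# (HMAC) hash of the reader's identity, never the identity itself.
SERVER_SECRET = b"rotate-me-and-keep-me-private"  # held only by the journal


def read_token(reader_id: str, paper_id: str) -> str:
    """Pseudonymous token: the same reader and paper always map to the
    same token, but the token cannot be reversed without the secret key."""
    msg = f"{reader_id}:{paper_id}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()


def record_read(store: dict, reader_id: str, paper_id: str) -> None:
    """Record a confirmed read; a set deduplicates repeat clicks."""
    store.setdefault(paper_id, set()).add(read_token(reader_id, paper_id))


def read_count(store: dict, paper_id: str) -> int:
    """Number of unique confirmed readers of a paper."""
    return len(store.get(paper_id, set()))


store: dict = {}
record_read(store, "alice@example.edu", "case-deaton-2015")
record_read(store, "bob@example.edu", "case-deaton-2015")
record_read(store, "alice@example.edu", "case-deaton-2015")  # duplicate click
print(read_count(store, "case-deaton-2015"))  # 2 unique confirmed readers
```

One caveat of this particular design choice: whoever holds the secret key could recompute a given reader's token, so the confidentiality promise here is pseudonymity enforced by the journal rather than cryptographic anonymity; a stronger guarantee would require a scheme in which even the journal cannot link clicks to readers.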