Quartz #36—>There's One Key Difference Between Kids Who Excel at Math and Those Who Don't

Here is the full text of my 36th Quartz column, and 2nd column coauthored with Noah Smith, “There’s One Key Difference Between Kids Who Excel at Math and Those Who Don’t.” I am glad to now bring it home to supplysideliberal.com, and Noah will post it on his blog Noahpinion as well. It was first published on October 27, 2013. Links to all my other columns can be found here. In particular, don’t miss my follow-up column

How to Turn Every Child into a “Math Person.”

The warm reception for this column has been overwhelming. I think there is a hunger for this message out there. We want to get the message out, so if you want to mirror the content of this post on another site, just include both a link to the original Quartz column and to supplysideliberal.com.


“I’m just not a math person.”

We hear it all the time. And we’ve had enough. Because we believe that the idea of “math people” is the most self-destructive idea in America today. The truth is, you probably are a math person, and by thinking otherwise, you are possibly hamstringing your own career. Worse, you may be helping to perpetuate a pernicious myth that is harming underprivileged children—the myth of inborn genetic math ability.

Is math ability genetic? Sure, to some degree. Terence Tao, UCLA’s famous virtuoso mathematician, publishes dozens of papers in top journals every year, and is sought out by researchers around the world to help with the hardest parts of their theories. Essentially none of us could ever be as good at math as Terence Tao, no matter how hard we tried or how well we were taught. But here’s the thing: We don’t have to! For high school math, inborn talent is just much less important than hard work, preparation, and self-confidence.

How do we know this? First of all, both of us have taught math for many years—as professors, teaching assistants, and private tutors. Again and again, we have seen the following pattern repeat itself:

  1. Different kids with different levels of preparation come into a math class. Some of these kids have parents who have drilled them on math from a young age, while others never had that kind of parental input.

  2. On the first few tests, the well-prepared kids get perfect scores, while the unprepared kids get only what they could figure out by winging it—maybe 80 or 85%, a solid B.

  3. The unprepared kids, not realizing that the top scorers were well-prepared, assume that genetic ability was what determined the performance differences. Deciding that they “just aren’t math people,” they don’t try hard in future classes, and fall further behind.

  4. The well-prepared kids, not realizing that the B students were simply unprepared, assume that they are “math people,” and work hard in the future, cementing their advantage.

Thus, people’s belief that math ability can’t change becomes a self-fulfilling prophecy.

The idea that math ability is mostly genetic is one dark facet of a larger fallacy that intelligence is mostly genetic. Academic psychology journals are well stocked with papers studying the world view that lies behind the kind of self-fulfilling prophecy we just described. For example, Purdue University psychologist Patricia Linehan writes:

A body of research on conceptions of ability has shown two orientations toward ability. Students with an Incremental orientation believe ability (intelligence) to be malleable, a quality that increases with effort. Students with an Entity orientation believe ability to be nonmalleable, a fixed quality of self that does not increase with effort.

The “entity orientation” that says “You are smart or not, end of story” leads to bad outcomes—a result that has been confirmed by many other studies. (The relevance for math is shown by researchers in Oklahoma City who recently found that belief in inborn math ability may be responsible for much of the gender gap in mathematics.)

Psychologists Lisa Blackwell, Kali Trzesniewski, and Carol Dweck presented these alternatives to determine people’s beliefs about intelligence:

  1. You have a certain amount of intelligence, and you really can’t do much to change it.

  2. You can always greatly change how intelligent you are.

They found that students who agreed that “You can always greatly change how intelligent you are” got higher grades. But as Richard Nisbett recounts in his book Intelligence and How to Get It, they did something even more remarkable:

Dweck and her colleagues then tried to convince a group of poor minority junior high school students that intelligence is highly malleable and can be developed by hard work…that learning changes the brain by forming new…connections and that students are in charge of this change process.

The results? Convincing students that they could make themselves smarter by hard work led them to work harder and get higher grades. The intervention had the biggest effect for students who started out believing intelligence was genetic. (A control group, who were taught how memory works, showed no such gains.)

But improving grades was not the most dramatic effect: “Dweck reported that some of her tough junior high school boys were reduced to tears by the news that their intelligence was substantially under their control.” It is no picnic going through life believing you were born dumb—and are doomed to stay that way.

For almost everyone, believing that you were born dumb—and are doomed to stay that way—is believing a lie. IQ itself can improve with hard work. Because the truth may be hard to believe, here is a set of links to some excellent books that can convince you that most people can become smart in many ways, if they work hard enough:

So why do we focus on math? For one thing, math skills are increasingly important for getting good jobs these days—so believing you can’t learn math is especially self-destructive. But we also believe that math is the area where America’s “fallacy of inborn ability” is the most entrenched. Math is the great mental bogeyman of an unconfident America. If we can convince you that anyone can learn math, it should be a short step to convincing you that you can learn just about anything, if you work hard enough.

Is America more susceptible than other nations to the dangerous idea of genetic math ability? Here our evidence is only anecdotal, but we suspect that this is the case. While American fourth and eighth graders score quite well in international math comparisons—beating countries like Germany, the UK, and Sweden—our high-schoolers underperform those countries by a wide margin. This suggests that Americans’ native ability is just as good as anyone’s, but that we fail to capitalize on that ability through hard work. In response to the lackluster high school math performance, some influential voices in American education policy have suggested simply teaching less math—for example, Andrew Hacker has called for algebra to no longer be a requirement. The subtext, of course, is that large numbers of American kids are simply not born with the ability to solve for x.

We believe that this approach is disastrous and wrong. First of all, it leaves many Americans ill-prepared to compete in a global marketplace with hard-working foreigners. But even more importantly, it may contribute to inequality. A great deal of research has shown that technical skills in areas like software are increasingly making the difference between America’s upper middle class and its working class. While we don’t think education is a cure-all for inequality, we definitely believe that in an increasingly automated workplace, Americans who give up on math are selling themselves short.

Too many Americans go through life terrified of equations and mathematical symbols. We think what many of them are afraid of is “proving” themselves to be genetically inferior by failing to instantly comprehend the equations (when, of course, in reality, even a math professor would have to read closely). So they recoil from anything that looks like math, protesting: “I’m not a math person.” And so they exclude themselves from quite a few lucrative career opportunities. We believe that this has to stop. Our view is shared by economist and writer Allison Schrager, who has written two wonderful columns in Quartz (here and here) that echo many of our views.

One way to help Americans excel at math is to copy the approach of the Japanese, Chinese, and Koreans. In Intelligence and How to Get It, Nisbett describes how the educational systems of East Asian countries focus more on hard work than on inborn talent:

  1. “Children in Japan go to school about 240 days a year, whereas children in the United States go to school about 180 days a year.”

  2. “Japanese high school students of the 1980s studied 3 ½ hours a day, and that number is likely to be, if anything, higher today.”

  3. “[The inhabitants of Japan and Korea] do not need to read this book to find out that intelligence and intellectual accomplishment are highly malleable. Confucius set that matter straight twenty-five hundred years ago.”

  4. “When they do badly at something, [Japanese, Koreans, etc.] respond by working harder at it.”

  5. “Persistence in the face of failure is very much part of the Asian tradition of self-improvement. And [people in those countries] are accustomed to criticism in the service of self-improvement in situations where Westerners avoid it or resent it.”

We certainly don’t want America’s education system to copy everything Japan does (and we remain agnostic regarding the wisdom of Confucius). But it seems to us that an emphasis on hard work is a hallmark not just of modern East Asia, but of America’s past as well. In returning to an emphasis on effort, America would be returning to its roots, not just copying from successful foreigners.

Besides cribbing a few tricks from the Japanese, we also have at least one American-style idea for making kids smarter: treat people who work hard at learning as heroes and role models. We already venerate sports heroes who make up for lack of talent through persistence and grit; why should our educational culture be any different?

Math education, we believe, is just the most glaring area of a slow and worrying shift. We see our country moving away from a culture of hard work toward a culture of belief in genetic determinism. In the debate between “nature vs. nurture,” a critical third element—personal perseverance and effort—seems to have been sidelined. We want to bring it back, and we think that math is the best place to start.

Follow Miles on Twitter at @mileskimball. Follow Noah at @noahpinion.

Matt Griffin: How Paul Krugman Convinced Me to Support Miles Kimball's E-Money Idea


Paul Krugman wrote a post over the weekend in response to the speech that Larry Summers gave at the IMF about the possible stagnation of the U.S. economy due to the zero lower bound (ZLB). The post gives a good summary of Summers’ speech and the issues facing the economy due to the ZLB. A key argument in the post is that the economy has been fighting against a liquidity trap for decades through successive economic bubbles.

So with all that household borrowing, you might have expected the period 1985-2007 to be one of strong inflationary pressure, high interest rates, or both. In fact, you see neither – this was the era of the Great Moderation, a time of low inflation and generally low interest rates. Without all that increase in household debt, interest rates would presumably have to have been considerably lower – maybe negative. In other words, you can argue that our economy has been trying to get into the liquidity trap for a number of years, and that it only avoided the trap for a while thanks to successive bubbles.

An argument that bubbles have been good for the economy is a counterintuitive claim that is likely to be met with heavy resistance, but that reaction is precisely why (according to Krugman’s logic) the economy is having trouble escaping the fallout of the housing bubble. Less serious bubbles in the past were met with painful, yet short, recessions because the economy was able to essentially shrug off its past mistakes and move on to new productive investments. The housing bubble, however, was a widespread phenomenon that personally affected a massive proportion of the population. Its collapse hit individual consumers much harder than previous bubbles did, causing a fear of economic instability within the population that is unrivaled since the Great Depression.

People are now afraid of bubbles and are actively trying to prevent future bubbles from disrupting the economy. The public’s fearful response has led to overwhelming support for financial reform like Dodd-Frank. The movement for financial reform might actually be impairing economic growth, as Krugman states:

He goes on to say that the officially respectable policy agenda involves “doing less with monetary policy than was done before and doing less with fiscal policy than was done before,” even though the economy remains deeply depressed. And he says, a bit fuzzily but bravely all the same, that even improved financial regulation is not necessarily a good thing – that it may discourage irresponsible lending and borrowing at a time when more spending of any kind is good for the economy.

It is a particularly terrifying idea that financial reform may be harming the economy by discouraging the irresponsible lending that would help create another bubble and another temporary recovery. It is plausible that the economy could stagnate, a la Japan, due to handcuffed monetary policy and regulation acting to prevent a bubble-fueled recovery. This one blog post by Krugman is perhaps the best argument yet for Miles Kimball’s idea of e-money (read Miles on e-money here).

The Summers speech/Krugman post has led me to closely examine my beliefs on monetary policy and has convinced me that e-money offers the best alternative to the current policy regime. E-money can provide large social benefits by avoiding an arbitrary boundary on perhaps the one policy mechanism that economists understand very well. If Summers and Krugman are correct about the possibility of stagnation, support for e-money (or other similar policy alternatives) is almost a moral imperative for economists. It is the duty of economists to use the influence they hold to improve the economy and the lives of the people in it. I am now convinced that e-money is perhaps the best example of a socially beneficial policy change that can occur because of the influence of the economics profession.

The Shakeup at the Minneapolis Fed and the Battle for the Soul of Macroeconomics

Here is a link to my 38th column on Quartz, coauthored with Noah Smith, “The shakeup at the Minneapolis Fed is a battle for the soul of macroeconomics–again.” Our editor insisted on a declarative title that seriously overstates our degree of certainty on the nature of the specific events that went down at the Minneapolis Fed. I toned it down a little in my title above.

Quartz #35—>Get Real: Robert Shiller’s Nobel Should Help the World Improve Imperfect Financial Markets


Link to the Column on Quartz

Here is the full text of my 35th Quartz column, “Get Real: Robert Shiller’s Nobel should help the world accept (and improve) imperfect financial markets,” now brought home to supplysideliberal.com. It was first published on October 16, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© October 16, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2015. All rights reserved.


With the world still suffering from the 2008 financial crisis, it is good to see Nobel prizes going to three economists who have set the bar for analyzing how stock prices and other asset prices move in the real world: Eugene Fama, Robert Shiller, and Lars Hansen. Eugene Fama is best known for setting the benchmark for how financial markets would work in a world of perfect efficiency. Robert Shiller pointed out that financial markets look much less efficient at the macroeconomic scale of financial market booms and busts than they do at the microeconomic level of prices for individual stocks. And Lars Hansen developed the statistical techniques that have served as the touchstone for arbitrating between competing views of financial markets.

In many respects the “popular science” account of the work of Fama, Hansen and Shiller, given by the official Nobel prize website, is excellent. But its understated tone does not fully convey the drama of Fama and Shiller painting two diametrically opposed pictures of financial markets. (Nor the beauty and the clarity of Hansen’s way of thinking about the statistical issues in refereeing between these opposing views—but that would be too much to expect in a popular science treatment.) Fama’s picture of financial markets is Panglossian: all is for the best in the best of all possible worlds. In Shiller’s picture, financial markets are much more chaotic. As Berkeley economics professor and well-known blogger Brad DeLong puts it:

Financial markets are supposed to tell the real economy the value of providing for the future—of taking resources today and using them not just for consumption or current enjoyment but in building up technologies, factories, buildings, and companies that will produce value for the future. And Shiller has, more than anyone else, argued economists into admitting that financial markets are not very good at this job.

Shiller’s view of financial markets that are swept up in successive excesses of optimism and pessimism allowed him to sound a warning of both the crash of the dot-com bubble in 2000 and the collapse of the house price bubble that interacted with high levels of leverage by big banks to bring down the world economy—to depths it still hasn’t recovered from.

Even when they don’t fully believe that all is for the best in the best of all possible worlds, the imaginations of most economists are captivated by the image of perfect markets, of which Eugene Fama’s Efficient Markets Theory provides an excellent example. The bad part about economists being riveted by the image of perfect markets is that they sometimes mistake this image for reality. The good part is that this image provides a wonderful picture of how things could be—a vision of a world in which (in addition to the routine work of facilitating transactions) financial markets gracefully do the work of:

  • information acquisition and processing,
  • getting funds from those who want to save to firms and individuals who need to borrow, and
  • sharing risks, so that the only risks people face are their share of the risks the world economy as a whole faces—except for entrepreneurs, who need to face additional risks in order to be motivated to do whatever they can to make their businesses successful.

One way to see how far the world is from fully efficient financial markets is that perfect markets would function so frictionlessly that the financial sector itself would earn income that was only a tiny fraction of GDP, whereas in the real world, “finance and insurance” earn something like 8% of GDP (see 1 and 2), with many hedge fund managers joining Warren Buffett on Forbes’ list of billionaires.

One reason the financial sector accounts for such a big chunk of GDP may be that information acquisition and processing is much harder in the real world than in pristine economic models. After all, there is a strong tradition in economics for treating information processing (as distinct from information acquisition) as if it came for free. That is, look inside the fantasy world of almost all economic models, and you will see that everyone inside has an infinite IQ, at least for thinking about economic and financial decisions!

In the real world, being able to think carefully about financial markets is a rare and precious skill. But it is worse than that. Those smart enough to work at high levels in the financial sector are also smart enough to see the angles for taking advantage of regular investors and taxpayers, should they be so inclined. Indeed, two of the most important forces driving events in financial markets are the quest for plausible but faulty stories about how financial markets work that can fool legislators and regulators on the one hand, and the quest for stories that can fool regular investors on the other. A great deal of money made by those in the financial sector rides on convincing people that actively managed mutual funds do better than plain vanilla index funds—something that is demonstrably false on average, at least. And a surprisingly large amount of money is made by nudging regular investors to buy high-fee plain vanilla index funds as opposed to low-fee plain vanilla index funds. (There is a reason why, for my retirement savings account, I had to drill down to the third or fourth webpage for each mutual fund before I could see what fees it charges.) Even those relatively sophisticated investors who can qualify to put their money into hedge funds have been fooled by the hedge funds into paying not only management fees that typically run about 2% per year, but also “performance fees” averaging about 20% of the upside when the hedge fund does well, with the investor taking the full hit when the hedge fund does badly. So one crucial requisite for financial markets to do what they should be doing is for regular investors to know enough to notice when financial operators are taking them for a ride (which, as it stands, is most of the time, at least to the tune of the bulk of fees paid) and when they are getting a decent deal.

For getting funds from those who want to save to those who need to borrow, the biggest wrench in the works of the financial system right now is that the government is soaking up most of the saving. The obvious part of this is budget deficits, which at least have the positive effect of providing stimulus for the economy in the short run. The less obvious part is that the US Federal Reserve is paying 0.25% to banks with accounts at the Fed and 0% on green pieces of paper when, after risk adjustment, many borrowers (who would start a business, build a factory, buy equipment, do R&D, pay for an education, or buy a house, car or washing machine) can only afford negative interest rates. (See “America’s huge mistake on monetary policy: How negative interest rates could have stopped the Great Recession in its tracks.”)

Yet the departure from financial utopia that I find the most heart-wrenching is the failure of real-world financial markets to share risks in the way they do in our theories. If financial markets worked as they should:

  • There would be no reason for the people in a banana republic to suffer when banana prices unexpectedly went down—that contingency would have been insured just as routinely as our houses have fire insurance,
  • There would be no reason for people to suffer if house prices unexpectedly went down in particular metropolitan areas more than elsewhere, since home price insurance built into mortgages would automatically adjust the size of the mortgage (a stylized sketch of such an adjustment follows this list),
  • There would be no reason for people to suffer if the industry they worked in did unexpectedly badly, since that possibility would be fully hedged.
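
To make the second bullet concrete, here is a minimal sketch, assuming a hypothetical mortgage whose remaining balance is indexed to a local home-price index. The indexing rule and all numbers are my own illustrative assumptions, not a contract design taken from Shiller’s work:

```python
# Hypothetical sketch of home-price insurance built into a mortgage: the
# remaining balance is scaled by a local home-price index, so an
# unexpected local price decline shrinks the debt along with the home's
# value. The rule and the numbers are illustrative assumptions only.

def indexed_balance(original_balance, index_at_origination, index_now):
    """Mortgage balance after indexing to the local home-price index."""
    return original_balance * (index_now / index_at_origination)

# Local home prices unexpectedly fall 20% (index 200 -> 160):
print(indexed_balance(300_000.0, 200.0, 160.0))  # 240000.0: debt falls 20% too
```

Indexing to a metropolitan-area index such as Case-Shiller, rather than to the value of the owner’s own house, would preserve the owner’s incentive to maintain the property.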

Some of these things don’t happen because people don’t understand financial markets well enough. But some don’t happen because the financial markets have not developed enough to offer certain kinds of insurance. All three winners this year richly deserve to be Nobel laureates. I tweeted the day before the announcement in favor of Robert Shiller because he, more than anyone else, has been trying to make financial markets live up to this vision of risk sharing. It is not just that this is a big theme in the books he has written. Shiller has also patented new types of financial assets to enhance risk sharing and helped create the Case-Shiller home-price index as a foundation on which home-price insurance contracts could be based. Shiller’s vision of risk sharing is far from being a reality, but one day, maybe it will be. If that day comes, the world will look back on Robert Shiller as much more than a Nobel-Prize-winning economist. As Brad DeLong says of Shiller: “Pay attention to him.”

John Stuart Mill on Puritanism

John Knox, one of the leaders of the Protestant Reformation and founder of Presbyterianism in Scotland

“Puritanism” is often used figuratively to mean the suspicion of one’s own preferences and the preferences of others. John Stuart Mill criticizes this attitude in On Liberty, calling it “the Calvinistic theory.” The connection is that the Puritans had strong Calvinistic leanings. Here is what John Stuart Mill has to say about “the Calvinistic theory” in On Liberty chapter III, “Of Individuality, as One of the Elements of Well-Being,” paragraphs 6-8:

…Thus the mind itself is bowed to the yoke: even in what people do for pleasure, conformity is the first thing thought of; they like in crowds; they exercise choice only among things commonly done: peculiarity of taste, eccentricity of conduct, are shunned equally with crimes: until by dint of not following their own nature, they have no nature to follow: their human capacities are withered and starved: they become incapable of any strong wishes or native pleasures, and are generally without either opinions or feelings of home growth, or properly their own. Now is this, or is it not, the desirable condition of human nature?

It is so, on the Calvinistic theory. According to that, the one great offence of man is self-will. All the good of which humanity is capable, is comprised in obedience. You have no choice; thus you must do, and no otherwise: “whatever is not a duty is a sin.” Human nature being radically corrupt, there is no redemption for any one until human nature is killed within him. To one holding this theory of life, crushing out any of the human faculties, capacities, and susceptibilities, is no evil: man needs no capacity, but that of surrendering himself to the will of God: and if he uses any of his faculties for any other purpose but to do that supposed will more effectually, he is better without them. This is the theory of Calvinism; and it is held, in a mitigated form, by many who do not consider themselves Calvinists; the mitigation consisting in giving a less ascetic interpretation to the alleged will of God; asserting it to be his will that mankind should gratify some of their inclinations; of course not in the manner they themselves prefer, but in the way of obedience, that is, in a way prescribed to them by authority; and, therefore, by the necessary conditions of the case, the same for all.

In some such insidious form there is at present a strong tendency to this narrow theory of life, and to the pinched and hidebound type of human character which it patronizes. Many persons, no doubt, sincerely think that human beings thus cramped and dwarfed, are as their Maker designed them to be; just as many have thought that trees are a much finer thing when clipped into pollards, or cut out into figures of animals, than as nature made them. But if it be any part of religion to believe that man was made by a good Being, it is more consistent with that faith to believe, that this Being gave all human faculties that they might be cultivated and unfolded, not rooted out and consumed, and that he takes delight in every nearer approach made by his creatures to the ideal conception embodied in them, every increase in any of their capabilities of comprehension, of action, or of enjoyment. There is a different type of human excellence from the Calvinistic; a conception of humanity as having its nature bestowed on it for other purposes than merely to be abnegated. “Pagan self-assertion” is one of the elements of human worth, as well as “Christian self-denial.” There is a Greek ideal of self-development, which the Platonic and Christian ideal of self-government blends with, but does not supersede. It may be better to be a John Knox than an Alcibiades, but it is better to be a Pericles than either; nor would a Pericles, if we had one in these days, be without anything good which belonged to John Knox.

I am glad that mainstream economics takes as its policy mission getting people more of what they desire, without too much questioning of those desires. This widespread attitude among economists may owe a great deal to John Stuart Mill, who wrote the leading economics textbook of the mid-19th century.

19th Century Populist and Monetary Dove Ignatius Donnelly

In a loose sense, I have thought of the Tea Party as populists. But in reading H. W. Brands’ history American Colossus: The Triumph of Capitalism, 1865-1900, I learned that in 19th Century U.S. history it was the members of the People’s Party who were called “Populists.” The 19th Century Populists saw low interest rates as good for the interests of common people, who were more likely to be debtors, and high interest rates as good for the big banks, who represented creditors. As a result, they were what we would now call monetary policy doves.  

Ignatius Donnelly was a very interesting character. Before being nominated for vice president in 1900 on the People’s Party ticket, he had invented many controversial historical theories, particularly about Atlantis, Catastrophism, and Sir Francis Bacon as the author of what we know as the works of Shakespeare. The title of his Catastrophist work Ragnarok: The Age of Fire and Gravel (in which he argues that the Biblical Flood, and consequent destruction of Atlantis, was the result of the near collision of the Earth with a comet) reminds me of the title of my science fiction story “Ragnarok” that I posted back in September.

Ignatius’s monetary theory was more on target than his history. H. W. Brands (pp. 442-443) quotes from Ignatius’s dystopian novel Caesar’s Column, where Ignatius took a dig at the deflationary policies of the Benjamin Harrison administration:

Take a child a few years old; let a blacksmith weld around his waist an iron band. At first it causes him little inconvenience. He plays. As he grows older it becomes tighter; it causes him pain; he scarcely knows what ails him. He still grows. All his internal organs are cramped and displaced. He grows still larger; he has the head, shoulders and limbs of a man and the waist of a child. He is a monstrosity. He dies. This is a picture of the world of to-day, bound in the silly superstition of some prehistoric nation. But this is not all. Every decrease in the quantity, actual or relative, of gold and silver increases the purchasing power of the dollars made out of them; and the dollar becomes the equivalent for a larger amount of the labor of man and his production. This makes the rich man richer and the poor man poorer. The iron band is displacing the organs of life. As the dollar rises in value, man sinks. Hence the decrease in wages; the increase in the power of wealth; the luxury of the few; the misery of the many.

Interview by Danny Vinik for Business Insider: There's an Electronic Currency that Could Save the Economy—and It's Not Bitcoin


Link to Danny Vinik’s article on the Business Insider website

Danny Vinik and I talked for about 75 minutes on Tuesday evening. He did a very nice article based on our interview. One thing I talked a lot about in the interview is that of all the possible ways to handle the demand-side problem, repealing the zero lower bound is the one that leaves us best able to subsequently pursue supply-side growth. Fiscal stimulus leaves us with an overhang of government debt that then has to be worked off by painfully higher taxes or lower spending. Going easy on banks and financial firms to prop up demand (as Larry Summers at least halfway recommends in his recent speech at the International Monetary Fund) risks another financial crisis. Higher inflation to steer away from the zero lower bound (as Paul Krugman favors) messes up the price system, misdirects both household decision-making and government policy, and makes the behavior of the economy less predictable. (On Paul Krugman, also see this column.)

Let me use an example to push a little further the case that electronic money can clear the decks on the demand side so that we can focus on the supply side. Suppose you firmly believed that the demand side played no role in the real economy–that the behavior of the economy could be described well by a real business cycle model, regardless of what the Fed and other central banks do, and regardless of the zero lower bound. From that point of view, in which monetary policy only matters for inflation, electronic money would still be valuable as a way of persuading others that it was OK to have zero inflation rather than 2% inflation.

Visionary Grit


Click here to watch the TEDTalk that inspired this post–Angela Duckworth’s talk “The Key to Success: The Surprising Trait That is MUCH More Important Than IQ.”

TED Weekends, which is associated with Huffington Post, asked me to write an essay on my reaction to Angela Duckworth’s wonderful talk about grit as the secret to success. Here is a link to my essay on TED Weekends.

Below is the full text of my essay. It pushes further the themes in the Quartz column I wrote with Noah Smith: “Power of Myth: There’s one key difference between kids who excel at math and those who don’t.”


Grit, more than anything else, is what makes people succeed. Psychologist Angela Duckworth, who has devoted her career to studying grit, defines grit this way:

Grit is passion and perseverance for very long-term goals. Grit is having stamina. Grit is sticking with your future, day in, day out, not just for the week, not just for the month, but for years – and working really hard to make that future a reality. Grit is living life like a marathon, not a sprint.

But where does grit come from? First, it comes from understanding and believing that grit is what makes people succeed:

  • understanding that persistence and hard work are necessary for lasting success, and
  • believing that few obstacles can ultimately stop those who keep trying with all of their hearts, and all of their wits.

But that is not enough. Grit also comes from having a vision, a dream, a picture in the mind’s eye, of something you want so badly, you are willing to work as hard and as long as it takes to achieve that dream. Coaches know how powerful dreams – dreams of making the team, of scoring a goal, of winning the game, or of winning a championship – can be for kids. Dreams of knowing the secrets of complex numbers, graduating from college, rising in a career, making a marriage work, achieving transcendence, changing the world, need to be powerful like that to have a decent chance of success.

Grit is so powerful that once the secret is out, a key concern is to steer kids toward visions that are not mutually contradictory. Not everyone can win the championship. Someone has to come in second place. But almost everyone can learn the secrets of complex numbers, graduate from college, rise in a career, make a marriage work, achieve transcendence, and change the world for the better.

What can adults do to help kids understand and believe that grit is what makes people succeed, and to help them find a vision that is powerful enough to motivate long, hard work? Noah Smith and I tried to do our bit with our column “Power of Myth: There’s one key difference between kids who excel at math and those who don’t.” We were amazed at the reception we got. Our culture may be turning the corner, ready to reject the vicious myth that out of any random sampling of kids, many are genetically doomed to failure at math, failure at everything in school, failure in their careers, or even failure at life. The amazing reception of Angela Duckworth’s TEDTalk is another good sign. But articles and TEDTalks won’t do the trick, because not everyone watches TEDTalks, and – as things are now – many people read only what they absolutely have to. So getting the word out that grit, not genes, is the secret to success, will take the work of the millions who do read and who do watch TEDTalks, to tell, one by one, the hundreds of millions in this country and in other countries with similar cultures about the importance of grit.

What can adults do to help kids get a vision that is powerful enough to motivate long, hard work? Many are already doing heroic work in that arena. But the rest of us would-be physicians must first heal ourselves. How many of us have a defeatist attitude when we think of the problems our nation and the world face? How many of us lack a vision of what we want to achieve that will motivate us to long, hard work, stretching over many years?

Visions don’t have to be perfect. It is enough if they are powerful motivators, and good rather than bad. And it is good to share our visions with one another. Here are some of the things that dance before my mind’s eye and motivate me. I hope everyone who reads this will think about how to express her or his own vision – a vision that motivates hard work to better one’s own life and to better the world. That is the example we need to set for the kids.

Lately, since I started reading and thinking about the power of hard, deliberate effort, I have been catching myself; when I hear myself thinking “I am bad at X” I try to recast the thought as “I haven’t yet worked hard at getting good at X.” Some of the skills I haven’t yet worked at honing, I probably never will; there are only so many hours in the day. But with others, I have started trying a little harder, once I stopped giving myself the easy excuse of “I am bad at X.” There is no need to exaggerate the idea that almost everyone (and that with little doubt includes you) can get dramatically better at almost anything. But if we firmly believe that we can improve at those tasks to which we devote ourselves, surprising and wonderful things will happen.

Among the many wonderful visions we can pursue with the faith that working hard – with all of our hearts and all of our wits – will bear fruit, let’s devote ourselves to getting kids to understand that grit is the key to success. Let’s help them find visions that will motivate them to put in the incredibly hard effort necessary to do the amazing things that they are capable of, and help them tap the amazing potential they have as human beings.


Interview by Dylan Matthews for Wonkblog: Can We Get Rid of Inflation and Recessions Forever?


Here is a link to Dylan Matthews’s extremely skillful writeup of his interview with me last Thursday (November 14, 2013) on eliminating the zero lower bound on nominal interest rates by making electronic money the unit of account and legal tender. Dylan’s piece provides the most accessible explanation of the nuts and bolts of my proposal for how to get the negative interest rates I have argued we desperately need in our monetary policy toolkit.

My answer to the question in Wonkblog’s title is:

  • Yes, by changing the way we deal with paper currency, we can safely have inflation hover around zero, instead of hovering around 2% per year.  
  • No, we can’t prevent all recessions, but we can make them short if we are prepared to use negative interest rates. If we repeal the zero lower bound, we should be able to do at least as well as we did during what macroeconomists called The Great Moderation: the period from the mid-1980s to the first intimations of the Financial Crisis that culminated in 2008.  
  • Indeed, with sound policy we should be able to stabilize the economy somewhat better than during The Great Moderation, both because we keep learning more about the best way to conduct monetary policy and because eliminating the zero lower bound makes it safe to strengthen financial regulation and thereby prevent some of the shocks that might cause recessions.   

Pieria #2—>The Costs and Benefits of Repealing the Zero Lower Bound...and Then Lowering the Long-Run Inflation Target


Link to the Column on Pieria

Here is the full text of my 2nd Pieria column, “The Costs and Benefits of Repealing the Zero Lower Bound…and Then Lowering the Long-Run Inflation Target,” now brought home to supplysideliberal.com. It was first published on Pieria on October 28, 2013.

This post complements my recent column “Larry Summers just confirmed that he is still a heavyweight on economic policy,” which could have been called “Larry Summers and the zero lower bound.” In brief, Larry Summers gave a powerful speech at an IMF conference, emphasizing the costs of the zero lower bound–which might include the kind of “secular stagnation” (Larry’s words) that Japan has suffered in the last two decades. I then argue that we should simply eliminate the zero lower bound.

But I did not explain in “Larry Summers just confirmed that he is still a heavyweight on economic policy” why we shouldn’t just steer away from the zero lower bound by engineering higher inflation (assuming we can). This Pieria post on the costs and benefits of inflation in the absence of the zero lower bound makes that case. (Also see the Powerpoint file for my November 1, 2013 presentation at the Federal Reserve Board, and my Twitter discussion with Daniel Altman on the costs and benefits of inflation in the absence of the zero lower bound.)

In “Larry Summers just confirmed that he is still a heavyweight on economic policy,” I address the politics of eliminating the zero lower bound by saying:

Politics will stay the same until a critical mass of people do what it takes to make them different. Summers proved at the IMF conference that he is still an economic policy heavyweight—someone who could contribute a lot toward reaching that critical mass in the war against the zero lower bound, if he is willing to join the fight.

I don’t know Larry’s views on repealing the zero lower bound in the way that I advocate, but Larry Summers’ IMF talk has led to discussion in other quarters about eliminating the zero lower bound. (Update March 15, 2018: Larry Summers is now an advocate of eliminating the zero lower bound and occasionally refers favorably to the proposal I have made for how to do it. I know this mainly by personal communications with Larry and others whom Larry has talked to. In print, you can see it here.) Matthew Yglesias renewed his advocacy of abolishing paper currency in “The Biggest Problem in Economic Policy Today” a few hours after my column appeared. Brad DeLong picked up on my column here, and Paul Krugman picked up on Brad DeLong’s post in his “Secular Stagnation, Coalmines, Bubbles, and Larry Summers.” (And others are picking up on Paul’s post.) In his post, Paul said the most positive thing I have seen him say so far about negative nominal interest rates as a real-world policy:

If the market wants a strongly negative real interest rate, we’ll have persistent problems until we find a way to deliver such a rate.

One way to get there would be to reconstruct our whole monetary system – say, eliminate paper money and pay negative interest rates on deposits. 

Finally, Dylan Matthews of Wonkblog interviewed me last Thursday about repealing the zero lower bound to add negative interest rates to the policy toolkit. That interview might appear even as early as today.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Pieria exclusive and the following copyright notice:

© October 28, 2013: Miles Kimball, as first published on Pieria. Used by permission according to a temporary nonexclusive license expiring June 30, 2015. All rights reserved.


Historically, there have been many different monetary systems. Tom Sargent surprised me with the range of monetary systems that have existed in the United States since 1776 when he presented his paper “Fiscal Discriminations in Three Wars” at the University of Michigan this Fall. Nevertheless, we have become used to our current monetary system, and have gained useful experience with it, so any proposal to change it should be carefully justified. My efforts in that regard are laid out in “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide.” By “our current monetary system” I mean the monetary system of most advanced economies and most emerging economies in 2013. The proposed new monetary system I call an “electronic money system” because the electronic dollar, euro, yen, pound, or the like would be the unit of account.

The brief appeals for eliminating the zero lower bound at the bottom of “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide,” plus my column “America’s Big Monetary Policy Mistake: How Negative Interest Rates Could Have Stopped the Great Recession in Its Tracks,” are my attempts to show how the polemics for repealing the zero lower bound could be approached in the political arena. The links collected under the heading “Operational Details for Eliminating the Zero Lower Bound” in “How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide” address the nuts and bolts of eliminating the zero lower bound.

For an overview of the operational details, I recommend starting with my Pieria post “Going Off the Paper Standard.” What is missing in my justification for changing our monetary system is a point-by-point tally of the costs and benefits of repealing the zero lower bound. Hence this post: costs first, then benefits. My title reflects an important complementarity between repealing the zero lower bound and lowering the long-run inflation target.

The Costs of Repealing the Zero Lower Bound

Repealing the zero lower bound as I have proposed is not without some costs. The most obvious cost is the extra computation needed to deal with what is, in effect, an exchange rate between paper currency and electronic money that periodically inches away from par during serious recessions, and then gradually returns to par after the recession is over. But for consumers, this computational cost is of a similar type to the computational cost of dealing with sales taxes that are added on to the price of purchases, and for business people, it is much easier than many other computations they need to make. And for both consumers and business people, any computational cost from an exchange rate between paper currency and electronic money is likely to apply to a smaller and smaller share of goods as technological change makes the use of electronic money look progressively more convenient compared to paper currency.  
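To make the size of that computation concrete, here is a minimal sketch in Python. The conversion factor (paper dollars per electronic dollar) and its path over time are purely illustrative assumptions, not predictions about actual policy:

```python
# Minimal sketch of the computation a cash transaction would add under an
# electronic money system. The conversion factor (paper dollars per
# electronic dollar) and its path are illustrative assumptions only.

def paper_amount_due(electronic_price, conversion_factor):
    """Convert a price quoted in electronic dollars into paper currency.

    At par (factor = 1.00) nothing changes; away from par, the arithmetic
    is the same as adding a sales tax at checkout.
    """
    return round(electronic_price * conversion_factor, 2)

# Par in normal times, a small discount on paper currency during a deep
# recession, then a gradual return to par afterward:
for year, factor in [(2013, 1.00), (2014, 1.02), (2015, 1.04), (2016, 1.02), (2017, 1.00)]:
    print(year, paper_amount_due(100.00, factor))
```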

The most important costs of repealing the zero lower bound are costs of the negative interest rates themselves. Given the level of inflation, going from zero to negative interest rates has all of the usual costs and benefits of lower short-term nominal interest rates and lower short-term real interest rates, including important distributional effects. In addition, nominal illusion makes even the concept of negative interest rates unfamiliar and confusing to some people. Beyond any direct psychological distress confusion about negative interest rates causes, that confusion could cause harm by opening up new strategies for financial hucksters and bubble-mongers.  

Additional costs could arise if political or legal constraints prevent the full policy prescription from being followed. Most important among these are the extra dangers to financial stability if equity requirements for banks and financial firms are not raised substantially beyond anything in current law. Also important are the distortions to the intent of financial contracts if amounts of money specified in old contracts that are ambiguous are interpreted as amounts in paper currency rather than according to the electronic unit of account.

The Benefits of Repealing the Zero Lower Bound

Direct Costs of the Zero Lower Bound. The benefits of repealing the zero lower bound come from avoiding the costs of keeping the zero lower bound. The obvious cost of the zero lower bound is in preventing a central bank from lowering its short-term interest rate when negative interest rates would be helpful for macroeconomic stabilization. That includes not only the cost of having less stimulus than would otherwise be optimal, but also the cost of using other ways to stimulate the economy, such as those arising from

  1. the deficits traditional fiscal stimulus generates,
  2. the unusual spreads that large-scale purchases of long-term government debt generate,
  3. the danger of reigniting a bubble in home prices by large-scale purchases of mortgage-backed securities, and
  4. any reduction in the responsiveness of monetary policy to future needs that forward guidance engenders. 

Indirect Costs of the Zero Lower Bound Through the Long-Run Inflation Target. In addition to such direct costs of the zero lower bound and responses to a currently binding zero lower bound, there are costs from efforts to avoid running into the zero lower bound in the future. In particular, if there is any fear of these direct costs of the zero lower bound, central banks are likely to choose long-run inflation targets that are higher than they would otherwise choose in order to take into account the danger from these direct costs. The zero lower bound should not be taken as a given. But if it is, many find the logic behind tilting the inflation target higher to steer away from the zero lower bound compelling. Ben Bernanke gave the conventional view for an inflation target at 2% rather than zero in his March 20, 2013 press conference, saying:

… if you have zero inflation, you’re very close to the deflation zone and nominal interest rates will be so low that it would be very difficult to respond fully to recessions. And so historical experiences suggested that 2 percent is an appropriate balance …

And Brad DeLong counts himself, Olivier Blanchard, Larry Ball, and Paul Krugman as serious advocates of an even higher 4% inflation target due to their worries about the zero lower bound.

The contrary view is that a nominal anchor such as price level targeting or NGDP targeting can make running into the zero lower bound so uncommon that the optimal inflation rate would be quite low even with the zero lower bound. Olivier Coibion, Yuriy Gorodnichenko and Johannes Wieland found this in a formal model for price level targeting. Scott Sumner argues the corresponding view for nominal GDP targeting. (Scott Sumner also argues that nominal GDP targeting will do the trick even when the economy is actually up against the zero lower bound.) But claims that with a better monetary rule the zero lower bound would be easy to avoid even with a low inflation target remain speculative. I advocate both repealing the zero lower bound and following a version of nominal GDP targeting that leans in the direction of price-level targeting.
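To see why such a nominal anchor might help, here is a stylized numerical contrast between inflation targeting and price-level targeting. This is my own illustration, not taken from the papers cited; all numbers are assumptions:

```python
# Stylized contrast between inflation targeting and price-level targeting
# (my own illustration, not from the papers cited). A shock holds the
# price level flat for one year under a 2%-per-year objective.

p0 = 100.0
growth = 1.02            # 2% per year price-path objective
actual_p1 = 100.0        # year-1 shock: zero inflation instead of 2%

# Inflation targeting lets bygones be bygones: aim for 2% from wherever
# the price level ends up.
inflation_targeting_goal_p2 = actual_p1 * growth          # 102.0

# Price-level targeting aims to return to the original price path,
# which requires temporary catch-up inflation.
price_level_targeting_goal_p2 = p0 * growth ** 2          # 104.04

print(inflation_targeting_goal_p2, price_level_targeting_goal_p2)
print(price_level_targeting_goal_p2 / actual_p1 - 1)      # ~4.04% catch-up
```

The expected catch-up inflation under price-level targeting raises inflation expectations, and so lowers real interest rates, precisely when a slump pushes nominal rates toward zero; that is the mechanism by which such rules could make encounters with the zero lower bound less common.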

The Costs of Inflation

As argued above, gauging the benefits of repealing the zero lower bound requires assessing the costs and benefits of inflation in the long run. My goal here is to examine how the repeal of the zero lower bound would affect the other costs and benefits of inflation. The answer is not immediately obvious because my proposal involves, at some points, a higher rate of inflation relative to paper currency than relative to the electronic money that serves as the unit of account. So it is important to pay attention to whether each cost or benefit of inflation is about inflation relative to the unit of account or inflation relative to paper currency.

Messing Up Price Signals. Many of the costs of inflation have to do with messing up price signals in one way or another. Sticky prices and sticky wages mess up price signals to some extent even in the absence of trend inflation. But unless price changes are fully synchronized across firms, trend inflation tends to lead to different prices leap-frogging each other in a complex dance that distorts signals about which goods have the lowest social costs. This is a potential issue for

  • varieties of final goods
  • varieties of intermediate goods
  • varieties of labor inputs
  • leisure over time
  • each good over time

Every one of these costs has to do with the setting of sticky prices–including sticky wages (the prices of labor and of leisure). So for these costs, what matters is inflation relative to the unit in which sticky prices or wages are set. My contention is that (with the measures discussed in “Going Off the Paper Standard”) retailers can successfully be encouraged to set almost all prices in terms of the electronic unit of account, with a single store-wide conversion factor for converting the electronic price of a bundle to the amount of paper currency that would be charged for those who prefer to pay in paper currency. The choice of conversion factor itself is likely to be determined in large measure by retailers’ costs from credit and debit card fees, desire to price discriminate, and some desire to keep paper and electronic prices equal. If there is only one conversion factor for the entire line of goods at a given retailer, I would expect it to be relatively flexible once it departed from par. (Not only should the menu costs for a single conversion factor be low, but the information relevant for deciding on the conversion factor is relatively straightforward.) If so, once the conversion factor is away from par, the stickiness would be in terms of the electronic unit of account. To the extent sticky prices and wages are set in terms of the electronic unit of account, zero inflation relative to the electronic unit of account minimizes the distortion of price and wage signals from sticky prices and sticky wages.
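As a concrete illustration of such a single store-wide conversion factor, here is a hypothetical point-of-sale sketch in Python; the store, the goods, and the numbers are all my own illustrative assumptions:

```python
# Hypothetical point-of-sale sketch: every shelf price is in electronic
# dollars, and one store-wide conversion factor is applied only to the
# total of a paper-currency payment, so within-store relative prices are
# untouched. All names and numbers here are illustrative assumptions.

ELECTRONIC_PRICES = {"milk": 3.49, "bread": 2.79, "coffee": 8.99}

def total_due(items, pay_with_paper, conversion_factor=1.00):
    """Total owed, in the unit the customer pays with."""
    electronic_total = sum(ELECTRONIC_PRICES[item] for item in items)
    if pay_with_paper:
        # The only store-level "menu cost" is updating this one number.
        return round(electronic_total * conversion_factor, 2)
    return round(electronic_total, 2)

basket = ["milk", "bread", "coffee"]
print(total_due(basket, pay_with_paper=False))                          # 15.27
print(total_due(basket, pay_with_paper=True, conversion_factor=1.03))   # 15.73
```

Because the factor is applied only at the final step, every posted electronic price and every within-store price ratio is left untouched when the factor changes.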

As for the initial stickiness of the conversion factor at par, I have argued on several occasions that it could ease acceptance of the exchange rate between electronic money and paper currency, since there would be a few months near the inception of the electronic money system in which households could obtain paper money from the bank at a discount, but have it accepted at par by retailers.

What If Some Prices Are Set in Both Electronic and Paper Terms? Even if most goods are priced in terms of the electronic unit of account, it may be that there is also a paper currency price for relatively inexpensive goods that are frequently (though not always) purchased with paper currency. These are goods for which “convenient prices” in Ed Knotek’s sense matter a lot, so a formal analysis would be complicated. But they are goods that have a relatively small budget share and are unlikely to have a big effect in inducing people to go to the wrong store (“wrong” from a social-welfare-maximizing point of view). And, once a customer has chosen whether to use paper currency or electronic money to pay, the within-store price ratios are unaffected by the existence of both an electronic and paper price for these items. (I suspect that most stores can adequately discourage most people from purchasing some items with paper money and some with electronic money. The choice of paying by electronic money or paper money becomes interesting when there are these two sets of prices.) The bottom line is that these effects are likely to be complex in many ways (including depending on both inflation relative to the electronic unit of account and inflation relative to paper currency), but small.

Resources Used Up By Menu Costs. Menu costs are incurred by changing sticky prices. So they are also affected by inflation-induced price-leapfrogging. For the direct use of resources to pay menu costs, what matters is the unit in which prices are set. If prices are set in terms of an electronic unit of account, menu costs will be minimized at zero inflation relative to the electronic unit of account.

Causing Confusion. The costs under the heading “messing up price signals” all persist in models in which all the agents are infinitely intelligent and optimize fully subject to (sometimes ad hoc) constraints. There are serious additional costs of inflation that arise when cognition is finite. I consider these the main costs of inflation. Let me list some of the likely types of confusion and some of their consequences.

  • making people blame something they call “inflation” (though it is not the general rise in prices and wages that macroeconomists refer to when they say “inflation”) for the fact that their real wage is not higher than it is
  • causing unintended distortions in the tax code, and in particular a higher effective rate of capital taxation than elected representatives may have intended
  • leading people to mistake nominal rates of return for real rates of return when deciding how much they need to save for retirement
  • muddling intertemporal comparisons more generally.

Greg Mankiw gives this parable about the cost of muddling intertemporal comparisons in his best-selling Brief Principles of Macroeconomics textbook (p. 260):

Imagine that we took a poll and asked people the following question: ‘This year the yard is 36 inches. How long do you think it should be next year?’ Assuming we could get people to take us seriously, they would tell us that the yard should stay the same length—36 inches. Anything else would complicate life needlessly.

All of these costs are from inflation in the unit of account, where in this case it is the most literal sense of “unit of account” that matters. As long as people are thinking in terms of electronic dollars, euros, yen, pounds, etc., confusion costs will be minimized by zero inflation in the electronic unit of account.

Note that having zero inflation in the electronic unit of account would, in turn, encourage people to think in those terms. In addition, public education, accounting rules, and the tax system can be used to explicitly encourage households to think in terms of the electronic unit of account.

Unpredictability of Inflation. Zero inflation is quite focal for many decisions, so zero inflation is likely to minimize the costs of unpredictable inflation. Because being focal has to do with the yardsticks people carry in their minds, this cost too is about inflation relative to the unit of account.

Causing People to Use Too Little Paper Currency. Socially, there is very little direct cost to providing paper currency. To the extent that paper currency provides convenience and helps avoid transactions costs associated with credit and debit card transactions, private costs of using paper currency will lead people to use too little paper currency. As long as paper currency is at par with electronic money, the key private costs of using paper currency are the following (a toy calculation combining them appears after the list):

  1. the chances of theft,
  2. the gap between the checking account interest rate and the paper currency interest rate, and
  3. the “shoe-leather costs” of making more trips to the ATM in order to keep the first two costs down. 
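Here is a toy Baumol–Tobin-style calculation in Python combining these three costs (my own sketch; every parameter value is invented for illustration):

```python
from math import sqrt

def private_cost_of_cash(annual_spending: float, spread: float,
                         theft_rate: float, cost_per_trip: float) -> float:
    """Minimized annual private cost of using paper currency.

    spread     = checking account rate minus paper currency rate (cost 2)
    theft_rate = expected annual loss rate on cash held (cost 1)
    ATM trips  = the shoe-leather cost (cost 3), traded off Baumol-Tobin style.
    """
    holding_rate = spread + theft_rate
    # Optimal withdrawal W* = sqrt(2 * trip cost * spending / holding rate);
    # the average cash balance is W*/2.
    w_star = sqrt(2 * cost_per_trip * annual_spending / holding_rate)
    trips = annual_spending / w_star
    return trips * cost_per_trip + (w_star / 2) * holding_rate

# A small spread keeps the total cost of using cash modest...
print(private_cost_of_cash(5000, spread=0.001, theft_rate=0.01, cost_per_trip=1.0))
# ...while a five-percentage-point spread more than doubles it in this example.
print(private_cost_of_cash(5000, spread=0.05, theft_rate=0.01, cost_per_trip=1.0))
```

With these made-up numbers, the cost is about $10 a year on $5,000 of cash spending when the spread is ten basis points, and about $24 when the spread is five percentage points: keeping the spread small is what keeps paper currency from being much discouraged.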

The key point here is this: although an electronic money system sometimes has a negative paper currency interest rate, that would occur when checking account interest rates are very low or negative. That is, the spread between the checking account interest rate and the paper currency interest rate can be kept small–except when paper currency is already back at par and checking account rates are at distinctly positive levels. (Note also that if the inflation target is lowered, nominal interest rates won’t be as far above the zero paper currency interest rate when paper currency is kept at par.) Thus, any substantial costs from people using too little paper currency would arise from

  • a choice of the central bank to leave some spread between the paper currency interest rate and the target interest rate so that retail banking as we know it could continue to make non-negative economic profits, and
  • a choice of the central bank to keep paper currency from going above par to obtain the benefits of paper currency being at par much of the time.

If neither of these considerations were a concern, the central bank could keep the paper currency interest rate equal to the target rate at all times–or even above it by the extent of the theft rate–to avoid all costs coming from people using too little paper currency.

Let me try to drive home the point with these additional remarks:

  1. Costs coming from too little use of paper currency are primarily about the spread between the paper currency interest rate and other interest rates, not about the level of these rates. They have nothing to do with the rate of inflation per se, either relative to electronic money or relative to paper currency, except when paper currency is being kept at par. When paper currency is being kept at par, lower inflation will lead to lower nominal interest rates on everything but paper currency, and so will lead to less underuse of paper currency. 
  2. If there are any serious problems from too small a spread between paper currency and other interest rates, the central bank’s ability in an electronic money system to choose the paper currency interest rate can help avoid these costs.
  3. The point of making it possible to have negative paper currency interest rates (by time-varying paper currency deposit fees) is not to disadvantage paper currency. Rather, it is to ensure that there is nowhere to hide from negative interest rates (either in paper currency or in the bank) other than taking on risks in a way that reduces risk premia, buying goods or services, or generating capital outflows and thereby stimulating net exports. (If negative interest rates, both in the bank and in paper currency, prevailed worldwide, then the only places to hide would be taking risks in a way that reduces risk premia and thereby leads to additional physical investment purchases, or directly buying goods and services.)

Will Having Paper Currency Away from Par Discourage People from Using It? The one remaining issue about how much paper currency is used concerns the effects of being away from par. To the extent people get a discount on paper currency at the bank in a way that exactly makes up for the extra paper currency needed to make purchases at the store, the effects on the use of paper currency should wash out, except to the extent that the extra computation cost for paper currency discourages its use. But that effect should be overwhelmed by the fact that retailers can choose the conversion factor to make those who purchase with credit or debit cards pay for the extra transaction fees. Thus, under an electronic money system, the true resource cost of credit and debit card transactions is likely to be somewhat better transmitted to the customers who make the decision whether to use credit or debit cards or paper currency. Getting what is in effect a “cash discount” some of the time should encourage people to use paper currency more, in a way that gets closer to the socially optimal level of use of paper currency. 

The Possible Benefits of Inflation

All of the benefits of inflation come from a second-best narrowing of some other distortion. Here are the three logical possibilities that I am aware of. The first depends directly on the statutory unit of account used for tax calculations. The other two depend on the unit in which prices and wages are set, which is also likely to be the electronic unit of account.

Raising the Effective Rate of Capital Taxation (If that is Good Rather than Bad). In my view, rates of capital taxation are too high. If I am right, one additional cost of inflation is that it raises capital taxation even further, when capital taxation is already too high. But if one thought that elected representatives had set rates of capital taxation too low, one might be in favor of higher inflation to raise the effective rate of capital taxation. Here it is important to recognize that the models that are the most favorable to capital taxation involve relatively sudden capital taxation, either once at the beginning of fiscal time and never again (raising the problem of needing a clear legal definition of the beginning of fiscal time), or during particular bad contingencies. That kind of capital taxation cannot be achieved by steady inflation, and steady inflation is correspondingly more costly and dangerous as a tool for taxing capital.

Making It Easier for Firms to Lower the Real Wages of Particular Employees. There is a considerable amount of evidence that it is difficult for firms to lower nominal wages because of negative effects on the morale of both the employee whose wage is cut and all the other employees to whom that one complains. When inflation is positive, real wages can be lowered by leaving nominal wages the same, or increasing nominal wages by only a small amount. The lower inflation is, the more difficult it is to lower real wages without lowering nominal wages. The trouble with not being able to lower real wages is that a firm might then want to reduce how much it uses employees whose marginal product has declined, but whose real wage has not. It could lay those employees off or cut their hours. Both options are socially inefficient compared to reducing those employees’ wages and continuing to employ them fully.

This cost of blocking otherwise appropriate cuts in the real wage is a potentially important benefit of inflation. However, I think it is relatively easy to deal with this issue in ways other than inflation. In particular, having a substantial portion of pay in an annual bonus makes it much easier to reduce annual nominal wages. This is the way things work for a large share of firms in Japan. In a very low or zero-inflation environment, it is likely that firms would gravitate toward this solution on their own. But it is also straightforward to encourage this kind of solution by public policy, as Martin Weitzman details in his underappreciated classic The Share Economy, which is the bible for minimizing whatever costs are caused by nominal wage rigidity (including the costs of messing up price signals discussed above).   

Leading Firms to Lower Their Markups of Price Over Marginal Cost and Wage Setters to Lower Their Markups of the Wage Over the Opportunity Cost of Time. Because of discounting of the future, a sticky price or wage will be adapted somewhat more closely to the immediate future than to the more distant future. First consider firms. Since inflation tends to give an increasing track to the price a firm would want to have absent any costs or limitations on price changing, the immediate future of the firm tends to suggest a lower price than the more distant future. Thus, heavier discounting should interact with positive inflation to encourage the firm to have a lower price. If price is above marginal cost to begin with, a lower markup tends to increase efficiency. But firms should discount at the shadow interest rate they should use to evaluate investment projects. The interaction of this interest rate with the amount of inflation that occurs in other prices while a particular price is fixed creates only a small effect.

For workers involved in setting wages above the opportunity cost of time, Liam Graham and Dennis Snower have argued in their well-written paper “Hyperbolic Discounting and Positive Optimal Inflation” that workers who disagree with their future selves about the way to discount one month relative to the next might plausibly have very high effective discount rates. They go on to argue that inflation could then have a significant benefit in leading these present-biased workers to accept lower wages, which would be closer to their opportunity cost of time. In my view, their story, while intriguing, seems fragile. It depends on work hours being determined by contractual wages at a quite fine-grained level. It also requires believing workers could see this as a significant issue and still not press for more frequent wage adjustments. Finally, it depends on workers being at once hyper-rational and internally conflicted, when experiments by Daniel Benjamin, Sebastian Brown and Jesse Shapiro (reported in their paper “Who is ‘Behavioral’? Cognitive Ability and Anomalous Preferences”) suggest that present-bias is associated with low cognition. It is probably safe to say that Liam Graham and Dennis Snower have given a best-case scenario for their effect, in finding that it raises the optimal level of inflation in a model with no zero lower bound issue from zero to 2.1%.

The Bottom Line for the Long-Run Inflation Target

Whatever the optimal target for long-run inflation is when there is a zero lower bound, the optimal target for long-run inflation is likely to be lower in the absence of a zero lower bound. The overall benefit of repealing the zero lower bound is

  • the benefit that repealing the zero lower bound would have if the long-run inflation target were held fixed, PLUS
  • the benefit of lowering the long-run inflation target from its previous value to whatever value is optimal in the absence of the zero lower bound. 

I remain unimpressed by the purported benefits of inflation other than steering away from the zero lower bound. I will be surprised if a nation that repeals its zero lower bound does not also gradually lower its long-run inflation target to zero.

Conclusion

Repealing the zero lower bound has some costs, but those costs should be weighed against the benefits: not only ending recessions, but also ending inflation. The key analytical point is that, by and large, the costs of inflation are costs of inflation relative to the unit of account. Thus,

  • if electronic money provides the unit of account (including the unit of account for price and wage setting),
  • and inflation is close to zero in terms of the electronic unit of account,
  • then one can have inflation relative to paper currency without serious costs,
  • as long as the central bank keeps the spread between the paper currency interest rate and the checking account interest rate small.

Noah Smith: God and SuperGod


Detail from the ceiling of the Sistine Chapel, by Michelangelo

This is a guest religion post by Noah Smith. I am truly delighted to hand over my pulpit to Noah for this powerful sermon.

(You can see more religion posts in my “Religion, Humanities and Science” sub-blog.)


How does God know he’s God?

I’m serious. Think about this for a moment. God - as described in the Bible - is the most powerful being in the Universe. But how does He know that there isn’t an even more powerful being - call it “SuperGod” - who has chosen to stay completely hidden up until now? Since the hypothetical SuperGod is, hypothetically, even more powerful than God, there’s no way for God to know that SuperGod does not in fact exist.

This is true whether or not there is a SuperGod! Even if there is no SuperGod - even if God really is the most powerful being in the Universe - God will never know for sure that this is the case. And of course if there were a SuperGod, then He also couldn’t be certain that there wasn’t a SuperDuperGod out there somewhere!

Conclusion: The most powerful being in the Universe, whoever that happens to be, will never be certain of His (or Her) status as such.

Now before you reach for the keyboard to write a quick reply (“Of course God knows He’s God, God knows everything, DUH!”), realize that I’m not trying to catch theists with a clever “gotcha” or make a logical argument against religion. Instead, I’m trying to illustrate an important point about the nature of the God of the Bible. God’s most defining and important attribute isn’t that He’s the most powerful and wise being in the Universe; in fact, it doesn’t really matter if He is or not. The most important thing about God is that He chooses to take responsibility for the world.

Think about it. God chooses to create life and humanity, set down laws, punish evil and reward good, send people to various afterlives, and dictate the fate of nations. He doesn’t waste time wondering if there is a SuperGod somewhere out there. He doesn’t need to know for certain that He’s the most powerful being in the Universe; all He knows is that He’s the most powerful being in the neighborhood.

Kind of like you and me.

Some people claim to receive direct communication from God. Others claim to witness miracles. But most of us go through life without seeing direct evidence of the God of the Bible. Instead, we go through life wondering if we’re the most powerful beings in the Universe. And we have to decide whether to take responsibility for those less powerful than us - animals, children, the weak and the poor.

There’s a strong instinct to abdicate that responsibility - to look at things like global warming, poverty, environmental destruction, human misery in all its forms and say “God will take care of that.” For some people it’s not God, but “the free market”, or “evolution”, or “history”. But even if you believe in those things, you don’t really know that they’ll make everything right, any more than God knows whether a hidden SuperGod is guiding all of His actions. 

The truth is, whether you like it or not, it’s all on you. The responsibility for those weaker than yourself is not on God’s or the free market’s or history’s or evolution’s head, it’s on your head. So think hard about what you’re going to do with all your power.

Learning to Do Deep Knee Bends Balanced on One Foot

I am 53 now and sometimes think forward to some of the dangers of getting older. I read a few years ago that Tai Chi exercises improve balance enough to significantly reduce falls that can sometimes break older bones. I don’t know where to find time for Tai Chi itself in my schedule, so I cut corners. I just do a daily set of deep knee bends balanced on one foot: 18 reps on the right leg, and 20 reps on the left leg, because that one is weaker and needs more strengthening. I had a pretty tough time getting to where I could do that many repetitions without toppling over again and again and having to catch myself with my hands. But gradually, gradually, I could do a few more repetitions in a row before toppling over, until now I don’t have too much trouble doing 18 or 20 in a row.

I think of this as a good analogy for a lot of learning: making mistakes and carefully correcting them, over and over again, until very gradually the number of mistakes diminishes. If you aren’t willing to fall–many times–in order to learn, you will fail.

Marc F. Bellemare's Story: "I'm Bad at Math"

Link to “I’m Bad at Math: My Story” on Marc’s blog

I think it is very valuable to share one another’s stories about what the idea that math ability is primarily genetic did to our lives. My story is at this link. Marc Bellemare wrote his story on his blog, and kindly agreed to let me publish it here on supplysideliberal.com as well.


Last week, Miles Kimball and Noah Smith, two economists (one at Michigan, one at Long Island) had a column on the Atlantic’s website (ht: Joaquin Morales, via Facebook) in which they took to task those who claim that math ability is genetic.

Kimball and Smith argue that that’s largely a cop-out, and that there is no such thing as “I’m bad at math.” Rather, being good at math is the product of good, old-fashioned hard work:

Is math ability genetic? Sure, to some degree. Terence Tao, UCLA’s famous virtuoso mathematician, publishes dozens of papers in top journals every year, and is sought out by researchers around the world to help with the hardest parts of their theories. Essentially none of us could ever be as good at math as Terence Tao, no matter how hard we tried or how well we were taught. But here’s the thing: We don’t have to! For high-school math, inborn talent is much less important than hard work, preparation, and self-confidence.

How do we know this? First of all, both of us have taught math for many years—as professors, teaching assistants, and private tutors. Again and again, we have seen the following pattern repeat itself:

  1. Different kids with different levels of preparation come into a math class. Some of these kids have parents who have drilled them on math from a young age, while others never had that kind of parental input.
  2. On the first few tests, the well-prepared kids get perfect scores, while the unprepared kids get only what they could figure out by winging it—maybe 80 or 85%, a solid B.
  3. The unprepared kids, not realizing that the top scorers were well-prepared, assume that genetic ability was what determined the performance differences. Deciding that they “just aren’t math people,” they don’t try hard in future classes, and fall further behind.
  4. The well-prepared kids, not realizing that the B students were simply unprepared, assume that they are “math people,” and work hard in the future, cementing their advantage.

Kimball and Smith’s column resonated deeply with me, because I discovered quite late (but just in time) that hard work trumps natural ability any day of the week when it comes to high-school math–if not when it comes to PhD-level math for economists.

My Story

What follows is a story which, although I have mentioned it to a few colleagues in the past, I’ve never told publicly until I posted it on my blog on November 6.

Until my early 20s, I never knew that one could become good at math. In high school, I failed tenth-grade math. That year, I’d had mono, so that provided a convenient excuse that I could use when I would tell people that I had to take tenth-grade math again in the summer.

That summer, though, I worked really hard at math, and I did very well, scoring something like 96%. But I ascribed my success to the people I was competing with rather than to my own hard work. The class, after all, was entirely composed of other failures, and in the kingdom of the blind, the one-eyed man is king.

When I began studying economics in college, I enrolled in a math for economists course the first semester. I quickly dropped out of it, thinking it was too difficult (and to be sure, the textbook was somewhat hardcore for a first course in math for economists). The following semester, I enrolled in the same course, which was taught by a different instructor, one who seemed a bit more laid-back and who taught it at a level that was better suited for someone like me.

As it turns out, that instructor was a Marxian, so one of the things he taught was the use of Leontief matrices, or input-output models. Like the clueless college student that I was back then, I decided that that stuff was not important, and so skipped studying it for the final.

Much to my surprise, 60% of the final was on Leontief matrices, and so I failed the course and had to take it again the next semester. Even that second time around, I didn’t do that great, scraping by with a C+ (which, if I recall correctly, was the average score in core econ major courses at the Université de Montréal back then).

After finally passing Math for Economists I, I realized I had to take Math for Economists II, which was reputed to be very difficult. But for some reason, it was then that I remembered my tenth-grade math summer course, and how my hard work had seemed to yield impressive results back then. So I decided to really apply myself in that second Math for Economists course, and I got an A.

When I saw my transcript that semester, I finally saw the light: I had been terrible at math all my life because I hadn’t worked hard at it; in fact, I hadn’t worked at all up until that point, and here I was, getting an A in one of the hardest classes in the major.

I graduated with a 3.2 GPA, which wasn’t great considering that my alma mater has a 4.3 scale. But it was enough to get admitted into the M.Sc. program in Economics at the Université de Montréal, and so I applied and got in. But then, I remembered that my hard work had paid off handsomely during my senior year, and I decided to apply myself in every single class. Lo and behold, I did well. So well, in fact, that I finished my M.Sc. with a 4.1 GPA, which allowed me not only to get admitted for a Ph.D. in Applied Economics at Cornell, but to get a full financial ride, including a fellowship for my first year.

Perhaps more importantly, my cumulative experience with the hard work–excellent results nexus boosted my confidence, and it taught me that I could do well in a graduate program in applied economics. Indeed, Cornell was then known for the difficulty of its qualifier in microeconomic theory (which was administered back then by the economics department and was on all of Mas-Colell et al. and more). In any given year, half of all the students (i.e., applied economics, business, and economics students) taking it would fail.

To be sure, I had to work very, very hard during my first year, but I managed to pass my qualifying exam the first time around (thankfully, we applied economics students didn’t have to take the macro qualifier; we only needed to get a B- in one of the core macro courses). In fact, many of my classmates who seemed to rely on their “natural” ability to do math (including folks who had been math majors in college) ended up failing the micro qualifier.

That series of successes achieved through hard work was eventually what gave me the confidence to do a little bit of micro theory: in the first essay in my dissertation, I developed a dynamic principal-agent model to account for the phenomenon I was studying empirically. And ultimately, I published an article in the American Journal of Agricultural Economics (AJAE) that relied entirely on microeconomic theory (and thus on quite a bit of math), an article for which my coauthor and I won that year’s best AJAE article award.

Ironically enough, in that article, we cited Miles Kimball’s 1990 Econometrica paper on prudence.

The Tweets on Faith

My post “The Unavoidability of Faith” provides the theoretical background for the column “There’s One Key Difference Between Kids Who Excel at Math and Those Who Don’t” that Noah and I wrote. “The Unavoidability of Faith” argues that faith is an unavoidable component of decision-making–including when making economic decisions. “The Tweets on Faith” storifies a set of very interesting Twitter discussions sparked by “The Unavoidability of Faith.” Its title is a riff on The Lectures on Faith.

JP Koning: The Zero Lower Bound as an Instance of Gresham's Law in Reverse

Link to JP Koning’s post on his Moneyness blog, which is illustrated by this painting of Sir Thomas Gresham (c. 1554) by Anthonis Mor

Once again, JP Koning has written an erudite and brilliantly clear post on the zero lower bound in historical context, with an application to current policy debates. I have mirrored it here with his permission.


The zero lower bound may seem like a new problem, but I’m going to argue that it’s only the most recent incarnation of one of the most ancient conundrums facing monetary economists: Gresham’s law. A number of radical plans to evade the zero lower bound have emerged, including Miles Kimball's electronic money plan. When viewed with an eye to history, however, plans like Miles’s are really not so radical. Rather, they are only the most recent in a long line of patches that have been devised by monetary tinkerers to spare the monetary system from Gresham-like monetary problems.

Here’s an old example of the problem. At the urging of Isaac Newton and John Locke, British authorities in 1696 embarked on an ambitious project to repair the nation’s miserable silver coinage. This three-year effort consumed an incredible amount of time and energy. Something unexpected happened after the recoinage was complete. Almost immediately, all of the shiny new silver coins were melted down and sent overseas, leaving only large denomination gold coins in circulation.

What explains this incredible waste of time and effort? Because it offered to freely coin both silver and gold at fixed rates, the Royal Mint effectively established an exchange ratio between gold and silver. English merchants in turn accepted gold and silver coins at face value, or the mint’s official rate, and debts were payable in either medium at the given rate. Unfortunately, the ratio the Mint had chosen overvalued gold relative to the world price and undervalued silver. Rather than spend their newly minted silver coins to buy £x worth of goods or to settle £y of debt, the English public realized that it was more cost-effective to use overvalued gold coins to purchase £x or settle £y. Then, if they melted down their full-bodied silver coins and sent them across the Channel, the silver therein would purchase a higher quantity of real goods, say £x+1 goods, or settle more debts than at home, say £y+1 debts.
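Here is that arbitrage as a back-of-the-envelope Python calculation (my own illustration; the two ratios are round numbers for the sake of the example, not the historical figures):

```python
# Sketch of the bimetallic arbitrage: when the mint ratio overvalues gold,
# melted silver buys more gold abroad than silver coin commands at home.

MINT_RATIO = 15.5   # hypothetical: ounces of silver valued as one ounce of gold at the Mint
WORLD_RATIO = 14.8  # hypothetical: the ratio prevailing across the Channel

def melt_gain() -> float:
    """Extra purchasing power from melting silver coin and selling it abroad,
    relative to spending it at home at the Mint's face value."""
    gold_per_silver_home = 1.0 / MINT_RATIO    # gold an ounce of silver commands at home
    gold_per_silver_abroad = 1.0 / WORLD_RATIO # gold the same silver buys overseas
    return gold_per_silver_abroad / gold_per_silver_home - 1.0

print(f"melt-and-export premium: {melt_gain():.1%}")  # about 4.7% with these numbers
```

Any positive premium, sustained by a fixed mint ratio, is enough to pull the entire silver coinage out of circulation over time.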

Newton and Locke had run into Gresham’s law. When the monetary authority defines the unit of account (£, $, ¥) in terms of two different mediums, the market will always choose to transact using the overvalued medium while hoarding and melting down the undervalued medium. “Bad” money drives out the “good.” (For a better explanation, few people know more about Gresham’s law than George Selgin.)

The abrupt switches between metals that characterized bimetallism weren’t the only manifestation of Gresham’s law. Constant shortages of silver change in the medieval period were another sign of the law in operation. Over time, a realm’s silver coinage would naturally wear out as it was passed from hand to hand. Clippers would shave off the edges of coins, and counterfeiters would introduce competing tokens that contained a fraction of the silver. Any new coins subsequently minted at the official standard would be hoarded and sent elsewhere. After all, why would an owner of a “good” full-bodied silver coin spend it on, say, a chicken at the local market when a “bad” debased silver coin would be sufficient to consummate the transaction? The result was a dearth of new full-bodied coins, leaving only a fixed amount of deteriorating silver coins to serve as exchange media.

This sort of Gresham-induced silver coin shortage, a common phenomenon in the medieval period, was the very problem that Newton and Locke initially set out to fix with their 1696 recoinage. Out of the Gresham pan into the Gresham fire, so to say, since Newton and Locke’s fix only led to a different, and just as debilitating, encounter with Gresham’s law: the flight of all silver out of Britain.

Over the centuries, a number of technical fixes have been devised to fight silver coin shortages. Milling the edges of coins made clipping more obvious to the eye, thereby deterring the practice. High quality engravings, according to Selgin (pdf), rendered counterfeiting much more difficult. Selgin also points out that the adoption of restraining collars in the minting process created rounder and more uniform coins. Adding alloys to silver and gold strengthened coins and allowed them to circulate longer without being worn down. These innovations helped to prevent, or at least delay, a distinction between good and bad money from arising. As long as degradation of the existing coinage could be forestalled by technologies that promoted uniformity and durability, any new coins made to the official standard would be no better than the old coins. New coins could now circulate along with the old, reducing the incidence of coin shortages. Gresham’s law had been cheated.*

Let’s bring this back to modern money. As I wrote earlier, Gresham’s Law is free to operate the moment that the unit of account is defined with reference to two different mediums rather than just one. In the case of bimetallism, the pound was defined as a certain amount of silver and of gold, whereas in a pure silver system the unit was defined in terms of old debased silver coins and new full-bodied silver coins. In our modern economy, £, $, ¥ are defined in terms of two different mediums—central bank deposits and central bank notes.

Normally this dual definition of modern units doesn’t cause any problems. However, when economic shocks hit, a central bank may be required to reduce interest rates to a negative level in order to execute monetary policy. Say it attempts to do so by setting a -5% interest rate on central bank deposits. The problem is that bank notes will continue to yield 0%, since the technical wherewithal to create a negative rate on cash has not yet been developed. This disparity in returns allows a distinction between good and bad money to suddenly emerge. Just as full-bodied silver coins were prized relative to debased silver coins, the public will have a preference for 0%-yielding cash over -5%-yielding deposits. It’s Gresham’s Law all over again, with a twist…

…when rates fall to -5%, it isn’t the bad money that chases out the good, but the mirror image. Everyone will convert bad deposits into good cash, or, as Miles describes it, we get massive paper storage. All deposits having been converted into cash, the central bank loses its ability to reduce interest rates below 0%: it has hit the zero lower bound.

In this case, the reason that the good drives out the bad rather than the opposite is because a modern central bank promises to costlessly convert all notes into deposits and vice versa at a 1:1 rate. If bad -5% deposits can be turned into good 0% notes, who wouldn’t jump on the opportunity?

To make our analogy to previous standards more accurate, consider that this sort of “reverse-Gresham effect” would also have arisen in the medieval period if the mint had promised to directly convert debased silver coinage into good coins at a 1:1 rate.** As it was, mints typically converted metal into coin, not coin into coin. If mints, like central banks, had offered direct conversion of bad money into good, everyone would have jumped at the opportunity to get more silver from the mint with less silver. Good coin would have rapidly chased bad coin out of circulation as the latter medium was brought to the mint. In offering citizens such a terrific arbitrage opportunity, the mint could very quickly go bankrupt.

Here’s a medieval-era example of the “reverse Gresham effect.” When it called in the existing circulating silver coinage to be reminted in 1696, Parliament decided to accept these debased coins at their old face value rather than at their actual, and much diminished, weight. In the same way that everyone would quickly convert bad -5% deposits into good 0% cash given the chance, everyone jumped at this opportunity to turn bad coin into good. John Locke criticized this policy, noting that upon the announcement, clippers would begin to reduce the existing coinage even more rapidly. After all, every coin, no matter how debased, would ultimately be redeemed with a full-bodied coin. Why not clip an old coin a bit more before bringing it in for conversion? Even worse, since the recoinage was to take two years, profiteers could repeatedly bring in bad coin for full-bodied coin, clip their new good coins down into bad ones, and return them to the mint for more good coin. Locke pointed out that this would come at great expense to the mint, and ultimately the tax-paying public. [For a good account of Locke’s role in the 1696 recoinage, read Morrison’s A Monetary Revolution.]

Just as the reverse-Gresham effect would cripple a mint, allowing free conversion of -5% deposits into 0% notes would be financial suicide for a bank. As I’ve suggested here, any private note-issuing bank that found it necessary to reduce rates below zero would quickly try to innovate ways to save themselves from massive paper conversion. Less driven by the profit motive, central banks have been slow to innovate ways to get below zero. Rather, they have avoided the reverse-Gresham problem by simply keeping rates high enough that the distinction between good and bad money does not emerge.

In order to allow a central bank to set negative rates without igniting a reverse-Gresham rush into cash, Kimball has proposed the replacement of the permanent 1:1 conversion rate between cash and deposits with a variable conversion rate. Now when it reduces rates to -5%, a central bank would simultaneously commit itself to buying back cash (i.e., redeeming it) in the future at an ever-worsening rate to deposits. As long as the loss imposed on cash amounts to around 5% a year, depositors will not convert their deposits to cash en masse when deposit rates hit -5%. This is because cash will have been rendered equally “bad” as deposits, thereby removing the good/bad distinction that gives rise to the Gresham effect. The zero lower bound will have been removed.
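Here is a minimal numerical sketch in Python of the crawling conversion rate Koning describes (my own illustration; the matching 5% figures are assumptions chosen to make the point):

```python
# Sketch: with deposits at -5%, let the deposit value of a paper dollar
# crawl down by 5% a year. Cash then loses value (in deposit units) at the
# same rate deposits do, so converting to cash no longer beats staying put.

DEPOSIT_RATE = -0.05  # assumed policy rate on central bank deposits
CRAWL_RATE = -0.05    # assumed drift in cash's conversion rate vs. deposits

def values_after(years: int) -> tuple[float, float]:
    deposits = (1 + DEPOSIT_RATE) ** years  # $1 left in deposits
    cash = (1 + CRAWL_RATE) ** years        # $1 of cash, measured in deposit units
    return deposits, cash

for t in (1, 2, 5):
    d, c = values_after(t)
    print(f"year {t}: deposits {d:.4f} vs cash {c:.4f}")  # equal: no good/bad distinction
```

With the crawl matched to the deposit rate, the two columns come out identical, which is exactly why the rush into paper never gets started.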

To summarize, Kimball’s variable conversion rate between cash and deposits is a technical fix to an age-old problem. Gresham’s law (and the reverse-Gresham law) kick in when the unit of account is defined by two different mediums, one of which becomes the “good” medium and the other the “bad”. When this happens, people will all choose to use only one of the two mediums, a choice that is likely to cause significant macroeconomic problems. In the medieval days, it led to shortages of small change. Nowadays it prevents interest rates from going below 0.

In this respect, Miles’s technical fix is no different from the other famous fixes that have been adopted over the centuries to reduce the good vs bad distinction, including milled coin edges, high quality engravings, alloys, mint devaluations, and recoinages. Milled edges may have been new-fangled when they were first introduced five centuries ago, but these days we hardly bat an eye at them. While Miles’s suspension of par conversion may seem odd to the modern observer, one hundred years from now we’ll wonder how we got by without it. In the meantime, the longer we put off fixing our modern incarnation of the Gresham problem, the more likely it is that future recessions will be deeper and longer than we are used to, all because we refuse to innovate ways to get below zero.

*Debasing the mint price, or the amount of silver put into new coins (otherwise known as a devaluation, explained in this post), was another way to ensure that old and new silver coins contained the same amount of silver. A devaluation rendered all new coin equally “bad” as the old coin, ensuring that Gresham’s law was no longer free to operate. In addition to devaluations, constant recoinages re-standardized the nation’s circulating medium. Much like a devaluation, a recoinage removed the distinction between good and bad coins, at least for a time, thereby nullifying the Gresham effect and putting a pause to coin shortages.

** In a bimetallic setting, the process would have worked like this. Say that the mint promised to redeem gold with silver coins and vice versa at the posted fixed rate. When this rate diverged from the market rate, buyers needn’t have sent the overvalued coin overseas to secure a market price. They only had to bring all their overvalued coins (the bad ones) to the mint to exchange for undervalued ones (the good ones), until at last no bad coins remained. Thus the good drives out the bad. In the meantime, the mint would probably have gone out of business.

Quartz #34—>Janet Yellen is Hardly a Dove—She Knows the US Economy Needs Some Unemployment

Link to the Column on Quartz

Here is the full text of my 34th Quartz column “Janet Yellen is Hardly a Dove–She Knows the US Economy Needs Some Unemployment” now brought home to supplysideliberal.com. It was first published on October 11, 2013. Links to all my other columns can be found here.

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© October 11, 2013: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2015. All rights reserved.

Below, after the text of the column as it appeared in Quartz, I note some of the reactions and explain some of the math behind the column.   


President Obama was right to say his appointment of Janet Yellen to head the US Federal Reserve has been one of his most important economic decisions. As the graph below shows, from the mid-1980s through 2007, monetary policy kept US GDP growth fairly steady, without needing much help from Keynesian fiscal policy. Economists talk about this period when GDP growth was much steadier than before as “The Great Moderation.” Monetary policy has done less well in years since the financial crisis in 2008, because the Fed felt it could not lower its target interest rate below zero, and has not been fully comfortable with its backup tools of quantitative easing and “forward guidance” about what it will do to interest rates years down the road.

Yellen’s academic research on the theory of unemployment points to one of the key reasons it is important to keep the growth of the economy steady. Let me explain.

With her husband George Akerlof, who was among recipients of the Nobel Prize in Economics in 2001, Yellen edited “Efficiency Wage Models of the Labor Market,” which gives one of the leading theories of why some level of unemployment persists even in good times, and why unemployment gets much worse in bad times. Yellen summarized the major variants of Efficiency Wage Theory. They all share the idea that firms often want to pay their workers more than their workers can get elsewhere. It might seem that employers would always want to pay workers as little as possible, but badly paid workers don’t care much about keeping their jobs.

Low pay affords workers an attitude of “Take this job and shove it!” If workers have no reason to obey you because they are just as well off without the job—and owe you nothing—it will be hard to run a business. And if you hire someone at very low pay who actually sticks around, it is reasonable to worry about what is wrong with the worker that makes it so that worker can’t do better than the miserable job you are offering them. The way out of this trap is for an employer to pay enough that the worker is significantly better off with the job than without the job.

It might sound like a good thing that firms have a reason to pay workers more, except that, according to the Efficiency Wage Theory, firms have to keep raising wages until workers are too expensive for all of them to get hired. The reasoning goes like this: There will always be some jobs that are at the bottom of the heap. Suppose some of those bottom-of-the-heap jobs are also dead-end jobs, with no potential for promotion or any other type of advancement. If bottom-of-the-heap, dead-end jobs were free for the taking, no one would ever worry about losing one of those jobs. The Johnny Paycheck moment—when the worker says “Take this job and shove it”—will not be long in coming. If they were free for the taking, bottom-of-the-heap, dead-end jobs would also be subject to high turnover and low levels of emotional attachment to the firm.

The only way a bottom-of-the-heap, dead-end job will ever be worth something to a worker is if there is something worse than a bottom-of-the-heap, dead-end job. In Efficiency Wage Theory, that something worse is being unemployed. To make workers care about bottom-of-the-heap, dead-end jobs, employers have to keep raising their wages above what other firms are offering until workers are expensive enough that there is substantial unemployment—enough unemployment that being unemployed is worse than having one of those bottom-of-the-heap, dead-end jobs. For the worker, Efficiency Wage Theory is bittersweet.

Some of what counts as unemployment in the official statistics arises from people in between jobs who simply need a little time to identify and decide among all the different jobs potentially available to them. And some is from people who have an unrealistic idea of what kinds of jobs are potentially available to them. But let me call the part of unemployment due to this Efficiency-Wage-Theory logic motivational unemployment. In the case of motivational unemployment, there will be people who are unemployed who are essentially identical to people who do have jobs. It is just bad luck on the part of the unemployed to be allotted the social role of scaring those who do have jobs into doing the boss’s bidding.

In criminal justice, swift, sure punishment does not need to be as harsh as slow, uncertain punishment. Just so, in Efficiency Wage Theory, the better and faster bosses are at catching worker dereliction of duty, the less motivational unemployment is needed. Because it is easier to motivate workers when worker dereliction of duty is detected more quickly, firms will stop raising wages and cutting back on employment at lower levels of unemployment.

There are other conceivable ways to reduce the necessity of motivational unemployment in the long run.

  1. If all jobs had advancement possibilities—that is, no jobs were dead-end jobs—it might be possible to motivate workers by the hope of moving up the ladder. This works best if workers actually learn and get better at what they do over time by sticking with a job.
  2. If doing what needs to be done on the job could be made more pleasant, it would reduce the need for the carrot of above-market wages or the stick of unemployment.
  3. If workers could trust firms not to cheat them and were required to pay for their jobs, they would be afraid of having to pay for a job all over again if they were fired.
  4. There could be a threat other than unemployment, such as deportation.
  5. Unemployment could be made less attractive.
  6. Workers’ reputations could be tracked more systematically and made available online.

To make possibilities 5 and 6 more concrete, let me mention online activist Morgan Warstler’s thought-provoking (if Dickensian and possibly unworkable) proposal that would make unemployment less attractive and would better track workers’ reputations: an “eBay job auction and minimum income program for the unemployed.” The program would require those receiving unemployment insurance or other assistance to work in a temp job within a certain radius of the worker’s home. The employer would go online to bid on an employee to hire, and the wages would offset some of the cost of government assistance. Both the history of bids and an eBay-like rating system of the workers would give later employers a lot of useful information about the worker. Workers would also give feedback on firms, to help ferret out abuses. It is obvious that many of the policies that Efficiency Wage Theory suggests might reduce unemployment would be politically toxic and some (such as using the threat of deportation to keep employees in line) are morally reprehensible. But some of those policies merit serious thought.

What does Efficiency Wage Theory have to say about monetary policy? The details of how motivational unemployment works matter. Think about bottom-of-the-heap, dead-end jobs again. As the unemployment rate goes down in good times, the wage firms need to pay to motivate those workers goes up faster and faster, creating inflationary pressures. But the wages of those jobs at the bottom are already so low that when unemployment goes up in bad times, it takes a lot of extra unemployment to noticeably reduce the wages that firms feel they need to pay and bring inflation back down. This is one of several reasons, and possibly the biggest, that the round trip of letting inflation creep up and then having to bring it back down is a bad deal. And a round trip in the other direction—letting inflation fall as it has in the last few years with the idea of bringing it back up later—is just as costly. (You can see the fall in what the Fed calls “core” inflation—the closest thing to being the measure of inflation the Fed targets—in the graph below.) It is much better to keep inflation steady by keeping output and unemployment at their natural levels.

The conventional classification divides monetary policy makers into “hawks,” who hate inflation more than unemployment, and “doves,” who hate unemployment more than inflation. Most commentators classify Janet Yellen as a dove. But I parse things differently. There can be serious debates about the long-run inflation target. I have taken the minority position that our monetary system should be adapted so that we can safely have a long-run inflation target of zero. But as long as there is a consensus on the Fed’s monetary policy committee that 2% per year (in terms of the particular measure of inflation in the graph above) is the right long-run inflation target, it is entirely appropriate for Janet Yellen to think that inflation below 2% is too low in any case, so that further monetary stimulus is beneficial not only because it lowers unemployment, but also because it raises inflation towards its 2% target level.

To see the logic, imagine some future day in which everyone agreed that the long-run inflation target should be zero. Then, if inflation were below the target (in that case, actual deflation), almost everyone would agree that monetary stimulus would be good not only because it lowered unemployment, but also because it raised inflation from negative values toward zero. Anyone who wants to make the case for a long-run inflation target lower than 2% should make that argument, but otherwise they should not be too quick to call Janet Yellen a dove for insisting that the Fed should keep inflation from falling below the Fed’s agreed-upon long-run inflation target of 2%.

Nor should anyone be called a hawk and have the honor of being thought to truly hate inflation if they are not willing to do what it takes to safely bring inflation down to zero and keep it there. Letting inflation fall willy-nilly because a serious recession has not been snuffed out as soon as it should have been is no substitute for keeping the economy on an even keel and very gradually bringing inflation down to zero, with all due preparation.

There is also no special honor in having a tendency to think that a dangerous inflationary surge is around the corner when events prove otherwise. One feather in Yellen’s cap is the Wall Street Journal’s determination that her predictions for the economy have been more accurate than any of the other 14 Fed policy makers analyzed. For the Fed, making good predictions about where the economy would go without any policy intervention, and what the effects of various policies would be, is more than half the battle. Differences in views about the relative importance of inflation and unemployment pale in comparison to differences in views about how the economy works in influencing policy recommendations. Having a good forecasting record is not enough to show that one understands how the economy works, but over time, having a bad forecasting record certainly indicates some lack of understanding—unless one is learning from one’s mistakes.

In the last 10 years, America’s economic policy-making apparatus as a whole made at least two big mistakes: not requiring banks to put up more of their own shareholders’ money when they took risks, and not putting in place the necessary measures to allow the Fed to fight the Great Recession as it should have, with negative interest rates. It is time for America’s economic policy-making apparatus to learn from its mistakes, on both counts.

As the saying goes, “It’s difficult to make predictions, especially about the future.” But I will hazard the prediction that if the Senate confirms her appointment, monetary historians 40 years from now will say that Janet Yellen was an excellent Fed chief. There will be more tough calls ahead than we can imagine clearly. As president of the San Francisco Fed from 2004 to 2010, and as vice chair of the Fed since then, Yellen has brought to bear on her role as a policymaker both skills in deep abstract thinking from her academic background and the deep practical wisdom also known as “common sense.” It is time for her to move up to the next level.


Reactions and the Math Behind the Column

Ezra Klein: Given his 780,386 Twitter followers, a tweet from Ezra Klein is worth reporting. I like his modification to my tweet: 

No, she’s a human being RT @mileskimball: Don’t miss my column “Janet Yellen is hardly a dove”http://blog.supplysideliberal.com/post/63725670856/janet-yellen-efficiency-wages-and-monetary-policy

Andy Harless’s Question: Where Does the Curvature Come From? Andy Harless asks why there is an asymmetry—in this case a curvature—that makes things different when unemployment goes up than when it goes down. The technical answer is in Carl Shapiro and Joseph Stiglitz’s paper “Unemployment as a Worker Discipline Device.” It is not easy to make this result fully intuitive. A key point is that unemployed folks find jobs again at a certain rate. This rate, together with the rate at which diligent workers leave their jobs for exogenous reasons, dilutes the motivation from trying to reduce one’s chances of losing a job. The discount rate r also dilutes any threats that get realized in the future. So the key equation is 

dollar cost of effort per unit time = (wage - unemployment benefit) · detection rate ÷ [detection rate + rate at which diligent workers leave their jobs + rate at which the unemployed find jobs + r]

That is, the extra pay people get from work only helps deter dereliction of duty according to the fraction of the sum of all the rates that comes from the detection probability. And the job finding rate depends on the reciprocal of the unemployment rate. So as unemployment gets low, the job finding rate seriously dilutes the effect of the detection probability times the extra that workers get paid.

(The derivation of the equation above uses the rules for dealing with fractions quite heavily, backing up the idea in the WSJ article I tweeted as follows.

The Dividing Line: Why Are Fractions Key to Future Math Success?http://on.wsj.com/15rlupS

Deeper intuition for the equation above would require developing a deeper and more solid intuition about fractions in general than I currently have.)

Solving for the extra pay needed to motivate workers yields this equation:

(wage - unemployment benefit) = dollar cost of effort per unit time · [detection rate + rate at which diligent workers leave their jobs + rate at which the unemployed find jobs + r] ÷ detection rate

In labor market dynamics the rates are high, so a flow-in-flow-out steady state is reached fairly quickly, and we can find the rate at which the unemployed find jobs from the equation flow in = flow out, or, since in equilibrium the firms keep all their workers motivated,

rate at which diligent workers leave their jobs · number employed = rate at which the unemployed find jobs · number unemployed.

Solving for the rate of job finding:

rate at which the unemployed find jobs = rate at which diligent workers leave their jobs · number employed ÷ number unemployed

Finally, it is worth noting that

rate at which diligent workers leave their jobs + rate at which the unemployed find jobs

= rate at which diligent workers leave their jobs · [number unemployed + number employed] ÷ number unemployed

= rate at which diligent workers leave their jobs ÷ unemployment rate
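To see the curvature in numbers, here is a short Python sketch of the no-shirking wage premium implied by the equations above (my own illustration; all parameter values are invented):

```python
# Required wage premium from the equation above:
# (wage - benefit) = effort cost * (q + s + f + r) / q,
# with job finding rate f = s * (1 - u) / u from the steady-state flow condition.

E = 1.0   # dollar cost of effort per unit time (assumed)
Q = 0.5   # detection rate (assumed)
S = 0.2   # rate at which diligent workers leave their jobs (assumed)
R = 0.04  # discount rate (assumed)

def required_premium(u: float) -> float:
    """Wage minus unemployment benefit that just deters shirking."""
    job_finding_rate = S * (1 - u) / u
    return E * (Q + S + job_finding_rate + R) / Q

for u in (0.03, 0.05, 0.08, 0.12):
    print(f"u = {u:.0%}: required premium = {required_premium(u):.2f}")
```

With these numbers, the required premium falls a lot between 3% and 5% unemployment but only a little between 8% and 12%: tight labor markets push required wages up fast, while extra slack relieves the pressure only slowly, which is the asymmetry Andy was asking about.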

Morgan Warstler’s Reply: The original link in the column about Morgan Warstler’s plan was to a Modeled Behavior discussion of his plan. Here is a link to Morgan Warstler’s own post about his plan. Morgan’s reply in the comment thread is important enough that I will copy it out here so you don’t miss it:

1. The plan is not Dickensian. It allows the poor to earn $280 per week for ANY job they can find someone to pay them $40 per week to do. And it gives them the online tools to market themselves.

Work with wood? Those custom made rabbit hutches you wish you could get the business off the ground on? Here ya go.

Painter, musician, rabbit farmer, mechanic - dream job time.

My plan is built to be politically WORKABLE. The Congressional Black Caucus, the Tea Party and the OWS crowd. They are beneficiaries here.

2. No one in economics notices the other key benefit: the cost of goods and services in poor zip codes goes down, so the $280 minimum GI check buys 30% more! (conservative by my napkin math) So real consumption goes up A LOT.

This is key, bc the effect is a steep drop in income inequality, and mobility.

That $20 gourmet hamburger in the ghetto costs $5, and it’s kicking McDonalds ass. And lots of hipsters are noticing that the best deals, on things OTHER THAN HOUSING are where the poor live.

Anyway, I wish amongst the better economists there was more mechanistic thinking about how things really work.