The Deep Magic of Money and the Deeper Magic of the Supply Side

Introduction

I will assume that you have read The Lion, the Witch and the Wardrobe, seen the movie, or don’t intend to do either. So I won’t worry about spoiling the story for you. C.S. Lewis’s fantasy is set in the world of Narnia. WikiNarnia explains the laws of nature in Narnia that drive the plot of The Lion, the Witch and the Wardrobe:

The Deep Magic was a set of laws placed into Narnia by the Emperor-beyond-the-Sea at the time of its creation. It was written on the Stone Table, the firestones on the Secret Hill and the sceptre of the Emperor-beyond-the-Sea.

This law stated that the White Witch Jadis was entitled to kill every traitor. If someone denied her this right then all of Narnia would be overturned and perish in fire and water.

Unknown to Jadis, a deeper magic from before the start of Time existed which said that if a willing victim who had committed no treachery was killed in a traitor’s stead, the Stone Table would crack and Death would start working backwards.

Like the Deep Magic and the Deeper Magic in Narnia, in macroeconomics, money is the Deep Magic and the supply side is the Deeper Magic. In the short run, money rules the roost. In the long run, pretty much, only the supply side matters. In this post, I want to trace out what happens when a strong monetary stimulus is used to increase output and reduce unemployment. In the short run, output will go up, but in the long run, output will return to what it was.

The Deep Magic of Money

Let me start by explaining why money is the Deep Magic of macroeconomics. There are many people in the world today who think it is hard to make output go up, and that we need to resort to massive deficit spending by the government–whether from government spending meant to stimulate the economy or from tax cuts meant to stimulate the economy. But as I explained in an earlier post, Balance Sheet Monetary Policy: A Primer, there are few limits to the power of money to make output go up in the short run.

Money as a Hot Potato when the Short-Term Safe Interest Rate is Above Zero. When short-term safe interest rates such as the Treasury bill rate or the federal funds rate at which banks lend to each other overnight are positive, almost all economists agree that money is very powerful. Suppose the Federal Reserve (“the Fed”) or some other central bank prints money to buy assets. In this context, when I say “money” I mean currency (in the U.S., green pieces of paper with pictures of dead presidents on them) or the electronic equivalent of currency–what economists sometimes call “high-powered money.” (When the Fed creates the electronic equivalent of currency, it isn’t physically “printing” money, but it might as well be.) The Fed requires banks to hold a certain amount of high-powered money in reserve for every dollar of deposits they hold. Any high-powered money that a bank holds beyond that is not needed to meet the reserve requirement and is usually not a good deal, because it earns an interest rate of zero (unless the Fed decides to pay more than that for the electronic equivalent of currency held in an account with the Fed). So inside the banking system, reserves beyond those that are required–called “excess reserves”–are usually a hot potato. Also, outside the banking system, at an interest rate of zero, high-powered money is normally a “hot potato” that households and firms other than banks try to spend relatively quickly, since every minute they hold high-powered money they are losing out on the higher interest rates they could earn on other assets, such as Treasury bills. I say “relatively” quickly because there is some convenience to currency. So if the Fed prints high-powered money to buy assets, that hot-potato money stimulates spending until people and firms wind up with enough deposits in bank accounts that most of the high-powered money is used up meeting banks’ requirements to hold reserves against deposits, while the rest is in people’s pockets or the equivalent for convenience.
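
To make the end of that hot-potato process concrete, here is a minimal sketch in Python; the reserve ratio and dollar figures are purely hypothetical, chosen for illustration:

```python
# A minimal sketch of the hot-potato arithmetic above. The reserve ratio
# and dollar amounts are hypothetical, chosen only for illustration.

def deposits_supported(high_powered_money, reserve_ratio, currency_held):
    """Deposits that absorb the injected high-powered money once the hot
    potato stops moving: banks hold `reserve_ratio` of reserves per dollar
    of deposits, and `currency_held` stays in people's pockets."""
    reserves = high_powered_money - currency_held
    return reserves / reserve_ratio

# Suppose the central bank prints $100 billion, the public keeps $20 billion
# as pocket cash, and banks hold 10 cents of reserves per dollar of deposits.
print(deposits_supported(100e9, 0.10, 20e9))  # 800000000000.0 = $800 billion
```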

What Happens at the Zero Lower Bound on the Nominal Interest Rate. Many things change when short-term, safe interest rates such as the federal funds rate or the Treasury bill rate get very low, near zero. Then high-powered money is no longer a hot potato, either inside or outside the banking system. Banks and firms and households become willing to keep large piles of high-powered money–piles doing nothing (something even many non-economists have remarked upon lately). In the U.S., extremely low interest rates are a relatively new thing, but Japan has had extremely low interest rates for a long time; in Japan, it is not unusual for people to have thick wads of 10,000-yen notes (worth about $100 each) in their wallets. There are economists who believe that when short-term safe interest rates are essentially zero–so that high-powered money is no longer a hot potato–money has lost its magic. Not so. Printing money to buy assets has two effects: one from the printing of the money, the other from the buying of the assets. The effect from buying the assets can be important, depending on what asset the Fed is buying.

Normally, the Fed likes to buy Treasury bills when it prints money. But buying Treasury bills really does lose its magic after a while. Interest rates on Treasury bills falling to zero is equivalent to people being willing to pay a full $10,000 for the promise of receiving $10,000 three months later. (You can see that the interest rate is then zero, since you don’t get any more dollars back than what you put in. If you paid less than $10,000 at first, then you would be getting more dollars back at the end than what you put in, so you would be earning some interest.) No one is willing to pay much more than $10,000 for the promise of $10,000 in three months, since other than the cost of storage, one can always get $10,000 in three months just by finding a very good hiding place for $10,000 in currency. So when the interest rate on Treasury bills has fallen to zero, it is not only impossible to push that interest rate significantly below zero; it is also impossible–and this turns out to be the same thing–to push the price of a Treasury bill that pays $10,000 in three months significantly above $10,000.
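
Here is a small numeric sketch of that Treasury bill arithmetic, using simple discounting over three months; the yields are made up for illustration:

```python
# A numeric sketch of the Treasury bill arithmetic above, using simple
# discounting over three months; the yields are made up for illustration.

def tbill_price(face_value, annual_rate, months=3):
    """Price today of a promise of `face_value` in `months` months."""
    return face_value / (1 + annual_rate * months / 12)

for rate in [0.04, 0.01, 0.00]:
    print(rate, round(tbill_price(10_000, rate), 2))
# 0.04 9900.99  -> a positive yield means paying less than $10,000 now
# 0.01 9975.06
# 0.0 10000.0   -> at a zero yield the price equals the face value, and
#                  $10,000 in a good hiding place caps it there
```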

Fortunately, there are many other assets in the world to buy other than Treasury bills. Unfortunately, the Fed only has the legal authority to buy a few types of assets. It can buy long-term U.S. Treasury bonds. It can buy mortgage-backed assets from Fannie Mae and Freddie Mac–which are companies that were created by the government to make it easier for people to buy houses. (They used to be somewhat separate from the government, despite being created by the government, but the government had to fully take them over in the recent financial crisis.) The Fed can buy bonds issued by cities and states. It can also buy bonds issued by other countries (as long as the bonds are reasonably safe), but usually doesn’t, since other countries would have strong opinions about that. A key thing the Fed does not feel it is allowed to do is to buy packages of corporate stocks and bonds. Still, with the menu of assets the Fed clearly is allowed to buy, it can have a big effect on the economy, even when short-term, safe interest rates are basically zero.

If the Fed buys packages of mortgages, it pushes up the price of those mortgage-backed assets. When the price of mortgage-backed assets is high, financial firms become more eager to lend money for mortgages, even though they remain somewhat cautious because they (or others who serve as cautionary tales) were burned by mortgages that went sour as part of the financial crisis. If financial firms become eager to lend against houses, more people will be able to refinance and spend the money they get or that they save from lower monthly house payments, and some may even build a new house.

If the Fed buys long-term Treasury bonds, that pushes up their price, making them more expensive. Some firms and households who had intended to buy Treasury bonds will now find them too pricey as a way to get a fixed payoff in the future. With Treasury bonds too pricey, they will look for ways to get payoffs in the future that are not so pricey now. They may hold onto their hats and buy corporate bonds or even corporate stock, despite the risk. That makes it easier for companies to raise money by selling additional stocks and bonds. Up to a point it also pushes up the price of stocks and bonds, so that people looking at their brokerage accounts or their retirement accounts feel richer and may spend more. If you don’t believe me, just watch how joyous the stock market seems every time the Fed surprises people by announcing that it will buy more long-term Treasury bonds than people expected–or how disappointed the stock market seems every time the Fed surprises people by announcing that it won’t buy as many long-term Treasury bonds as people had expected.

The Cost of the Limited Range of Assets the Fed is Allowed to Buy. It is true that at some point the legal limits on what the Fed is allowed to buy will put a brake on how much the Fed can stimulate the economy. But that does not deny the power of money to raise the price of assets and stimulate the economy; it only means that when we don’t allow newly created money to be used to buy a wide range of assets, money is hobbled. Aside from the effect that limits on what the Fed can buy have on the ability of money to stimulate the economy, those limits also affect the cost of what the Fed does. If the Fed is only allowed to buy a narrow range of assets, it will have to push the price of each of those assets up a lot to get the desired effect, and then when it sells them again to avoid the economy overheating, it may lose money from the roundtrip of buying high (when it pushed the price up by buying) and selling low (when it later pulls the price down by selling). This is a bigger problem the lower the interest rate on a given type of asset is to begin with. It is also a bigger problem the longer-term an asset is. So risky assets that have higher interest rates to begin with–and perhaps, especially, risky short-term assets–are better in that regard.

Summarizing the Deep Magic of Money. The bottom line is that in the short run, money has deep magic that can stimulate the economy as much as desired. Right now, the power of money is about as circumscribed as it ever is, and yet it still has its magic. And yet, I claim, as almost all other economists claim, that in the long run, the supply side will win out. Not only will the supply side win out in the long run, but in the long run, money has virtually no power to affect anything important–unless continual, rampant printing of money drives the economy into the disaster of hyperinflation, or a serious shortage of money causes prices to fall in a long-lasting bout of deflation. (The fact that, short of hyperinflation or deflation, money has virtually no power to affect anything important in the long run is called monetary superneutrality.) How can money have so much power in the short run and so little in the long run?

The Deeper Magic of the Supply Side

The answer to how money can have so much power in the short run and so little in the long run is that the supply side will bend in many ways in the short run, but will always bounce back.

Price Above Marginal Cost Makes Output Demand-Determined in the Short Run. To begin with, the most basic way in which the supply side is accommodating in the short run is that if a firm has–for some period of time–fixed a price above the cost to produce an extra unit of its good or service (the marginal cost), then it is eager to sell its good or service to any extra customer who walks in the door. And firms will, in general, set their prices at least a little above what it normally costs to produce an extra unit, as long as they can do so without losing all of their existing customers. Here is why. Thinking in long-run terms, if the firm sets its price equal to marginal cost, then it doesn’t earn anything from the last few customers. So losing those last few customers by raising the price a little does no harm. And raising the price a little means that all of the customers who don’t bolt will now be paying more–more that will go into the firm’s pocket. Raising the price too high puts that extra pocket money in jeopardy, so the firm won’t raise prices too high, but it will raise the price at least some above marginal cost as long as it doesn’t lose all of its customers by doing so. To summarize, if firms do fix prices for some length of time as opposed to changing them all the time, they are likely to set those prices above what it normally costs to produce an extra unit of the good or service they sell. And if price is above marginal cost, then given a temporarily fixed price, the amount by which price is above marginal cost is what the firm gets on net when an extra customer walks in the door. For example, produce a widget for a marginal cost of $6, sell it for $10, and take home $4 as extra profits.

So firms who won’t lose every last customer by raising their price will set price above marginal cost, and then will typically be eager to sell to an extra customer during the period when their price is fixed. I say “typically” because if enough new customers walked in the door, then marginal cost might increase enough above normal to exceed the fixed price. Then the firm would lose money by selling further units, and would make up an excuse to tell customers why it won’t sell more. The usual excuse is “we have run out”–which is a polite way of saying that they could do more, for a high enough price, but won’t for the price they have actually set. But since the firm will set price some distance above marginal cost to begin with, there is some buffer in which marginal cost can increase without going above the price. And anywhere in that buffer zone, the firm will still be eager to serve additional customers.
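
A minimal sketch of the logic of the last two paragraphs, using the widget numbers from the text (the other marginal-cost values are hypothetical):

```python
# A minimal sketch of the fixed-price logic above, using the $10/$6 widget
# example from the text; the other marginal-cost values are hypothetical.

PRICE = 10  # the temporarily fixed price of a widget

def gain_from_extra_sale(marginal_cost, price=PRICE):
    return price - marginal_cost

for mc in [6, 8, 10, 12]:
    gain = gain_from_extra_sale(mc)
    print(mc, gain, "eager to sell" if gain > 0 else "'we have run out'")
# 6 4 eager to sell        <- the $4 of extra profits from the example
# 8 2 eager to sell        <- still inside the buffer zone
# 10 0 'we have run out'
# 12 -2 'we have run out'  <- marginal cost has climbed past the fixed price
```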

How Extra Output is Produced in the Short Run. How does the firm actually produce extra units in the short run? Here it is more interesting to broaden the scope to the whole economy. (Much of what follows is drawn from a paper I teamed up with Susanto Basu and John Fernald to write: “Are Technology Improvements Contractionary?”–a paper that has to consider what happens as a result of changes in demand before it can begin to address what happens with a supply-side change in technology.) When the amount customers are spending increases, so that firms need to produce more to serve that extra quantity demanded, the firms may, at the end of the day, hire additional employees. But that is usually a last resort. There are many other ways to increase output short of hiring a new employee. Here are three avenues to increase production even before hiring new workers:

  1. ask existing employees to stay longer and work more hours in a week and take fewer vacations;
  2. ask existing employees to work harder while they are at work–to be more focused while at their stations or their desks, and to spend less time away from their work at the water cooler;
  3. delay maintenance of the factory, training, and other activities that can help the firm’s productivity in the long run, but don’t help produce the output the customer needs today.

The Workweek of Capital. One thing that doesn’t have time to contribute much to output when demand goes up is new machines and factories. It is simply hard to add new machines and factories fast enough to contribute that big a percentage of the increase in output. But people working longer hours with the same number of machines and factories don’t necessarily have to crowd around the limited number of machines and workspaces, since those machines and workspaces were often unused after hours anyway. So when the workers work longer, so do their machines and workspaces. Even when new workers are added, they can often be added in a new shift at a time when the machines and workspaces had been unused. So the fact that it is hard to quickly add extra factories and machines is not as big a limitation to output in the short run as one might think. Of course using machines and workspaces around the clock has costs. Extra wear and tear is one cost, but probably a bigger cost is having to pay people extra to be willing to work at the inconvenient hours of a second or third shift. (Note that paying an inexperienced worker working at night the same as a more experienced worker during the day is also paying extra beyond what the inexperienced worker would be worth for production if he or she were working during the day.)

Reallocation of Labor. At the economy-wide level another contribution to higher GDP in a boom is that in a boom the amount of work done tends to increase most in those sectors of the economy where a 1% increase in inputs leads to considerably more than a 1% increase in output–that is, in sectors such as the automobile sector where there are strong economies of scale (also called increasing returns to scale). These tend to also be sectors in which the price of output, and therefore the marginal value of output, is the furthest above the marginal cost of output. So when more work is done in those sectors, it adds a lot of valuable output that adds a lot to GDP–a lot more than if extra work by that same person were done in another sector where the price (and therefore the marginal value) of output is not as far above marginal cost.

Okun’s Law. When firms are finally driven to hiring additional workers, this still doesn’t reduce the number of “unemployed” workers by an equal amount, for the simple reason that, when firms are hiring, more people decide it is a good time to look for a job, and go from being “out of the labor force” (not looking for work) to “in the labor force and unemployed” (looking for work but haven’t found it yet). So in addition to all the ways that firms can increase output without hiring extra workers, the fact that hiring extra workers causes more workers to look for work also makes it hard to make the unemployment rate go down. So hard, in fact, that the current estimates for what is called “Okun’s Law” (after the economist Arthur Okun) say that it typically takes 2% higher output to make the unemployment rate 1 percentage point lower. (Note that a typical constant-returns-to-scale production function would say that 2% higher output would require 3% higher labor input. Thus, if 2% higher output came simply from hiring extra workers with a constant-returns-to-scale production function, then the unemployment rate would go down almost 3 percentage points. So the details of how firms manage to produce more output matter a lot.)
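
As a back-of-the-envelope reading of Okun’s Law as stated here (the coefficient of 2 comes from the estimates mentioned above; this is a rough rule of thumb, not a forecasting tool):

```python
# A back-of-the-envelope reading of Okun's Law as stated above: roughly 2%
# of extra output per percentage point of reduction in unemployment.

OUTPUT_PCT_PER_POINT = 2.0

def unemployment_change(extra_output_pct):
    """Approximate change in the unemployment rate, in percentage points."""
    return -extra_output_pct / OUTPUT_PCT_PER_POINT

print(unemployment_change(2.0))  # -1.0: 2% more output, 1 point less unemployment
print(unemployment_change(1.0))  # -0.5
```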

The Supply Side in the Short Run and in the Long Run. That is the story of the short run. Extra money increases the amount that firms and households want to spend. Firms accommodate that extra desire to spend because price is above marginal cost. They actually produce the extra output by a combination of hiring extra workers and asking existing workers to work longer and harder, in a way that often takes advantage of economies of scale. Firms also may focus their productive efforts more on immediately salable output. They deal with a relatively fixed number of workspaces and machines by keeping the factory or office in operation more hours of the week.

The thing to notice is that both the ways in which the firms accommodate extra demand and their motivation for doing so rely on things that won’t last forever. Workers may work longer and harder without complaint for a while, but sooner or later they will start to grumble about the long hours and the pace of work, and maybe begin looking for another job. Of course, they may not even have to look for another job, since with a booming economy, a job may come looking for them. So even a boss who is too dense to realize all along the strain he or she is putting workers through will eventually realize the cost of those extra hours and effort as wages get driven up by labor market competition. What is more, the boss will eventually get around to raising the firm’s prices in line with this increased marginal cost, as the “shadow wage” of the extra strain on workers goes up (something smart bosses will pay attention to) and ultimately the actual wage goes up (which will catch the attention of even dense bosses).

As prices rise throughout the economy, another force kicks in: workers will realize they are working hard for a paycheck that doesn’t stretch as far anymore, and start to wonder “Is it worth spending so many long, hard, late hours at work?” Even when the workers’ answer is still “On balance, yes,” because the answer is no longer “YES!” they will not jump to the boss’s orders with the same speed anymore, which will make the boss see the workers as less productive, and therefore see a higher marginal cost of output. All of this speeds the increase in prices even more, and speeds the return of hours worked and intensity of work to a normal pace. The temporary bending of the supply side toward greater production will be undone. There are things that permanently affect the supply side, but short of a monetary disaster, money is not one of those things. Short of a monetary disaster, and leaving aside tiny effects, money only matters in the short run. Economists call this monetary superneutrality, and they express the fact that money only matters in the short run by saying that “the long-run aggregate supply curve is vertical.”

Three Codas: Inflation Magic, Sticky vs. Flexible Prices, and Federal Lines of Credit

Is There Any Direct Magic by which Money Causes Inflation? A crucial aspect of the story above is that money causes a general increase in prices–inflation–only by increasing output, through all the measures discussed above for producing more output. Some economists think that printing money can cause inflation even if it doesn’t lead to an increase in output. Money has magic, but not that kind of magic.

Let me discuss the two closest things I can think of to money having some direct magic that could raise inflation even without an increase in output.

  1. First, to some extent, inflation can be a self-fulfilling prophecy. If firms believe that prices will be higher in the future, those who have gotten around to changing prices will set higher prices now. So if firms believed that printing money could cause inflation without increasing output, then to some extent it would. But I see no evidence that many firms believe this. They know how hard it is for them to raise prices in their own industry when demand is low.
  2. Printing money to buy assets drives up the prices of assets in general, as financial investors look for assets that are still reasonably priced to buy, bringing up their prices as well. Many commodities, such as oil, copper, and even cattle, have an asset-like quality because they can be used either now or later. (And copper–and depending on the use, cattle–can be used both now and later.) When the Fed pushes up the prices of assets, it pushes up what people are willing to pay now for a payout down the road. That pushes up the price of oil, copper, and cattle now. This looks like inflation, but it is not a general increase in prices, but an increase in commodity prices relative to other prices in the economy. When the economy cools down (often, unlike the story above, because the Fed sells assets to mop up money and cool down an overheated economy), all of these increases in commodity prices go in reverse, and the roundtrip effect on the overall price level from the rise and fall of commodity prices along the way is modest. 

Sticky Prices vs. Flexible Prices. Some prices are relatively flexible and quick to change, while others are fixed for a relatively long period of time. (I don’t emphasize wages, which are often fixed for as much as a year at a time, since a smart boss should realize that in a long-term relationship, a high level of strain on workers, which can come on quickly, leads to extra costs even if the actual wage changes only slowly.) Prices are especially flexible and quick to change for the long-lasting commodities I discussed above, and for relatively unprocessed food such as bananas and orange juice. (In relatively unprocessed food, most of the cost is from the ingredients and bringing the food to the customer rather than the processing. And it is hard to differentiate one’s product from the competition’s product, so the price can’t be pushed very far above marginal cost.) Another interesting area where prices are very flexible is air travel, where ticket prices can change dramatically from one week to the next. By contrast, prices are fixed for relatively long periods of time for most services. (My wife Gail is a massage therapist. I know that massage therapists think long and hard before they raise prices on their clients, and warn their clients long in advance about any price increase. In an even more extreme example, it is not uncommon for psychotherapists to keep their price fixed for a given client during the whole period of treatment, even if it lasts for years.) The prices of manufactured goods are in between in their degree of flexibility.

When demand is high so that the economy booms, flexible prices move up quickly, while sticky prices move up only slowly. But when the economy cools down, the flexible prices can easily reverse course, while the sticky prices have momentum. (Greg Mankiw and Ricardo Reis explain one mechanism behind this momentum in their paper “Sticky Information Versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve”: firms’ sense of the rate of inflation–often based on old news–feeds into their price-setting. In this account, inflation feeds on past inflation, which shapes firms’ sense of what the current rate of inflation is.) So, perhaps counterintuitively, it is inflation in sticky prices that is the most worrisome. The Fed is right to focus its worries about inflation on what is happening to the sticky prices. In the news, this is described as focusing on “core inflation”–the overall rate of price increases for goods other than oil and food.

The existence of a mix of flexible and sticky prices in the economy is important for macroeconomic models, since it means that higher aggregate demand will have some immediate effect on prices (because of the flexible prices), but the effect on the overall price level will still be limited (because of the sticky prices). Economists often describe this as the “short-run aggregate supply curve” sloping upward–as opposed to being vertical, as it would be if all prices were flexible, or horizontal, as it would be if all prices were sticky. The existence of a mix of flexible and sticky prices is also important because it means that this “short-run aggregate supply curve” can shift when flexible prices change for reasons other than the level of aggregate demand. (Unfortunately, the most obvious reason the “short-run aggregate supply curve” might shift is a war in the Middle East that raises the price of oil in a way that is not due to the level of aggregate demand.)

Federal Lines of Credit. I have focused on monetary policy in this post, arguing that traditional fiscal stimulus–government spending or tax cuts meant to stimulate the economy in the short run–is inferior because it adds so much to the national debt. But  there is one type of fiscal policy that adds relatively little to the national debt, as I discuss in my post “Getting the Biggest Bang for the Buck in Fiscal Policy.” The “Federal Lines of Credit” I propose in that post are a type of fiscal policy that is similar in some ways to monetary policy, since Federal Lines of Credit involve the government making loans to households. Federal Lines of Credit, like money, have deep magic, but in the long run their effects on output will also be countered by the deeper magic of the supply side.

Miles's Teaching Tumblog

There are some posts I am using for my Principles of Macroeconomics class that I think most readers of this blog will not be interested in, but that I want to make available to anyone who is interested. I am going to put them on a secondary Tumblr blog. Here is the link, spelled out:

http://profmileskimball.tumblr.com

I will also put a link on my sidebar, so you can always access it.

The first post on my Teaching Tumblog is up. I typically won’t announce each post on my Teaching Tumblog. You will have to go look to see what is there.  

Why I am a Macroeconomist: Increasing Returns and Unemployment

During my first year in Harvard’s Economics Ph.D. program (1983-1984), I thought to myself that I could never be a macroeconomist, because I couldn’t figure out where the equations came from in the macro papers we were studying. In my second year, I focused on microeconomic theory, with Andreu Mas-Colell as my main role model. Then, during the first few months of calendar 1985, I stumbled across Martin Weitzman’s paper “Increasing Returns and the Foundations of Unemployment Theory” in the Economics Department library. Marty’s paper made me decide to be a macroeconomist. (I took the macroeconomics field courses and began working on writing some macroeconomics papers the following year, my third year–the year Greg Mankiw joined the Harvard faculty–and went on the job market in my fourth year.) I want to give you some of the highlights from “Increasing Returns and the Foundations of Unemployment Theory,” not only so you can see what affected me so strongly, but also because it includes ideas that every serious economist should have in his or her mental arsenal. Marty’s paper is a “big-think” paper. It would have a lot to say even if all of the equations were stripped out of it.

There is one important piece of background before turning to Marty’s paper: Say’s Law. In Say’s own words, organized by the wikipedia article on Say’s Law:

In Say’s language, “products are paid for with products” (1803: p. 153) or “a glut can take place only when there are too many means of production applied to one kind of product and not enough to another” (1803: p. 178-9). Explaining his point at length, he wrote that:

It is worthwhile to remark that a product is no sooner created than it, from that instant, affords a market for other products to the full extent of its own value. When the producer has put the finishing hand to his product, he is most anxious to sell it immediately, lest its value should diminish in his hands. Nor is he less anxious to dispose of the money he may get for it; for the value of money is also perishable. But the only way of getting rid of money is in the purchase of some product or other. Thus the mere circumstance of creation of one product immediately opens a vent for other products. (J. B. Say, 1803: pp.138–9)

Say’s Law is sometimes expressed as “Supply creates its own demand.” Say’s Law seems to deny the possibility of Keynesian unemployment–unemployed workers who are identical in their productivity to workers who have jobs, and are willing to work for the same wages, but cannot find a job in a reasonable amount of time. The argument of Say’s Law needs to be countered in some way in order to argue for the existence of Keynesian unemployment. Marty paints a picture of Keynesian unemployment in this way:

In a modern economy, many different goods are produced and consumed. Each firm is a specialist in production, while its workers are generalists in consumption. Workers receive a wage from the firm they work for, but they spend it almost entirely on the products of other firms. To obtain a wage, the unemployed worker must first succeed in being hired. However, when demand is depressed because of unemployment, each firm sees no indication it can profitably market the increased output of an extra worker. The inability of the unemployed to communicate effective demand results in a vicious circle of self-sustaining involuntary unemployment. There is an atmosphere of frustration because the problem is beyond the power of any single firm to correct, yet would go away if only all firms would simultaneously expand output. It is difficult to describe this kind of ‘prisoner’s dilemma’ unemployment rigorously, much less explain it, in an artificially aggregated economy that produces essentially one good.

Marty mentions one economic fact that has big implications even outside of business cycle theory. A remarkable fact about the political economy of trade is that trade policy often favors the interests of producers over the interests of consumers. Why are producer lobbies more powerful than consumer lobbies? The key underlying fact is that “Each firm is a specialist in production, while its workers are generalists in consumption.” So particular firms and the workers of those firms care a huge amount about trade policy for the good that they make, while the many consumers who would each benefit a little from a lower price with free imports are not focused on the issue of that particular good, since it is only a small share of their overall consumption. The exceptions, where consumer interests take the front seat in policy making, are typically where the good in question is a very large share of the consumption bundle (such as wheat or rice in poor countries) or where trade policy for many different goods has been combined into an overall trade package that could make a noticeable difference for an individual consumer. Other political actions that depart from the free market often follow a similar principle–either favoring a producer, or favoring household interests in relation to a good that is a large share of the household’s budget, such as rent, or a very salient good such as gasoline, which seems to consumers as if it is even more important for their budgets than it really is.

After painting the picture of the world that he wants to provide a foundation for, Marty dives into his main argument–that increasing returns is essential if one wants to explain unemployment. 

In this paper I want to argue that the ultimate source of unemployment equilibrium is increasing returns. When compared at the same level of aggregation, the fundamental differences between classical and unemployment versions of general equilibrium theory trace back to the issue of returns to scale.

More formally, I hope to show that the very logic of strict constant returns to scale (plus symmetric information) must imply full employment, whereas unemployment can occur quite naturally with increasing returns

He argues that much the same issues would arise with increasing returns to scale stemming from a wide variety of different causes:

The reasons for increasing returns are anything that makes average productivity increase with scale - such as physical economies of area or volume, the internalisation of positive externalities, economising on information or transactions, use of inanimate power, division of labour, specialisation of roles or functions, standardisation of parts, the law of large numbers, access to financial capital, etc., etc.

Marty lays out a sequence of three models. Here are the first two models or “stages”:

III. STAGE I: SELF SUFFICIENCY    

Suppose each labourer can produce α units of any commodity. In such a world the economic problem has a trivial Robinson Crusoe solution. A person of attribute type i simply produces and consumes α units of commodity i.

IV. STAGE II: SMALL SCALE SPECIALISATION    

Now suppose a person of type (i,j) prefers to consume i but has a comparative advantage in producing j.

In such an economy there can be no true unemployment because there are no true firms. If anyone is declared 'unemployed’ by a firm, he can just announce his own miniature firm, hire himself, and sell the product directly on a perfectly competitive market.

In the context of the “Stage II” model, Marty points to increasing returns to scale not only as the explanation for unemployment, but also as what makes plants discrete entities (in this paper he does not distinguish between plants and firms):  

In a constant returns economy the firm is an artificial entity. It does not matter how the boundary of a firm is drawn or even if it is drawn at all. There is no operational distinction between the size of a firm and the number of firms.

Also, increasing returns to scale is the reason it is typical for a firm, defined in important measure by its capital, to hire workers, rather than the other way around. With constant returns to scale, workers could easily hire capital and there would be less unemployment: 

When unemployed factor units are all going about their business spontaneously employing themselves or being employed, the economy will automatically break out of unemployment.

One reason increasing returns to scale is so powerful in its effects is that it is closely linked to imperfect competition–as constant returns to scale is closely linked to perfect competition. 

The seemingly institution-free or purely technological question of the extent of increasing returns is a loaded issue precisely because the existence of pure competition is at stake.

To emphasise a basic truth dramatically, let the case be overstated here. Increasing returns, understood in its broadest sense, is the natural enemy of pure competition and the primary cause of imperfect competition. (Leave aside such rarities as the monopoly ownership of a particular factor.) 

After laying out a particular macroeconomic model with increasing returns to scale, Marty directly addresses Say’s law, writing this: 

Behind a mathematical veneer, the arguments used in the new classical macroeconomics to discredit steady state involuntary unemployment are implicitly based on some version or other of Say’s Law. It is true that under strict constant returns to scale and perfect competition, Say’s Law will operate to ensure that involuntary unemployment is automatically eliminated by the self interested actions of economic agents. Each existing or potential firm knows that irrespective of what the other firms do it cannot glut its own market by unilaterally expanding production, hence a balanced expansion of the entire underemployed economy in fact takes place. But increasing returns prevents supply from creating its own demand because the unemployed workers are essentially blocked from producing. Either the existing firms will not hire them given the current state of demand, or, even if a group of unemployed workers can be coalesced effectively into a discrete lump of new supply, it will spoil the market price before ever giving Say’s Law a chance to start operating. When each firm is afraid of glutting its own local market by unilaterally increasing output, the economy can get trapped in a low level equilibrium simply because there is insufficient pressure for the balanced simultaneous expansion of all markets. Correcting this ‘externality’, if that is how it is viewed, requires nothing less than economy-wide coordination or stimulation. The usual invisible hand stories about the corrective powers of arbitrage do not apply to effective demand failures of the type considered here.

To this day–more than 27 years later–I stand convinced that increasing returns to scale are essential to understanding macroeconomics in the real world. Much of what we see around us stems from the inability of half a factory, staffed with half as many workers, to produce half the output. Despite the difficulty of explaining Marty’s logic for why increasing returns to scale matters and what its detailed consequences are, I believe Intermediate Macroeconomics textbooks–and even Principles of Macroeconomics textbooks–need to try. Anyone who learns much macroeconomics at all should not be denied a chance to hear some of Marty’s logic.

Rodney Stark on a Major Academic Pitfall

Rodney Stark writes of an unfortunate side-effect of academic incentives in his book Discovering God: The Origins of the Great Religions and the Evolution of Belief. Although his specific context is New Testament scholarship, the incentives he points to operate in all areas of academia:

In order to enjoy academic success one must innovate; novelty at almost any cost is the key to a big reputation. This rule holds across the board and has often inflicted remarkably foolish new approaches on many fields. (pp. 294-295.)

In my experience, the only truly effective defenses against the danger Rodney points to are to have research disciplined either by abundant data or by rigorous logic like that used in mathematics.

My Proudest Moment as a Student in Ph.D. Classes

Is it a watermelon or is it an ellipsoid?

Ellipsoids, which are more or less a watermelon shape, are important in econometrics. In my Ph.D. Econometrics class at Harvard, Dale Jorgenson explained the effect of linear constraints by saying that slicing a plane through an ellipsoid would be like slicing a watermelon. Slices of a 3-dimensional ellipsoid–a watermelon–are in the shape of a 2-dimensional ellipse–a watermelon slice. Dale’s analogy of watermelons and watermelon slices inspired me to exclaim that slicing a 4-dimensional ellipsoid with a hyperplane would get you a whole watermelon!
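
For anyone who wants the algebra behind the analogy, here is my reconstruction (not from Dale’s lecture): slicing an ellipsoid with a plane leaves a lower-dimensional ellipse.

```latex
\[
\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{z^2}{c^2}=1
\ \text{ sliced by }\ z=k \ (|k|<c)
\ \Longrightarrow\
\frac{x^2}{a^2}+\frac{y^2}{b^2}=1-\frac{k^2}{c^2},
\]
```

a 2-dimensional ellipse. The same algebra one dimension up shows that a 4-dimensional ellipsoid sliced by a hyperplane yields a 3-dimensional ellipsoid: a whole watermelon.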

The Shape of Production: Charles Cobb's and Paul Douglas's Boon to Economics

Paul Douglas, Economist and Senator from Illinois. Paul Douglas was not only an economist, but one of the most admirable politicians I have ever read about. See what you think: here is the wikipedia article on Paul. I would be interested in whether there are any skeletons in his closet that this article is silent on. If the Devil’s Advocate’s case is weak, he may qualify as a Supply-Side Liberal saint. (He was divorced, so a Devil’s Advocate might have something to work with. See my discussion of saints and heroes in “Adam Smith as Patron Saint of Supply-Side Liberalism?”) Paul was one of Barack Obama’s predecessors as senator from Illinois, serving from 1949 to 1967, but chose not to run for president when he was given the chance.

In 1927, before he dove fully into politics, Paul teamed up with mathematician and economist Charles Cobb to develop and apply what has come to be called the “Cobb-Douglas” production function. (The wikipedia article on Charles Cobb is just a stub, so I don’t know much about him.) Here is the equation:

Y = A K^α L^(1-α)

A very famous economist, Knut Wicksell, had used this equation before, but it was the work of Charles Cobb and Paul Douglas that gave this equation currency in economics. Because of their work, Paul Samuelson–a towering giant of economics–and his fellow Nobel laureate Robert Solow picked up on this functional form. (Paul Samuelson did more than any other single person to transform economics from a subject with many words and a little mathematics to a subject dominated by mathematics.)

In the equation, the letter A represents the level of technology, which will be a constant in this post. (If you want to think more about technology, you might be interested in my post “Two Types of Knowledge: Human Capital and Information.”) The Greek letter alpha, which looks like a fish (α), represents a number between 0 and 1 that shows how important physical capital, K–such as machines, factories or office buildings–is in producing output, Y. The complementary expression (1-α) represents a number between 0 and 1 that shows how important labor, L, is in producing output, Y. For now, think of α as being 1/3 and (1-α) as being 2/3:

  • α= 1/3;
  • (1-α) = 2/3.

As long as the production function has constant returns to scale–so that doubling both capital and labor would double output, as it does here–the formal names for α and 1-α are

  • α = the elasticity of output with respect to capital
  • 1-α = the elasticity of output with respect to labor.
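
In the percent-change notation used later in this post, these names amount to the following statements (this is my gloss, not a quotation from Cobb and Douglas):

```latex
\[
\%\Delta Y = \alpha \,\%\Delta K \quad (\text{holding } A \text{ and } L \text{ fixed}),
\qquad
\%\Delta Y = (1-\alpha)\,\%\Delta L \quad (\text{holding } A \text{ and } K \text{ fixed}).
\]
```

With α = 1/3, for example, a 3% increase in capital alone raises output by about 1%.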

What Makes Cobb-Douglas Functions So Great. The Cobb-Douglas function has a key property that both makes it convenient in theoretical models and makes it relatively easy to judge when it is the right functional form to model real-world situations: the constant-share property. My goal in this post is to explain what the constant-share property is and why it holds, using the logarithmic percent change tools I laid out in my post “The Logarithmic Harmony of Percent Changes and Growth Rates.” If any of the math below seems hard or unclear, please try reading that post and then coming back to this one.

The Logarithmic Form of the Cobb-Douglas Equation. By taking the natural logarithm of both sides of the defining equation for the Cobb-Douglas production function above, that equation can be rewritten this way:

log(Y) = log(A) + α log(K) + (1-α) log(L)

This is an equation that holds all the time, as long as the production engineers and other organizers of production are doing a good job. If two things are equal all the time, then changes in those two things must also be equal. Thus, 

Δ log(Y) = Δ log(A) + Δ {α log(K)} + Δ {(1-α) log(L)}.

Remember that, for now, α= 1/3. The change in 1/3 of log(K) is 1/3 of the change in log(K). Also, the change in 2/3 of log(L) is 2/3 of the change in log(L). And quite generally, constants can be moved in front of the change operator Δ in equations. (Δ is also called a “difference operator” or “first difference operator.”) So

Δ log(Y) = Δ log(A) + α Δ log(K) + (1-α) Δ log(L).

As defined in “The Logarithmic Harmony of Percent Changes and Growth Rates,” the change in the logarithm of X is the Platonic percent change in X. In that statement X can be anything, including Y, A, K or L. So as long as we interpret %Δ in the Platonic way,

%ΔY = %ΔA + α %ΔK + (1-α) %ΔL

is an exact equation, given the assumption of a Cobb-Douglas production function.
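
To see that this really is exact (not just an approximation), here is a quick numeric check; the function and values below are made up purely for illustration:

```python
import math

# A numeric check, with made-up numbers, that the percent-change version of
# Cobb-Douglas is exact when %ΔX is interpreted as the change in log(X).

alpha = 1/3
def Y(A, K, L):
    return A * K**alpha * L**(1 - alpha)

A0, K0, L0 = 1.00, 8.0, 27.0   # arbitrary starting values
A1, K1, L1 = 1.02, 8.8, 28.0   # arbitrary new values

lhs = math.log(Y(A1, K1, L1) / Y(A0, K0, L0))       # %ΔY
rhs = (math.log(A1 / A0) + alpha * math.log(K1 / K0)
       + (1 - alpha) * math.log(L1 / L0))           # %ΔA + α %ΔK + (1-α) %ΔL
print(abs(lhs - rhs) < 1e-12)  # True: the equation holds exactly
```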

Percent Changes of Sums: An Approximation. Now let me turn to an approximate equation, but one that is very close to being exact for small changes. Economists call small changes marginal changes, so what I am about to do is marginal analysis. (By the way, the name of Tyler Cowen and Alex Tabarrok’s popular blog Marginal Revolution is a pun on the “Marginal Revolution” in economics in the 19th century, when many economists realized that focusing on small changes added a great deal of analytic power.)

For small changes,

%Δ (X+Z) ≈ [X/(X+Z)] %ΔX + [Z/(X+Z)] %ΔZ,

where X and Z can be anything. (Those of you who know differential calculus can see where this approximation comes from by showing that d log(X+Z) = [X/(X+Z)] d log(X) + [Z/(X+Z)] d log(Z), which says that the approximation gets extremely good when the changes are very small. But as long as you are willing to trust me on this approximate equation for percent changes of sums, you won’t need any calculus to understand the rest of this post.)
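
For readers who want to see the quality of the approximation without calculus, here is a small numeric check; the numbers are made up:

```python
import math

# A numeric look, with made-up numbers, at how good the share-weighted
# approximation for the percent change of a sum is for smallish changes.

def pct(new, old):
    return math.log(new / old)   # the Platonic percent change

X0, Z0 = 60.0, 40.0
X1, Z1 = 60.6, 40.8              # roughly a 1% rise in X and a 2% rise in Z

exact = pct(X1 + Z1, X0 + Z0)
approx = (X0 / (X0 + Z0)) * pct(X1, X0) + (Z0 / (X0 + Z0)) * pct(Z1, Z0)
print(exact, approx)  # 0.01390... vs 0.01389...: close, and the gap grows
                      # with the size of the changes as the shares drift
```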

The ratios X/(X+Z) and Z/(X+Z) are very important. Think of X/(X+Z) as the fraction of X+Z accounted for by X; and think of Z/(X+Z) as the fraction of X+Z accounted for by Z.  Economists use this terminology:

  • X/(X+Z) is the “share of X in X+Z.”
  • Z/(X+Z) is the “share of Z in X+Z.”

By the way they are defined, the shares of X and Z in X+Z add up to 1. 

The main reason the rule for the percent changes of sums is only an approximation is that the shares of X and Z don’t stay fixed at their starting values. The shares of X and Z change as X and Z change. Indeed, if one changed X and Z gradually (avoiding any point where X+Z=0), the approximate rule for the percent change of sums would have to hold exactly for some pair of values of the shares of X and Z passed through along the way. 

The Cost Shares of Capital and Labor. Remember that in the approximate rule for the Platonic percent change of sums, X and Z can be anything. In thinking about the production decision of firms, it is especially useful to think of X as the amount of money that a firm spends on capital and Z as the amount of money the firm spends on labor. If we write R for the price of capital (the “Rental price” of capital) and W for the price of labor (the “Wage” of labor), this yields

  • X = RK 
  • Z = WL.

For the issues at hand, it doesn’t matter whether the amount R that it costs to rent a machine or an office and the amount W it costs to hire an hour of labor are real (adjusted for inflation) or nominal. It does matter, though, that nothing the firm can do will change R or W. A similar kind of analysis could be done if what the firm does affected R and W, but the results, including the constant-share property, would be altered. I am going to analyze the case in which the firm cannot affect R and W–that is, I am assuming the firm faces competitive markets for physical capital and labor. Substituting RK in for X and WL in for Z, the approximate equation for percent changes of sums becomes

%Δ (RK+WL) ≈ [RK/(RK+WL)] %Δ(RK) + [WL/(RK+WL)] %Δ(WL)

Economically, this approximate equation is important because RK+WL is the total cost of production. RK+WL is the total cost because the only costs are total rentals for capital RK and total wages WL. In this approximate equation

  • s_K = share_K = RK/(RK+WL) is the cost share of capital (the share of the cost of capital rentals in total cost.)
  • s_L = share_L = WL/(RK+WL) is the cost share of labor (the share of the cost of the wages of labor in total cost.) 

The two shares always add up to 1 (as can be confirmed with a little algebra), so

s_L = 1 - s_K. 

Using this notation for the shares, the approximation for the percent change of total costs is 

%Δ (RK+WL) ≈ {s_K} %Δ(RK) + {s_L} %Δ(WL).

The Product Rule for Percent Changes. In order to expand the approximation above, I am going to need the rule for percent changes of products. Let me spell out the rule, along with its justification, twice–first using RK and then WL as an example:

%Δ (RK) = Δ log(RK) = Δ {log(R) + log(K)} = Δ log(R) + Δ log(K) = %ΔR + %ΔK

%Δ (WL) = Δ log(WL) = Δ {log(W) + log(L)} = Δ log(W) + Δ log(L) = %ΔW + %ΔL

These equations, reflecting the rule for percent changes of products, hold exactly for Platonic percent changes. Aside from the definition of Platonic percent changes as the change in the natural logarithm, what I need to back up these equations is the fact that the change in one thing plus another, say log(R) + log(K), is equal to the change in one plus the change in the other, so that Δ {log(R) + log(K)} = Δ log(R) + Δ log(K). Using the product rule,

%Δ (RK+WL) ≈ {s_K} (%ΔR + %ΔK) + {s_L} (%ΔW+ %ΔL).

Cost-Minimization. Let’s focus now on the firm’s aim of producing a given amount of output Y at least cost. We can think of the firm exploring different values of capital K and labor L that produce the same amount of output Y. An important reason to focus on changes that keep the amount of output the same is that it sidesteps the whole question of how much control the firm has over how much it sells, and what the costs and benefits are of changing the amount it sells. Therefore, focusing on different values of capital and labor that produce the same amount of output yields results that apply to many different possible selling situations (=marketing situations=industrial organization situations=competitive situations) a firm may be in. That is, I am going to rely on the firm facing a simple situation for buying the time of capital and labor, but I am going to try not to make too many assumptions about the details of the firm’s selling, marketing, industrial organization, and competitive situation. (The biggest way I can think of in which a firm’s competitive situation could mess things up for me is if a firm needs to own a large factory to scare off potential rivals, or a small one to reassure its competitors it won’t start a price war. I am going to assume that the firm I am talking about is only renting capital, so that it has no power to credibly signal its intentions with its capital stock.) 

The Isoquant. Economists call changes in capital and labor that keep output the same “moving along an isoquant,” since an “isoquant” is the set of points implying the same (“iso”) quantity (“quant”). To keep the amount of output the same, both sides of the percent change version of the Cobb-Douglas equation should be zero:

0 = %ΔY = %ΔA + α %ΔK + (1-α) %ΔL

Since I am treating the level of technology as constant in this post, %ΔA=0. So the equation defining how the Platonic percent changes of capital and labor behave along the isoquant is 

0 = α %ΔK + (1-α) %ΔL.

Equivalently,

%ΔL = -[α/(1-α)] %ΔK.

With the realistic value of α = 1/3, this would boil down to %ΔL = -0.5 %ΔK. So in that case, %ΔK = 1% (a 1% increase in capital) and %ΔL = -0.5% (a one-half percent decrease in labor) would be a movement along the isoquant–an adjustment in the quantities of capital and labor that would leave output unchanged.
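
A quick check of this isoquant arithmetic with actual Cobb-Douglas numbers (all values hypothetical):

```python
import math

# Checking the isoquant arithmetic with actual Cobb-Douglas numbers: with
# α = 1/3, a +1% Platonic change in K paired with a -0.5% change in L
# should leave output unchanged (A held fixed). Values are hypothetical.

alpha = 1/3
def Y(K, L, A=1.0):
    return A * K**alpha * L**(1 - alpha)

K0, L0 = 100.0, 100.0
K1 = K0 * math.exp(0.01)    # +1% in the change-in-log sense
L1 = L0 * math.exp(-0.005)  # -0.5% in the change-in-log sense

print(Y(K0, L0), Y(K1, L1))  # equal up to floating-point rounding: same isoquant
```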

Moving Toward the Least-Cost Way of Producing Output. To find the least-cost or cost-minimizing way of producing output, think of what happens to costs as the firm changes capital and labor in a way that leaves output unchanged. This is a matter of transforming the approximation for the percent change of total costs by 

  1. replacing %ΔR and %ΔW with 0, since nothing the firm does changes the rental price of capital or the wage of labor that it faces;
  2. replacing %ΔL with -[α/(1-α)] %ΔK in the approximate equation for the percent change of total costs; and
  3. replacing s_L with 1-s_K. 

After Step 1, the result is  

%Δ (RK+WL) ≈ {s_K} %ΔK + {s_L} %ΔL.

After doing Step 2 as well, 

%Δ (RK+WL) ≈ {s_K} %ΔK - {s_L} {[α/(1-α)] %ΔK}.

Then after Step 3, and collecting terms, 

%Δ (RK+WL) ≈ {s_K - (1-s_K) [α/(1-α)]} %ΔK

                   = { [s_K/(1-s_K)] - [α/(1-α)] }  [(1-s_K) %ΔK].

Notice that since

1-s_K = s_L = the cost share of labor

is positive, the sign of (1-s_K) %ΔK is the same as the sign of %ΔK. To make costs go down (that is, to make %Δ (RK+WL) < 0), the firm should follow this operating rule: 

1. Substitute capital for labor (making %ΔK > 0) 

     if  [s_K/(1-s_K)] - [α/(1-α)] < 0. 

2. Substitute labor for capital (making %ΔK < 0)

     if  [s_K/(1-s_K)] - [α/(1-α)] > 0.

Thus, the key question is whether s_K/(1-s_K) is bigger or smaller than α/(1-α). If it is smaller, the firm should substitute capital for labor. If s_K/(1-s_K) is bigger, the firm should do the opposite: substitute labor for capital. Note that the function X/(1-X) is an increasing function, as can be seen from the graph below:

[Graph: X/(1-X), an increasing function of X on the interval from 0 to 1.]

Since X/(1-X) gets bigger whenever X gets bigger (at least in the range from 0 to 1, which is what matters here),

  • s_K/(1-s_K) is bigger than α/(1-α) precisely when s_K > α
  • s_K/(1-s_K) is smaller than α/(1-α) precisely when s_K < α.

So the firm’s operating rule can be rephrased as follows:

1. Substitute capital for labor (making %ΔK > 0) 

     if  s_K <  α. 

2. Substitute labor for capital (making %ΔK < 0)

     if  s_K > α.

This operating rule is quite intuitive. In Case 1, the importance of capital for the production of output (α) is greater than the importance of capital for costs (s_K). So it makes sense to use more capital. In Case 2, the importance of capital for the production of output (α) is less than the importance of capital for costs (s_K), so it makes sense to use less capital.  

Proof of the Constant-Share Property of Cobb-Douglas. So what should the firm do in the end? For fixed R and W, the more capital a firm uses, the bigger effect a 1% increase in capital has on costs. So if the firm is using a lot of capital, the cost share of capital will be greater than the importance of capital in production α and the firm should reduce its use of capital, substituting labor in place of capital. If the firm is using only a little capital, the cost share of capital will be smaller than the importance of capital in production α, and it will be a good deal for the firm to increase its use of capital, allowing it to reduce its use of labor. At some intermediate level of capital, the cost share of capital will be exactly equal to the importance of capital in production α, and there will be no reason for the firm to either increase or reduce its use of capital once it reaches that point. So a firm that is minimizing its costs–a first step toward optimizing overall–will produce a given level of output with a mix of capital and labor that makes the cost share of capital equal to the importance of capital in production:

cost-minimization ⇒     s_K = α.

Concordantly, one can say 

cost-minimization ⇒     1-s_K = 1-α.

That is, the firm will use a mix of capital and labor that makes the cost share of labor equal to the importance of labor in production as well. Since the Cobb-Douglas functional form makes the importance of capital in production α a constant, a cost-minimizing firm will continually adjust its mix of capital and labor to keep the cost share of capital equal to that constant level α, and the cost share of labor equal to another constant, 1-α. This is the constant-share property of Cobb-Douglas. The constant-share property is something that can be tested in the data, and often seems to hold surprisingly well in the real world. So economists often use Cobb-Douglas production functions.  
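The constant-share property is also easy to verify numerically. Here is a minimal sketch in Python (the values of α, R, W, and the output target Y are hypothetical): it searches over all the input mixes that produce a given output and checks the cost share of capital at the cheapest one.

    import numpy as np

    alpha, R, W, Y = 0.3, 0.05, 20.0, 100.0

    K = np.linspace(1.0, 5000.0, 500_000)            # grid of capital levels
    L = (Y / K**alpha) ** (1.0 / (1.0 - alpha))      # labor so that K^alpha * L^(1-alpha) = Y
    cost = R * K + W * L

    i = np.argmin(cost)                              # cheapest input mix on the grid
    print(R * K[i] / cost[i])                        # -> approximately 0.3 = alpha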

Another Application of the Cobb-Douglas Idea: Achieving a Given Level of Cobb-Douglas Utility at Least Cost. Note that similar logic will work for utility functions as well. For example, in my post “The Flat Tax, The Head Tax and the Size of Government: A Tax Parable,” since the importance of consumption and leisure for utility is equal (both equal to 1/3), adjusting consumption C and leisure L so that %ΔC = - %ΔL will leave utility unchanged. Then,

  1. if the share of spending on consumption is lower than the share of spending on leisure,
  2. which is equivalent to the total spending on consumption being lower than total spending on leisure, 
  3. then increasing consumption (by reducing leisure and working harder) will make sense. 

On the other hand, 

  1. if the share of spending on consumption is higher than the share of spending on leisure,
  2. which is equivalent to total spending on consumption being higher, 
  3. then reducing consumption (and increasing leisure by working less) will make sense. 

This means that if consumption is too high, it should be reduced, while if consumption is too low, it should be increased, until the amount of spending on consumption equals the amount of spending on leisure.
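Here is a quick numerical check of that equal-spending conclusion, in the same spirit as the cost-minimization sketch above (the price of consumption, the wage, and the utility target are all hypothetical):

    import numpy as np

    p_C, w, U_bar = 1.0, 15.0, 10.0       # price of consumption, wage, target utility

    C = np.linspace(1.0, 2000.0, 500_000)
    L = U_bar**3 / C                      # leisure so that C^(1/3) * L^(1/3) = U_bar

    i = np.argmin(p_C * C + w * L)        # cheapest way to reach the target utility
    print(p_C * C[i], w * L[i])           # -> roughly 122.5 and 122.5: equal spending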

Is Nuclear Energy Safe? Well, Which One?

Liquid-fluoride-thorium reactors are very different from the kinds of nuclear reactors you have heard of. Anyone who wants to say something for or against nuclear energy should watch at least the first few minutes of this video first. My title is a quotation from Kirk Sorenson in the first few minutes of the video. Kirk is amazing at explaining nuclear reactor technology. He also has a much shorter TED talk, here.

Casey Thormahlen suggested this video, and this book by Robert Hargraves: Thorium: Energy Cheaper than Coal. Here are Casey’s tweets.

Smaller, Cheaper, Faster: Does Moore's Law Apply to Solar Cells? by Ramez Naam

The way the future looks depends on the rate of decline in the cost of solar power. In this article (my title is a link), Ramez Naam says that solar power is getting cheaper at the rate of 7% per year. Notice how his graph with a logarithmic scale compares to his graph with a regular scale. By the rule of 70, how many years would it take to cut the cost of solar power in half?

The Logarithmic Harmony of Percent Changes and Growth Rates

Logarithms. On Thursday, I let students in my Principles of Macroeconomics class in on the secret that logarithms are the central mathematical tool of macroeconomics. If my memory isn’t playing tricks on me, I can say that a logarithm makes an appearance, often in a starring role, both in papers that examine real-world data and in at least half of macroeconomic theory papers. Why are natural logarithms so important?

  1. Lesser reason: logarithms can often model how a household or firm makes choices in a particularly simple, convenient way.

  2. Greater reason: multiplication and powers appear all the time in macroeconomics. For a price in initial difficulty, logarithms make multiplication and powers and exponential growth look easy.

Among other aspects of making multiplication and powers and exponential growth look easy, logarithms provide a very clean, elegant way of thinking about percent changes.

I am determined to have very few equations in this post, so you will have to depend on your math training for the basic rules of logarithms: how they turn multiplication into addition and powers into multiplication. What I want to accomplish in this post is to give you a better intuitive feel for logarithms–an intuitive feel that math textbooks often don’t provide. I also hope to make a strong connection in your mind between natural logarithms and percent changes.  

One of the most basic uses of logarithms in economics is the logarithmic scale. On a logarithmic scale, the distance between each power of 10 is the same. So the distance from 1 to 10 on the graph is the same as the distance from 10 to 100, which is the same as the distance from 100 to 1000. Here is a link to an example I have used before, from Catherine Mulbrandon of Visualizing Economics, of a graph with a logarithmic scale on the vertical axis:

Contrast that growth line for US GDP to the curve Catherine gets when not using a logarithmic scale on the vertical axis. Here is the link: 

The idea of the logarithmic scale–which can be boiled down to the idea of always representing multiplication by a given number as the same distance–shows up in two concrete objects, one familiar and one no-longer familiar: pianos and slide rules.

A Piano Keyboard as a Logarithmic Scale. You may not have thought of a piano keyboard as a logarithmic scale, but it is. Including all of the black keys on an equal footing with the white keys, going up one key on the piano is called going up a “semitone.” Going up an octave (say from Low C to Middle C) is going up 12 semitones. And each octave doubles the frequency of the vibrations in a piano string. As explained in the Wikipedia article “Piano key frequencies,” at Middle C, the piano string vibrates 261.626 times per second. Each semitone higher on the piano keyboard makes the vibration of the string 1.0594631… times faster. And multiplying by 1.0594631… twelve times is the same as multiplying by 2. The reason our Western musical scale has been designed to have 12 semitones in an octave is interesting. To begin with, two notes whose frequencies have a ratio that is an easy fraction such as 3/2, 5/4 or 6/5 make a pleasing interval. (The Pythagoreans made mathematics part of their religion thousands of years ago partly because of this fact.) Then, it turns out that various powers of 1.0594631… come pretty close to many easy fractions. Here is a table showing the frequencies of various notes relative to the frequency of Middle C, showing some of the easy fractions that come close to various powers of 1.0594631…. A distance of three semitones yields a ratio close to 6/5; a distance of four semitones yields a ratio close to 5/4; a distance of five semitones yields a ratio close to 4/3; and a distance of seven semitones yields a ratio close to 3/2. None of this is exact, but it is all close enough to sound good when the piano is tuned according to this scheme:
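In place of that table, here is a short Python sketch that reproduces its key numbers: the semitone ratio, the octave check, and how various powers of 1.0594631… compare to the easy fractions.

    semitone = 2 ** (1 / 12)                # 1.0594631...
    print(semitone ** 12)                   # -> 2.0: twelve semitones is exactly an octave

    # Powers of the semitone ratio versus nearby easy fractions:
    for steps, fraction in [(3, 6/5), (4, 5/4), (5, 4/3), (7, 3/2)]:
        print(steps, round(semitone ** steps, 4), "vs", round(fraction, 4))
    # -> 3 semitones: 1.1892 vs 1.2;   4 semitones: 1.2599 vs 1.25;
    #    5 semitones: 1.3348 vs 1.3333; 7 semitones: 1.4983 vs 1.5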

Let me bring the discussion back to economics by pointing out that, although interest rates are lower right now, it is not uncommon for the returns on financial investments to multiply savings by something averaging close to 1.059 every year. At typical rates of return for investments bearing some risk, one can think of each year of returns as raising the pitch of one’s funds on average by about one semitone. Starting from Middle C, one can hope to get quite a ways up the piano keyboard by retirement. And savings early in life get raised in pitch a lot more than savings late in life.

Slide Rules. Slide rules, like the one in the picture right above, are designed first and foremost to use two logarithmic scales that slide along each other to do multiplication. The distances are logarithmic and adding logarithms multiplies the underlying numbers. For example, to multiply 2 times 3,  put the 1 of the sliding piece right at the 3 of the stationary piece. Then look at the 2 on the sliding piece and see what number is next to it on the stationary piece. You could buy a physical slide rule on ebay, but you might instead want to play with a virtual slide rule for free. Playing with this virtual slide rule is one of the best ways to get some intuition for logarithms. (Remember that the distances on a slide rule are all logarithms.) If you like this slide rule and want to go further, here are some much better instructions for using a slide rule than I just gave: Illustrated Self-Guided Course on How to Use the Slide Rule.
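The whole principle of the slide rule fits in three lines of Python: positions on the rule are logarithms, and laying one distance after another adds them, which multiplies the underlying numbers.

    import math

    position = math.log(3) + math.log(2)   # slide 2's distance onto the end of 3's distance
    print(math.exp(position))              # -> 6.0: read off the product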

Percent Changes (%ΔX). Let me preface what I have to say about percent changes by saying that–other than being a clue that a percent change or a ratio expressed as a percentage lurks somewhere close–I view the % sign as being equivalent to 1/100. So, for example, 23% is just another name for .23, and 100% is just another name for 1. Indeed, economists are just as likely to say “with probability 1” as they are to say “with a 100% probability.”   

It turns out that natural logarithms (“ln” or “log”) are the perfect way to think about percent changes. Suppose a variable X has a “before” and an “after” value.

  • I want to take the point of view that the change in the natural logarithm is the pure, Platonic percent change between before and after. It is calculated as the natural logarithm of X_after minus the natural logarithm of X_before.

  • I will call the ordinary notion of percent change the earthly percent change. It is calculated as the change divided by the starting value, (X_after - X_before)/X_before.

  • In between these two concepts is the midpoint percent change. It is calculated as the change divided by the average of the starting and ending values:

(X_after - X_before) / { (X_after + X_before)/2 }

Below is a table showing the relationship between Platonic percent changes, midpoint percent changes and earthly percent changes. In financial terms, one can think of earthly percent changes as “continuously compounded” versions of Platonic percent changes. Here is the Excel file I used to construct this table; it will give you the formulas I used, if you want to see them.
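If the Excel file is unavailable, here is a short Python sketch that reproduces a table of this kind, using the conversion formulas stated later in this post:

    import math

    print("Platonic   midpoint    earthly")
    for platonic in [0.01, 0.05, 0.10, 0.30, 0.50, 0.70, 1.40, 2.10]:
        earthly = math.exp(platonic) - 1
        midpoint = 2 * (math.exp(platonic) - 1) / (math.exp(platonic) + 1)
        print(f"{platonic:8.0%} {midpoint:10.1%} {earthly:10.1%}")
    # For example, a 70% Platonic change is a 67.3% midpoint change
    # and a 101.4% earthly change (very nearly a doubling).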

There are at least two things to point out in this table:

  1. When the percent changes are small, all three concepts are fairly close, but the midpoint percent change is much closer to the Platonic (logarithmic) percent change.

  2. A 70% Platonic percent change is very close to being a doubling–which would be a 100% earthly percent change. This is where the “rule of 70” comes from. (Greg Mankiw talks about the rule of 70 on page 180 of Brief Principles of Macroeconomics.) The rule of 70 is a reflection of the natural logarithm of 2 being equal to approximately .7 = 70%. Similarly, a 140% Platonic percent change is basically two doublings–that is, it is close to multiplying X by a factor of 4; and a 210% Platonic percent change is basically three doublings–that is, it is close to multiplying X by a factor of 8.

Let’s look at negative percent changes as well. Here is the table for how the different concepts of negative percent changes compare:

A key point to make is that with both Platonic (logarithmic) percent changes and midpoint percent changes, equal sized changes of opposite direction cancel each other out. For example, a +50% Platonic percent change, followed by a -50% Platonic percent change, would leave things back where they started. This is true for a +50% midpoint percent change, followed by a -50% midpoint percent change. But, starting from X, a 50% earthly percent change leads to 1.5 X. Following that by a -50% earthly percent change leads to a result of .75 X, which is not at all where things started. This is a very ugly feature of earthly percent changes. That ugliness is one good reason to rise up to the Platonic level, or at least the midpoint level.
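Here is a quick numerical check of the cancellation point, using a hypothetical starting value of 100:

    import math

    X = 100.0
    print(X * 1.5 * 0.5)                        # earthly +50% then -50%: -> 75.0
    print(X * math.exp(0.5) * math.exp(-0.5))   # Platonic +50% then -50%: -> 100.0
    # A +50% midpoint change multiplies X by (2 + 0.5)/(2 - 0.5); a -50% midpoint
    # change multiplies by the reciprocal, so midpoint changes cancel too:
    print(X * (2.5 / 1.5) * (1.5 / 2.5))        # -> 100.0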

Continuous-Time Growth Rates. There are many wonderful things about Platonic percent changes that I can’t go into without breaking my resolve to keep the equation count down. But one of the most wonderful is that to find a growth rate one only has to divide by the time that has elapsed between X_before and X_after. That is, as long as one is using the Platonic percent change %ΔX = log(X_after) - log(X_before),

%ΔX / [time elapsed] = growth rate.

The growth rate here is technically called a “continuous-time growth rate.” The continuous-time growth rate is not only very useful, it is a thing of great beauty.
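For instance (a made-up example): if output grows from 100 to 122 over 4 years, the Platonic percent change is log(122) - log(100) ≈ 19.9%, so the continuous-time growth rate is about 5% per year:

    import math

    growth_rate = (math.log(122) - math.log(100)) / 4
    print(growth_rate)    # -> 0.0497..., that is, about 5% per year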

Update on How the Different Concepts of Percent Change Relate to Each Other. One of my students asked about how the different percent change concepts relate to each other. For that, I need some equations. And I need “exp,” which is the inverse of the natural logarithm “log.” Taking the function exp(X) is the same as taking e (a number that is famous among mathematicians and equal to 2.718…) to the power X. For the equations below, it is crucial to treat % as another name for 1/100, so that, for example, 5% is the same thing as .05.

Earthly percent changes are the most familiar. Almost anyone other than an economist who talks about percent changes is talking about earthly percent changes. Most of you learned about earthly percent changes in elementary school. So let me first write down how to get from the earthly percent change to the Platonic and midpoint percent changes. (I won’t try to prove these equations here, just state them.) 

Platonic = log(1 + earthly)

midpoint = 2 earthly/(2 + earthly)

If you are trying to figure out the effects of continuously compounded interest, or the effects of some other continuous-time growth rate, you will want to be able to go from Platonic percent changes–which come straight from multiplying the growth rate by the amount of elapsed time–to earthly percent changes. For good measure, I will include the formula for midpoint percent changes as well:

earthly = exp(Platonic) - 1 

midpoint = 2 {exp(Platonic) - 1}/{exp(Platonic) + 1}
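As a sanity check, these formulas and the earlier pair are mutual inverses. A minimal sketch, starting from a hypothetical 30% earthly change:

    import math

    earthly = 0.30
    platonic = math.log(1 + earthly)            # -> 0.2624
    midpoint = 2 * earthly / (2 + earthly)      # -> 0.2609

    print(math.exp(platonic) - 1)               # -> 0.30: back to the earthly change
    print(2 * midpoint / (2 - midpoint))        # -> 0.30: back again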

I found the function giving the midpoint percent change as a function of the Platonic percent change quite intriguing. For one thing, when I changed signs and put “-Platonic” in the place where you see “Platonic” on the right-hand side of the equation, the result was equal to -midpoint. When switching the sign of the argument (the inside thing: Platonic) just switches the sign of the overall expression, mathematicians call it an “odd” function (“odd” as in “odd and even,” not “odd” as in “strange”). The meaning of this function being odd is that Platonic and midpoint percent changes map into each other the same way for negative changes as for positive changes. (That isn’t true at all for earthly percent changes.) The other intriguing thing about the function giving the midpoint percent change as a function of the Platonic percent change is how close it is to giving back the same number. To fourth order (the squared term and the fourth-power term are zero), the approximation for the function is this:

midpoint = Platonic - (Platonic cubed)/12 + (5th power and higher terms)
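Checking that approximation at a Platonic change of 30%, and its oddness at -30%, takes only a few lines of Python:

    import math

    def midpoint_from_platonic(p):
        # exact formula from above; equivalently, 2*tanh(p/2)
        return 2 * (math.exp(p) - 1) / (math.exp(p) + 1)

    p = 0.30
    print(midpoint_from_platonic(p))     # exact:        0.29777...
    print(p - p**3 / 12)                 # cubic approx: 0.29775
    print(midpoint_from_platonic(-p))    # -> -0.29777...: an odd function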

Finally, let me give the equations to go from the midpoint percent change to the Platonic and the earthly percent changes:

earthly = 2 midpoint/(2-midpoint)

Platonic = log(2+midpoint) - log(2-midpoint)

             = log(1+{midpoint/2} ) - log(1-{midpoint/2})

The expression for Platonic percent changes in terms of midpoint percent changes has such a beautiful symmetry that its “oddness” is clear. Since I know the way to approximate natural logarithms to as high an order as I want (and I am not special in this), I can give the approximation for Platonic percent changes in terms of powers of midpoint percent changes as follows:

Platonic = midpoint + (midpoint cubed)/12

                   + (midpoint to the fifth power)/80

                   + (midpoint to the seventh power)/448

                   + (9th and higher order terms).

The bottom line is that for even medium-sized percent changes (say 30%), the Platonic percent change is quite close to the midpoint percent change–something the tables above show. By the time the Platonic percent changes and midpoint percent changes start to diverge from each other in any worrisome way, the rule of 70 that makes a 70% Platonic percent change close to equivalent to a doubling starts to kick in to help out.

Evan Soltas: The Great Depression in Graphs

Evan Soltas is a freshman this Fall at Princeton. He is 19. Here is the picture he gives of the Great Depression, and here is a short bio taken from his website:

Evan Soltas is the writer of Wonkbook, the morning email newsletter of Ezra Klein’s Wonkblog at The Washington Post, and for Bloomberg View’s “The Ticker” blog. A student at Princeton University, where he intends to major in economics, Evan blogs daily on economic news, policy, and research findings – and a variety of other topics, approaching the subject as a student and not as an expert.

His research has been featured recently in The Wall Street Journal, the Financial Times, The Atlantic, Slate, the Daily Beast, the National Review, The American Conservative, The Nation, and The Globe and Mail.

His particular areas of research and blogging interest include monetary economics and macroeconomics. His blog further contains substantial discussion of labor and financial markets, development, economic history, econometrics, and public finance.

It is not as if I have a ranking worked out, so I might be understating things, but in my book, Evan is clearly one of the best 10 economics bloggers out there, without regard to age. What I especially like is Evan’s attention to facts–and his skill at making facts come alive. Evan’s attention to facts is especially valuable in an era when so many of the media, the commentariat, and those in the public sphere more generally, have left facts behind.

Principles of Macroeconomics Posts through September 3, 2012

This is a list of posts I thought I might want to find quickly during class. I bolded the first post of each month in the list.

  1. What is a Supply-Side Liberal?
  2. Getting the Biggest Bang for the Buck in Fiscal Policy
  3. Balance Sheet Monetary Policy: A Primer
  4. Can Taxes Raise GDP?
  5. National Rainy Day Accounts
  6. Trillions and Trillions: Getting Used to Balance Sheet Monetary Policy
  7. Noah Smith: “Miles Kimball, the Supply-Side Liberal”
  8. Why Taxes are Bad
  9. A Supply-Side Liberal Joins the Pigou Club
  10. “Henry George and the Carbon Tax”: A Quick Response to Noah Smith
  11. Leading States in the Fiscal Two-Step
  12. Going Negative: The Virtual Fed Funds Rate Target
  13. Mike Konczal: What Constrains the Federal Reserve? An Interview with Joseph Gagnon
  14. Leveling Up: Making the Transition from Poor Country to Rich Country
  15. Mark Thoma: Kenya’s Kibera Slum
  16. The supplysideliberal Review of the FOMC Monetary Policy Statement: June 20th, 2012
  17. Justin Wolfers on the 6/20/2012 FOMC Statement
  18. Mark Thoma: Laughing at the Laffer Curve
  19. Thoughts on Monetary and Fiscal Policy in the Wake of the Great Recession: supplysideliberal.com’s First Month
  20. Health Economics
  21. Future Heroes of Humanity and Heroes of Japan
  22. The Euro and the Mediterano
  23. Is Taxing Capital OK?
  24. Jobs
  25. Dissertation Topic 3: Public Savings Systems that Lift the No-Margin-Buying Constraint
  26. Rich, Poor and Middle-Class
  27. Reply to Mike Sax’s Question “But What About the Demand Side, as a Source of Revenue and of Jobs?”
  28. Bill Greider on Federal Lines of Credit: “A New Way to Recharge the Economy”
  29. Will the Health Insurance Mandate Lead People to Take Worse Care of Their Health?
  30. Corporations are People, My Friend
  31. What to Do When the World Desperately Wants to Lend Us Money
  32. Paul Romer on Charter Cities
  33. Miles Kimball and Brad DeLong Discuss Wallace Neutrality and Principles of Macroeconomics Textbooks
  34. Paul Romer’s Reply and a Save-the-World Tweet
  35. Adam Ozimek on Worker Voice
  36. Dr. Smith and the Asset Bubble
  37. Reply to Matthew Yglesias: What to Do About a House Price Boom
  38. Preventing Recession-Fighting from Becoming a Political Football
  39. Magic Ingredient 1: More K-12 School
  40. Matthew Yglesias: “Miles Kimball on Potential Housing Bubble Remedies”
  41. Ezra Klein: “Does Teacher Merit Pay Work? A New Study Says Yes”
  42. You Didn’t Build That: America Edition
  43. My First Radio Interview on Federal Lines of Credit
  44. The Most Conflicted Review I Have Received
  45. The Euro and the Mark
  46. Saturday Morning Breakfast Cereal
  47. Adam Ozimek: What “You Didn’t Build That” Tells Us About Immigration
  48. Charles Murray: Why Capitalism Has an Image Problem
  49. Adam Smith as Patron Saint of Supply-Side Liberalism?
  50. Things are Getting Better: 3 Videos
  51. Google Search Hints
  52. Government Purchases vs. Government Spending
  53. Mark Thoma on the Politicization of Stabilization Policy
  54. Milton Friedman: Celebrating His 100th Birthday with Videos of Milton
  55. Isomorphismes: A Skew Economy & the Tacking Theory of Growth
  56. Daniel Kuehn: Remembering Milton Friedman
  57. Why My Retirement Savings Accounts are Currently 100% in the Stock Market
  58. Grammar Girl: Speaking Reflexively
  59. Dismal Science Humor: 8/3/12
  60. Should Everyone Spend Less than He or She Earns?
  61. Dismal Science Humor: Econosseur
  62. Dismal Science Humor: Yoram Baumann, Standup Economist
  63. The True Story of How Economics Got Its Nickname “The Dismal Science”
  64. Dismal Science Humor: phdcomics.com
  65. Rich People Do Create Jobs: 10 Tweets
  66. The Paul Ryan Tweets
  67. Miles Kimball and Noah Smith on Balancing the Budget in the Long Run
  68. Joe Gagnon on the Internal Struggles of the Federal Reserve Board
  69. Miles Kimball and Noah Smith on Job Creation
  70. Matthew O'Brien on Paul Ryan’s Monetary Policy Views
  71. Noah Smith on the Coming Japanese Debt Crisis
  72. The Flat Tax, The Head Tax and the Size of Government: A Tax Parable
  73. The Economist on the Origin of Money
  74. When the Government Says “You May Not Have a Job”
  75. Brad DeLong’s Views on Monetary Policy and the Fed’s Internal Politics
  76. Persuasion
  77. Evan Soltas on Medical Reform Federalism–in Canada
  78. Private Equity Investment in Africa
  79. Gavyn Davies on the Political Debate about Economic Uncertainty
  80. Larry Summers on the Reality of Trying to Shrink Government
  81. James Surowiecki on Skilled Worker Immigration
  82. Josh Barro on a Central Issue of Political Economy: Poor vs. Old
  83. Matt Yglesias on How the “Stimulus Bill” was About a Lot More Than Stimulus
  84. Copyright
  85. Scott Adams’s Finest Hour: How to Tax the Rich
  86. My Ec 10 Teacher Mary O’Keeffe Reviews My Blog
  87. Occupy Wall Street Video
  88. Joshua Hausman on Historical Evidence for What Federal Lines of Credit Would Do
  89. Why George Osborne Should Give Everyone in Britain a New Credit Card
  90. Twitter Round Table on Federal Lines of Credit and Monetary Policy
  91. Matthew Yglesias on Archery and Monetary Policy
  92. No Tax Increase Without Recompense
  93. Adam Ozimek: School Choice in the Long Run
  94. Learning Through Deliberate Practice
  95. Matthew O'Brien versus the Gold Standard
  96. Health Economics Posts through August 26, 2012
  97. What is a Partisan Nonpartisan Blog?
  98. Two Types of Knowledge: Human Capital and Information
  99. The Great Recession and Per Capita GDP
  100. Family Income Growth by Quintile Since 1950
  101. Jonathan Rauch on Democracy, Capitalism and Liberal Science
  102. Bill Dickens on Helping the Poor
  103. The Magic of Etch-a-Sketch: A Supply-Side Liberal Fantasy
  104. Michael Woodford Endorses Monetary Policy that Targets the Level of Nominal GDP
  105. How Americans Spend Their Money and Time

A Market Measure of Long-Run Inflation Expectations

Brad DeLong’s graph of “breakeven inflation”: the rate of inflation at which regular (nominal) 30-year Treasury bonds would do neither better nor worse than 30-year Treasury Inflation Protected Securities.

Brad DeLong explains here how the difference in interest rates between the Federal government’s 30-year nominal bonds and its 30-year real bonds (Treasury Inflation Protected Securities) can measure financial investors’ expectations about average inflation over the next 30 years.  
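In case the mechanics are unclear: to a first approximation, breakeven inflation is just the difference between the two yields. A minimal sketch with hypothetical numbers (not the actual yields in Brad’s graph):

    nominal_yield_30yr = 0.028   # hypothetical 30-year nominal Treasury yield
    tips_yield_30yr = 0.008      # hypothetical 30-year TIPS yield

    breakeven_inflation = nominal_yield_30yr - tips_yield_30yr
    print(breakeven_inflation)   # -> 0.02: 2% expected average inflation over 30 years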

Unlike Brad, I think these investors’ expectations are reasonable. Knowing the articles in economics journals that the folks at the Fed are reading–and that young economists whose future is at the Fed are reading–makes me confident that the commitment to controlling inflation in the long run is durable. 2% seems to have been settled on as the long-run target.

How Americans Spend Their Money and Time

Two of the most fundamental choices people make are how to spend their money and their time. Economists talk about a “budget constraint” for money and a “budget constraint” for time. Here is a set of links to well-done graphs on how Americans deal with those two budget constraints: 

  1. Jacob Goldstein and Lam Thuy Vo: “What America Buys”
  2. Jacob Goldstein and Lam Thuy Vo: “How The Poor, The Rich And The Middle Class Spend Their Money”
  3. Lam Thuy Vo: “What Americans Actually Do All Day Long, In 2 Graphics”
  4. Jacob Goldstein and Lam Thuy Vo: “What America Does For Work.” 

Bonus

Thanks to my brother Joseph Kimball for pointing me to this series of posts by Lam Thuy Vo and Jacob Goldstein.

Michael Woodford Endorses Monetary Policy that Targets the Level of Nominal GDP

When I want to better understand the principles of optimal monetary policy, Mike Woodford is the one I turn to. Someday I hope to finish reading his book Interest and Prices and many of his key academic journal articles. If I do, I am sure that then I will have many nuances to argue over with Mike (including the effects of departures from Wallace neutrality on optimal monetary policy)–and despite my relative ignorance in this area, I did manage a Powerpoint discussion of one of Mike’s papers with one of his coauthors, Vasco Curdia, at a Bank of Japan Conference. But until the fabled day when I can really dig into optimal monetary policy, Mike is my authority on many of the fundamental principles of how to conduct monetary policy. And I am not alone in my esteem for Mike.  

So it is big news that Mike has come out in favor of nominal GDP targeting. I know this thanks to Lars Christensen, who in addition to these two recent posts about Mike

“Michael Woodford endorses NGDP level targeting”

“Michael Woodford on NGDP targeting and Friedman”

has an excellent recent post arguing that the European debt crisis is due to overly tight monetary policy.

Mike notes, as I would, that there are nuances of optimal monetary policy that a simple nominal GDP targeting rule does not capture. But the simplicity, robustness, transparency and rough-and-ready approach toward optimality of such a rule make it a key step in improving monetary policy from the implicit rule being followed now.

Bill Dickens on Helping the Poor

In my post “Rich, Poor and Middle-Class” I wrote 

I am deeply concerned about the poor, because they are truly suffering, even with what safety net exists. Helping them is one of our highest ethical obligations. I am deeply concerned about the honest rich—not so much for themselves, though their welfare counts too—but because they provide goods and services that make our lives better, because they provide jobs, because they help ensure that we can get good returns for our retirement saving, and because we already depend on them so much for tax revenue. But for the middle-class, who count heavily because they make up the bulk of our society, I have a stern message. We are paying too high a price when we tax the middle class in order to give benefits to the middle-class—and taxing the rich to give benefits to the middle-class would only make things worse. The primary job of the government in relation to the middle-class has to be to help them help themselves, through education, through loans, through libertarian paternalism, and by stopping the dishonest rich from preying on the middle-class through deceit and chicanery. 

In his correspondence with Bryan Caplan, Bill Dickens gives a good picture of what government efforts to help the poor currently look like. The distinction between the suffering of the poor and the struggling of the middle class is clear in Bill’s description. Bill is arguing against Bryan’s desire to reduce support for the poor.  He argues persuasively that since the Clinton-era Welfare Reforms, government efforts to help the poor have been appropriate.

Note that because of the nature of the argument with Bryan, Bill does not address here the question of whether more should be done to help the poor. There are two terms in what Bill writes that may need some explanation: “memes” and “leaky bucket.” Here is a link for “memes.” I didn’t find a good link for “leaky bucket.” “Leaky bucket” is a metaphor economists use for the idea that a government policy intended to help the poor often has unintended side effects: (1) the poor acting in ways that make it more likely that they will get help and (2) those who are better off acting in ways that make it more likely that they won’t be asked to help.

Since Bill’s argument is long, let me give you some of the highlights of what Bill writes to Bryan:

So this is the crux of it. You subscribe to two central right-wing memes: government coddles the poor and won’t make them face the tough choices everyone else does, and welfare recipients are overwhelmingly lazy and undeserving. Anyone with firsthand experience dealing with a wide range of the poor or those receiving government assistant (with the later being only a small subset of the former) knows these two things to be false.

Overwhelmingly those on public assistance were full of regret and/or a sense of hopelessness that they are fated to their condition. They know they should have worked harder in school, they know they should be working to support their family, they know it would be better if their children’s father was there to help support their kids. There is no shortage of hectoring from society, welfare caseworkers, family members, and the media. Consider that even before the passage of TANF most women on welfare worked at least some during every year (on or off the books). Most welfare mothers are not drug abusers or alcoholics (when they have been tested only a tiny fraction fail). A lot had their children with a husband or boyfriend they had hoped to marry. A lot of the AFDC caseload cycled on and off welfare as people made repeated attempts to return to work (attempts that were often stymied by lack of adequate child care - one of the most common reasons for returning to welfare was being fired by a low wage employer for missing work when child care arrangements fell through).

Over and over when I talk to people about government income support programs I’m told that they have no objection to giving money to the truly needy, but that they don’t like supporting lazy bums who don’t like to work. When I tell them that overwhelmingly government support goes to families (usually single women) with children they don’t believe me.

Now let’s consider the case of a bucket that was probably too leaky and needed to be replaced. As you know I was converted by my experience with Clinton’s welfare reform task force to the belief that AFDC needed to be time limited. Over and over I heard young women tell me that they didn’t think much about having a baby because that is what people in their world did. “You get to be 16, you get yourself a baby and you get yourself a check and an apartment.” AFDC as a career choice was a serious problem back then. But even as we went around preparing the welfare reform we heard over-and-over again that the word was out that welfare was going away and you were going to have to do something else now. Starting in the early 90s - long before TANF actually limited benefits to 2 years - AFDC caseloads started dropping and ultimately dropped enormously. 

People know they make bad decisions. They often know when they are making them that they are bad. Telling them that they are being stupid isn’t news to them. Find ways to change the system to help them make better decisions and I’m all with you. Take money away from children because their mothers and fathers made bad choices I’m very disappointed. Overlook all the people who are receiving aid not because of bad choices, but bad luck and I’m more than disappointed - I’m angry.

… I’m not “outraged” by people who don’t want to pay taxes to support the government transfer system. A few of them may be selfish and/or racist jerks. There are few enough of them that I could care less. I believe that most people with that view are misinformed about who gets government transfers, how the programs are administered, the amount of the benefits, and how much of their taxes go to such programs. I think the vast majority of people, if they knew the facts, would not object to paying taxes for the system.

To me, given what I know, what Bill says has the ring of truth to it. But I would be interested in any evidence anyone has that contradicts what Bill says, especially anything that contradicts the passages I have quoted.

Jonathan Rauch on Democracy, Capitalism and Liberal Science

Jonathan Rauch gave a talk at a Campus Freedom Network Conference summarizing the argument in his book “Kindly Inquisitors: The New Attacks on Free Thought.” In addition to the link under the picture of Jonathan above, here is a link to a nice piece by Greg Lukianoff flagging the video: 

Jonathan Rauch on Why Free Speech is Even More Important than You Thought.

I loved Jonathan’s talk. I was struck by the similarities between Jonathan’s arguments for academic freedom in this video and Milton Friedman’s arguments for capitalism in the videos I marshalled in Milton Friedman: Celebrating His 100th Birthday with Videos of Milton.

The key elements of what Jonathan calls “liberal science” are its decentralization (no one in particular is in charge) and its rules. The discipline of criticism is just as necessary for ideas floated in the academy as the discipline of the market is for enterprises. However painful systems of trial and error are, if we interfere with the systems of trial and error, we will be saddled with errors.

Although in this video Jonathan is talking mainly about liberal science and only in passing about capitalism, the parallels made me appreciate the strength of Milton’s arguments even more than I had. And Milton’s arguments in turn, by the parallels, strengthen Jonathan’s case for liberal science. Finally, the arguments for both liberal science and capitalism strengthen the case for democracy; and the arguments for democracy strengthen the case for both liberal science and capitalism.

Postscript: Speaking of decentralization, some government functions (such as taking care of the poor) might be better served if they could be decentralized to nonprofit organizations. In particular, such decentralization allows a trial and error process to work its magic as donations shift away from the least effective nonprofits to more effective nonprofits. Because people love freedom, such decentralization of certain government functions has other advantages as well, as I argue in my post “No Tax Increase Without Recompense.” In that post, I propose a way to make sure such nonprofit efforts are adequately funded.

The Great Recession and Per Capita GDP

Although recessions in the United States are officially determined by a committee of the independent and nonpartisan National Bureau of Economic Research (NBER)–usually long after the fact–a rough-and-ready definition of a recession is the period of time when real GDP (the actual amount of goods and services produced) is falling, if it falls for a period of at least six months. This period of time when real GDP is falling is very different from the period when real GDP is “in the hole” compared to its peak, let alone the period when real GDP per person is in the hole.

For the same level of GDP,

GDP/Population goes down when Population goes up

and Population in the United States is growing.

Americans are used to real GDP not only growing, but keeping up with the growth of population, plus a couple of percent more each year. This link shows a graph of real GDP per capita since the beginning of the Great Recession. Since at least the beginning of 2008, real GDP has been doing quite a bit worse than Americans are used to.

Catherine Mulbrandon has made a great set of graphs on her Visualizing Economics website.

Here is a graph showing the history of the logarithm of real per capita GDP in the U.S. since 1871. With the logarithm on the vertical axis, the slope of the curve shows the percentage growth rate.  

Here is a graph showing the history of real per capita GDP in the U.S. since 1871. You can see what the miracle of compound growth does.  

Two Types of Knowledge: Human Capital and Information

Human Capital and Information. Knowledge can be either “human capital” or “information.” The difference is the resource cost of transferring a body of knowledge from one person to another. Here is the classification scheme I have in mind:

Human capital is knowledge that is hard to transfer.

Information is knowledge that is easy to transfer.

(This is a specific technical meaning of the word “information” for economics. I use the word “information” in a more general philosophical sense in my post “Ontology and Cosmology in 14 Tweets.”) Note that a given body of knowledge can shift from one category to another when technology changes. The words of the Iliad and the Odyssey were “human capital” when the only means of transferring this knowledge was oral transmission and memorization. When printing arose, the words of the Iliad and the Odyssey became “information.” (See Albert Lord’s The Singer of Tales on the original oral transmission of the Iliad and the Odyssey.)

Now comes the mid-post homework problem. Read Daniel Little’s description of the knowledge of how to fix machines or my abridged version of it just below, and classify the knowledge of how to fix machines as human capital or information. Here is Daniel Little’s opening paragraph:

There is a kind of knowledge in an advanced mechanical society that doesn’t get much attention from philosophers of science and sociologists of science, but it is critical for keeping the whole thing running. I’m thinking here of the knowledge possessed by skilled technicians and fixers – the people who show up when a complicated piece of equipment starts behaving badly. You can think of elevator technicians, millwrights, aircraft maintenance specialists, network technicians, and locksmiths.

Here is Daniel’s account of the level of difficulty of transferring this knowledge, based on his conversations with a fixer of mining machinery: 

I said to him, you probably run into problems that don’t have a ready solution in the handbook. He said in some amazement, "none of the problems I deal with have textbook solutions. You have to make do with what you find on the ground and nothing is routine.” I also asked about the engineering staff back in Wisconsin. “Nice guys, but they’ve never spent any time in the field and they don’t take any feedback from us about how the equipment is failing.” He referred to the redesign of a heavy machine part a few years ago. The redesign changed the geometry and the moment arm, and it’s caused problems ever since. “I tell them what’s happening, and they say it works fine on paper. Ha! The blueprints have to be changed, but nothing ever happens.”

I would trust Tim to fix the machinery in my gold mine, if I had one. And it seems that he, and thousands of others like him, have a detailed and practical kind of knowledge about the machine and its functioning in a real environment that doesn’t get captured in an engineering curriculum. It is practical knowledge: “If you run into this kind of malfunction, try replacing the thingamajig and rebalance the whatnot.” It’s also a creative problem-solving kind of knowledge: “Given lots of experience with this kind of machine and these kinds of failures, maybe we could try X.” And it appears that it is a cryptic, non-formalized kind of knowledge. The company and the mine owners depend crucially on knowledge in Tim’s head and hands that can only be reproduced by another skilled fixer being trained by Tim.

In philosophy we have a few distinctions that seem to capture some aspects of this kind of knowledge: “knowing that” versus “knowing how”, epistime versus techne, formal knowledge versus tacit knowledge. Michael Polanyi incorporated some of these distinctions into his theory of science in Personal Knowledge: Towards a Post-Critical Philosophy sixty years ago, but I’m not aware of developments since then.

As a practical matter, Polanyi’s distinction between “knowing that” (formal knowledge) and “knowing how” (tacit knowledge) is so important for the costs of transferring knowledge from one person to another that it closely parallels the distinction between human capital and information.

Pure Technology. Let me assume that your answer to the homework problem is the same as mine: knowledge of how to fix machines has a large element of human capital. This has an important consequence: “technology” as we usually think of “technology” is not just made of the easily copied “recipes” that Paul Romer talks about in his Concise Encyclopedia of Economics article “Economic Growth.”

Suppose for the purposes of economic theory, we insist on defining “pure technology” as a recipe that can be cheaply replicated. Then “technology” in the ordinary sense has an element of human capital in it as well as “pure technology,” much as “profit” in the ordinary sense has an element of return to capital in it as well as “pure profit.” The pure technology for mining would include not only

  1. a plan for how the machines are used and repaired, but also
  2. a plan for having new operators learn how to operate the machines and for having new machine repairers learn from more experienced machine repairers. 

The “technology” in the ordinary sense is human capital for using and repairing the machines–that is, already embedded knowledge produced from 1, 2 and learning time.

Economic Metaknowledge. In addition to straight ideas or recipes, Paul Romer emphasizes the importance of meta-ideas:

Perhaps the most important ideas of all are meta-ideas. These are ideas about how to support the production and transmission of other ideas. The British invented patents and copyrights in the seventeenth century. North Americans invented the agricultural extension service in the nineteenth century and peer-reviewed competitive grants for basic research in the twentieth century.

There are many meanings of the prefix “meta.” Paul is using “meta” so that “meta-X” means “things in category X to foster the production and transmission of things in category X.” When another meaning of “meta-” might otherwise intrude, let’s use “economic meta-X” for this meaning. Then with the distinction between human capital and information in hand, there are at least four types of economic metaknowledge–knowledge to foster the production and transmission of knowledge:

  • Meta-human-capital: human capital to foster the production and transmission of human capital. (Teaching skill is the most important example.) 
  • Economic meta-information: information to foster the production and transmission of information. (Many of the most important software programs are in this category: Microsoft Office, the software behind Social Media such as Tumblr, Twitter, and Facebook, TiVo’s software, the software behind the web itself…. Also, computer science and electrical engineering journals on library shelves contain some economic meta-information. In its time, a 17th Century printer’s manual would count.)
  • Human capital to foster the production and transmission of ideas. (Research skill– including the skill of writing academic papers–is a good example.)
  • Information to foster the production and transmission of human capital. (The contents of Daniel Willingham’s book Why Don’t Students Like School? are an excellent example that I highly recommend. He draws his suggestions for teaching from the U.S. Department of Education’s What Works Clearinghouse.)

Extra Credit: Figure out how Paul Romer’s meta-ideas listed above–patents and copyrights, agricultural extension services, and peer-reviewed competitive grants–fit into this fourfold division of economic metaknowledge.

Rumsfeldian Metaknowledge. According to Colin Powell (as excerpted in the Appendix below and given more fully at this link), we can blame Donald Rumsfeld’s unchecked insubordination in disbanding the Iraqi Army for some portion of the long, hard slog we have faced in the War in Iraq since 2003, but Donald did coin a memorable description of another kind of metaknowledge. Here is the 21-second video, and here is the transcript:

[T]here are known knowns; there are things we know that we know.

There are known unknowns; that is to say there are things that, we now know we don’t know. But there are also unknown unknowns–there are things we do not know, we don’t know.

Metaknowledge in this sense of knowing what one knows and knowing what one doesn’t know often has great economic value, whether in daily life, business, or policy making. But metaknowledge in this Rumsfeldian sense–even economically valuable Rumsfeldian metaknowledge–should be distinguished from “economic metaknowledge” as I define it above.

Appendix. Here is what Colin Powell wrote:

When we went in, we had a plan, which the president approved. We would not break up and disband the Iraqi Army. We would use the reconstituted Army with purged leadership to help us secure and maintain order throughout the country. We would dissolve the Baath Party, the ruling political party, but we would not throw every party member out on the street. In Hussein’s day, if you wanted to be a government official, a teacher, cop, or postal worker, you had to belong to the party. We were planning to eliminate top party leaders from positions of authority. But lower-level officials and workers had the education, skills, and training needed to run the country.

The plan the president had approved was not implemented. Instead, Secretary Donald Rumsfeld and Ambassador L. Paul Bremer, our man in charge in Iraq, disbanded the Army and fired Baath Party members down to teachers. We eliminated the very officials and institutions we should have been building on, and left thousands of the most highly skilled people in the country jobless and angry—prime recruits for insurgency. These actions surprised the president, National Security Adviser Condi Rice, and me, but once they had been set in motion, the president felt he had to support Secretary Rumsfeld and Ambassador Bremer.