# Randall Wray: Government Deficits Translate into Surpluses for the Non-Government Sector →

Make up your own mind about what the facts mean, but this article by Randall Wray explains an accounting identity worth knowing: deficits and surpluses for all economic actors in the world put together have to add up to zero. It is good to look at both sides of the coin.
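A minimal sketch of the identity for one country, with made-up numbers (the sector names and figures below are illustrative, not taken from Wray's article): the government's deficit shows up as a combined surplus for the domestic private sector and the rest of the world.

```python
# Hypothetical sectoral balances for one country, as shares of GDP.
# Positive = surplus (income exceeds spending); negative = deficit.
government = -0.08       # government deficit
domestic_private = 0.05  # private-sector net saving
rest_of_world = 0.03     # foreigners' surplus vis-a-vis this country

# The accounting identity: the three balances sum to zero, so the
# government deficit equals the non-government sectors' combined surplus.
assert abs(government + domestic_private + rest_of_world) < 1e-9
assert abs(-government - (domestic_private + rest_of_world)) < 1e-9
```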

# Neil Irwin: American Manufacturing is Coming Back. Manufacturing Jobs Aren't

Sometimes a sector of the economy has so much technological progress that over many decades, output in the sector increases while inputs into the sector–particularly the amount of labor used–decrease. Agriculture went through this transformation first. In more recent decades, manufacturing employment has been shrinking while manufacturing output has been growing. Just as we need only a few farmers to feed everyone, we are moving toward a world where we need only a few people to manufacture things, while almost everyone is employed in the service sector.

Neil Irwin describes this transformation in his post “American manufacturing is coming back. Manufacturing jobs aren’t.”

# How Marginal Tax Rates Work

Here is an exercise. What is wrong with the way the people quoted below are thinking?

1. Kristina Collins, a chiropractor in McLean, Va., said she and her husband planned to closely monitor the business income from their joint practice to avoid crossing the income threshold for higher taxes outlined by President Obama on earnings above $200,000 for individuals and $250,000 for couples.

Ms. Collins said she felt torn by being near the cutoff line and disappointed that federal tax policy was providing a disincentive to keep expanding a business she founded in 1998.

“If we’re really close and it’s near the end-year, maybe we’ll just close down for a while and go on vacation,” she said.

2. … [the extra money that comes with a raise] “is nice, but it could very well bump you into the next tax bracket, possibly leaving you with less money than you had before the raise.”

For an answer, see the Wikipedia entry on “Tax Rate” and Matthew Yglesias’s posts “Nobody Understands How Taxes Work,” “Tax Whiners Don’t Understand How Marginal Tax Rates Work,” and “Tax Ignoramuses.”
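To make the error concrete, here is a minimal sketch of how marginal brackets work. The bracket boundaries and rates below are invented for illustration (they are not the actual schedule); the point is that each rate applies only to the dollars inside its bracket, so a raise that crosses a threshold can never lower after-tax income.

```python
# Hypothetical brackets: (lower bound of bracket, marginal rate).
BRACKETS = [
    (0, 0.10),
    (50_000, 0.25),
    (250_000, 0.35),  # only dollars ABOVE $250,000 are taxed at 35%
]

def tax_owed(income):
    """Apply each rate only to the slice of income inside its bracket."""
    tax = 0.0
    bounds = BRACKETS + [(float("inf"), 0.0)]
    for (low, rate), (high, _) in zip(BRACKETS, bounds[1:]):
        if income > low:
            tax += (min(income, high) - low) * rate
    return tax

# Crossing the top threshold by $1,000 raises taxes by only $350,
# so after-tax income still goes up by $650.
below, above = 250_000, 251_000
assert abs(tax_owed(above) - tax_owed(below) - 350.0) < 1e-6
assert (above - tax_owed(above)) > (below - tax_owed(below))
```

Only the income above each cutoff is taxed at the higher rate, so closing the practice to stay under the line, as in the quote above, throws away income for no tax benefit.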

# International Finance: A Primer

In this post, I want to lay out the basics of international finance at the level of my Principles of Macroeconomics class. Trust me, it will be worth it. There are many points about economic policy I have wanted to make on this blog that I have been unable to make without first laying the groundwork with a discussion of international finance like this. This post focuses on international finance in the long-enough run that aggregate demand is not an issue. So any discussion of monetary policy will have to wait for another post: “Short-Run International Finance: A Primer” to come. Also, the longer run focus of this post will show up when I talk about an increase in the national saving rate as a good thing–which I think it will be, about five years from now. But right now (2012) it would be good for people to spend more, as I assume in my posts so far about short-run fiscal policy and about monetary policy.

I use Greg Mankiw’s Brief Principles of Macroeconomics in my class; I like his treatment of international finance very much. Underneath the surface, Greg’s treatment of international finance has two key foundational pillars:

1. People have definite ideas (somewhat independent of the true distribution of returns on those foreign assets) about how much in the way of foreign assets they want among the assets that make up their wealth.
2. Foreign currency is a hot potato that people want to get rid of.

Let me start by discussing these two foundational pillars in turn.

Having Definite Ideas about the Amount of Foreign Assets to Hold. Having definite ideas about how much of one’s portfolio should be in foreign assets, in a way that is partly independent of the true return properties of those assets, is not fully rational. But home bias, the tendency to be underweight in foreign assets relative to what would be optimal based on the distribution of asset returns alone, is one of the well-documented psychological biases in economics. And because cognition is finite and scarce in relation to the difficulty regular households have in thinking about foreign assets, people’s attitudes toward foreign assets are likely to change over time in ways that are not fully rational.

If people were fully rational about foreign assets, I suspect that Greg’s treatment of international finance would not work very well. But I think it actually does work well because cognitive limitations exist. Financial decisions are some of the hardest decisions that people make, and international finance adds an extra layer of complexity. Having some players in the market who are fully rational would make a difference, but risk aversion and limits on the quantity of wealth those rational financial actors control mean that they cannot necessarily make the markets over in their own image. Also, many people think they are being fully rational when they are depending very heavily on returns in the future having similar properties to returns in the past, and depending on those returns to have few sudden jumps.

Foreign Currency as a Hot Potato.  Nick Rowe, who is a Canadian, gives a good discussion of why foreign currency is more of a hot potato than domestic currency, in his post “Money is Always and Everywhere a Hot Potato”:

I sold my car for Australian dollars, which were also a hot potato. I sold them at a place, called a “bank”, which is a dealer in Australian dollars. Because, including transactions costs and search costs and everything else, I would probably have got the best deal from someone who specialises in trading Australian dollars and who holds an inventory of Australian dollars. Only banks do that.

If I had sold my car for Canadian dollars, which are also a hot potato, I would have looked for a buyer who specialises in trading Canadian dollars and who would give me the best deal. But everyone I deal with here in Canada specialises in dealing with Canadian dollars and holds inventories of Canadian dollars. Everyone does that. Not just banks, but jewelers too, and car dealers, and my local supermarket, and my broker, and everyone I know.

Right now I have $100 in my pocket. It’s a hot potato. I don’t want it. I plan to get rid of it. Only not right now, because I am typing this right now. I plan to get rid of it a little later. Where will I get rid of it? At my bank? Well, if I thought my bank would give me the best deal, and something I really wanted more than anything else right now, then yes I would get rid of it at the bank. But I don’t think that. I think I will get a better deal at the supermarket and gas station. So that’s where I’m planning to spend it, in a little while. (Unless the bank phones me with a great new offer that can’t wait.) The gas station trades gas for Canadian dollars. The supermarket trades food for Canadian dollars. Continue through a long list of other traders. And the bank trades IOUs for Canadian dollars. A bank is just 1 out of 999 other places I could trade Canadian dollars.

Thus, for reasons that Nick describes, most people are willing to tolerate a significantly bigger pile of domestic currency than of foreign currency. And they are reasonably quick to trade even domestic currency for assets that are IOU’s from someone else, such as an addition to the balance in a checking account, savings account or money market fund, or for stocks or bonds. And for most people, those assets that people regularly convert currency into are primarily domestic assets.

The Recycling of Dollars (and Other Currencies). Let’s start by looking at things from the perspective of Americans thinking of buying something from abroad or otherwise sending dollars abroad, say as a charitable gift. Since U.S. dollars are a foreign currency in most of the world (leaving aside places such as Ecuador, which do use U.S. dollars), people there will want to get rid of those dollars. They are likely to go to a bank and exchange those dollars for euros, yen or whatever the local currency is. But the bank doesn’t really want those U.S. dollars either, so it wants to get rid of them as well.

One way or another, the price system will ensure that those dollars get back to the United States where they are wanted, instead of staying where they are not wanted except as a way to get euros, yen or whatever the local currency is. The key part of the price system that accomplishes this is the set of exchange rates between different currencies. But the brilliance of Greg’s approach is that one can wait until the very end to figure out what happens to exchange rates. To begin with, all one needs to know is that the exchange rates will do whatever it takes to get U.S. dollars back to the United States (or other places such as Ecuador that use U.S. dollars as the local currency). The other thing to realize is that exchange rates primarily affect the level of exports and imports–and given a little time, usually quite a bit: more than a 1% change in the quantity of exports or imports for a 1% change in the exchange rate. As a result:

- A lower value of the dollar as expressed in foreign currency makes American goods cheaper to foreigners, increasing exports, and makes foreign goods look more expensive to Americans, reducing imports. (Thus, net exports, which equals the value of exports minus the value of imports, definitely increases.) Why? Let me look at things from the perspective of Americans, and think of transactions as having dollars on one side or the other (with whatever currency exchange is necessary to think of things that way rolled into the transaction). From that point of view, exports are an exchange of foreigners’ dollars for some of our goods and services. More exports are a way to get dollars that are in foreigners’ hands back to the United States. And imports can be seen as an exchange of dollars in our hands for foreign-produced goods and services. So fewer imports means less outward flow of dollars. Therefore, a lower value of the dollar shifts the flow of dollars back toward the United States.
- A higher value of the dollar as expressed in foreign currency makes American goods more expensive to foreigners, reducing exports, and makes foreign goods look cheaper to Americans, increasing imports. (Thus, net exports, which equals the value of exports minus the value of imports, definitely decreases.) Since imports send dollars outward, while exports bring them back, a higher value of the dollar tends to shift the flow of dollars away from the United States.

The bottom line is that, in response to any initial flow of dollars, exchange rates will adjust to modify the levels of exports and imports in a way that will recycle those dollars back to where they came from. The total amount of recycling of dollars is equal to the value of net exports. And the same principle works for any other currency. The flow of U.S. dollars helps us think about the U.S. dollar zone (the U.S. plus Ecuador and a few other places), the flow of euros helps us think about the Eurozone, and the flow of yen helps us think about Japan.

The Effects of Exchange Rates on Asset Holding. The reason to emphasize the effect of exchange rates (in this case the dollar in relation to other currencies) on net exports rather than on asset flows is that, as long as I want to start and end in the same currency, the level of exchange rates does not affect my rates of return. For example, suppose that there are 100 yen to the dollar, and the interest rate is 1% per year in Japan. I change a dollar into 100 yen, get 101 yen a year later, then turn those 101 yen into $1.01. The interest rate is still 1%. It is only predictable changes in exchange rates that should affect rates of return. (And the fact that something is a foreign asset, which I am assuming you care about, is pretty much the same regardless of the level of the exchange rate.)
What Greg does is to effectively assume that the only predictable exchange rate movements are those associated with the two currency areas at issue having different rates of inflation. That means that there are no predictable movements in real (that is, inflation-adjusted) exchange rates, so that if we think about real interest rates, we don’t need to worry about the effects of exchange rates on rates of return in our home currency. (I should say that advanced international finance models worry a lot about the effects of predictable exchange rate movements on the desire to hold various assets.)
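The yen example can be checked with a short script (a sketch, using the numbers from the example): as long as the exchange rate does not move, the round trip from dollars into yen and back earns exactly the Japanese interest rate, whatever the level of the exchange rate.

```python
def round_trip_return(yen_per_dollar, yen_rate, dollars=1.0):
    """Dollar return from converting to yen, earning yen interest for a
    year, and converting back at the SAME (unchanged) exchange rate."""
    yen = dollars * yen_per_dollar
    yen_later = yen * (1 + yen_rate)
    dollars_later = yen_later / yen_per_dollar
    return dollars_later / dollars - 1

# 100 yen/dollar and 1% in Japan: $1 -> 100 yen -> 101 yen -> $1.01.
assert abs(round_trip_return(100, 0.01) - 0.01) < 1e-9
# The LEVEL of the exchange rate is irrelevant: 80 yen/dollar gives 1% too.
assert abs(round_trip_return(80, 0.01) - 0.01) < 1e-9
```

Only a *change* in `yen_per_dollar` between the two conversions would alter the dollar return, which is why predictable exchange rate movements are the case that needs separate attention.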

There is one part of the effect of predictable exchange rate movements that we should definitely worry about here. If people can ever predict a sudden movement in exchange rates, they will want to get ahead of that movement by getting into the currency that is going up relative to the other, and out of the one that is going down. That tends to make the sudden movement happen early–that is, as soon as people are confident there will be a sudden movement.

But there is another part of predictable exchange rates that it might be OK for us to ignore for now: small predictable movements. Because unpredictable movements in exchange rates tend to be so large, it is hard for people to be confident about the small predictable movements in exchange rates that might be there. Given the uncertainties, it is easy for people to ignore those small predictable movements even if they should pay attention to them. In any case, as a starting point for a Principles of Macroeconomics level analysis, Greg and I will treat asset transactions as unaffected by exchange rates.

The Principle of Comparative Advantage from the Perspective of International Finance. Suppose that in the future, some other country became better at making everything. Let’s call it Superbia, since they are superb at making everything, and call its currency the superbo. Would we run a trade deficit with Superbia? In order to save the effects of international transactions involving financial assets until later, let’s imagine that Superbia doesn’t allow any international financial transactions except in currency. Also, assume there are no international gifts. And let’s keep things simple by thinking of Superbia as the only other country in the world.

At first, we might imagine that we would be buying just about everything from Superbia, and they would be buying very little from us. But if initially that did happen, the people in Superbia would soon get big piles of dollars that they would be trying to get rid of. They would start getting very reluctant to let go of their superbos (the currency of Superbia) for dollars that they already had way too many of. So the value of dollars relative to superbos would go down, and the value of superbos relative to dollars would go up. That would make all of the wonderfully made Superbian goods look quite expensive. The exchange rate would keep adjusting until the dollars were well recycled.

Knowing that the dollars will get recycled, what can we say about imports and exports? With no international financial transactions and no gifts, basically the only way dollars get from one country to another is in exchange for goods. For dollars to be recycled, the Superbians must be buying things from America as well as Americans buying things from Superbia. Indeed, the value of imports and exports must be equal, which is what we mean when we say that “net exports” are zero. Of all the things that the Superbians make better with the same resources, or just as well with fewer resources, Americans will end up buying those where the Superbian advantage is greatest. But where the Superbian advantage is smaller, the high price of the superbo in dollars will make the Superbian version look more expensive than the American version of the good, so that good will be exported from America to Superbia.

International Asset Purchases and Sales Drive Net Exports. Now let’s add in asset purchases and sales. Suppose, for example, that my readers in the United States (but not elsewhere) really took to heart the arguments I give for international diversification in my post “Why My Retirement Savings Accounts are Currently 100% in the Stock Market.” So American purchases of foreign stock increase. Initially, this puts a lot of dollars in the hands of foreigners. Those dollars are a hot potato. And no one outside the United States has been convinced by my arguments to change the amount of U.S. assets they have. So they don’t want to get rid of those dollars by buying and holding U.S. assets for any substantial period. Those unwanted dollars then kick around in the rest of the world until the price of dollars in terms of other currencies goes down. That makes American goods cheaper to the rest of the world, increasing our exports, and makes foreign goods more expensive to the rest of the world, reducing our imports, as discussed above, and eventually the dollars make their way back home.

Notice that the shift in Americans’ portfolio choices toward holding foreign assets ends up raising net exports from America. And we can figure out how much, without knowing the details of how much the dollar goes down! The increase in net exports must be exactly equal to the value of the foreign asset purchases Americans made. If they shifted $100 billion into foreign assets, it will result in $100 billion worth of extra exports as compared to imports from America.
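A tiny sketch of the bookkeeping in this paragraph, using the $100 billion figure from the text: however the exchange-rate adjustment splits between more exports and fewer imports, the change in net exports equals the asset purchases.

```python
def change_in_net_exports(asset_purchases, extra_exports):
    """Dollars sent abroad to buy assets come back either as extra export
    sales or as imports we no longer make; the split depends on exchange
    rate details we don't need, but the total is pinned down."""
    fewer_imports = asset_purchases - extra_exports
    return extra_exports + fewer_imports  # change in (exports - imports)

# $100 billion of foreign asset purchases, under three arbitrary splits:
for extra_exports in (0, 40, 100):  # billions of dollars
    assert change_in_net_exports(100, extra_exports) == 100
```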

An Easy Policy to Restore America’s Industrial Heartland (Including Key Swing States). It is not likely that many people will actually be persuaded by my portfolio advice, so let’s think of a policy that really would increase the amount of foreign assets that Americans buy and so increase our exports and reduce our imports. David Laibson and his coauthors have found that in retirement accounts, people often stay with the default contribution level and default allocation to different assets, even when they are allowed to change them by going through a little paperwork. There are at least two reasons for this. One is that people are sometimes a little lazy–or to be more charitable, perhaps scared of financial decisions. That makes them want to do nothing. The other reason people often stick with the default settings for their retirement accounts is that they think (unfortunately wrongly, for the most part, right now) that their company, or maybe the government, has carefully thought through how much they should be putting aside and what they should be financially investing it in.

So imagine that the government establishes a regulation that all employers need to offer a retirement saving account with a relatively high default contribution level. The employers are not required to match it. And employees can get out of making any contributions just by doing a little paperwork. But many, many employees won’t change the default contribution. So this simple regulation could dramatically raise the household saving rate in America. Assuming the government keeps its budget deficits on the same path as it otherwise would, that would also raise the national saving rate. A higher national saving rate would make loanable funds more plentiful at any real interest rate, creating a surplus of loanable funds at a high real interest rate and so driving down the real interest rate. With real interest rates low in the United States, Americans would start thinking of buying more foreign assets that earn higher interest rates, and foreigners would be less likely to buy low-interest-rate American assets. (How much people want foreign assets is only somewhat independent of rates of return, not totally independent. A big enough interest rate differential will lead people in both countries to shift.) With Americans buying more foreign assets and foreigners buying fewer American assets, the flow of dollars has shifted outward. Something has to happen to recycle those dollars. That something is a change in the exchange rate that increases net exports. And it has to increase net exports by the same amount as the change in the flow of dollars for asset purchases.

Indeed, following the tradition of calling the flow of dollars for intentional asset purchases net capital outflow, we can say that net exports would have to equal net capital outflow. More precisely, the net flow of dollars for anything other than buying goods and services has to be exactly balanced by a countervailing net flow of dollars that is about buying goods and services. And except for short periods of time, the net flow of dollars for purposes other than buying goods and services has to be intentional; it won’t take long before unintentional movements get undone by recycling.

Now suppose that the government wants to increase net exports even more than was accomplished by mandating that all employers provide retirement savings accounts and setting a high default contribution level for retirement savings accounts. The government could simply add the regulation that the default asset allocation would be, say, 40% in foreign assets. That would dramatically increase the buying of foreign assets relative to what would be likely to happen otherwise (at least in the United States with current attitudes toward foreign assets). That would further increase net financial capital outflow from the United States, and lead to exchange rate adjustments that would further raise net exports to recycle those dollars back to the United States.

At the end of the day, we get a lot of Chinese goods that come over in container ships, and they get a large pile of IOU’s such as U.S. Treasury bills. There is something odd about this. An argument can be made that this hurts the Chinese much more than it hurts us Americans, and probably even helps us, but the effects are actually quite complex because they affect different groups within each country differently. In China, the political elites are often closely connected to exporters, and may even have a strong financial interest in exporters, so they may want to help their exporters even if it hurts China overall. And the part of the U.S. trade deficit caused by large Chinese purchases of American assets hurts many American businesses (both exporters and those who compete with imports) even while it helps American consumers.

For now, though, I want to emphasize the economics without getting too much into policy evaluation. I want to emphasize that if the Chinese buy a lot of foreign assets, they will run a trade surplus, and we can say that in a quantitative way without even knowing the details of what will happen to the exchange rates. For example, if the Chinese buy an extra $1 trillion worth of foreign assets, it will result in an extra cumulative trade surplus of $1 trillion. Sometimes people say that something like that can’t go on forever. But if the Chinese government had an unlimited willingness to accumulate foreign assets, there is nothing to stop it from going on a long, long time. At some point, the pile of foreign assets will get so big that I doubt the Chinese would actually succeed in trying to collect on all of those assets. But if they are willing to risk not getting their money back, they can keep on accumulating.

# Greg Ip on Barack Obama's Performance as Steward of the Economy →

This is an excellent discussion by Greg Ip of how Barack has done in his economic policy choices and the economic role of presidents in general.

Note: The appropriate judgment of Barack’s performance would be much different if the many ways to stimulate aggregate demand

1. without adding too much to the national debt and
2. in an environment where short-term interest rates are already down to zero

had been better understood when he faced the economic challenges of the last few years.

On the many ways to stimulate aggregate demand without adding too much to the debt and in a low-interest-rate environment, see my blog posts on short-run fiscal policy and monetary policy, which are nicely laid out in two “sub-blogs” of tagged posts.

# Nicholas Kristof: "Where Sweatshops are a Dream"

This op/ed by Nicholas Kristof is a classic that Greg Mankiw links to. I use it in my class to make two points:

1. The value of an extra dollar (or an extra Cambodian riel) can be extraordinarily high for someone who is very poor. (See my post “Inequality Aversion Utility Functions,” where I emphasize that almost all the benefits from redistribution are from helping the poor, not from transferring money from the rich to the middle class.)
2. Caring about helping the poor does not always mean one should support policies recommended by activists who say they care about the poor.

A number of policies recommended by those who say they care about the poor have the common element of saying, in effect:

If you can’t or won’t create a good job, don’t create a job at all.

For some people, a “bad job” is a lifeline. And if we insist that only good jobs should exist, they will have no job.

I think there is another element behind opposition to sweatshops. When people in poor countries are suffering before the arrival of an American company in their backyard, that hideous suffering from poverty is out of sight for us in America. But as soon as the American company arrives and gives them the opportunity of taking, if they choose to, what look to us like bad jobs, the somewhat lesser suffering of their poverty after taking the “bad job” seems like the fault of the American company for not making the jobs nicer. In fact the company has helped them, but we only see the suffering from poverty after, not the hideous suffering from worse poverty before.

One factor that can make it easier to blame the American company for the suffering left after providing the job is that some of the corporate executives involved in setting up and running the new factory in a poor country may, in fact, be uncaring, unfeeling people (though I doubt this is true anywhere near as often as people suppose). But even if many of the corporate executives involved in setting up and running the new factory are uncaring, unfeeling people, it doesn’t change the fact that, by their actions of setting up and running the factory, they have made people’s lives better. They could have made people’s lives better still if they had taken a bigger fraction of their personal earnings and donated it to helping the poor than they actually did, but that is something that can be said for almost every American.

One policy change that could increase what Americans do to help the desperately poor in other countries is the program of “public contributions” I recommend in my post “No Tax Increase Without Recompense.” That program of public contributions would dramatically increase the amount of assistance Americans give to the desperately poor in other countries. Government-funded foreign aid is very unpopular–and often is relatively ineffective because much of it is channeled through corrupt foreign governments. But many individuals (with whatever money they have set aside to donate to good causes) are attracted by the idea of helping the desperately poor.

# Noah Smith on the Demand for Japanese Government Bonds →

In this post, Noah Smith argues that the price of Japanese government bonds (JGB’s) is still high (which is the same thing as saying the interest rate on Japanese government bonds is still low) despite the size of the Japanese government debt because people believe that the Japanese government will raise taxes in the future.

Towards the end of his post, Noah raises the possibility of negative real interest rates as another way to deal with the debt. This seems quite possible to me. If confidence in the willingness of future Japanese governments to raise taxes falls, then the price of JGB’s will fall and their interest rates will rise significantly above zero. In that situation there would be more room for the Bank of Japan (BOJ) to push interest rates on JGB’s down toward zero again (and equivalently, push prices of JGB’s up) to stimulate the Japanese economy. If that stimulus raises inflation to the 2% per year rate that the Bank of Japan has said it wants, then real interest rates could easily be -2% (a nominal interest rate of 0 minus inflation of 2%) for quite some time.
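As a quick check on the −2% figure: with a 0% nominal rate and 2% inflation, the simple subtraction in the paragraph above is a close approximation to the exact real return.

```python
def real_rate(nominal, inflation):
    """Exact real return: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal) / (1 + inflation) - 1

approx = 0.00 - 0.02           # the rule of thumb: nominal minus inflation
exact = real_rate(0.00, 0.02)  # about -1.96%

assert abs(exact - approx) < 0.001
assert exact < 0  # either way, holders of JGB's lose purchasing power
```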

An important bit of background is that the Japanese government seems to be able to do quite a bit to twist the arms of insurance companies, regional banks and pension funds to get them to continue to hold JGB’s, as Noah argues in his earlier post “Financial Repression, Japanese Style.” And the pension funds in turn don’t give workers many choices about how to invest. That is the core of Noah’s answer to the obvious question of why anyone would ever put up with low real interest rates for JGB’s when higher real interest rates are available on foreign assets.

# A List of Macro Blogs from Gavyn Davies →

This blog, Confessions of a Supply-Side Liberal, made it onto Gavyn Davies’s list of macro blogs to follow.

# Stephen Donnelly on How the Difference Between GDP and GNP is Crucial to Understanding Ireland's Situation →

Ireland is in trouble. But outside Ireland, many economists think it is doing fine. Why? Stephen Donnelly argues that part of the answer turns on the difference between Gross Domestic Product and Gross National Product. Gross Domestic Product (GDP) is the value of goods and services produced within a country each year or quarter. Gross National Product (GNP) is the value of goods and services produced by the labor, capital and other resources owned by citizens of a country each year or quarter. For most countries, GDP and GNP are close to each other, but Ireland has attracted so much foreign investment that a large share of its capital stock is owned by foreigners. Thus, Ireland’s GNP is much lower than its GDP.
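The relation between the two measures can be written as GNP = GDP + net factor income from abroad. A sketch with invented numbers (not actual Irish data) shows how heavily negative net factor income pushes GNP below GDP:

```python
# Hypothetical figures in billions of euros; illustrative only.
gdp = 170.0                # value produced within the country
income_paid_abroad = 30.0  # returns to foreign-owned capital in the country
income_earned_abroad = 5.0 # citizens' resources earning income elsewhere

net_factor_income = income_earned_abroad - income_paid_abroad
gnp = gdp + net_factor_income

assert gnp == 145.0
assert gnp < gdp  # the gap Donnelly argues matters for judging Ireland
```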

The presence of the foreign-owned capital raises wages in Ireland, so it is a good thing. But the income from the foreign-owned capital itself does not belong to Irish citizens, and so is not much help when it comes to handling the debt of the Irish government–especially since the Irish government needs to keep the promise to tax foreign-owned capital lightly that it made in order to attract foreign investment.

# Energy Imports and Domestic Natural Resources as a Percentage of GDP

Much is written and said about the impact of energy imports and natural resources on output. But a basic fact makes it hard for energy imports and natural resources to matter as much as people seem to think they do: natural resources account for a small share of GDP–on the order of 1% = .01–and energy imports measured as a fraction of GDP are also on the order of 1% = .01. Even a 20% increase in the price of imported oil, for example, should make overall prices go up by something like .01 * 20% = .2%. It should take a huge increase in the price of oil to make overall prices go up by even 1%. Am I missing something?
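The back-of-the-envelope calculation in the paragraph above, written out: the first-order impact on the overall price level is roughly the expenditure share times the price change.

```python
def price_level_impact(share_of_gdp, price_change):
    """First-order effect of an input-price shock on the overall price
    level: share of spending times the proportional price change."""
    return share_of_gdp * price_change

# A 20% rise in imported oil prices with a 1%-of-GDP share: about 0.2%.
assert abs(price_level_impact(0.01, 0.20) - 0.002) < 1e-9
# Even a doubling of oil prices would move overall prices only about 1%.
assert abs(price_level_impact(0.01, 1.00) - 0.01) < 1e-9
```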

It is a little dated, but here is what I found online about oil imports as a percentage of GDP. (I’ll gladly link to a more recent graph instead if there is one.) 2% of U.S. GDP is near the high end for the value of our oil imports in the past.  And here are World Bank numbers for factor payments to natural resources as a percentage of GDP.

# The Deep Magic of Money and the Deeper Magic of the Supply Side

## Introduction

I will assume that you have either read The Lion, the Witch and the Wardrobe, seen the movie, or don’t intend to do either. So I won’t worry about spoiling the story for you. C.S. Lewis’s fantasy is set in the world of Narnia. WikiNarnia explains the laws of nature in Narnia that drive the plot of The Lion, the Witch and the Wardrobe:

The Deep Magic was a set of laws placed into Narnia by the Emperor-beyond-the-Sea at the time of its creation. It was written on the Stone Table, the firestones on the Secret Hill and the sceptre of the Emperor-beyond-the-Sea.

This law stated that the White Witch Jadis was entitled to kill every traitor. If someone denied her this right then all of Narnia would be overturned and perish in fire and water.

Unknown to Jadis, a deeper magic from before the start of Time existed which said that if a willing victim who had committed no treachery was killed in a traitor’s stead, the Stone Table would crack and Death would start working backwards.

Like the Deep Magic and the Deeper Magic in Narnia, in macroeconomics, money is the Deep Magic and the supply side is the Deeper Magic. In the short run, money rules the roost. In the long run, pretty much, only the supply side matters. In this post, I want to trace out what happens when a strong monetary stimulus is used to increase output and reduce unemployment. In the short run, output will go up, but in the long run, output will return to what it was.

## The Deep Magic of Money

Let me start by explaining why money is the Deep Magic of macroeconomics. There are many people in the world today who think it is hard to make output go up, and that we need to resort to massive deficit spending by the government–whether extra spending meant to stimulate the economy or tax cuts meant to stimulate the economy. But as I explained in an earlier post, Balance Sheet Monetary Policy: A Primer, there are few limits to the power of money to make output go up in the short run.

Money as a Hot Potato when the Short-Term Safe Interest Rate is Above Zero. When short-term safe interest rates such as the Treasury bill rate or the federal funds rate at which banks lend to each other overnight are positive, almost all economists agree that money is very powerful. Suppose the Federal Reserve (“the Fed”) or some other central bank prints money to buy assets. In this context, when I say “money” I mean currency (in the U.S., green pieces of paper with pictures of dead presidents on them) or the electronic equivalent of currency–what economists sometimes call “high-powered money.” (When the Fed creates the electronic equivalent of currency, it isn’t physically “printing” money but it might as well be.) The Fed requires banks to hold a certain amount of high-powered money in reserve for every dollar of deposits they hold. Any high-powered money that a bank holds beyond that is not needed to meet the reserve requirement and is usually not a good deal because it earns an interest rate of zero (unless the Fed decides to pay more than that for the electronic equivalent of currency held in an account with the Fed). So inside the banking system, reserves beyond those that are required–called “excess reserves”–are usually a hot potato. Also, outside the banking system, at an interest rate of zero, high-powered money is normally a “hot potato” that households and firms other than banks try to spend relatively quickly, since every minute they hold high-powered money they are losing out on higher interest rates they could earn on other assets, such as Treasury bills. I say “relatively” quickly because there is some convenience to currency. 
So if the Fed prints high-powered money to buy assets, that hot potato money stimulates spending until people and firms wind up with enough deposits in bank accounts that most of the high-powered money is used up meeting banks’ requirements to hold reserves against deposits, while the rest is held in people’s pockets or the equivalent for convenience.
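The end point of the hot-potato process can be sketched in a couple of lines, assuming a stylized 10% reserve requirement and ignoring currency held outside banks for simplicity:

```python
reserve_ratio = 0.10  # stylized requirement: reserves held per dollar of deposits
injection = 100.0     # high-powered money the Fed creates to buy assets

# Spending passes the hot potato along until deposits have grown enough
# that required reserves absorb the whole injection.
new_deposits = injection / reserve_ratio
print(new_deposits)  # 1000.0
```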

What Happens at the Zero Lower Bound on the Nominal Interest Rate. Many things change when short-term, safe interest rates such as the federal funds rate or the Treasury bill rate get very low, near zero. Then high-powered money is no longer a hot potato, either inside or outside the banking system. Banks and firms and households become willing to keep large piles of high-powered money–piles doing nothing (something even many non-economists have remarked upon lately). In the U.S., extremely low interest rates are a relatively new thing, but Japan has had extremely low interest rates for a long time; in Japan, it is not unusual for people to have thick wads of 10,000-yen notes (worth about $100 each) in their wallets. There are economists who believe that when short-term safe interest rates are essentially zero, so that high-powered money is no longer a hot potato, money has lost its magic. Not so. Printing money to buy assets has two effects: one from the printing of the money, the other from the buying of the assets. The asset-buying effect can be important, depending on what asset the Fed is buying. Normally, the Fed likes to buy Treasury bills when it prints money. But buying Treasury bills really does lose its magic after a while. Interest rates on Treasury bills falling to zero is equivalent to people being willing to pay a full $10,000 for the promise of receiving $10,000 three months later. (You can see that the interest rate is then zero, since you don’t get any more dollars back than what you put in. If you paid less than $10,000 at first, then you would be getting more dollars back at the end than what you put in, so you would be earning some interest.) No one is willing to pay much more than $10,000 for the promise of $10,000 in three months, since other than the cost of storage, one can always get $10,000 in three months just by finding a very good hiding place for $10,000 in currency.
So when the interest rate on Treasury bills has fallen to zero, it is not only impossible to push that interest rate significantly below zero; in what turns out to be the same thing, it is impossible to push the price of a Treasury bill that pays $10,000 in three months significantly above $10,000.
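The link between a Treasury bill's price and its interest rate can be sketched with simple discounting (a stylized formula, ignoring real-world day-count conventions):

```python
def tbill_price(face_value, annual_rate, months=3):
    """Price paid now for `face_value` delivered in `months` months,
    at a given annualized simple interest rate (stylized)."""
    return face_value / (1 + annual_rate * months / 12)

print(round(tbill_price(10_000, 0.04), 2))  # 9900.99 at a 4% annual rate
print(tbill_price(10_000, 0.00))            # 10000.0 at the zero lower bound
```

Pushing the price above $10,000 would mean a negative interest rate, which holding currency in a good hiding place lets everyone refuse.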

If the Fed buys packages of mortgages, it pushes up the price of those mortgage-backed assets. When the price of mortgage-backed assets is high, financial firms become more eager to lend money for mortgages, even though they remain somewhat cautious because they (or others who serve as cautionary tales) were burned by mortgages that went sour as part of the financial crisis. If financial firms become eager to lend against houses, more people will be able to refinance and spend the money they get or that they save from lower monthly house payments, and some may even build a new house.

If the Fed buys long-term Treasury bonds, that pushes up their price, making them more expensive. Some firms and households who had intended to buy Treasury bonds will now find them too pricey as a way to get a fixed payoff in the future. With Treasury bonds too pricey, they will look for ways to get payoffs in the future that are not so pricey now. They may hold onto their hats and buy corporate bonds or even corporate stock, despite the risk. That makes it easier for companies to raise money by selling additional stocks and bonds. Up to a point it also pushes up the price of stocks and bonds, so that people looking at their brokerage accounts or their retirement accounts feel richer and may spend more. If you don’t believe me, just watch how joyous the stock market seems every time the Fed surprises people by announcing that it will buy more long-term Treasury bonds than people expected–or how disappointed the stock market seems every time the Fed surprises people by announcing that it won’t buy as many long-term Treasury bonds as people had expected.

The Cost of the Limited Range of Assets the Fed is Allowed to Buy. It is true that at some point the legal limits on what the Fed is allowed to buy will put a brake on how much the Fed can stimulate the economy. But that does not deny the power of money to raise the price of assets and stimulate the economy, it only means that when we don’t allow newly created money to be used to buy a wide range of assets, then money is hobbled. Aside from the effect limits on what the Fed can buy have on the ability of money to stimulate the economy, those limits also affect the cost of what the Fed does. If the Fed is only allowed to buy a narrow range of assets, it will have to push the price of each of those assets up a lot to get the desired effect, and then when it sells them again to avoid the economy overheating, it may lose money from the roundtrip of buying high (when it pushed the price up by buying) and selling low (when it later pulls the price down by selling). This is a bigger problem the lower the interest rate on a given type of asset is to begin with. It is also a bigger problem the longer-term an asset is. So risky assets that have higher interest rates to begin with–and perhaps, especially, risky short-term assets–are better in that regard.

Summarizing the Deep Magic of Money. The bottom line is that in the short run, money has deep magic that can stimulate the economy as much as desired. Right now, the power of money is about as circumscribed as it ever is, and yet it still has its magic. And yet, I claim, as almost all other economists claim, that in the long run, the supply side will win out. Not only will the supply side win out in the long run, but in the long run, money has virtually no power to affect anything important–unless continual, rampant printing of money drives the economy into the disaster of hyperinflation, or a serious shortage of money causes prices to fall in a long-lasting bout of deflation. (The fact that, short of hyperinflation or deflation, money has virtually no power to affect anything important in the long run is called monetary superneutrality.) How can money have so much power in the short run and so little in the long run?

## The Deeper Magic of the Supply Side

The answer to how money can have so much power in the short run and so little in the long run is that the supply side will bend in many ways in the short run, but will always bounce back.

Price Above Marginal Cost Makes Output Demand-Determined in the Short Run. To begin with, the most basic way in which the supply side is accommodating in the short run is that if a firm has–for some period of time–fixed a price above the cost to produce an extra unit of its good or service (the marginal cost), then it is eager to sell its good or service to any extra customer who walks in the door. And firms will, in general, set their prices at least a little above what it normally costs to produce an extra unit as long as they can do so without losing all of their existing customers. Here is why. Thinking in long-run terms, if the firm sets its price equal to marginal cost, then it doesn’t earn anything from the last few customers. So losing those customers by raising the price a little is no great harm. And raising the price a little means that all of the customers who don’t bolt will now be paying more–more that will go into the firm’s pocket. Raising the price too high puts that extra pocket money in jeopardy, so the firm won’t raise prices too high, but it will raise the price at least some above marginal cost as long as it doesn’t lose all of its customers by doing so. To summarize, if firms do fix prices for some length of time as opposed to changing them all the time, they are likely to set those prices above what it normally costs to produce an extra unit of the good or service they sell. And if price is above marginal cost, then given a temporarily fixed price, the amount by which price is above marginal cost is what the firm gets on net when an extra customer walks in the door. For example, produce a widget for a marginal cost of $6, sell it for $10, and take home $4 as extra profits.
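The widget arithmetic, and the buffer the markup creates, in a few lines:

```python
price = 10.0         # fixed for the duration of the pricing period
marginal_cost = 6.0  # normal cost of producing one more widget

# Net gain from each extra customer while the price stays fixed
profit_per_extra_unit = price - marginal_cost
print(profit_per_extra_unit)  # 4.0

# The markup is also a buffer: marginal cost can rise by up to this amount
# before serving another customer becomes a losing proposition.
print(marginal_cost < price)  # True: the firm is still eager to sell more
```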

So firms who won’t lose every last customer by raising their price will set price above marginal cost, and then will typically be eager to sell to an extra customer during the period when their price is fixed. I say “typically” because if enough new customers walked in the door, then marginal cost might increase enough above normal to exceed the fixed price. Then the firm would lose money by selling further units, and would make up an excuse to tell customers about why it won’t sell more. The usual excuse is “we have run out”–which is a polite way of saying that they could do more, for a high enough price, but won’t for the price they have actually set. But since the firm will set price some distance above marginal cost to begin with, there is some buffer in which marginal cost can increase without going above the price. And anywhere in that buffer zone, the firm will still be eager to serve additional customers.

How Extra Output is Produced in the Short Run. How does the firm actually produce extra units in the short run? Here it is more interesting to broaden the scope to the whole economy. (Much of what follows is drawn from a paper I teamed up with Susanto Basu and John Fernald to write: “Are Technology Improvements Contractionary?”–a paper that has to consider what happens as a result of changes in demand before it can begin to address what happens with a supply-side change in technology.) When the amount customers are spending increases, so that firms need to produce more to serve that extra quantity demanded, the firms may, at the end of the day, hire additional employees. But that is usually a last resort. There are many other ways to increase output short of hiring a new employee. Here are three avenues to increase production even before hiring new workers:

1. ask existing employees to stay longer and work more hours in a week and take fewer vacations;
2. ask existing employees to work harder while they are at work–to be more focused while at their stations or their desks, and to spend less time away from their work at the water cooler;
3. delay maintenance of the factory, training, and other activities that can help the firm’s productivity in the long run, but don’t help produce the output the customer needs today.

The Workweek of Capital. One thing that doesn’t have time to contribute much to output when demand goes up is new machines and factories. It is simply hard to add new machines and factories fast enough to contribute that big a percentage of the increase in output. But people working longer hours with the same number of machines and factories don’t necessarily have to crowd around the limited number of machines and workspaces, since those machines and workspaces were often unused after hours anyway. So when the workers work longer, so do their machines and workspaces. Even when new workers are added, they can often be added in a new shift at a time when the machines and workspaces had been unused. So the fact that it is hard to quickly add extra factories and machines is not as big a limitation to output in the short run as one might think. Of course using machines and workspaces around the clock has costs. Extra wear and tear is one cost, but probably a bigger cost is having to pay people extra to be willing to work at the inconvenient hours of a second or third shift. (Note that paying an inexperienced worker working at night the same as a more experienced worker during the day is also paying extra beyond what the inexperienced worker would be worth for production if he or she were working during the day.)

Reallocation of Labor. At the economy-wide level another contribution to higher GDP in a boom is that in a boom the amount of work done tends to increase most in those sectors of the economy where a 1% increase in inputs leads to considerably more than a 1% increase in output–that is, in sectors such as the automobile sector where there are strong economies of scale (also called increasing returns to scale). These tend to also be sectors in which the price of output, and therefore the marginal value of output, is the furthest above the marginal cost of output. So when more work is done in those sectors, it adds a lot of valuable output that adds a lot to GDP–a lot more than if extra work by that same person were done in another sector where the price (and therefore the marginal value) of output is not as far above marginal cost.

Okun’s Law. When firms are finally driven to hiring additional workers, this still doesn’t reduce the number of “unemployed” workers by an equal amount, for the simple reason that, when firms are hiring, more people decide it is a good time to look for a job, and go from being “out of the labor force” (not looking for work) to “in the labor force and unemployed” (looking for work but haven’t found it yet). So in addition to all the ways that firms can increase output without hiring extra workers, the fact that hiring extra workers causes more workers to look for work also makes it hard to make the unemployment rate go down. So hard, in fact, that the current estimates for what is called “Okun’s Law” (after the economist Arthur Okun) say that it typically takes 2% higher output to make the unemployment rate 1 percentage point lower. (Note that a typical constant-returns-to-scale production function would say that 2% higher output would require 3% higher labor input. Thus, if 2% higher output came simply from hiring extra workers for a constant-returns-to-scale production function, then the unemployment rate would go down by almost 3 percentage points. So the details of how firms manage to produce more output matter a lot.)
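The Okun's Law arithmetic in this paragraph can be checked directly (stylized numbers from the text, with labor's output elasticity taken to be 2/3 and capital held fixed):

```python
output_increase = 2.0  # percent

# Okun's Law rule of thumb: about 2% extra output per 1-point fall in unemployment
unemployment_drop = output_increase / 2.0
print(unemployment_drop)  # 1.0 percentage point

# Naive hiring-only view: with labor's output elasticity of 2/3 and capital fixed,
# 2% more output would need 2 / (2/3) = 3% more labor input.
labor_increase = output_increase / (2 / 3)
print(round(labor_increase, 1))  # 3.0 percent
```

The gap between the 1-point Okun answer and the 3-point naive answer is exactly the room filled by longer hours, harder work, and the other margins described above.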

The Supply Side in the Short Run and in the Long Run. That is the story of the short run. Extra money increases the amount that firms and households want to spend. Firms accommodate that extra desire to spend because price is above marginal cost. They actually produce the extra output by a combination of hiring extra workers and asking existing workers to work longer and harder, in a way that often takes advantage of economies of scale. Firms also may focus their productive efforts more on immediately salable output. They deal with a relatively fixed number of workspaces and machines by keeping the factory or office in operation more hours of the week.

The thing to notice is that both the ways in which the firms accommodate extra demand and their motivation for doing so rely on things that won’t last forever. Workers may work longer and harder without complaint for a while, but sooner or later they will start to grumble about the long hours and the pace of work, and maybe begin looking for another job. Of course, they may not even have to look for another job, since with a booming economy, a job may come looking for them. So even a boss who is too dense to realize all along the strain he or she is putting workers through will eventually realize the cost of those extra hours and effort as wages get driven up by labor market competition. What is more, the boss will eventually get around to raising the firm’s prices in line with this increased marginal cost as the “shadow wage” of the extra strain on workers goes up (something smart bosses will pay attention to) and ultimately the actual wage goes up (which will catch the attention of even dense bosses).

As prices rise throughout the economy, another force kicks in: workers will realize they are working hard for a paycheck that doesn’t stretch as far anymore, and start to wonder “Is it worth spending so many long, hard, late hours at work?” Even when the workers’ answer is still “On balance, yes,” because the answer is no longer “YES!” they will not jump to the boss’s orders with the same speed anymore, which will make the boss see the workers as less productive, and therefore see a higher marginal cost of output. All of this speeds the increase in prices even more, and speeds the return of hours worked and intensity of work to a normal pace. The temporary bending of the supply side toward greater production will be undone. There are things that permanently affect the supply side, but short of a monetary disaster, money is not one of those things. Short of a monetary disaster, and leaving aside tiny effects, money only matters in the short run. Economists call this monetary superneutrality and say that money only matters in the short run by saying the words “the long-run aggregate supply curve is vertical.”

## Three Codas: Inflation Magic, Sticky vs. Flexible Prices, and Federal Lines of Credit

Is There Any Direct Magic by which Money Causes Inflation? A crucial aspect of the story above is that money only causes a general increase in prices–inflation–by increasing output and leading to all the measures discussed above to produce more output. Some economists think that printing money can cause inflation even if it doesn’t lead to an increase in output. Money has magic, but not that kind of magic.

Let me discuss the two closest things I can think of to money having some direct magic that could raise inflation even without an increase in output.

1. First, to some extent, inflation can be a self-fulfilling prophecy. If firms believe that prices will be higher in the future, those who have gotten around to changing prices will set higher prices now. So if firms believed that printing money could cause inflation without increasing output, then to some extent it would. But I see no evidence that many firms believe this. They know how hard it is for them to raise prices in their own industry when demand is low.
2. Printing money to buy assets drives up the prices of assets in general, as financial investors look for assets that are still reasonably priced to buy, bringing up their prices as well. Many commodities, such as oil, copper, and even cattle, have an asset-like quality because they can be used either now or later. (And copper–and depending on the use, cattle–can be used both now and later.) When the Fed pushes up the prices of assets, it pushes up what people are willing to pay now for a payout down the road. That pushes up the price of oil, copper, and cattle now. This looks like inflation, but it is not a general increase in prices, but an increase in commodity prices relative to other prices in the economy. When the economy cools down (often, unlike the story above, because the Fed sells assets to mop up money and cool down an overheated economy), all of these increases in commodity prices go in reverse, and the roundtrip effect on the overall price level from the rise and fall of commodity prices along the way is modest.

Sticky Prices vs. Flexible Prices. Some prices are relatively flexible and quick to change, while others are fixed for a relatively long period of time. (I don’t emphasize wages being fixed, often for as much as a year at a time, since a smart boss should realize that in a long-term relationship, a high level of strain on workers, which can come on quickly, leads to extra costs even if the actual wage changes only slowly.) Prices are especially flexible and quick to change for the long-lasting commodities I discussed above, and for relatively unprocessed food such as bananas and orange juice. (In relatively unprocessed food, most of the cost is from the ingredients and bringing the food to the customer rather than the processing. And it is hard to differentiate one’s product from the competition’s product, so the price can’t be pushed very far above marginal cost.) Another interesting area where prices are very flexible is in air travel, where ticket prices can change dramatically from one week to the next. By contrast, prices are fixed for relatively long periods of time for most services. (My wife Gail is a massage therapist. I know that massage therapists think long and hard before they raise prices on their clients, and warn their clients long in advance about any price increase. In an even more extreme example, it is not uncommon for psychotherapists to keep their price fixed for a given client during the whole period of treatment, even if it lasts for years.) The prices of manufactured goods are in-between in their degree of flexibility.

When demand is high so that the economy booms, flexible prices move up quickly, while sticky prices move up only slowly. But when the economy cools down, the flexible prices can easily reverse course, while the sticky prices have momentum. (Greg Mankiw and Ricardo Reis explain one mechanism behind this momentum in their paper “Sticky Information Versus Sticky Prices: A Proposal to Replace the New Keynesian Phillips Curve”: firms’ sense of the rate of inflation–often based on old news–feeds into their price-setting. In this account, inflation feeds on past inflation that affects that sense of what the rate of inflation is.) So, perhaps counterintuitively, it is inflation in sticky prices that is the most worrisome. The Fed is right to focus its worries about inflation on what is happening to the sticky prices. In the news, this is described as focusing on “core inflation”–the overall rate of price increases for goods other than oil and food.

The existence of a mix of flexible and sticky prices in the economy is important for macroeconomic models, since it means that higher aggregate demand will have some immediate effect on prices (because of the flexible prices), but the effect on the overall price level will still be limited (because of the sticky prices). Economists often describe this as the “short-run aggregate supply curve” sloping upward–as opposed to being vertical, as it would be if all prices were flexible, or horizontal, as it would be if all prices were sticky. The existence of a mix of flexible and sticky prices is also important because it means that this “short-run aggregate supply curve” can shift when flexible prices change for reasons other than the level of aggregate demand. (Unfortunately, the most obvious reason the “short-run aggregate supply curve” might shift is a war in the Middle East that raises the price of oil in a way that is not due to the level of aggregate demand.)

Federal Lines of Credit. I have focused on monetary policy in this post, arguing that traditional fiscal stimulus–government spending or tax cuts meant to stimulate the economy in the short run–is inferior because it adds so much to the national debt. But  there is one type of fiscal policy that adds relatively little to the national debt, as I discuss in my post “Getting the Biggest Bang for the Buck in Fiscal Policy.” The “Federal Lines of Credit” I propose in that post are a type of fiscal policy that is similar in some ways to monetary policy, since Federal Lines of Credit involve the government making loans to households. Federal Lines of Credit, like money, have deep magic, but in the long run their effects on output will also be countered by the deeper magic of the supply side.

# The Shape of Production: Charles Cobb's and Paul Douglas's Boon to Economics

In 1927, before he dove fully into politics, Paul Douglas teamed up with mathematician and economist Charles Cobb to develop and apply what has come to be called the “Cobb-Douglas” production function. (The Wikipedia article on Charles Cobb is just a stub, so I don’t know much about him.) Here is the equation:

Y = A K^α L^(1-α)

A very famous economist, Knut Wicksell, had used this equation before, but it was the work of Charles Cobb and Paul Douglas that gave this equation currency in economics. Because of their work, Paul Samuelson–a towering giant of economics–and his fellow Nobel laureate Robert Solow picked up on this functional form. (Paul Samuelson did more than any other single person to transform economics from a subject with many words and a little mathematics, to a subject dominated by mathematics.)

In the equation, the letter A represents the level of technology, which will be a constant in this post. (If you want to think more about technology, you might be interested in my post “Two Types of Knowledge: Human Capital and Information.”) The Greek letter alpha, which looks like a fish (α), represents a number between 0 and 1 that shows how important physical capital, K–such as machines, factories or office buildings–is in producing output, Y. The complementary expression (1-α) represents a number between 0 and 1 that shows how important labor, L, is in producing output, Y. For now, think of α as being 1/3 and (1-α) as being 2/3:

• α= 1/3;
• (1-α) = 2/3.

As long as the production function has constant returns to scale–so that doubling both capital and labor would double output, as is the case here–the formal names for α and 1-α are

• α = the elasticity of output with respect to capital
• 1-α = the elasticity of output with respect to labor.

What Makes Cobb-Douglas Functions So Great. The Cobb-Douglas function has a key property that both makes it convenient in theoretical models and makes it relatively easy to judge when it is the right functional form to model real-world situations: the constant-share property. My goal in this post is to explain what the constant-share property is and why it holds, using the logarithmic percent change tools I laid out in my post “The Logarithmic Harmony of Percent Changes and Growth Rates.” If any of the math below seems hard or unclear, please try reading that post and then coming back to this one.

The Logarithmic Form of the Cobb-Douglas Equation. By taking the natural logarithm of both sides of the defining equation for the Cobb-Douglas production function above, that equation can be rewritten this way:

log(Y) = log(A) + α log(K) + (1-α) log(L)

This is an equation that holds all the time, as long as the production engineers and other organizers of production are doing a good job. If two things are equal all the time, then changes in those two things must also be equal. Thus,

Δ log(Y) = Δ log(A) + Δ {α log(K)} + Δ {(1-α) log(L)}.

Remember that, for now, α= 1/3. The change in 1/3 of log(K) is 1/3 of the change in log(K). Also, the change in 2/3 of log(L) is 2/3 of the change in log(L). And quite generally, constants can be moved in front of the change operator Δ in equations. (Δ is also called a “difference operator” or “first difference operator.”) So

Δ log(Y) = Δ log(A) + α Δ log(K) + (1-α) Δ log(L).

As defined in “The Logarithmic Harmony of Percent Changes and Growth Rates,” the change in the logarithm of X is the Platonic percent change in X. In that statement X can be anything, including Y, A, K or L. So as long as we interpret %Δ in the Platonic way,

%ΔY = %ΔA + α %ΔK + (1-α) %ΔL

is an exact equation, given the assumption of a Cobb-Douglas production function.
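Since the percent-change equation is exact for a Cobb-Douglas production function, it can be verified numerically for changes of any size. A quick check with arbitrary illustrative numbers:

```python
import math

ALPHA = 1 / 3  # elasticity of output with respect to capital, as in the text

def output(A, K, L):
    # Cobb-Douglas production function: Y = A * K^alpha * L^(1-alpha)
    return A * K**ALPHA * L**(1 - ALPHA)

A, K0, L0 = 1.0, 8.0, 27.0  # arbitrary starting values
K1, L1 = 8.4, 27.5          # arbitrary changed values; A held fixed

lhs = math.log(output(A, K1, L1)) - math.log(output(A, K0, L0))    # %ΔY (Platonic)
rhs = ALPHA * math.log(K1 / K0) + (1 - ALPHA) * math.log(L1 / L0)  # α %ΔK + (1-α) %ΔL
print(math.isclose(lhs, rhs))  # True
```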

Percent Changes of Sums: An Approximation. Now let me turn to an approximate equation, but one that is very close to being exact for small changes. Economists call small changes marginal changes, so what I am about to do is marginal analysis. (By the way, the name of Tyler Cowen and Alex Tabarrok’s popular blog Marginal Revolution is a pun on the “Marginal Revolution” in economics in the 19th century, when many economists realized that focusing on small changes added a great deal of analytic power.)

For small changes,

%Δ (X+Z) ≈ [X/(X+Z)] %ΔX + [Z/(X+Z)] %ΔZ,

where X and Z can be anything. (Those of you who know differential calculus can see where this approximation comes from by showing that d log(X+Z) = [X/(X+Z)] d log(X) + [Z/(X+Z)] d log(Z), which says that the approximation gets extremely good when the changes are very small. But as long as you are willing to trust me on this approximate equation for percent changes of sums, you won’t need any calculus to understand the rest of this post.)

The ratios X/(X+Z) and Z/(X+Z) are very important. Think of X/(X+Z) as the fraction of X+Z accounted for by X; and think of Z/(X+Z) as the fraction of X+Z accounted for by Z.  Economists use this terminology:

• X/(X+Z) is the “share of X in X+Z.”
• Z/(X+Z) is the “share of Z in X+Z.”

By the way they are defined, the shares of X and Z in X+Z add up to 1.

The main reason the rule for the percent changes of sums is only an approximation is that the shares of X and Z don’t stay fixed at their starting values. The shares of X and Z change as X and Z change. Indeed, if one changed X and Z gradually (avoiding any point where X+Z=0), the approximate rule for the percent change of sums would have to hold exactly for some pair of values of the shares of X and Z passed through along the way.
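A quick numerical check of the share-weighted approximation, using Platonic (log) percent changes and small illustrative changes:

```python
import math

def pct_change(old, new):
    # Platonic percent change: the change in the natural logarithm
    return math.log(new) - math.log(old)

X0, Z0 = 40.0, 60.0
X1, Z1 = 40.8, 60.6  # small changes: 2% for X, 1% for Z

exact = pct_change(X0 + Z0, X1 + Z1)
share_X, share_Z = X0 / (X0 + Z0), Z0 / (X0 + Z0)
approx = share_X * pct_change(X0, X1) + share_Z * pct_change(Z0, Z1)

print(abs(exact - approx) < 0.0001)  # True: very close for small changes
```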

The Cost Shares of Capital and Labor. Remember that in the approximate rule for the Platonic percent change of sums, X and Z can be anything. In thinking about the production decision of firms, it is especially useful to think of X as the amount of money that a firm spends on capital and Z as the amount of money the firm spends on labor. If we write R for the price of capital (the "Rental price” of capital) and W for the price of labor (the “Wage” of labor), this yields

• X = RK
• Z = WL.

For the issues at hand, it doesn’t matter whether the amount R that it costs to rent a machine or an office and the amount W it costs to hire an hour of labor are real (adjusted for inflation) or nominal. It does matter, though, that nothing the firm can do will change R or W. The kind of analysis done here would work if what the firm does affects R and W, but the results, including the constant-share property, would be altered. I am going to analyze the case when the firm cannot affect R and W–that is, I am assuming the firm faces competitive markets for physical capital and labor. Substituting RK in for X and WL in for Z, the approximate equation for percent changes of sums becomes

%Δ (RK+WL) ≈ [RK/(RK+WL)] %Δ(RK) + [WL/(RK+WL)] %Δ(WL)

Economically, this approximate equation is important because RK+WL is the total cost of production. RK+WL is the total cost because the only costs are total rentals for capital RK and total wages WL. In this approximate equation

• s_K = share_K = RK/(RK+WL) is the cost share of capital (the share of the cost of capital rentals in total cost.)
• s_L = share_L = WL/(RK+WL) is the cost share of labor (the share of the cost of the wages of labor in total cost.)

The two shares always add up to 1 (as can be confirmed with a little algebra), so

s_L = 1 - s_K.

Using this notation for the shares, the approximation for the percent change of total costs is

%Δ (RK+WL) ≈ {s_K} %Δ(RK) + {s_L} %Δ(WL).

The Product Rule for Percent Changes. In order to expand the approximation above, I am going to need the rule for percent changes of products. Let me spell out the rule, along with its justification twice, using RK and WL as examples:

%Δ (RK) = Δ log(RK) = Δ {log( R ) + log(K)} = Δ log( R )  + Δ log(K) = %ΔR + %ΔK

%Δ (WL) = Δ log(WL) = Δ {log(W) + log(L)} = Δ log(W) + Δ log(L) = %ΔW + %ΔL

These equations, reflecting the rule for percent changes of products, hold exactly for Platonic percent changes. Aside from the definition of Platonic percent changes as the change in the natural logarithm, what I need to back up these equations is the fact that the change in one thing plus another, say log( R ) + log(K), is equal to the change in one plus the change in the other, so that Δ {log( R ) + log(K)} = Δ log( R ) + Δ log(K). Using the product rule,

%Δ (RK+WL) ≈ {s_K} (%ΔR + %ΔK) + {s_L} (%ΔW+ %ΔL).
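A quick check, with made-up numbers, that the product rule holds exactly for Platonic percent changes:

```python
import math

R0, K0 = 0.10, 500.0    # hypothetical rental price and capital stock, "before"
R1, K1 = 0.11, 520.0    # "after" values

def platonic(before, after):
    return math.log(after) - math.log(before)

# %Δ(RK) = %ΔR + %ΔK exactly, because log turns products into sums:
assert abs(platonic(R0*K0, R1*K1) - (platonic(R0, R1) + platonic(K0, K1))) < 1e-12
```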

Cost-Minimization. Let’s focus now on the firm’s aim of producing a given amount of output Y at least cost. We can think of the firm exploring different values of capital K and labor L that produce the same amount of output Y. An important reason to focus on changes that keep the amount of output the same is that it sidesteps the whole question of how much control the firm has over how much it sells, and what the costs and benefits are of changing the amount it sells. Therefore, focusing on different values of capital and labor that produce the same amount of output yields results that apply to many different possible selling situations (=marketing situations=industrial organization situations=competitive situations) a firm may be in. That is, I am going to rely on the firm facing a simple situation for buying the time of capital and labor, but I am going to try not to make too many assumptions about the details of the firm’s selling, marketing, industrial organization, and competitive situation. (The biggest way I can think of in which a firm’s competitive situation could mess things up for me is if a firm needs to own a large factory to scare off potential rivals, or a small one to reassure its competitors it won’t start a price war. I am going to assume that the firm I am talking about is only renting capital, so that it has no power to credibly signal its intentions with its capital stock.)

The Isoquant. Economists call changes in capital and labor that keep output the same “moving along an isoquant,” since an “isoquant” is the set of points implying the same (“iso”) quantity (“quant”). To keep the amount of output the same, both sides of the percent change version of the Cobb-Douglas equation should be zero:

0 = %ΔY = %ΔA + α %ΔK + (1-α) %ΔL

Since I am treating the level of technology as constant in this post, %ΔA=0. So the equation defining how the Platonic percent changes of capital and labor behave along the isoquant is

0 = α %ΔK + (1-α) %ΔL.

Equivalently,

%ΔL = -[α/(1-α)] %ΔK.

With the realistic value of α=1/3, this boils down to %ΔL = -.5 %ΔK. So in that case, %ΔK = 1% (a 1% increase in capital) and %ΔL = -.5% (a one-half percent decrease in labor) would be a movement along the isoquant–an adjustment in the quantities of capital and labor that would leave output unchanged.
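Here is a check, with illustrative numbers of my own, that this pair of Platonic changes really does move along the isoquant:

```python
import math

alpha = 1/3
A, K, L = 1.0, 8.0, 27.0
Y = A * K**alpha * L**(1 - alpha)   # = 18

# A +1% Platonic change in K paired with a -0.5% Platonic change in L,
# since %ΔL = -[alpha/(1-alpha)] %ΔK = -0.5 %ΔK when alpha = 1/3.
# A Platonic change of p multiplies the level by exp(p):
K_new = K * math.exp(0.01)
L_new = L * math.exp(-0.005)
Y_new = A * K_new**alpha * L_new**(1 - alpha)
assert abs(math.log(Y_new) - math.log(Y)) < 1e-12   # output unchanged
```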

Moving Toward the Least-Cost Way of Producing Output. To find the least-cost or cost-minimizing way of producing output, think of what happens to costs as the firm changes capital and labor in a way that leaves output unchanged. This is a matter of transforming the approximation for the percent change of total costs by

1. replacing %ΔR and %ΔW with 0, since nothing the firm does changes the rental price of capital or the wage of labor that it faces;
2. replacing %ΔL with -[α/(1-α)] %ΔK in the approximate equation for the percent change of total costs; and
3. replacing s_L with 1-s_K.

After Step 1, the result is

%Δ (RK+WL) ≈ {s_K} %ΔK + {s_L} %ΔL.

After doing Step 2 as well,

%Δ (RK+WL) ≈ {s_K} %ΔK - {s_L} {[α/(1-α)] %ΔK}.

Then after Step 3, and collecting terms,

%Δ (RK+WL) ≈ {s_K - (1-s_K) [α/(1-α)]} %ΔK

= { [s_K/(1-s_K)] - [α/(1-α)] }  [(1-s_K) %ΔK].

Notice that since

1-s_K = s_L = the cost share of labor

is positive, the sign of (1-s_K) %ΔK is the same as the sign of %ΔK. To make costs go down (that is, to make %Δ (RK+WL) < 0), the firm should follow this operating rule:

1. Substitute capital for labor (making %ΔK > 0)

if  [s_K/(1-s_K)] - [α/(1-α)] < 0.

2. Substitute labor for capital (making %ΔK < 0)

if  [s_K/(1-s_K)] - [α/(1-α)] > 0.

Thus, the key question is whether s_K/(1-s_K) is bigger or smaller than α/(1-α). If it is smaller, the firm should substitute capital for labor. If s_K/(1-s_K) is bigger, the firm should do the opposite: substitute labor for capital. Note that the function X/(1-X) is an increasing function, as can be seen from the graph below:

[graph of X/(1-X), an increasing function of X]

Since X/(1-X) gets bigger whenever X gets bigger (at least in the range from 0 to 1, which is what matters here),

• s_K/(1-s_K) is bigger than α/(1-α) precisely when s_K > α.
• s_K/(1-s_K) is smaller than α/(1-α) precisely when s_K < α.

So the firm’s operating rule can be rephrased as follows:

1. Substitute capital for labor (making %ΔK > 0)

if  s_K <  α.

2. Substitute labor for capital (making %ΔK < 0)

if  s_K > α.

This operating rule is quite intuitive. In Case 1, the importance of capital for the production of output (α) is greater than the importance of capital for costs (s_K). So it makes sense to use more capital. In Case 2, the importance of capital for the production of output (α) is less than the importance of capital for costs (s_K), so it makes sense to use less capital.
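The operating rule is simple enough to write down as a tiny function (a sketch of my own; the threshold logic is exactly the rule above):

```python
def substitution_direction(s_K, alpha):
    """Cost-minimizing operating rule: compare the cost share of capital s_K
    with alpha, the importance of capital in production.

    Returns +1 to substitute capital for labor (%ΔK > 0),
            -1 to substitute labor for capital (%ΔK < 0),
             0 at the cost-minimizing mix (s_K == alpha).
    """
    if s_K < alpha:
        return +1   # capital matters more for output than for costs: use more
    if s_K > alpha:
        return -1   # capital matters more for costs than for output: use less
    return 0

assert substitution_direction(0.25, 1/3) == +1
assert substitution_direction(0.40, 1/3) == -1
```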

Proof of the Constant-Share Property of Cobb-Douglas. So what should the firm do in the end? For fixed R and W, the more capital a firm uses, the bigger effect a 1% increase in capital has on costs. So if the firm is using a lot of capital, the cost share of capital will be greater than the importance of capital in production α and the firm should reduce its use of capital, substituting labor in place of capital. If the firm is using only a little capital, the cost share of capital will be smaller than the importance of capital in production α, and it will be a good deal for the firm to increase its use of capital, allowing it to reduce its use of labor. At some intermediate level of capital, the cost share of capital will be exactly equal to the importance of capital in production α, and there will be no reason for the firm to either increase or reduce its use of capital once it reaches that point. So a firm that is minimizing its costs–a first step toward optimizing overall–will produce a given level of output with a mix of capital and labor that makes the cost share of capital equal to the importance of capital in production:

cost-minimization ⇒     s_K = α.

Correspondingly, one can say

cost-minimization ⇒     1-s_K = 1-α.

That is, the firm will use a mix of capital and labor that makes the cost share of labor equal to the importance of labor in production as well. Since the Cobb-Douglas functional form makes the importance of capital in production α a constant, a cost-minimizing firm will continually adjust its mix of capital and labor to keep the cost share of capital equal to that constant level α, and the cost share of labor equal to another constant, 1-α. This is the constant-share property of Cobb-Douglas. The constant-share property is something that can be tested in the data, and often seems to hold surprisingly well in the real world. So economists often use Cobb-Douglas production functions.
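As a numerical illustration of the constant-share property (all the prices and quantities below are made up), one can minimize cost by brute force and check that the cost share of capital comes out equal to α:

```python
alpha, A, Y = 1/3, 1.0, 100.0
R, W = 0.10, 20.0   # hypothetical rental price of capital and wage

def cost(K):
    # Labor needed to produce Y given K: L = (Y / (A * K**alpha))**(1/(1-alpha))
    L = (Y / (A * K**alpha))**(1 / (1 - alpha))
    return R * K + W * L

# Crude grid search for the cost-minimizing K (a sketch, not production code):
K_star = min(range(1, 5001), key=cost)
total = cost(K_star)
s_K = R * K_star / total
assert abs(s_K - alpha) < 0.01   # cost share of capital ≈ alpha = 1/3
```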

Another Application of the Cobb-Douglas Idea: Achieving a Given Level of Cobb-Douglas Utility at Least Cost. Note that similar logic will work for utility functions as well. For example, in my post “The Flat Tax, The Head Tax and the Size of Government: A Tax Parable,” since the importance of consumption and leisure for utility is equal (both equal to 1/3), adjusting consumption C and leisure L so that %ΔC = - %ΔL will leave utility unchanged. Then,

1. if the share of spending on consumption is lower than the share of spending on leisure,
2. which is equivalent to the total spending on consumption being lower than total spending on leisure,
3. then increasing consumption (by reducing leisure and working harder) will make sense.

On the other hand,

1. if the share of spending on consumption is higher than the share of spending on leisure,
2. which is equivalent to total spending on consumption being higher,
3. then reducing consumption (and increasing leisure by working less) will make sense.

This means that if consumption is too high, it should be reduced, while if consumption is too low, it should be increased, until the amount of spending on consumption equals the amount of spending on leisure.
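The same logic can be checked numerically. In this sketch I use a Cobb-Douglas utility function with equal weights on consumption and leisure and made-up prices; at the least-cost way of reaching a target utility level, spending on consumption equals spending on leisure:

```python
# Hypothetical setup: U = C**0.5 * leisure**0.5 (equal importance),
# price of consumption = 1, wage (the price of leisure) = 15.
W, U_target = 15.0, 10.0

def spending(C):
    leisure = (U_target / C**0.5)**2   # leisure needed to hit U_target given C
    return C + W * leisure

# Grid search over consumption levels 0.1, 0.2, ..., 199.9 (a sketch):
C_star = min((c / 10 for c in range(1, 2000)), key=spending)
leisure_star = (U_target / C_star**0.5)**2
# At the least-cost point, spending on consumption ≈ spending on leisure:
assert abs(C_star - W * leisure_star) < 1.0
```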

# Smaller, Cheaper, Faster: Does Moore's Law Apply to Solar Cells? by Ramez Naam →

The way the future looks depends on the rate of decline in the cost of solar power. In this article (my title is a link), Ramez Naam says that solar power is getting cheaper at the rate of 7% per year. Notice how his graph with a logarithmic scale compares to his graph with a regular scale. By the rule of 70, how many years to cut the cost of solar power in half?
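For those who want to check the rule of 70 directly (the 7% rate is Naam’s figure; the computation is mine):

```python
import math

# Rule of 70: at a 7%-per-year continuous rate of decline,
# halving time ≈ 70/7 = 10 years. The exact figure uses log(2) ≈ 0.693:
rate = 0.07
halving_time = math.log(2) / rate   # ≈ 9.9 years
assert abs(halving_time - 10) < 0.15
```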

# The Logarithmic Harmony of Percent Changes and Growth Rates

Logarithms. On Thursday, I let students in my Principles of Macroeconomics class in on the secret that logarithms are the central mathematical tool of macroeconomics. If my memory isn’t playing tricks on me, I can say that in practically every paper that examines real-world data, and in at least half of macroeconomic theory papers, a logarithm makes an appearance, often in a starring role. Why are natural logarithms so important?

1. Lesser reason: logarithms can often model how a household or firm makes choices in a particularly simple, convenient way.

2. Greater reason: multiplication and powers appear all the time in macroeconomics. At the price of some initial difficulty, logarithms make multiplication and powers and exponential growth look easy.

Among other aspects of making multiplication and powers and exponential growth look easy, logarithms provide a very clean, elegant way of thinking about percent changes.

I am determined to have very few equations in this post, so you will have to depend on your math training for the basic rules of logarithms: how they turn multiplication into addition and powers into multiplication. What I want to accomplish in this post is to give you a better intuitive feel for logarithms–an intuitive feel that math textbooks often don’t provide. I also hope to make a strong connection in your mind between natural logarithms and percent changes.

One of the most basic uses of logarithms in economics is the logarithmic scale. On a logarithmic scale, the distance between each power of 10 is the same. So the distance from 1 to 10 on the graph is the same as the distance from 10 to 100, which is the same as the distance from 100 to 1000. Here is a link to an example of a graph with a logarithmic scale on the vertical axis I have used before from Catherine Mulbrandon in Visualizing Economics:

Contrast that growth line for US GDP to the curve Catherine gets when not using a logarithmic scale on the vertical axis. Here is the link:

The idea of the logarithmic scale–which can be boiled down to the idea of always representing multiplication by a given number as the same distance–shows up in two concrete objects, one familiar and one no-longer familiar: pianos and slide rules.

A Piano Keyboard as a Logarithmic Scale. You may not have thought of a piano keyboard as a logarithmic scale, but it is. Including all of the black keys on an equal footing with the white keys, going up one key on the piano is called going up a “semitone.” Going up an octave (say from Low C to Middle C) is going up 12 semitones. And each octave doubles the frequency of the vibrations in a piano string. As explained in the Wikipedia article “Piano key frequencies,” at Middle C, the piano string vibrates 261.626 times per second. Each semitone higher on the piano keyboard makes the vibration of the string 1.0594631… times faster. And multiplying by 1.0594631… twelve times is the same as multiplying by 2.

The reason our Western musical scale has been designed to have 12 semitones in an octave is interesting. To begin with, two notes whose frequencies have a ratio that is an easy fraction such as 3/2, 5/4 or 6/5 make a pleasing interval. (The Pythagoreans made mathematics part of their religion thousands of years ago partly because of this fact.) Then, it turns out that various powers of 1.0594631… come pretty close to many easy fractions. Here is a table showing the frequencies of various notes relative to the frequency of Middle C, showing some of the easy fractions that come close to various powers of 1.0594631…. A distance of three semitones yields a ratio close to 6/5; a distance of four semitones yields a ratio close to 5/4; a distance of five semitones yields a ratio close to 4/3; and a distance of seven semitones yields a ratio close to 3/2. None of this is exact, but it is all close enough to sound good when the piano is tuned according to this scheme:
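The semitone arithmetic is easy to verify (the 261.626 Hz figure is from the Wikipedia table mentioned above; the rest is straightforward computation):

```python
semitone = 2 ** (1/12)                  # ≈ 1.0594631...
assert abs(semitone**12 - 2) < 1e-12    # twelve semitones double the frequency

# Seven semitones up (a perfect fifth) comes close to the easy fraction 3/2:
assert abs(semitone**7 - 3/2) < 0.002

middle_c = 261.626                      # Hz, from "Piano key frequencies"
g_above = middle_c * semitone**7        # the G above Middle C, ≈ 392 Hz
```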

Let me bring the discussion back to economics by pointing out that, although interest rates are lower than usual right now, it is not uncommon for the returns on financial investments to multiply savings by something averaging close to 1.059 every year. At typical rates of return for investments bearing some risk, one can think of each year of returns as raising the pitch of one’s funds on average by about one semitone. Starting from Middle C, one can hope to get quite a ways up the piano keyboard by retirement. And savings early in life get raised in pitch a lot more than savings late in life.

Slide Rules. Slide rules, like the one in the picture right above, are designed first and foremost to use two logarithmic scales that slide along each other to do multiplication. The distances are logarithmic and adding logarithms multiplies the underlying numbers. For example, to multiply 2 times 3,  put the 1 of the sliding piece right at the 3 of the stationary piece. Then look at the 2 on the sliding piece and see what number is next to it on the stationary piece. You could buy a physical slide rule on ebay, but you might instead want to play with a virtual slide rule for free. Playing with this virtual slide rule is one of the best ways to get some intuition for logarithms. (Remember that the distances on a slide rule are all logarithms.) If you like this slide rule and want to go further, here are some much better instructions for using a slide rule than I just gave: Illustrated Self-Guided Course on How to Use the Slide Rule.

Percent Changes (%ΔX). Let me preface what I have to say about percent changes by saying that–other than being a clue that a percent change or a ratio expressed as a percentage lurks somewhere close–I view the % sign as being equivalent to 1/100. So, for example, 23% is just another name for .23, and 100% is just another name for 1. Indeed, economists are just as likely to say “with probability 1” as they are to say “with a 100% probability.”

It turns out that natural logarithms (“ln” or “log”) are the perfect way to think about percent changes. Suppose a variable X has a “before” and an “after” value.

• I want to take the point of view that the change in the natural logarithm is the pure, Platonic percent change between before and after. It is calculated as the natural logarithm of X_after minus the natural logarithm of X_before.

• I will call the ordinary notion of percent change the earthly percent change. It is calculated as the change divided by the starting value, (X_after - X_before)/X_before.

• In between these two concepts is the midpoint percent change. It is calculated as the change divided by the average of the starting and ending values:

(X_after - X_before) / { (X_after + X_before)/2 }

Below is a table showing the relationship between Platonic percent changes, midpoint percent changes and earthly percent changes. In financial terms, one can think of earthly percent changes as “continuously compounded” versions of Platonic percent changes. Here is the Excel file I used to construct this table; it will show you the formulas I used if you want to see them.
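The three concepts can also be computed directly from their definitions. This snippet (mine, not from the Excel file) generates a small version of such a table:

```python
import math

def earthly(before, after):   return (after - before) / before
def midpoint(before, after):  return (after - before) / ((after + before) / 2)
def platonic(before, after):  return math.log(after) - math.log(before)

print(f"{'Platonic':>10}{'midpoint':>10}{'earthly':>10}")
for after in (1.01, 1.05, 1.10, 1.30, 1.50, 2.00):
    p, m, e = platonic(1, after), midpoint(1, after), earthly(1, after)
    print(f"{p:10.4f}{m:10.4f}{e:10.4f}")
# The last row shows the rule of 70: a doubling (earthly 100%) is a
# Platonic percent change of log(2) ≈ 69.3%.
```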

There are at least two things to point out in this table:

1. When the percent changes are small, all three concepts are fairly close, but the midpoint percent change is much closer to the Platonic (logarithmic) percent change.

2. A 70% Platonic percent change is very close to being a doubling–which would be a 100% earthly percent change. This is where the “rule of 70” comes from. (Greg Mankiw talks about the rule of 70 on page 180 of Brief Principles of Macroeconomics.) The rule of 70 is a reflection of the natural logarithm of 2 being equal to approximately .7 = 70%. Similarly, a 140% Platonic percent change is basically two doublings–that is, it is close to multiplying X by a factor of 4; and a 210% Platonic percent change is basically three doublings–that is, it is close to multiplying X by a factor of 8.

Let’s look at negative percent changes as well. Here is the table for how the different concepts of negative percent changes compare:

A key point to make is that with both Platonic (logarithmic) percent changes and midpoint percent changes, equal sized changes of opposite direction cancel each other out. For example, a +50% Platonic percent change, followed by a -50% Platonic percent change, would leave things back where they started. This is true for a +50% midpoint percent change, followed by a -50% midpoint percent change. But, starting from X, a 50% earthly percent change leads to 1.5 X. Following that by a -50% earthly percent change leads to a result of .75 X, which is not at all where things started. This is a very ugly feature of earthly percent changes. That ugliness is one good reason to rise up to the Platonic level, or at least the midpoint level.

Continuous-Time Growth Rates. There are many wonderful things about Platonic percent changes that I can’t go into without breaking my resolve to keep the equation count down. But one of the most wonderful is that to find a growth rate one only has to divide by the time that has elapsed between X_before and X_after. That is, as long as one is using the Platonic percent change %ΔX = log(X_after) - log(X_before),

%ΔX / [time elapsed] = growth rate.

The growth rate here is technically called a “continuous-time growth rate.” The continuous-time growth rate is not only very useful, it is a thing of great beauty.
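For example (with numbers I made up for illustration), if something grows from 15.0 to 16.6 over 4 years:

```python
import math

X_before, X_after, years = 15.0, 16.6, 4
growth_rate = (math.log(X_after) - math.log(X_before)) / years
# ≈ 2.5% per year, as a continuous-time growth rate
assert abs(growth_rate - 0.0253) < 0.001
```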

Update on How the Different Concepts of Percent Change Relate to Each Other. One of my students asked about how the different percent change concepts relate to each other. For that, I need some equations. And I need “exp,” which is the opposite of the natural logarithm “log.” Taking the function exp(X) is the same as taking e (a number that is famous among mathematicians and equal to 2.718…) to the power X. For the equations below, it is crucial to treat % as another name for 1/100, so that, for example, 5% is the same thing as .05.

Earthly percent changes are the most familiar. Almost anyone other than an economist who talks about percent changes is talking about earthly percent changes. Most of you learned about earthly percent changes in elementary school. So let me first write down how to get from the earthly percent change to the Platonic and midpoint percent changes. (I won’t try to prove these equations here, just state them.)

Platonic = log(1 + earthly)

midpoint = 2 earthly/(2 + earthly)

If you are trying to figure out the effects of continuously compounded interest, or the effects of some other continuous-time growth rate, you will want to be able go from Platonic percent changes–which come straight from multiplying the growth rate by the amount of elapsed time–to earthly percent changes. For good measure, I will include the formula for midpoint percent changes as well:

earthly = exp(Platonic) - 1

midpoint = 2 {exp(Platonic) - 1}/{exp(Platonic) + 1}
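These conversion formulas are easy to code up and check against each other (a sketch using exactly the formulas above):

```python
import math

def platonic_from_earthly(e):  return math.log(1 + e)
def midpoint_from_earthly(e):  return 2 * e / (2 + e)
def earthly_from_platonic(p):  return math.exp(p) - 1
def midpoint_from_platonic(p): return 2 * (math.exp(p) - 1) / (math.exp(p) + 1)

e = 0.30                        # a 30% earthly percent change
p = platonic_from_earthly(e)    # ≈ 0.2624, the Platonic version

# Round trip and cross-consistency checks:
assert abs(earthly_from_platonic(p) - e) < 1e-12
assert abs(midpoint_from_earthly(e) - midpoint_from_platonic(p)) < 1e-12
```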

I found the function giving the midpoint percent change as a function of the Platonic percent change quite intriguing. For one thing, when I changed signs and put “-Platonic” in the place where you see “Platonic” on the right-hand side of the equation, the result was equal to -midpoint. When switching the sign of the argument (the inside thing: Platonic) just switches the sign of the overall expression, mathematicians call it an “odd” function (“odd” as in “odd and even,” not “odd” as in “strange”). The meaning of this function being odd is that Platonic and midpoint percent changes map into each other the same way for negative changes as for positive changes. (That isn’t true at all for the earthly percent changes.) The other intriguing thing about the function giving the midpoint percent change as a function of the Platonic percent change is how close it is to giving back the same number. To fourth order (the squared and fourth-power terms are zero), the approximation for the function is this:

midpoint=Platonic - (Platonic cubed/12) + (5th power and higher terms)

Finally, let me give the equations to go from the midpoint percent change to the Platonic and the earthly percent changes:

earthly = 2 midpoint/(2-midpoint)

Platonic = log(2+midpoint) - log(2-midpoint)

= log(1+{midpoint/2} ) - log(1-{midpoint/2})

The expression for Platonic percent changes in terms of midpoint percent changes has such a beautiful symmetry that its “oddness” is clear. Since I know the way to approximate natural logarithms to as high an order as I want (and I am not special in this), I can give the approximation for Platonic percent changes in terms of powers of midpoint percent changes as follows:

Platonic = midpoint + (midpoint cubed)/12

+ (midpoint to the fifth power)/80

+ (midpoint to the seventh power)/448

+ (9th and higher order terms).
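A quick check of this series (my own computation) shows how fast it converges even for a medium-sized change:

```python
import math

def platonic_from_midpoint(m):
    return math.log(2 + m) - math.log(2 - m)

m = 0.30   # a 30% midpoint percent change
series = m + m**3/12 + m**5/80 + m**7/448
# Truncating at the seventh power already matches to better than 1e-6:
assert abs(platonic_from_midpoint(m) - series) < 1e-6
```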

The bottom line is that for even medium-sized percent changes (say 30%), the Platonic percent change is quite close to the midpoint percent change–something the tables above show. By the time the Platonic percent changes and midpoint percent changes start to diverge from each other in any worrisome way, the rule of 70 that makes a 70% Platonic percent change close to equivalent to a doubling starts to kick in to help out.

# Evan Soltas: The Great Depression in Graphs

Evan Soltas is a freshman this Fall at Princeton. He is 19. Here is the picture he gives of the Great Depression, and here is a short bio taken from his website:

Evan Soltas is the writer of Wonkbook, the morning email newsletter of Ezra Klein’s Wonkblog at The Washington Post, and for Bloomberg View’s “The Ticker” blog. A student at Princeton University, where he intends to major in economics, Evan blogs daily on economic news, policy, and research findings – and a variety of other topics, approaching the subject as a student and not as an expert.

His research has been featured recently in The Wall Street Journal, the Financial Times, The Atlantic, Slate, the Daily Beast, the National Review, The American Conservative, The Nation, and The Globe and Mail.

His particular areas of research and blogging interest include monetary economics and macroeconomics. His blog further contains substantial discussion of labor and financial markets, development, economic history, econometrics, and public finance.

It is not as if I have a ranking worked out, so I might be understating things, but in my book, Evan is clearly one of the best 10 economics bloggers out there, without regard to age. What I especially like is Evan’s attention to facts–and his skill at making facts come alive. Evan’s attention to facts is especially valuable in an era when so many of the media, the commentariat, and those in the public sphere more generally, have left facts behind.

# A Market Measure of Long-Run Inflation Expectations

Brad DeLong’s graph of “breakeven inflation”: the rate of inflation at which regular (nominal) 30-year Treasury bonds would do neither better nor worse than 30-year Treasury Inflation Protected Securities.

Brad DeLong explains here how the difference in interest rates between the Federal government’s 30-year nominal bonds and its 30-year real bonds (Treasury Inflation Protected Securities) can measure financial investors’ expectations about average inflation over the next 30 years.

Unlike Brad, I think the investors’ expectations are reasonable. Knowing the articles in economics journals that the folks at the Fed are reading–and that young economists whose future is at the Fed are reading–makes me confident that the commitment to controlling inflation in the long run is durable. 2% seems to have been settled on as the long-run target.

# How Americans Spend Their Money and Time

Two of the most fundamental choices people make are how to spend their money and their time. Economists talk about a “budget constraint” for money and a “budget constraint” for time. Here is a set of links to well-done graphs on how Americans deal with those two budget constraints:

1. Jacob Goldstein and Lam Thuy Vo: “What America Buys”
2. Jacob Goldstein and Lam Thuy Vo: “How The Poor, The Rich And The Middle Class Spend Their Money”
3. Lam Thuy Vo: “What Americans Actually Do All Day Long, In 2 Graphics”
4. Jacob Goldstein and Lam Thuy Vo: “What America Does For Work.”

## Bonus

Thanks to my brother Joseph Kimball for pointing me to this series of posts by Lam Thuy Vo and Jacob Goldstein.