One Nation

The beginning of the 2020 US presidential campaign is a reminder of the divisions within the United States. Understanding those with different views is not only the road to healing those divisions, but also, for either side, the road to winning the general election sixteen months from now. Most of my readers and I already have a reasonably good understanding of the “Progressive” viewpoint that is now so influential in the Democratic Party. Therefore, let’s try to dig into the views of those who are enthusiastic Trump supporters as well as those who might reluctantly vote for Donald Trump because they are uncomfortable with the Democratic Party alternative.

Peggy Noonan, in her most recent op-ed “The 2020 Democrats Lack Hindsight,” emphasizes “identity” issues as important to those who enthusiastically support Donald or might vote for him because of discomfort with the alternative. She quotes a middle-aged Kansan man, who said:

Every day, Americans are told of the endless ways they are falling short. If we don’t show the ‘proper’ level of understanding according to a talking head, then we are surely racist. If we don’t embrace every sanitized PC talking point, then we must be heartless. If we have the audacity to speak our mind, then we are most definitely a bigot. …

We are jabbed like a boxer with no gloves on to defend us. And we are fed up. We are tired of being told we aren’t good enough. … in Donald Trump, voters found a massive sledgehammer that pulverizes the ridiculous notion that Americans aren’t good enough.

The previous week, in “My Sister, My Uncle and Trump,” Peggy quoted her sister and uncle and characterized these two early Donald supporters this way:

They were patriots; they loved America. They weren’t radical; they’d voted for Republicans and Democrats. They had no grudge against any group or class. They knew that on America’s list of allowable bigotries they themselves—middle Americans, Christians who believed in the old constitutional rights—were the only ones you were allowed to look down on. It’s no fun looking down on yourself, so looking down wasn’t their habit.

A good resolution of cultural issues and racial, ethnic and gender disparities could help heal the divisions in America. (Here, I will leave aside the fraught issue of abortion. For my views on abortion, see “Safe, Legal, Rare and Early.”) Let me give my opinion on a way forward.  

First, for racial, ethnic and gender disparities, as in the area of climate change, a crucial rule to make a civil discussion possible is that recognition of a serious problem should not be construed as agreeing that the remedy urged by those highlighting the problem is the right remedy. People need to have confidence that their views about a remedy will be respected enough that they are not giving away the game by acknowledging the reality of a problem. Admitting a problem exists should not be construed as agreeing to be railroaded into a particular remedy.

As Peggy Noonan points out, people hate being called racist or sexist or otherwise being told they are deplorables. It is good to look for alternative explanations for people’s attitudes before jumping to accusing people of invidious racism or sexism. Here I use the phrase “invidious racism or sexism” to mean seriously blameworthy racism or sexism as opposed to the even more troublesome racist and sexist attitudes that are like the air we breathe and hence not particularly blameworthy in an individual. Non-invidious pervasive racism or sexism is one of the most important alternatives to positing invidious racism or sexism.

Second, racism and sexism can often be supported by systemic structures plus routine self-interest and self-aggrandizement. For example, in economics departments, professors have a strong interest in building up their own fields and their own styles of economics. To the extent their numbers tilt male right now, and male economics professors have, on average, different field and style preferences, their desires to build up their own fields and styles of economics will handicap female job candidates, even if they don’t have any prejudice at all against women who happen to be doing the field and style of economics they are looking for.    

Turning to invidious racism and sexism, it is important to realize that some comes from personal grievances that might not have happened in a better society. For example, children often live in fear of being bullied. Two types of bullying and nasty teasing can lead to invidious racism, sexism and other bad attitudes. First, if the bully happens to be of a different race, the hatred of that bully might be overgeneralized into a hatred of a race. Second, bullies often taunt other children by saying they are a member of a disfavored group. When I was a boy, bullies often taunted other boys by saying they were a “fag,” which powerfully got across the idea that to be a homosexual was bad. Both of these mechanisms for creating invidious racism, sexism and other bad attitudes can be forestalled by reducing the amount of bullying that children face from one another. (See my post “Against Bullying.”)

Another reducible source of invidious racism is the centrality to our current society of prizes—such as admission to elite colleges or professional schools or prestigious jobs—that have an excessive amount of surplus. If elite colleges and professional schools each expanded the number of students admitted, it would reduce the stress on those trying to get admitted and reduce the likelihood that that stress would lead to resentment of affirmative action—and might even reduce the sense that affirmative action was needed, because admission wasn’t quite such a big prize.

When particular jobs have a huge amount of surplus for those who get them, it would be helpful to reduce the gap in prestige, pay and perks between them and the next job down on the ladder. The top nurses should rank nearly as high as the least accomplished of experienced doctors. The most talented non-tenure-track lecturers should have at least as much prestige as struggling professors. And the most skilled paralegals should be nearly the equal in prestige of mediocre members of the bar.

Eliminating the kinds of gaps beloved of those doing regression discontinuity analyses—in this case between those barely admitted and barely rejected, or between those barely hired and those barely turned away—should reduce any resentment due to affirmative action, but will still leave the kind of racial/ethnic animosities commonly directed against Jews and Asian Americans. There is no single solution to all forms of racism or ethnic or religious hatred.

Finally, there are likely to be many interventions that can be made with schoolchildren that can reduce racism, sexism and other bad attitudes. The key thing is to have these programs evaluated in randomized trials. Just because someone believes something will help doesn’t make it so. (For older age groups, some evidence has come in suggesting that sensitivity training of the common types is not very effective.) There is no shortage of ideas to be tested. In “Nationalists vs. Cosmopolitans: Social Scientists Need to Learn from Their Brexit Blunder” I write:

As a Cosmopolitan, what I most want to know from social science is what interventions can help make people more accepting of foreigners. Somewhat controversially, it is now common in the US for elementary school teachers to make efforts to instill pro-environmental attitudes in schoolchildren. Whether or not those efforts make a difference to children’s attitudes, are there interventions or lessons that can make schoolchildren and the adults they grow up to be likely to feel more positive about the foreign-born in their midst? For example, having had a very good experience learning foreign languages on my commute by listening to Pimsleur CDs in my car, I wonder whether dramatically more effective Spanish language instruction for school children following those principles of audio- and recall-based learning with repetition at carefully graded intervals might make a difference in attitudes toward Hispanic culture and toward Hispanics themselves in the US.

Although it is the province of social scientists to test interventions intended to improve attitudes toward the foreign-born, many of the best interventions will be created by writers, artists, script-writers, directors, and others in the humanities. There are also many other marginalized groups in society, but the strength of anti-foreigner attitudes suggests the need for imaginative entertainment and cultural events to help people identify with human beings who were born in other countries.

My bottom line is that when we think of racism and sexism and other bad attitudes, we should consider root causes that are not entirely within the individual and not leap too quickly to castigating individuals. And we should cast the net wide for root causes and plausibly helpful interventions, and test hypotheses rigorously. Some proposed remedies for racism, sexism and other bad attitudes may do more harm than good. It does not make one a racist, sexist or bad person to say that we should ask for evidence about the effects of various remedies. (And we should gather evidence for the effects of remedies recommended by those on the right as well as by those on the left. For example, effective crime control measures that make people feel safer might reduce racism, or certain kinds of easy cultural training that immigrants are happy to receive might make them seem less threatening to the native-born.)

In the last few years I have become aware of the serious possibility that for a long time we were successful at driving racism and sexism underground by silencing people with such attitudes, without fully convincing people to relinquish such attitudes. Silencing people with such attitudes may reduce the chance of transmitting those attitudes to the rising generation, but it also causes the resentment people almost always feel when they can’t say their piece. If, as a society, we had not succumbed to the temptation of trying to silence people, we might—after great effort—now be further along the road to persuasion. Letting people say their piece often seems threatening when we disagree strongly (and perhaps especially when we disagree strongly for good and sound reasons), but I believe letting people say their piece and then responding with our views is the wiser course.

A good rule of thumb is to avoid reading anyone out of the human race—not even those who would read others out of the human race. Given our evolutionary heritage, taking an “Us and Them” approach is extremely contagious. Let’s not play with that kind of fire. In a cultural war like the one we are in now, I believe it is the side that can best rise above the us-versus-them temptation that will prevail.

Related posts and links, beginning with those flagged above:

Why I Am Not a Neoliberal


Without looking at the details, I would have thought that I was a Neoliberal. Indeed, I have a Storify story "The Time Miles was Called a 'Neoliberal Sellout' by Matt Yglesias and was Glad for the Compliment in the End." But digging deeper, I am now not at all sure I am a Neoliberal. Let me consider point by point where I agree with Neoliberalism and where I disagree.

Whole books have been written on Neoliberalism, but I haven't read them. So let me take Mike Konczal's take on Neoliberalism in his excellent Vox essay "'Neoliberalism' isn’t an empty epithet. It’s a real, powerful set of ideas" as a rough-and-ready definition of Neoliberalism. My discussion of whether I am a Neoliberal or not will only be relative to Mike Konczal's description of Neoliberalism there. If Neoliberalism moves in the direction of Supply-Side Liberalism as laid out in all of the posts in this blog, so much the better. But historically, Neoliberalism seems to have many differences from my version of Supply-Side Liberalism. 

Early on in his essay, Mike Konczal cautions:

The difficulty of the term ["Neoliberalism"] is that it’s used to describe three overlapping but very distinct intellectual developments.

                                                      Moving to the Political Center

The first of these three intellectual developments was political:

In political circles, ["Neoliberalism" is] most commonly used to refer to a successful attempt to move the Democratic Party to the center in the aftermath of conservative victories in the 1980s. [One] can look to Bill Galston and Elaine Kamarck’s influential 1989 The Politics of Evasion, in which the authors argued that Democratic “programs must be shaped and defended within an inhospitable ideological climate, and they cannot by themselves remedy the electorate's broader antipathy to contemporary liberalism.”

To me, this is just democracy in action—when political entrepreneurs don't get blinded by their own personal ideology. Ignoring the views of close to half the electorate can be politically dangerous. You can see some of my views about the partisan divide abroad and in the US in other posts on this blog.

Personally, I have a great deal of sympathy for many (but by no means all) "Conservative" arguments. 

                                                         The Washington Consensus

Mike Konczal continues: 

In economic circles, however, “neoliberalism” is most identified with an elite response to the economic crises of the 1970s: stagflation, the energy crisis, the near bankruptcy of New York. The response to these crises was conservative in nature, pushing back against the economic management of the midcentury period. It is sometimes known as the “Washington Consensus,” a set of 10 policies that became the new economic common sense.

It is this “Washington Consensus” that I most want to put under the microscope. John Williamson's 1990 Peterson Institute for International Economics paper "What Washington Means by Policy Reform" is the touchstone Mike Konczal refers to for the "Washington Consensus." Looking at this document, one can see that, to this day, when policy folks talk about "structural reform," they are often talking about reform in line with the "Washington Consensus."

1. Fiscal Discipline

Fiscal discipline is the first tenet of the Washington Consensus. (All of this about the Washington Consensus is "according to John Williamson in 1990.") I am a fiscal hawk in the sense that I worry quite a bit about the national debt. You can see this in my early post "Avoiding Fiscal Armageddon." Yichuan Wang and I interpreted the data as providing no support for the idea that national debt lowers GDP growth in "After Crunching Reinhart and Rogoff's Data, We Found No Evidence High Debt Slows Growth," but there we write:

We don’t want anyone to take away the message that high levels of national debt are a matter of no concern. As discussed in "Why Austerity Budgets Won't Save Your Economy," the big problem with debt is that the only ways to avoid paying it back or paying interest on it forever are national bankruptcy or hyper-inflation. And unless the borrowed money is spent in ways that foster economic growth in a big way, paying it back or paying interest on it forever will mean future pain in the form of higher taxes or lower spending.

What I said in "Why Austerity Budgets Won't Save Your Economy" is: 

To understand the other costs of debt, think of an individual going into debt. There are many appropriate reasons to take on debt, despite the burden of paying off the debt:

  • To deal with an emergency—such as unexpected medical expenses—when it was impossible to be prepared by saving in advance.

  • To invest in an education or tools needed for a better job.

  • To buy an affordable house or car that will provide benefits for many years.

There is one more logically coherent reason to take on debt—logically coherent but seldom seen in the real world:

  • To be able to say with contentment and satisfaction in one’s impoverished old age, “What fun I had when I was young!”

In theory, this could happen if, when young, one had a unique opportunity for a wonderful experience—an opportunity that is very rare and worth sacrificing for later on. Another way it could happen is if one simply cared more in general about what happened in one’s youth than about what happened in one’s old age.

Tax increases and government spending cuts are painful. Running up the national debt concentrates and intensifies that pain in the future. Since our budget deficits are not giving us a uniquely wonderful experience now, to justify running up debt, that debt should be either (i) necessary to avoid great pain now, or (ii) necessary to make the future better in a big enough way to make up for the extra debt burden. 

My worries about the national debt are also an important impetus behind my arguing for a public contribution program, as introduced in "No Tax Increase Without Recompense" and developed in other posts linked in my bibliographic post "How and Why to Expand the Nonprofit Sector as a Partial Alternative to Government: A Reader’s Guide."  

But what about fiscal stimulus? I am firmly of the view that, other than automatic stabilizers (such as taxes that go up with income and benefits that increase with low income), monetary policy should take on the primary stabilization role. One of my signature efforts has been to figure out the most practical and acceptable ways to eliminate the zero lower bound. My organized bibliography for that effort is "How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide." Once a central bank's target interest rate can go as low as necessary, aggregate demand is no longer scarce. So there is no excuse for a government to then run deficits beyond those induced by automatic stabilizers to stimulate the economy.

There are three exceptions to this generalization. First, as part of the monetary policy transmission mechanism, the fiscal arm of the government should spend most of the windfall from reduced interest expenses when interest rates go down and cut back spending to compensate for higher interest expenses when interest rates go up. (See "Negative Rates and the Fiscal Theory of the Price Level.") Most governments will do this without extra prompting. Ideally, the government should also do some intertemporal substitution in spending that responds to high or low interest rates in the way that would be optimal for a private corporation. Governments have been surprisingly slow to do this.

Second, in a monetary union such as the euro zone, where countries in disparate economic situations share monetary policy, an individual nation might need to use some sort of fiscal stimulus. For that I recommend the kind of credit policy I discuss in my paper "Getting the Biggest Bang for the Buck in Fiscal Policy," which is introduced in my blog post of the same name. The abstract for the paper clarifies the key issue for fiscal hawks who see the need for some stimulus:

In ranking fiscal stimulus programs, it is useful to focus on the ratio of extra aggregate demand to extra national debt that results. This note argues that (because of repayment after the end of a recession) “national lines of credit”--that is, government-issued credit cards with countercyclical credit limits and favorable interest rates—would generate a higher ratio of extra aggregate demand to extra national debt than tax rebates. Because it involves government loans that are anticipated in advance to involve some losses and therefore involve a fiscal cost even after efforts to minimize losses, such a policy lies between traditional monetary policy and traditional fiscal policy.

Third, because monetary policy has a lag of 9 months or so in its effects, the same kind of credit policies can be of some value in the first few quarters after an unexpected shock.

Other than these exceptions, I come down decisively in favor of monetary policy over fiscal policy for economic stabilization. See, for example, the posts collected in "How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide," mentioned above.

On the other hand, I do not always look like a fiscal hawk. In "What Should the Historical Pattern of Slow Recoveries after Financial Crises Mean for Our Judgment of Barack Obama's Economic Stewardship?" I strongly criticize Barack Obama for not politically prioritizing and pushing through a larger fiscal expansion in 2009. At that time, the fact that interest rates could go as far negative as needed with easy-to-implement policies was not well understood, so fiscal stimulus was the main stabilization tool available; Barack Obama should have done at least three times the amount of fiscal stimulus that he actually did. Because he didn't, a big part of the harm of the Great Recession in the US was his fault. The political prioritization necessary to get a bigger fiscal stimulus package through could easily have meant not getting through anything close to the actual "Patient Protection and Affordable Care Act." But to me, avoiding a significant part of the harm of the Great Recession at the cost of being forced to proceed with health care reform on a more bipartisan basis seems the better choice.

On a more technical issue, I believe strongly that there should be a separate capital budget for national governments. Noah Smith and I argue this in "One of the Biggest Threats to America's Future Has the Easiest Fix" and I have thought hard about technical details of how to make a capital budget work well by keeping incentives to game the system mostly in check: see my PowerPoint file "The Applied Theory of Capital Budgeting," which I presented at the Congressional Budget Office in May 2014. My post and the associated PowerPoint file "Discounting Government Projects" address another technical issue in capital budgeting.

2. The Composition of Public Expenditures

The second tenet of the Washington Consensus is that health, education and infrastructure spending are especially good types of public expenditure and that indiscriminate subsidies are especially bad types of public expenditure. Here I am in total agreement. 

3. Tax Reform

For the most part, I don't want to talk about the current Republican tax reform plans being hatched in the House and Senate today. Those plans are a mix of very bad measures with some good technocratic measures. But I am sympathetic to widely agreed-upon principles of tax reform. John Williamson writes:

… there is a very wide consensus about the most desirable method of raising whatever level of tax revenue is judged to be needed. The principle is that the tax base should be broad and marginal tax rates should be moderate.

I favor the more transparent approach of taxing the rich people who (for the most part) own corporations rather than the opaque approach of taxing the corporations themselves. And I favor consumption taxation, as you can see in "Scrooge and the Ethical Case for Consumption Taxation" and "VAT: Help the Poor and Strengthen the Economy by Changing the Way the US Collects Tax." 

I do not depart from the Washington Consensus here. 

4. Real Interest Rates Market-Determined and Positive

According to John Williamson, a fourth tenet of the Washington Consensus is that real interest rates should be market-determined, positive and moderate. Distinguishing between the short-run, medium-run and long-run as I do in "The Medium-Run Natural Interest Rate and the Short-Run Natural Interest Rate," in the medium-run and long-run, I certainly agree that interest rates should be market-determined. But a market-determined medium-run or long-run rate may or may not be positive. As John Williamson himself wrote in 1990:

The question obviously arises as to whether these two principles are mutually consistent. Under noncrisis conditions, I see little reason to anticipate a contradiction.

With policy heavyweights such as Larry Summers and Olivier Blanchard talking about secular stagnation, I think the Washington Consensus may be moving toward realizing that situations where real interest rates need to be negative in the long-run are quite possible.

I fully agree that there are many bad market interventions that push some interest rates down. A good example is the low interest rates given by state-owned banks to state-owned enterprises in current Chinese policy. These divert funds away from the non-state sector, raising the rates in the non-state sector, thereby making it harder for households and private businesses to borrow. 

In the short-run, the idea that interest rates should be market-determined and positive is not helpful. First, I see it as inevitable that some sort of monetary policy be central to interest-rate determination. There is no neutral "free-market" monetary policy. The gold standard is not a neutral monetary policy. The closest it is possible to come to a free-market monetary policy is for the central bank to do its best to get the economy quickly back to the natural level of output, the natural level of unemployment and the natural interest rate. I advocate that strongly, as you can see in my paper "Next Generation Monetary Policy." 

Second, I believe negative rates—both real and nominal—are crucial for cutting short recessions and enabling a lower inflation target. See "How Subordinating Paper Currency to Electronic Money Can End Recessions and End Inflation."

5. Free Capital Flows

Here I think the "Washington Consensus" shifted between when John Williamson was writing and now. I feel there has been more and more emphasis not just on competitive exchange rates, but also on free capital flows. Here I favor a more managed version of international capital flows than the current consensus. See "Alexander Trentin Interviews Miles Kimball about Establishing an International Capital Flow Framework."

6. Free Trade

I am in favor of free trade. Here I am in agreement with the Washington Consensus. But, in something the Washington Consensus did not push, I think the benefits from freer immigration are much greater than the benefits from freer trade. See ""The Hunger Games" Is Hardly Our Future--It's Already Here." But as I mentioned above, I think international capital flows should be better managed in order to get more balanced trade, as discussed in the interview linked above.

7. Foreign Direct Investment

I agree that encouraging foreign direct investment is a good thing. For many countries it is a very good thing. Looking at things from the standpoint of countries doing foreign direct investment, one of my favorite essays is "Nicholas Kristof: 'Where Sweatshops are a Dream.'"

8. Privatization

I agree that many enterprises are better run privately than by the government. But privatization of core government functions such as prisons has often led to very poor quality. For a country like the US, if further privatization took place, my guess is that it would be more likely to move in the wrong direction than in the right direction. 

Part of the problem with privatization of core government functions is the great danger of corruption in government contracting. Suppose I define: 

  • core government function = something where if it isn't done by the government itself, the government needs to contract with a private firm for the service.

Then one needs to consider whether any inefficiency of having the government do the job itself is outweighed by the likely corruption in the contracting relationship. 

In what might seem, but isn't, antithetical to a pro-privatization view, I think the US government should take a much bigger role in bringing down the risk premium with a sovereign wealth fund, which would involve it owning, at least indirectly, a large amount of stock.

The reason a sovereign wealth fund wouldn't violate the principle of avoiding undue government meddling is that it would be required to hold only ETFs that had no voting rights. 

9. Deregulation

Regulation is one of the areas where I most strongly disagree with the Washington Consensus. I think capital requirements/leverage limits are much too loose. I have said this strongly many times.  "Martin Wolf: Why Bankers are Intellectually Naked" is a good post to start with. I have also cheered on the efforts of the Consumer Financial Protection Bureau under Richard Cordray. I give the philosophical justification for the type of regulation done by the CFPB in "On the Consumer Financial Protection Bureau." I also think there are many types of wealth that are ill-gotten, even though they are legal. See "Odious Wealth: The Outrage is Not So Much Over Inequality but All the Dubious Ways the Rich Got Richer."

On the other hand, at the state and local level, regulation is often used as a tool to keep the poor from living next door or competing with middle-class jobs. That is, state and local regulation is often effectively a tool of oppression. I write about the common impulse behind immigration restrictions at the national level and land-use and occupational licensing restrictions at the state and local level in "Keep the Riffraff Out!"

Affordable housing in desirable cities for all the people who want it requires an adequate total amount of housing. That in turn requires allowing needed construction. In "Building Up With Grace," I call for every substantial city to have some district with no height limits that has excellent bus service to the rest of the city. This is for the sake of those of modest means who want to live and work in the city. (Genuine earthquake dangers might lead to some height limits, but these should not be used as an excuse beyond genuine safety needs.)

In relation to regulatory restrictions on construction, I find myself in sympathy with the great bulk of posts on the excellent Facebook group "Market Urbanism." I highly recommend it. 

I should note that allowing more construction has financial stability benefits as well as benefitting social justice. See "With a Regulatory Regime That Freely Accommodates Housing Construction, Lower Interest Rates Drive Down Rents Instead of Driving Up the Price of Homes."

Occupational licensing requirements often keep those at the bottom of the heap out of jobs, as I discuss in my post “When the Government Says ‘You May Not Have a Job’.” The extent to which this is done in practice is unconscionable. Fortunately, occupational licensing reform efforts are afoot. But these efforts could easily stall out. This is an important area to focus on.

For those who haven't thought much about occupational licensing, there are two key related points to take away. First, occupational certification and occupational licensing are not the same thing. In "John Stuart Mill: Certification, Not Licensing" I write:

As for licensing itself, although they are often spoken of in the same breath, there is a world of difference between certification and licensing. Certification requirements say that you have to inform customers of your level of qualifications or lack of qualifications in unmistakable ways, according to a well-defined terminology established by the government. They are based on the principle of telling the truth and not deceiving, but do entail some details to make sure no one misunderstands. 

By contrast, licensing requirements say you can be fined or thrown in jail for getting paid for something that someone with an absolutely crystal clear idea of your lack of qualifications is perfectly happy to pay you to do. For example, I would run afoul of the law in Michigan if I cut someone else’s hair for pay–a law ultimately backed up by the threat of throwing me in jail, even if the initial penalty is only a fine. The real reason for that stipulation is that barbers want that barrier to entry in place (I think at least a year and a half of training), not any danger that I will seriously harm someone with a basic haircut. I express some of how wrong I think the overgrowth of licensing requirements is in my post “When the Government Says ‘You May Not Have a Job’.”

I have no problem with certification—it simply makes things clear. But I do have a problem with licensing, which says to people "You may not have a job" unless they devote time and money to training they may not be able to afford.

(Update: See also my December 5, 2017 post "Against Occupational Licensing.")

For practical reform efforts, the second key point to make about occupational licensing is that establishing a low-hurdle licensing category in each general type of job has many of the good effects of having a certification regime instead of a licensing regime. It isn't too harmful to require that "barbers" have a year and a half of training if there is also an occupational licensing category of "haircutter" that requires only a week of training, and haircutters are legally allowed to do everything that barbers are allowed to do. In that case "barber" would indicate someone highly trained, which is useful, but barbers and haircutters would still compete. 

My post "Against Anticompetitive Regulation" discusses other regulatory issues as well. The title of that post indicates the very first question you should ask about any regulation: "Is the regulation about keeping everyone honest, or is it about keeping down the competition?"

Crucially, "keeping everyone honest" doesn't mean "ensuring high quality." If someone honestly signals that what they are selling is low-quality but inexpensive, they should be allowed to sell their honestly low-quality goods and services.

Overall, more regulation is needed of the financial industry; less regulation is needed for service jobs and housing construction. And it is important to watch out for firms running to the government to get the government to put an obstacle in the way of a potential competitor. Finally, regulations that make corporate deception illegal are almost always a good thing. After all, the key welfare theorems suggesting that a free market will do a good job all rely on people knowing and understanding the truth! (General anti-fraud principles in the legal code are helpful, but often don't do enough.)

10. Property Rights

There are many virtues to property rights as they exist in the United States. But I think we have gone much too far with intellectual property rights.

One of the bad aspects of trade negotiations in the last few years has been the emphasis by the United States on imposing its dysfunctional intellectual property system on the rest of the world. (See Dani Rodrik on one aspect of that here.) The United States should get its own house in order on intellectual property, and only then recommend its intellectual property system to other nations. 

On property rights more generally, I think if, by a high standard of proof, an action can be shown to be a bad action that should have been prohibited in the past, then taxing away the wealth resulting from that action is appropriate. If done right, this has good incentives: companies and people will try to avoid doing things that people in the future will realize were wrong. However, there may need to be some statute of limitations on this.

And where uncertainty about future legal treatment stands in the way of important investments, the government may need to provide better guarantees of future legal treatment. I am thinking here of the development of self-driving cars, which could be seriously hindered if there were too much legal uncertainty. Fortunately, that seems to have been avoided.

                                            Markets Defining More and More of Our Lives

Leaving the Washington Consensus, the last of the three meanings of "Neoliberalism" Mike Konczal writes of is markets defining more and more of our lives:

The third meaning of “neoliberalism,” most often used in academic circles, encompasses market supremacy — or the extension of markets or market-like logic to more and more spheres of life. This, in turn, has a significant influence on our subjectivity: how we view ourselves, our society, and our roles in it. One insight here is that markets don’t occur naturally but are instead constructed through law and practices, and those practices can be extended into realms well beyond traditional markets.

Another insight is that market exchanges can create an ethos that ends up shaping more and more human behavior; we can increasingly view ourselves as little more than human capital maximizing our market values.

Here let me break out as a distinct problem the idea that companies should only be concerned about maximizing shareholder value. Even if "obeying the law" is added as a constraint on that goal, it still leads to serious problems, as corporations look for every possible loophole to maximize shareholder value even at the expense of social welfare. Although it isn't perfect, a much better goal for big companies, quite consistent with economic theory, would be to maximize the overall welfare of those people who hold index funds covering all the public companies in the nation or in the world.  

In his book Finance and the Good Society, Robert Shiller speaks approvingly of legal structures for corporations that stipulate that a given corporation should pursue goals beyond shareholder value maximization. This is likely to be helpful where it is used.

But what is most needed is for business school professors to quit teaching that maximizing shareholder value is the be-all and end-all duty of those who run public corporations—perhaps with obeying the law as an added duty. I am not at all satisfied with the alternatives proposed by most of those who want companies to pursue something other than shareholder value maximization. In the immediate future, I think what I mentioned above—"maximizing the overall welfare of those people who hold index funds covering all the public companies in the nation"—would be a good alternative to shareholder value maximization in business school instruction. I have no doubt that careful thinkers can come up with an even better alternative that still has some of the hard edge of economic theory but that is even more conducive to social welfare.

More specifically on the issue of markets defining more and more of our lives, I think economists need to appreciate more all of the non-monetary motives that drive people. I wrote about this in "Scott Adams's Finest Hour: How to Tax the Rich." In addition to affecting taxation and being a big part of the argument for the public contribution program I have proposed in order to expand the non-profit sector, non-monetary motives are a key reason why current copyright law is off-track, as discussed in my post "Copyright." 

Religious motives are good examples of non-monetary motives, though far from the only ones. Personally, I know well how powerful non-monetary motives can be from my forty years as a Mormon. (See "Five Books That Have Changed My Life.") The posts I noted there can help you appreciate how big a difference non-monetary motives can make.

Also see this Bloomberg View article by Megan McArdle.

The biggest share of my research time is currently being devoted to collecting and analyzing data in order to write a paper with the working title of "What Do People Want?" with Dan Benjamin, Kristen Cooper and Ori Heffetz, supported by a brilliant and capable team of research assistants: Becky Royer, Tuan Nguyen, Tushar Kundu, Rosie Li and (early on) Samantha Cunningham, and by heavy-duty coding support from Robbie Strom and Itay Zandbank. That exercise demonstrates well how many things people care about—of which only some can be purchased in the market. I hope to share some of our latest results in a few months. But for now, take a look at the results from an earlier round of data collection that I discuss in "Judging the Nations: Wealth and Happiness Are Not Enough."

It is a big mistake to think that people only care about things that can be bought and sold. Acting as if people do care only about things that can be bought and sold impoverishes our interactions with one another. 

Markets are useful tools, but they have downsides as well as the many upsides that we teach in economics courses. 

                                                                  Conclusion

There are many areas where I agree with Neoliberalism. But I disagree with Neoliberalism in important areas:

  • the need for more financial regulation (particularly the need for stricter capital requirements and leverage limits and more regulation in the domain of the Consumer Financial Protection Bureau)

  • the need for negative interest rates

  • the need for international capital flow policies that lead to more balanced trade

  • the need for less restrictive intellectual property law

  • the perils of corporate decision-makers believing their job is "shareholder value maximization"

  • the downsides of excessive marketization—and even more the downsides of having a view of human beings as motivated almost entirely by the things money can buy

  • the prevalence of ill-gotten legal wealth in countries like the US

  • the virtues of (possibly debt-financed) sovereign wealth funds as a policy tool for countries such as the US, Japan, the UK and many other European countries

These differences seem important enough that I do not consider myself a Neoliberal. Supply-Side Liberalism is not Neoliberalism. It is a different animal. 


Mustafa Akyol—The Illogic of Globalization as a Scapegoat Everywhere: Who is Taking Advantage of Whom?

What is ironic in the world today is that conspiracy theorists in different societies are obsessed with the same scapegoat — globalization — but interpret it as a conspiracy only against their side. … In fact, there is a global conspiracy against neither Islam nor the West. Globalization has just forced different societies to interact more than ever — and many people are scared by what they see on the other side. Populists all over the world began taking advantage of those fears, telling us that we should be even more fearful still.
— Mustafa Akyol, “The Plot Against America or the Plot by America?” October 28, 2016 New York Times 

Election Day Special, 2016

On this US Election Day, 2016, I will be flying from Israel, where I gave two talks at the Bank of Israel, to Brussels, where I am a keynote speaker at the annual ECMI conference to be held at the National Bank of Belgium. But I voted by mail before I left on my Fall 2016 tour of European central banks. I hope all of my readers who are US citizens have plans to vote. David Leonhardt in the November 1, 2016 New York Times wrote this:

Voting plans increase voter turnout. In an experiment by David Nickerson and Todd Rogers, involving tens of thousands of phone calls, some people received a vague encouragement to vote. They were no more likely to vote than people who received no call. Other people received calls asking questions about their logistical plans — and became significantly more likely to vote. The questions nudged them.

Second, tell other people about your plan, and ask about theirs. The power of peer pressure increases voter turnout. One aggressive experiment mailed people a sheet of paper with their own turnout history and their neighbors’. A more gentle experiment presented Facebook users with head shots of their friends who had posted an update about having voted. Both increased turnout, as have many other experiments.

You don’t need an intricate effort to influence people, though. Post your own voting plan to Facebook, and ask your friends to reply with theirs. Text or call relatives in swing states and ask about their voting plans. Do the same when you see friends.

And here is Adam Grant in the October 1, 2016 New York Times:

If we want people to vote, we need to make it a larger part of their self-image. In a pair of experiments, psychologists reframed voting decisions by appealing to people’s identities. Instead of asking them to vote, they asked people to be a voter. That subtle linguistic change increased turnout in California elections by 17 percent, and in New Jersey by 14 percent.

The American electorate overall has a great deal of wisdom, but is not able to fully express that wisdom with our current voting system. On that, take a look at last week’s post “Dan Benjamin, Ori Heffetz and Miles Kimball–Repairing Democracy: We Can’t All Get What We Want, But Can We Avoid Getting What Most of Us *Really* Don’t Want?”

October and even November surprises keep coming in for both Donald Trump and Hillary Clinton. One I found interesting was the details David Barstow, Mike McIntire, Patricia Cohen, Susanne Craig and Russ Buettner reported on Donald Trump’s tax avoidance approach in the October 31, 2016 New York Times. Essentially, evidence indicates Donald Trump was taking a large deduction on his taxes for his investors’ losses from investing in his projects. The way he did that was by overvaluing partnership equity in the failed projects and purporting to reimburse his investors for their losses by giving them overvalued partnership equity. What I wasn’t totally clear about is whether these investors succeeded in deducting from their own taxes the very same losses, using a lower value of that partnership equity received that was inconsistent with the value that Donald Trump used. That is, did Donald Trump take his investors’ loss deductions away from them, or did he and his investors both successfully claim the same losses?

Finally, let me mention that a key issue in this election is the principle of the equality of all human beings, an issue I discussed in “Us and Them” and this past Sunday in “John Locke on the Equality of Humans.”

Dan Benjamin, Ori Heffetz and Miles Kimball—Repairing Democracy: We Can’t All Get What We Want, But Can We Avoid Getting What Most of Us *Really* Don’t Want?

The 2016 US presidential election is noteworthy for the low approval ratings of both major party candidates. For example, as of November 2, 2016, poll averages on RealClear Politics show 53.6% of respondents rating Hillary Clinton unfavorably, while only 43.9% of respondents rate her favorably; 58.9% of respondents rate Donald Trump unfavorably, while only 38.1% of respondents rate him favorably. Leaving aside those who vote for a minor party or write-in candidate, there is no question that on election day, many voters will think of what they are doing as voting against one of these two candidates rather than voting for one of them.

Out of all the many candidates who campaigned in the primaries to be President of the United States, how did the electoral system choose two who are so widely despised as the candidates for the general election? The party system for choosing the candidates for the general election may bear some of the blame, especially in an era of high political polarization. But another important characteristic of the current US electoral system is that one can only make a positive vote for a candidate, not a negative vote. That is, in the current voting system, voters can only express one attitude towards a candidate—the belief that she or he would make the best president among the candidates. But should this be the only attitude that comes into play when picking the most powerful person in the free world? Shouldn’t our voting system give voters a chance to say which candidate they think would make the worst president before we deposit the U.S. nuclear codes in a new president’s hands? And more generally, shouldn’t our voting system take into account how much voters like or dislike the candidates?

Our work on collective decision-making mechanisms for incorporating subjective well-being data into policy-making led us to stumble on a class of voting systems for multicandidate elections that we think might help in avoiding outcomes that a large share of people hate. For us, this research program began with “Aggregating Local Preferences to Guide Marginal Policy Adjustments” (pdf download) by Dan Benjamin, Ori Heffetz, Miles Kimball and Nichole Szembrot in the 2013 AEA Papers and Proceedings. More recently, “The Relationship Between the Normalized Gradient Addition Mechanism and Quadratic Voting” by Dan Benjamin, Ori Heffetz, Miles Kimball and Derek Lougee (on which Becky Royer worked as an extremely able research assistant) draws some connections between what we have come to call the “Normalized Gradient Addition (NGA) mechanism” and a broader literature. (Here is a link to a video of my presentation on that paper.)

Figure 1: Voting Diagram for Three Candidates

To better understand the NGA mechanism as applied to multicandidate voting, consider the simple case in which there are three candidates – Tom, Dick, and Jerry – as shown in Figure 1 above. In this case of multicandidate voting, we represent how close each candidate is to winning by a point in a triangle. The three vertices represent victory for one particular candidate, while the edges opposite a vertex represent that candidate being eliminated. The distance from each edge can be thought of as a kind of “notional probability” that a particular candidate would win if the selection process were somehow cut short and terminated in the middle of the action. Thus, the points in the interior of the triangle represent an unresolved situation in which each candidate is still treated as having a chance. Voters can choose vectors of a fixed unit length in any direction within the triangle. The current position in the triangle then gradually evolves in a direction determined by adding up all of these vector votes. 

To illustrate, in the picture on the left of Figure 1, there is a blue arrow pointing from the starting point upwards towards Dick. This is the only movement that our current voting system allows for: a positive vote for one candidate. But there is also the red arrow, pointing in the opposite direction. This corresponds to a “negative” vote, in which the voter’s only goal is to vote against Dick. Not only would our mechanism allow for both these positive and negative votes, but it would allow voters to have even more complex votes based on their specific preferences for each of the candidates, as indicated by all of the arrows in the picture on the right. This example can be extended to higher dimensions, in which there are more than three candidates. For example, the policy space would be modeled as a tetrahedron for four candidates, or a simplex for five or more candidates, with a vertex for each candidate.

Figure 2: Summing the Votes and Adjusting the Position in the Triangle

From these preference vectors, we can then add up the vectors across people to determine the direction in which the position in the triangle evolves. Figure 2 above depicts an example of a simple two-voter system. In this example, person 1’s vector points most closely towards Jerry, while person 2’s vector points most closely towards Dick. After summing these two vectors, a small number (a step size) times the resulting vector is added to the previous point in this triangle to get a new point. If that new point is outside the triangle, then the closest point on the boundary of the triangle is the new position instead. This procedure is then repeated until either a vertex is reached (decisive victory for one candidate) or all motion grinds to a halt because the votes exactly counterbalance one another.
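For concreteness, here is a minimal Python sketch of this update rule. It is my own illustrative code, not the implementation behind our survey results; the step size, the particular voters, and the use of Euclidean projection back onto the simplex are all assumptions made for the sake of the example.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a point onto the probability simplex
    {p : p_i >= 0, sum_i p_i = 1}, i.e., the closest point of the
    triangle (or tetrahedron, or higher simplex) to v."""
    n = len(v)
    u = np.sort(v)[::-1]                          # coordinates, descending
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def nga_step(position, vector_votes, step_size=0.01):
    """One round of the NGA update: sum the voters' unit-length vector
    votes, move a small step in that direction, and snap back to the
    closest point of the simplex if the step leaves it."""
    direction = vector_votes.sum(axis=0)
    return project_to_simplex(position + step_size * direction)

# Three candidates (Tom, Dick, Jerry); start by treating all three equally.
position = np.array([1 / 3, 1 / 3, 1 / 3])

# Two voters: a straight positive vote for Dick, and a vote mostly
# against Tom; each raw vote is scaled to unit length.
raw_votes = np.array([[-0.5, 1.0, -0.5],
                      [-1.0, 0.5, 0.5]])
votes = raw_votes / np.linalg.norm(raw_votes, axis=1, keepdims=True)

for _ in range(500):                              # repeat until it settles
    position = nga_step(position, votes)
print(position)   # notional "probabilities"; here Dick reaches the vertex
```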

It is important to note that we would not need or expect all voters to understand this triangular representation of the voting mechanism. Our focus is on designing a survey that lets individuals easily provide the information needed to calculate the direction a particular voter would most like to go, without them having to know this representation of their vote explicitly.  

The voting process is a matter of giving a rating to each candidate on a scale from 0 to 100, where 0 is the rating for the least favored candidate and 100 is the rating for the most favored candidate. Giving a rating to each candidate allows a voter the options of:

  • a straight “positive” vote, by rating the most favored candidate 100 and all other candidates 0,

  • a straight “negative” vote, by rating the least favored candidate 0 and all other candidates 100,

  • anything in between a straight positive and a straight negative vote, by rating the least favored candidate 0, the most favored candidate 100 and other candidates in between.

Data Collection

In order to illustrate the process of having voters rate candidates, and investigate what type of votes people wanted to cast, we collected data through the University of Southern California’s Understanding America Study, between March 18 and 21, 2016, on preferences over the last five major party candidates standing at the time (Hillary Clinton, Ted Cruz, John Kasich, Bernie Sanders, and Donald Trump).

We asked participants who they believed would make the best President of the United States out of the five candidates, and then asked them who would make the worst. We set their “best” candidate at a rating of 100 and their “worst” candidate at a rating of 0. We had two different approaches for having each individual rate candidates after this point. 

In our first approach, we simply asked participants to “rate the other candidates using a special scale, where [worst candidate] is a 0 and [best candidate] is a 100”, with no other instructions. Let’s refer to this approach as “unstructured ratings.”

In our second approach, we sought to elicit participants’ expected utilities for each candidate. That is, we wanted to identify how much each participant would value having each candidate as president compared to the other candidates. In doing so, we explained that choosing a rating X on the scale indicates that the participant feels indifferent between the following two situations: (1) knowing for sure that the candidate they are rating will be president, and (2) waking up on election day with their favorite candidate having an X% chance of winning and their most disliked candidate having a (100-X)% chance of winning. Figure 3 is a screenshot of the directions each participant received in this approach, including two examples for clarity, in which the voter had chosen Donald Trump as the “worst” candidate and Hillary Clinton as the “best” candidate.

Figure 3: Instructions for Expected-Utility Ratings
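In expected-utility terms (my notation here, not wording from the survey itself), a rating of X for a candidate encodes the indifference condition

```latex
u(\text{candidate}) = \frac{X}{100}\, u(\text{best}) + \left(1 - \frac{X}{100}\right) u(\text{worst}),
```

so with the normalization u(worst) = 0 and u(best) = 100, a candidate's rating X is just that candidate's expected-utility value on a 0-to-100 scale.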

A priori we favor the expected-utility ratings over the unstructured ratings, but we will report results using the unstructured ratings for those who don’t share that view and to show that it matters what instructions were given regarding how to use the scale.  

Converting the Ratings Into Votes

In the simplest, most straightforward implementation of the NGA mechanism, we construct each individual’s vector vote from their ratings as follows:

  • Calculate the individual’s mean rating across all five candidates and the standard deviation of the individual’s ratings.

  • For each candidate, starting with the individual’s rating of that candidate, subtract the individual’s mean and divide by the individual’s standard deviation.

This procedure normalizes an individual’s candidate ratings to have mean zero and variance one. That way, the vector vote of each individual is ensured to be of length one. Although there are other strategic voting issues we will return to below, the normalization prevents anyone from having more influence than other voters simply by giving all extreme ratings (all 0’s or 100’s). We refer to this restriction—equivalent to the vector in the triangle, tetrahedron or simplex representation having a maximum length of 1—as the “variance budget.” That is, each voter has a restricted amount of variance in their normalized vector, so in effect, voters cannot express a stronger opinion about one candidate without having to express less strong opinions about other candidates. Visually, this “budget” ensures that each voter’s preference vector is of the same length in Figures 1 and 2.

The normalized ratings having a mean of zero represents something even more basic: since only one candidate will win in the end, one cannot raise the chances of one candidate without lowering the chances of at least some other candidates.
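As a minimal sketch (again my own illustrative code, not the code behind our survey results), the normalization might look like this in Python:

```python
import numpy as np

def normalize_ratings(ratings):
    """Turn one voter's 0-100 candidate ratings into a vector vote with
    mean zero and variance one (the "variance budget")."""
    r = np.asarray(ratings, dtype=float)
    # A voter who rates every candidate identically has no direction to
    # push in; real code would need to handle that degenerate case.
    return (r - r.mean()) / r.std()

# Example: a voter's ratings of five candidates, best = 100, worst = 0.
vote = normalize_ratings([100, 60, 50, 20, 0])
print(vote.round(2))             # components sum to zero
print(vote.mean(), vote.var())   # ~0 and ~1
```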

To us, there is an intuitive attraction to focusing on normalized ratings, even apart from the NGA motivation that led us to that focus. So we will use the normalized ratings extensively in our empirical analysis of the data.

Analyzing the Data

Who Would Win? The first question to ask of the data is who would have won? First, let’s see who would have won in our sample using the current voting system. We assume that participants vote for the candidate that they chose as the “best” candidate. Tables 1 and 2 show these results, broken up by unstructured and expected utility ratings. We see that in both types of ratings, Hillary Clinton outperforms the other candidates. Note that at this stage in the survey, both types of ratings ask the same question (“who would make the best candidate”), so it is expected that the results would be similar.  

Table 1: Number of “best” candidate ratings using unstructured ratings

Table 2: Number of “best” candidate ratings using expected utility ratings

From these results, we see that in our sample Hillary Clinton would be the Democratic nominee under both rating types, and Donald Trump would be the Republican nominee. Of those two remaining candidates, our sample of participants would elect Hillary Clinton, preferred by 459 participants, over Donald Trump, preferred by 325.

Now, let’s look at how these results would change if we use NGA as a multicandidate voting mechanism, as previously described. In the simplest, most straightforward implementation of NGA for a multicandidate election, the victor is the candidate with the greatest sum of normalized ratings across voters. (Note that it is possible to repeat the process of adding a small vector based on the same information. Typically, this will lead first to a side or edge, with one candidate eliminated, and then to a vertex, with one candidate victorious.)
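A sketch of that tallying rule in Python (the candidate labels and ratings here are hypothetical):

```python
import numpy as np

def nga_winner(ratings_matrix, candidates):
    """Elect the candidate with the greatest sum of normalized ratings.

    ratings_matrix: one row per voter, one column per candidate (0-100 scale).
    """
    R = np.asarray(ratings_matrix, dtype=float)
    means = R.mean(axis=1, keepdims=True)
    sds = R.std(axis=1, keepdims=True)
    sds[sds == 0] = np.inf            # indifferent voters contribute zeros
    Z = (R - means) / sds             # each row is one voter's vector vote
    totals = Z.sum(axis=0)            # sum of normalized ratings per candidate
    return candidates[int(np.argmax(totals))], totals

# Hypothetical three-voter, three-candidate example:
winner, totals = nga_winner([[90, 50, 10],
                             [10, 60, 80],
                             [40, 70, 30]], ["A", "B", "C"])
print(winner, totals)
```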

As a prediction of what would happen in an actual multicandidate election using NGA, the results from our data need to be taken with a large grain of salt for at least three reasons. First, our survey was conducted months before November 8, when voters’ knowledge of the five candidates was still relatively limited—not to mention in an election cycle with lots of dramatic “October surprises.” Second, the total number of survey respondents is relatively small, and our survey respondents are not fully representative of the actual population of voters, though every effort was made to make the UAS survey as representative as possible of the adult US population overall. And third, our survey respondents knew that their answers to our survey would not determine who would become president, and so they were not subject to incentives for strategic misreporting that would arise in a real-world multicandidate election using NGA. But that makes the data even more interesting as an indication of which candidate would have been most acceptable to a wide range of voters. Here are averages of the normalized ratings for both the sample that was asked to give unstructured ratings and the sample that was asked to give expected-utility ratings:

Table 3: NGA Results Using Unstructured Ratings

Table 4: NGA Results Using Expected Utility Ratings

Thus, leaving aside any effects from strategic voting (and ignoring for the moment the timing of our survey and the non-representativeness of our sample), our data point to John Kasich as most likely to have won the election using NGA to resolve the multicandidate choice over all of these five candidates. While his mediocre performance under our current voting system suggests that he was not the favorite candidate of all that many voters, our respondents overall found him relatively acceptable.  

Bernie Sanders has the second-highest average rating, despite not performing very well in the primaries. Donald Trump has the lowest average rating by far, with Ted Cruz second-lowest using the unstructured ratings and Hillary Clinton second-lowest using the expected-utility ratings. The most interesting takeaway is that, by the expected-utility ratings, the current general election has come down to the two of these five candidates with the lowest average ratings. (This is in line with the low approval ratings for both Donald Trump and Hillary Clinton.)

Expected-Utility Ratings vs. Unstructured Ratings. A striking difference between the two kinds of ratings is that strongly negative normalized ratings are much more prevalent in the expected-utility ratings than in the unstructured ratings.

One way to illustrate this difference is to plot each participant’s most extreme normalized rating (in absolute value) against their second most extreme normalized rating. In Figures 4 and 5 below, we can see whether participants’ most extreme preferences were for a certain candidate (points with a positive x value) or against a certain candidate (points with a negative x value).

Figure 4: Most Extreme vs. Second Most Extreme Ratings Using Unstructured Ratings


Figure 5: Most Extreme vs. Second Most Extreme Ratings Using Expected Utility Ratings

Out of the expected-utility vector votes, 345 have a negative most extreme normalized rating, compared to 133 with a positive one. By contrast, out of the unstructured vector votes, 211 have a positive most extreme normalized rating, compared to 120 with a negative one. This pattern suggests that participants emphasize their negative feelings toward candidates more in the expected-utility ratings than in the unstructured ratings.
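As a minimal sketch of the computation behind Figures 4 and 5 (our own illustrative code, assuming the normalized ratings are arranged one row per voter):

```python
import numpy as np

def extreme_ratings(Z):
    """For each voter (row of normalized ratings), return the most extreme
    rating (keeping its sign) and the second most extreme, ranked by
    absolute value."""
    Z = np.asarray(Z, dtype=float)
    n = Z.shape[0]
    order = np.argsort(-np.abs(Z), axis=1)   # columns by descending |rating|
    most = Z[np.arange(n), order[:, 0]]
    second = Z[np.arange(n), order[:, 1]]
    return most, second

# Tallying votes whose most extreme rating is negative vs. positive:
# most, _ = extreme_ratings(Z)
# print((most < 0).sum(), (most > 0).sum())
```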

This contrast between the expected-utility ratings and the unstructured ratings can also be seen in the shape of the distribution of each voter’s ratings. Skewness measures whether a respondent rates some candidates much higher than their average (skewness > 0) or much lower than their average (skewness < 0), relative to the standard deviation of 1. Intuitively, a set of ratings with positive skewness is somewhat closer to being a “positive” vote, while a set of ratings with negative skewness is somewhat closer to being a “negative” vote. Figure 6 shows that skewness tends to be more positive in the unstructured ratings than in the expected-utility ratings, indicating that respondents are closer to casting “positive” votes in the unstructured ratings and closer to casting “negative” votes in the expected-utility ratings. Table 5 gives the corresponding summary statistics: the average skewness for the unstructured ratings is indeed positive, while the average skewness for the expected-utility ratings is strongly negative.

Figure 6: Skewness of Unstructured vs. Expected Utility Ratings

Table 5: Skewness of Ratings

Thus, by both this measure of skewness and by the extreme ratings plots, the expected-utility ratings look closer to being negative votes (votes against a candidate) while the unstructured ratings look closer to being positive votes (votes for a candidate).
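For concreteness, here is a sketch of the per-voter skewness computation. Because each voter’s normalized ratings already have mean zero and standard deviation one, skewness reduces to the mean cubed rating; the code below assumes that normalization has already been applied:

```python
import numpy as np

def voter_skewness(Z):
    """Skewness of each voter's normalized ratings (one row per voter).

    With each row already normalized to mean 0 and standard deviation 1,
    the usual skewness formula E[(x - mu)^3] / sigma^3 reduces to the
    mean of the cubed ratings."""
    Z = np.asarray(Z, dtype=float)
    return (Z ** 3).mean(axis=1)

# skew > 0: closer to a "positive" vote (a few candidates far above average)
# skew < 0: closer to a "negative" vote (a few candidates far below average)
```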

Why Are the Expected-Utility Ratings So Different from the Unstructured Ratings? A solid answer to the question of why the expected-utility ratings are so different from the unstructured ratings (and the related question of whether our a priori preference for the expected-utility ratings is justified empirically) would require additional data in another multicandidate election. But we are able to offer one hypothesis. Because our data were collected in the heat of the primaries, our respondents may have wanted to use the ratings to express their opinions about those primary battles, using a substantial portion of the 0 to 100 scale to express those opinions, and consequently squeezing down the amount of the scale left to express their opinions about the candidates in the party they favored less. The structure of the expected-utility ratings would have pushed back against this tendency, asking the respondents, in effect, “Are you really willing to accept a substantial chance of your least favorite candidate winning in order to get your favorite candidate instead of your second or third choice?”

To see if this hypothesis is at all consistent with the data, consider the variance among an individual’s two or three ratings within the party of that individual’s favorite candidate. Tables 6 and 7 show that the within-party, within-voter variance is substantially greater for the unstructured ratings than for the expected utility ratings. This lends some support to the idea that those answering the unstructured ratings were more focused on the primaries, overstating their dislike for the “other” candidate(s) in the party, whereas in the expected utility ratings, participants were more likely to think about the general election and save more of the unit variance in normalized ratings for candidates in the other party.

Table 6: Among those whose top candidate was a Democrat, what was the average variance between Clinton and Sanders ratings?

Table 7: Among those whose top candidate was a Republican, what was the average variance between Cruz, Kasich, and Trump ratings?
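As a sketch of the computation behind Tables 6 and 7 (the column names here are our own hypothetical ones, not the survey’s variable names):

```python
import pandas as pd

DEM = ["clinton", "sanders"]
REP = ["cruz", "kasich", "trump"]

def within_party_variance(df, party_cols):
    """Among voters whose top-rated candidate is in party_cols, the average
    of each voter's variance across their own party's ratings.

    df: one row per voter, one normalized-rating column per candidate."""
    top = df[DEM + REP].idxmax(axis=1)       # each voter's favorite candidate
    in_party = top.isin(party_cols)
    return df.loc[in_party, party_cols].var(axis=1).mean()

# print(within_party_variance(df, DEM))   # Table 6
# print(within_party_variance(df, REP))   # Table 7
```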

Multiple-Stage NGA Voting

In the current voting system, strategic voting for someone other than one’s most preferred choice is commonplace. So there is no reason to dismiss a new voting system for allowing some degree of strategic misreporting. But to let voters enjoy the simplicity of truthful reporting in their ratings without hurting themselves too much, we view it as desirable for the incentives for strategic misreporting to be relatively small. Given the issues taken care of by the normalization of the ratings, the incentive for strategic misreporting we have worried most about is the incentive to avoid giving a strong negative rating to a candidate who is going to be eliminated anyway, since doing so would dilute the ratings assigned to other candidates. That is, there is an incentive to free ride on the elimination of widely disliked candidates. Fortunately, modifications of the NGA mechanism can help reduce this incentive or help ensure reasonable results despite some degree of strategic voting.

One modification of the NGA mechanism helpful in dealing with free riding in the elimination of widely disliked candidates is to vote in stages. Rather than taking ratings at one point in time to guide movement all the way to a vertex with one candidate winning, one can have a series of nonpartisan “open primaries” in which the notional probabilities of a candidate winning if things were ended prematurely are adjusted some distance, but not all the way to one candidate winning. This gives voters a chance to see if a candidate many thought would be quickly eliminated is doing well, making it worthwhile spending some of one’s variance budget voting against them in the next stage. On the other hand, taking the ending point of the adjustments in notional probabilities from the nonpartisan open primary as the starting point for the next stage ensures that all voters have some reward for the voting efforts they make, even in the first stage. 
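To make the staged adjustment concrete, here is one possible sketch in Python; the step size and the specific update rule are our own illustrative choices, not a specification of the mechanism:

```python
import numpy as np

def open_primary_stage(p, Z, step=0.25):
    """Move the notional win probabilities part of the way in the direction
    of the summed vector votes, rather than all the way to a single winner.

    p:    current notional probabilities (nonnegative, summing to one)
    Z:    normalized ratings from this stage, one row per voter
    step: fraction of the full adjustment applied at this stage
    """
    direction = np.asarray(Z, dtype=float).sum(axis=0) / len(Z)
    p_new = np.clip(np.asarray(p, dtype=float) + step * direction, 0.0, None)
    return p_new / p_new.sum()   # this ending point starts the next stage

# Stage by stage, voters see the updated probabilities and can re-rate:
# p = np.full(5, 0.2)
# p = open_primary_stage(p, Z_stage1)
# p = open_primary_stage(p, Z_stage2)
```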

Having multiple stages also serves other purposes. There could easily be candidates in an initially crowded field that voters simply don’t know much about and don’t want to invest in learning about because it seems those candidates have no chance. A nonpartisan open primary helps voters and journalists know which candidates are worth learning more about.

(Also, one practical issue with the early “primaries” is the large number of candidates a voter might be asked to rate. One way to handle this is to include an option for casting a straight positive or straight negative vote that effectively fills in 0’s and 100’s for all the candidates accordingly.) 

A Smoothed-Instant-Runoff Version of NGA for Multicandidate Elections

The NGA perspective from which we are looking at things suggests another, more technical way to reduce the incentive for strategic misreporting: using exactly the same kind of survey to elicit expected-utility ratings, but modifying the mechanism so that it automatically deemphasizes the ratings of candidates who are on their way out. This involves (a) demeaning using a weighted average that gives a low weight to candidates with a currently low notional probability of winning, (b) slowing down (without stopping) the adjustment of notional probabilities that are already low, and (c) steering vector votes toward focusing on candidates that still have a relatively high notional probability. A parameter determines whether these three adjustments kick in only when a candidate’s notional probability is very low, or instead phase in more gradually. In the first case, the mechanism becomes a combination of the simplest implementation of NGA and the idea behind instant-runoff voting, where voters re-optimize once a candidate is eliminated. With less extreme values of the parameter, the spirit of instant-runoff voting is smoothed out. Regardless of that parameter, the basic NGA idea is preserved.
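As a rough sketch of modification (a), the probability-weighted demeaning might look like the following; the power-weighting scheme and the parameter alpha are our own illustration of the idea rather than the exact mechanism:

```python
import numpy as np

def weighted_demean(r, p, alpha=1.0):
    """Demean one voter's ratings using weights that downweight candidates
    with low notional win probability.

    r:     the voter's raw ratings
    p:     current notional win probabilities
    alpha: 0 recovers equal weights; larger values deemphasize
           low-probability candidates more sharply."""
    r = np.asarray(r, dtype=float)
    w = np.asarray(p, dtype=float) ** alpha
    w = w / w.sum()
    demeaned = r - np.dot(w, r)          # weighted mean instead of plain mean
    sd = np.sqrt(np.dot(w, demeaned ** 2))
    return demeaned if sd == 0 else demeaned / sd
```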

A downside of the smoothed-instant-runoff version of NGA for multicandidate elections is its complexity. It would still be fully verifiable, but those who do not fully understand it might be suspicious of it. Nevertheless, to the extent it makes one aspect of strategic voting happen automatically without strategic misreporting, it would put less sophisticated voters more on a par with the more sophisticated voters. 

Incentives for Politicians

A great deal of research is needed to fully understand incentives for politicians under an NGA or Smoothed-Instant-Runoff NGA multicandidate voting system with multiple stages. However, we are willing to make some conjectures. If people view certain important candidates of an opposing party as “the devil,” the strong negative ratings for those “diabolical” candidates would open up an opportunity for centrist candidates like John Kasich whom few voters see as “diabolical.” It could even open up space for new centrist parties. 

Undoubtedly there are other effects that are harder to foresee, but a system that allows people to express strong negative views about a candidate should help avoid many possible bad outcomes. And the NGA system still allows people to express strong positive views about a candidate if they so choose. 

NOTE: Please consider this post the equivalent of a very-early-stage working paper. We would love to get comments. And just as for any other early-stage working paper, we reserve the right to copy wholesale any of the text above into more final versions of the paper. Because it is also a blog post, feel free to cite and quote. We want to thank Becky Royer for outstanding research and editorial assistance.

The Political Perils of Not Using Deep Negative Rates When Called For

Link to Jon Hilsenrath’s Wall Street Journal special report, updated August 26, 2016, “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”

How well has what you have been doing been working for you?

People are quick to think that deep negative rates would carry substantial political costs for a central bank. But it is worth considering the political costs of not doing deep negative rates when the economic situation calls for them. Take as a case in point the failure of the Fed to do deep negative rates in 2009. Whatever the reason for that failure, one can see how the depth of the Great Recession and the slowness of the recovery damaged the Fed’s popularity.

In his Wall Street Journal special report “Years of Fed Missteps Fueled Disillusion With the Economy and Washington,” Jon Hilsenrath tells the story of the Fed’s decline in popularity, and presents the following graphic: 

Figure: “How Americans rate federal agencies” (share of respondents who said each agency was doing either a ‘good’ or ‘excellent’ job, for the eight agencies for which consistent numbers were available).

The Alternative

There is no question that the Fed’s failure to foresee the financial crisis and its role in the bailouts contributed to its decline in popularity. But consider the popularity of the Fed by 2014 in two alternative scenarios: 

Scenario 1: The actual path of history in which the economy was anemic, leading to a zero rate policy through the end of 2014.

Scenario 2: An alternate history in which a vigorous negative interest rate policy met a firestorm of protest in 2009, but in which the economy recovered quickly and was on a strong footing by early 2010, allowing rates to rise back to 1% by the end of 2010 and to 2% in 2011.   

In Scenario 2, the deep negative rates in 2009 would have seemed like old news even by the time of the presidential election in 2012, let alone in 2014. In the actual history, Scenario 1, low rates are still an issue during the 2016 presidential campaign, because the recovery has been so slow. 

It Looks Good to Get the Job Done

At the end of my paper “Negative Interest Rate Policy as Conventional Monetary Policy” (ungated pdf download), published in the National Institute Economic Review, I discuss the politics of deep negative interest rates, not just for the United States, but also for other currency regions that needed them. My eighth and final point there is this:

Finally, the benefits of economic stabilisation should be emphasised. The Great Recession was no picnic. Deep negative interest rates throughout 2009 – somewhere in the –4 per cent to –7 per cent range – could have brought robust recovery by early to mid 2010. The output gaps the world suffered in later years were all part of the cost of the zero lower bound. These output gaps not only had large direct costs, they also distracted policymakers from attending to other important issues. For example, the later part of the Great Recession that could have been avoided by negative interest rate policy led to a relatively sterile debate in Europe between fiscal stimulus and austerity, with supply-side reform getting relatively little attention. And the later part of the Great Recession that could have been avoided by negative interest rate policy brought down many governments for whom the political benefits of negative interest rate policy would have been immense. And for central banks, it looks good to get the job done.

Nate Cohn: How One 19-Year-Old Illinois Man is Distorting National Polling Averages

The link above is to a well-done New York Times article analyzing the results highlighted on the website of the USC Daybreak Poll, which has made things look much more favorable for Donald Trump than other polls do.

Let me emphasize that the underlying data for the Daybreak poll are extremely valuable. Having a panel makes it possible to answer many questions that cannot be answered well with a repeated cross-section. The problem is with the calculation for the highlighted comparison between Donald Trump and Hillary Clinton support. 

The most important problem with the graph highlighted on the Daybreak Poll website is the weighting by the candidate a respondent claimed to have voted for in the last election. Nate Cohn explains the biases this introduces: people underreport voting for the loser, so forcing the weights to match self-reported past votes to the actual vote shares shifts too much weight toward the sort of people who will admit to having voted for the loser. Many self-reported “Obama” or minor-candidate voters were really Romney voters, and the people who admit to voting for Romney are more Republican than the full set of people who actually voted for Romney. So inflating the weights of people who reported voting for the loser Romney up to the fraction who actually voted for Romney makes things look more favorable for Trump than they should.
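A toy calculation (with hypothetical numbers) illustrates the mechanism:

```python
# Toy illustration of the recalled-vote weighting bias (numbers hypothetical).
actual_romney_share = 0.47    # suppose 47% of 2012 voters chose Romney
admitted_romney_share = 0.40  # but only 40% of respondents say they did

# Weighting admitted Romney voters up to the actual share inflates them:
weight_on_admitted_romney = actual_romney_share / admitted_romney_share
print(weight_on_admitted_romney)   # 1.175: each gets about 18% extra weight

# Since those who admit voting for the loser are more Republican than the
# full set of people who actually did, this shifts the weighted sample
# toward Trump relative to the true electorate.
```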

To me, the main way data on voting in the last election should be used is to correct, for each demographic group, the difference between the percent chance people said they would vote and whether they actually voted. It is not clear that this needs to use self-reported voting after the fact at all: exit polls should provide good evidence on actual voting percentages by demographic group, which can be compared to the probabilities people gave in advance within each group. This kind of data collection was done in 2012 on RAND’s American Life Panel.
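A sketch of that correction, again with hypothetical group-level numbers:

```python
# Sketch of a turnout correction by demographic group (numbers hypothetical).
# For each group, compare the average pre-election stated probability of
# voting with the actual turnout rate (e.g., from exit-poll or panel data).
stated_prob = {"18-29": 0.70, "30-64": 0.80, "65+": 0.85}
actual_turnout = {"18-29": 0.45, "30-64": 0.70, "65+": 0.72}

correction = {g: actual_turnout[g] / stated_prob[g] for g in stated_prob}
print(correction)  # multiply each respondent's weight by their group's factor
```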

Quartz #67—>Nationalists vs. Cosmopolitans: Social Scientists Need to Learn from Their Brexit Blunder

Here is the full text of my 67th Quartz column, “Social scientists need to learn from their Brexit blunder, so we can learn from them,” now brought home to supplysideliberal.com. It was first published on June 29, 2016. Links to all my other columns can be found here.

I give my reasoning behind the first sentence of this column in my July 10, 2016 sermon “Us and Them.”

If you want to mirror the content of this post on another site, that is possible for a limited time if you read the legal notice at this link and include both a link to the original Quartz column and the following copyright notice:

© June 29, 2016: Miles Kimball, as first published on Quartz. Used by permission according to a temporary nonexclusive license expiring June 30, 2020. All rights reserved.


The worst thing about Brexit is a key reason Brexit gained so much support: opposition to immigration. Advocates for the UK leaving the European Union were not shy about pointing to opposition to immigration as a key to their success. Nigel Farage captured some of that spirit by declaring “This is a victory for ordinary people, for good people, for decent people.”

The rise in inequality and serious monetary policy mistakes—including the eurozone’s requiring many disparate economies to share monetary policy with Germany—may have set the stage for rebellion against the status quo. But Donald Trump’s “I love to see people take their country back” expresses the nationalism behind the direction of rebellion implicit in Brexit.

One of the most revealing pieces of data on the Brexit vote is Eric Kaufmann’s analysis of Brexit support among the over 24,000 survey respondents in the British Election Study. Support for Brexit was much higher among those who supported capital punishment, and support for the EU was much lower among respondents who supported the public whipping of sex offenders. That is, “hardliners” were much more likely to support Brexit.

I have written the above as if we know what happened with Brexit. And although I think I have the general drift of things right, one of the big messages of Brexit in the UK and of the rise of Donald Trump in the US is that social scientists need to up their game dramatically in understanding what people want and how they think. For some time, social scientists have made a special effort to understand ethnic and sexual minorities. But given how different hardliners are from the people many academic social scientists usually hang out with, and how many hardliners there are, social scientists need to spend a lot more time studying this group. (Though marred by condescension toward “conservatives,” George Lakoff’s book Moral Politics is an excellent place for academics to start in an effort to understand the hardliner worldview.)

In order to give non-pejorative labels to both sides, let me call those who, like me, favor more open immigration “Cosmopolitans” and those who favor more restrictive immigration (and other policies in the same spirit) “Nationalists.” As a Cosmopolitan, what I most want to know from social science is what interventions can help make people more accepting of foreigners. Somewhat controversially, it is now common in the US for elementary school teachers to make efforts to instill pro-environmental attitudes in schoolchildren. Whether or not those efforts make a difference to children’s attitudes, are there interventions or lessons that can make schoolchildren and the adults they grow up to be likely to feel more positive about the foreign-born in their midst? For example, having had a very good experience learning foreign languages on my commute by listening to Pimsleur CDs in my car, I wonder whether dramatically more effective Spanish language instruction for school children following those principles of audio- and recall-based learning with repetition at carefully graded intervals might make a difference in attitudes toward Hispanic culture and toward Hispanics themselves in the US.

Although it is the province of social scientists to test interventions intended to improve attitudes toward the foreign-born, many of the best interventions will be created by writers, artists, script-writers, directors, and others in the humanities. There are also many other marginalized groups in society, but the strength of anti-foreigner attitudes suggests the need for imaginative entertainment and cultural events to help people identify with human beings who were born in other countries.

It is obvious to anyone except those with their heads in the sand that Brexit in the UK and the rise of Donald Trump in the US are a wake-up call to the relatively Cosmopolitan elites who have been running those countries. But that doesn’t mean the Cosmopolitan faction among the elites must surrender to the Nationalists. Cosmopolitan elites are powerful, and shouldn’t go down without a fight.

What is clear is that the strategy of shaming Nationalists and ethnocentrists who say negative things about other groups has its limitations. My grandmother used to quote Dale Carnegie’s now politically incorrect couplet:

A man convinced against his will,

Is of the same opinion still.

Shaming may work to a point, but what is needed now is genuine persuasion about the humanity that we all share, regardless of where on earth we are born.

In addition to such gentle efforts to help people become more accepting of the foreign-born, there is also, in the US, the possibility of an immigrant-voter “nuclear option” for cementing a Cosmopolitan victory, one that works only if Donald Trump goes down in flames and takes the Republican Senate and House majorities down with him. In that situation the Democrats (perhaps with the help of the filibuster-busting “nuclear option”) could force through a true “amnesty” bill for illegal immigrants, including full naturalization. This would bring millions of additional immigrants onto the voting rolls, the latest in many historical expansions of the franchise.

Back in 1996, historians William Strauss and Neil Howe predicted in The Fourth Turning that the first two decades of the 21st century would bring a political crisis when the senescence of earlier generations finally deprived polarized Baby Boomers of effective adult guidance. Whatever one’s judgment about the overall merits of the Strauss-Howe generational theory, this particular prediction has come true. In such a crisis, it really matters how things get resolved. History is written, by and large, by the victors, so whichever side comes out on top—Nationalists or Cosmopolitans—will look good in the history books.


The Federal Reserve System's Dysfunctional Governance in 1934

Currie described the situation in a 1934 memo to Eccles: “Decentralized control is almost a contradiction in terms. The more decentralization the less possibility there is of control.” The problem was that “[e]ven though the Federal Reserve Act provided for a very limited degree of centralized control, the system itself by virtue of necessity was forced to develop a more centralized control of open market operations.” The ad hoc institutional development consisted of “fourteen bodies composed of 128 men who either initiate policy or share in varying degrees in the responsibility for policy.” (The fourteen were the twelve Federal Reserve Banks, the Federal Reserve Board, and the once powerful Federal Advisory Council, a group of bankers that advised the Federal Reserve Board.) These various bodies, and their governors and boards, made governance and public accountability a virtual impossibility. Currie glumly concluded that “[s]uch a system of checks and balances is calculated to encourage irresponsibility, conflict, friction, and political maneuvering” such that “anybody who secures a predominating influence must concentrate on handling men rather than thinking about policies.”
— Peter Conti-Brown, The Power and Independence of the Federal Reserve

Selfishness and the Fall of Rome

Link to Adrian Goldsworthy’s How Rome Fell: The Death of a Superpower

pp. 418, 419: 

It is only human nature to lose sight of the wider issues and focus on immediate concerns and personal aims. In the Late Roman Empire this was so often all about personal survival and advancement–the latter bringing wealth and influence, which helped to increase security in some ways, but also rendered the individual more prominent and thus a greater target to others. Some officials enjoyed highly successful careers through engineering the destruction of colleagues. Performing a job well was only ever a secondary concern. Even emperors were more likely to reward loyalty over talent. Officials and commanders needed only to avoid making a spectacular mess of their job–and even then enough influence could conceal the facts or pass the blame onto someone else. None of this was entirely new, but it became endemic. When ‘everyone’ acted in the same way there was no real encouragement to honesty or even competence. The game was about personal success and this often had little connection to the wider needs of the empire. 

It was not a phenomenon unique to the Late Roman Empire, nor are its implications only of significance to the United States or indeed any other country. All human institutions, from countries to businesses, risk creating a similarly short-sighted and selfish culture. It is easier to avoid in the early stages of expansion and growth. Then the sense of purpose is likely to be clearer, and the difficulties or competition involved have a more direct and obvious impact. Success produces growth and, in time, creates institutions so large that they are cushioned from mistakes and inefficiency. The united Roman Empire never faced a competitor capable of destroying it. These days, countries and government departments do not easily collapse–and Western states do not face enemies likely to overthrow them by military force. In the business world the very largest corporations almost never face competitors that are truly their equal. Competition within the commercial market at any level is obviously rarely carried out on entirely equal terms. 

In most cases it takes a long time for serious problems or errors to be exposed. It is usually even harder to judge accurately the real competence of individuals and, in particular, their contribution to the overall purpose. Those in charge of overseeing a country’s economy generally reap the praise or criticism for decisions made by their predecessors in office. 

In Praise of the 9th Amendment

“The enumeration in the Constitution, of certain rights, shall not be construed to deny or disparage others retained by the people.”

Link to the Wikipedia article on the 9th Amendment to the Constitution of the United States

It is not treated this way now, but the 9th Amendment to the Constitution should be one of the greatest defenses of liberty that we have. Here is a very brief description of an interpretation that I find attractive:

A libertarian originalist, Randy Barnett has argued that the Ninth Amendment requires what he calls a presumption of liberty. Barnett also argues that the Ninth Amendment prevents the government from invalidating a ruling by either a jury or lower court through strict interpretation of the Bill of Rights. According to Barnett, “The purpose of the Ninth Amendment was to ensure that all individual natural rights had the same stature and force after some of them were enumerated as they had before.”

Randy Barnett’s key argument is that the many voters in the thirteen states who ratified the US Constitution would have understood the 9th amendment to mean that there was a personal sphere of liberty that encompassed a great deal. For example, based on the original public meaning of the 9th amendment, it wouldn’t take “penumbras” and “emanations” to see a guarantee of privacy rights in the US Constitution.  

Randy Barnett also argues that the Privileges or Immunities Clause of the 14th Amendment (“No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States”), which the Supreme Court gutted in the Slaughterhouse Cases, was meant among other things to extend this presumption of liberty to actions of the states. The Supreme Court has partially restored the meaning of the Privileges or Immunities Clause by its interpretation of the Due Process Clause of the 14th Amendment, but without as broad a scope of liberty. In particular, many aspects of economic liberty are no longer recognized as protected by the US Constitution.