Mustafa Akyol—The Illogic of Globalization as a Scapegoat Everywhere: Who is Taking Advantage of Whom?

What is ironic in the world today is that conspiracy theorists in different societies are obsessed with the same scapegoat — globalization — but interpret it as a conspiracy only against their side. … In fact, there is a global conspiracy against neither Islam nor the West. Globalization has just forced different societies to interact more than ever — and many people are scared by what they see on the other side. Populists all over the world began taking advantage of those fears, telling us that we should be even more fearful still.
— Mustafa Akyol, “The Plot Against America or the Plot by America?” October 28, 2016 New York Times 

Election Day Special, 2016

On this US Election Day, 2016, I will be flying from Israel, where I gave two talks at the Bank of Israel, to Brussels, where I am a keynote speaker at the annual ECMI conference to be held at the National Bank of Belgium. But I voted by mail before I left on my Fall 2016 tour of European central banks. I hope all of my readers who are US citizens have plans to vote. David Leonhardt, in the November 1, 2016 New York Times, wrote this:

Voting plans increase voter turnout. In an experiment by David Nickerson and Todd Rogers, involving tens of thousands of phone calls, some people received a vague encouragement to vote. They were no more likely to vote than people who received no call. Other people received calls asking questions about their logistical plans — and became significantly more likely to vote. The questions nudged them.

Second, tell other people about your plan, and ask about theirs. The power of peer pressure increases voter turnout. One aggressive experiment mailed people a sheet of paper with their own turnout history and their neighbors’. A more gentle experiment presented Facebook users with head shots of their friends who had posted an update about having voted. Both increased turnout, as have many other experiments.

You don’t need an intricate effort to influence people, though. Post your own voting plan to Facebook, and ask your friends to reply with theirs. Text or call relatives in swing states and ask about their voting plans. Do the same when you see friends.

And here is Adam Grant in the October 1, 2016 New York Times:

If we want people to vote, we need to make it a larger part of their self-image. In a pair of experiments, psychologists reframed voting decisions by appealing to people’s identities. Instead of asking them to vote, they asked people to be a voter. That subtle linguistic change increased turnout in California elections by 17 percent, and in New Jersey by 14 percent.

The American electorate overall has a great deal of wisdom, but is not able to fully express that wisdom with our current voting system. On that, take a look at last week’s post “Dan Benjamin, Ori Heffetz and Miles Kimball—Repairing Democracy: We Can’t All Get What We Want, But Can We Avoid Getting What Most of Us *Really* Don’t Want?”

October and even November surprises keep coming in for both Donald Trump and Hillary Clinton. One I found interesting was the details David Barstow, Mike McIntire, Patricia Cohen, Susanne Craig and Russ Buettner reported on Donald Trump’s tax avoidance approach in the October 31, 2016 New York Times. Essentially, evidence indicates Donald Trump was taking a large deduction on his taxes for his investors’ losses from investing in his projects. The way he did that was by overvaluing partnership equity in the failed projects and purporting to reimburse his investors for their losses by giving them that overvalued partnership equity. What I wasn’t totally clear about is whether these investors succeeded in deducting the very same losses from their own taxes, using a lower value of the partnership equity they received that was inconsistent with the value Donald Trump used. That is, did Donald Trump take his investors’ loss deductions away from them, or did he and his investors both successfully claim the same losses?

Finally, let me mention that a key issue in this election is the principle of the equality of all human beings, an issue I discussed in “Us and Them” and this past Sunday in “John Locke on the Equality of Humans.”

How Negative Rates are Making the Swiss Want to Pay Their Taxes Earlier

Link to Ralph Atkins’s October 26, 2016 Financial Times article “Switzerland enjoys negative interest rates windfall: Taxpayers settle bills early and bond investors pay to lend money to government”

In “Swiss Pioneers! The Swiss as the Vanguard for Negative Interest Rates” I wrote:

there is no question that negative interest rates will require many detailed adjustments in how banks and other financial firms conduct their business. Like it or not, Swiss banks and the rest of the Swiss financial industry may be forced to lead the way in figuring out these adjustments, just as the Swiss National Bank is leading the way in figuring out how to conduct negative interest rate policy. The Swiss are eminently qualified for that pioneering role. The rest of the world would be well-advised to watch closely.

Some of the adjustments that need to be made in a negative rate environment are to the tax system. Recently, Swiss cantonal governments and the Swiss federal government have realized they can reduce the incentives they traditionally offer for early tax payments, since negative interest rates on the alternatives already give taxpayers and firms a reason to pay taxes early. Here are the two passages I found most interesting for the details reported:

Although Swiss retail banks have largely shielded ordinary bank customers from negative interest rates, companies face penalties for holding large amounts of cash. That has increased the appeal of incentives traditionally offered by Swiss cantons as well as the federal government for early tax payments.

Companies entitled to tax rebates had also waited to reclaim funds from the state, the finance ministry in Bern said. …

The federal government is not only enjoying a boost to its finances [from negative interest rates on its bonds up to a 20-year maturity]. It does not have to worry about paying charges on cash accounts either: it is specifically excluded from the negative interest rates imposed by the SNB, which acts as its banker.

All of these issues were quite predictable, but it is fascinating to see them actually playing out. This example is important because it indicates that some of the steps necessary to eliminate the zero lower bound that are not within the authority of central banks might be handled in a reactive way by other arms of government. 

Thanks to Ruchir Agarwal for pointing me to this article.

John Locke on the Equality of Humans

There are many dimensions of the principle of equality among humans. The most difficult is expressed by the Biblical command “Love your neighbor as yourself” (Leviticus 19:18; Matthew 22:39). The Martin Buber quotation above points to some of the bias toward self that would have to be overcome to actually obey this command.

A much more modest demand is to treat equally the interests and concerns of two humans who come before you for judgment. This idea provides part of the context for the Levitical command “Love your neighbor as yourself.” Three verses earlier, Leviticus reads:

You shall do no injustice in court. You shall not be partial to the poor or defer to the great, but in righteousness shall you judge your neighbor. (Leviticus 19:15)

Of course, a key question, as I put it in “Us and Them,” is “whose well-being counts: who is in the charmed circle of people whose lives we are concerned about and who is not.” Jesus was asked almost exactly this question by a student of the Law of Moses who knew his Leviticus well:

And, behold, a certain lawyer stood up, and tempted him, saying, Master, what shall I do to inherit eternal life? He said unto him, What is written in the law? how readest thou? And he answering said, Thou shalt love the Lord thy God with all thy heart, and with all thy soul, and with all thy strength, and with all thy mind; and thy neighbour as thyself. And he said unto him, Thou hast answered right: this do, and thou shalt live. But he, willing to justify himself, said unto Jesus, And who is my neighbour? (Luke 10:25-29)

Jesus answered by telling the story of the Good Samaritan—a despised outsider who was kinder to a man beaten by thieves than were members of the man’s own ethnic group. The rhetorical force of the Good Samaritan story, as I see it, is that someone who is kind to all human beings seems nobler than someone so ready to draw a line between the people who count and those who don’t that, a few lines later, the charmed circle of those who count has contracted perilously close to a circle enclosing a single ego.

Not just equal concern for the welfare of those in outgroups, but any significant concern for those in outgroups is very much still at issue in the modern world. The second figure above doesn’t plumb the full depth of unconcern for outgroups that we are still wrestling with. Take as given for the sake of argument that the welfare of citizens ought to count in political decision-making more than the welfare of non-citizens. It then makes a huge difference whether the welfare of non-citizens counts zero or counts at a fraction–say one-hundredth as much as the welfare of citizens. Why? Because as a practical matter there are many policies that raise the welfare of citizens a tiny bit, or seem to, but without doubt grievously hurt the welfare of non-citizens.

The dimension of equality most directly relevant for political philosophy is the one pointed to by Thomas Jefferson in the third figure above. There is not some person or group of people who have the inherent right to be rulers. Although this is the type of equality that John Locke most needs to make his argument in his Second Treatise of Government, “Of Civil Government,” he begins with the stronger “Love your neighbor as yourself” version of equality, pointing in section 5 to the theology of Richard Hooker:

This equality of men by nature, the judicious Hooker looks upon as so evident in itself, and beyond all question, that he makes it the foundation of that obligation to mutual love amongst men, on which he builds the duties they owe one another, and from whence he derives the great maxims of justice and charity.

John Locke then quotes Richard Hooker as follows:       

The like natural inducement hath brought men to know that it is no less their duty, to love others than themselves; for seeing those things which are equal, must needs all have one measure; if I cannot but wish to receive good, even as much at every man’s hands, as any man can wish unto his own soul, how should I look to have any part of my desire herein satisfied, unless myself be careful to satisfy the like desire, which is undoubtedly in other men, being of one and the same nature? To have any thing offered them repugnant to this desire, must needs in all respects grieve them as much as me; so that if I do harm, I must look to suffer, there being no reason that others should shew greater measure of love to me, than they have by me shewed unto them: my desire therefore to be loved of my equals in nature, as much as possible may be, imposeth upon me a natural duty of bearing to them-ward fully the like affection; from which relation of equality between ourselves and them that are as ourselves, what several rules and canons natural reason hath drawn, for direction of life, no man is ignorant.  Eccl. Pol. Lib. i.

How is it that we human beings have the concept of human equality at all? I don’t know. But I have the sense that the way we see other human beings when we look at a crowd of strangers we know nothing about, who are all of a relatively homogeneous social group, has a lot to do with it. Because we have “theory of mind”–including a model in our own heads of how things appear to other people–we know that each of us, too, could seem like just a face in a crowd. That picture of just another face in a crowd is a starting point for conceiving of human equality. 

Prominent Exoplanet Researcher Found Guilty of Sexual Harassment

Unacceptable behavior by Geoff Marcy. Being a good scientist doesn’t give anyone a pass to make other people’s lives miserable in this way. 

On sexual harassment in science, also see Hope Jahren’s New York Times article “She Wanted to Do Her Research. He Wanted to Talk ‘Feelings.’” And John Johnson, in “Fed Up With Sexual Harassment: Defining the Problem” on the Women in Astronomy blog writes this:

If you are a man and struggle to see why an unwelcome sexual advance can be so disturbing, take my friend’s suggestion and ignore the gender mismatch. Instead of imagining a senior woman touching you, imagine a large, muscular man gazing seductively into your eye while touching your knee just before colloquium. How well would you remember the talk? What would be on your mind following the talk? Who would you talk to about the incident, especially if the man who has a crush on you has control over your career?

Division of Labor in Track-and-Hook Songwriting

In country music, the melody-and-lyrics method is still the standard method of writing songs. (Nashville is in some respects the Brill Building’s spiritual home.) But in mainstream pop and R&B songwriting, track-and-hook has taken over, for several reasons. For one thing, track-and-hook is more conducive to factory-style song production. Producers can create batches of tracks all at one time, and then e-mail the MP3s around to different topliners. It is common practice for a producer to send the same track to multiple topliners—in extreme cases, as many as fifty—and choose the best melody from among the submissions. Track-and-hook also allows for specialization, which makes songwriting more of an assembly-line process. Different parts of the song can be farmed out to different specialists—verse writers, hook smiths, bridge makers, lyricists—which is another precedent established by Cheiron. It’s more like writing a TV show than writing a song. A single melody is often the work of multiple writers, who add on bits as the song develops. …In a track-and-hook song, the hook comes as soon as possible. Then the song ‘vamps’—progresses in three- or four-chord patterns with little or no variation. Because it is repetitive, the vamp requires more hooks: intro, verse, pre-chorus, chorus, and outro hooks. ‘It’s not enough to have one hook anymore,’ Jay Brown explains. ‘You’ve got to have a hook in the intro, a hook in the pre, a hook in the chorus, and a hook in the bridge, too.’ The reason, he went on, is that ‘people on average give a song seven seconds on the radio before they change the channel, and you got to hook them.’
— John Seabrook, The Song Machine: Inside the Hit Factory

Dan Benjamin, Ori Heffetz and Miles Kimball—Repairing Democracy: We Can’t All Get What We Want, But Can We Avoid Getting What Most of Us *Really* Don’t Want?

The 2016 US presidential election is noteworthy for the low approval ratings of both major party candidates. For example, as of November 2, 2016, poll averages on RealClearPolitics show 53.6% of respondents rating Hillary Clinton unfavorably, while only 43.9% of respondents rate her favorably; 58.9% of respondents rate Donald Trump unfavorably, while only 38.1% of respondents rate him favorably. Leaving aside those who vote for a minor party or write-in candidate, there is no question that on election day, many voters will think of what they are doing as voting against one of these two candidates rather than voting for one of them.

Out of all the many candidates who campaigned in the primaries to be President of the United States, how did the electoral system choose two who are so widely despised as the candidates for the general election? The party system for choosing the candidates for the general election may bear some of the blame, especially in an era of high political polarization. But another important characteristic of the current US electoral system is that one can only make a positive vote for a candidate, not a negative vote. That is, in the current voting system, voters can only express one attitude towards a candidate—the belief that she or he would make the best president among the candidates. But should this be the only attitude that comes into play when picking the most powerful person in the free world? Shouldn’t our voting system give voters a chance to say which candidate they think would make the worst president before we deposit the U.S. nuclear codes in a new president’s hands? And more generally, shouldn’t our voting system take into account how much voters like or dislike the candidates?

Our work on collective decision-making mechanisms for incorporating subjective well-being data into policy-making led us to stumble on a class of voting systems for multicandidate elections that we think might help in avoiding outcomes that a large share of people hate. For us, this research program began with “Aggregating Local Preferences to Guide Marginal Policy Adjustments” (pdf download) by Dan Benjamin, Ori Heffetz, Miles Kimball and Nichole Szembrot in the 2013 AEA Papers and Proceedings. More recently, “The Relationship Between the Normalized Gradient Addition Mechanism and Quadratic Voting” by Dan Benjamin, Ori Heffetz, Miles Kimball and Derek Lougee (on which Becky Royer worked as an extremely able research assistant) draws some connections between what we have come to call the “Normalized Gradient Addition (NGA) mechanism” and a broader literature. (Here is a link to a video of my presentation on that paper.)

Figure 1: Voting Diagram for Three Candidates

To better understand the NGA mechanism as applied to multicandidate voting, consider the simple case in which there are three candidates – Tom, Dick, and Jerry – as shown in Figure 1 above. In this case of multicandidate voting, we represent how close each candidate is to winning by a point in a triangle. The three vertices represent victory for one particular candidate, while the edges opposite a vertex represent that candidate being eliminated. The distance from each edge can be thought of as a kind of “notional probability” that a particular candidate would win if the selection process were somehow cut short and terminated in the middle of the action. Thus, the points in the interior of the triangle represent an unresolved situation in which each candidate is still treated as having a chance. Voters can choose vectors of a fixed unit length in any direction within the triangle. The current position in the triangle then gradually evolves in a direction determined by adding up all of these vector votes. 

To illustrate, in the picture on the left of Figure 1, there is a blue arrow pointing from the starting point upwards towards Dick. This is the only movement that our current voting system allows for: a positive vote for one candidate. But there is also the red arrow, pointing in the opposite direction. This corresponds to a “negative” vote, in which the voter’s only goal is to vote against Dick. Not only would our mechanism allow for both these positive and negative votes, but it would allow voters to cast even more complex votes based on their specific preferences for each of the candidates, as indicated by all of the arrows in the picture on the right. This example can be extended to higher dimensions, in which there are more than three candidates. For example, the policy space would be modeled as a tetrahedron for four candidates, or a higher-dimensional simplex for five or more candidates, with a vertex for each candidate.

Figure 2: Summing the Votes and Adjusting the Position in the Triangle

From these preference vectors, we can then add up the vectors across people to determine the direction in which the position in the triangle evolves. Figure 2 above depicts an example of a simple two-voter system. In this example, person 1’s vector points most closely towards Jerry, while person 2’s vector points most closely towards Dick. After summing these two vectors, a small number times the resulting vector is added to the previous point in this triangle to get a new point. If that new point is outside the triangle, then the closest point on the boundary of the triangle is the new position instead. This procedure is then repeated until either a vertex is reached (decisive victory for one candidate) or all motion grinds to a halt because the votes exactly counterbalance one another.
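For readers who want to see the mechanics, here is a minimal sketch of one update round in Python. The function names, the step size, and the sort-based simplex projection are our illustrative choices, not part of any official NGA specification:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1} -- the 'closest point on the boundary'
    rule for when a step leaves the triangle."""
    n = len(v)
    u = np.sort(v)[::-1]                 # components in decreasing order
    css = np.cumsum(u)
    rho = np.max(np.nonzero(u * np.arange(1, n + 1) > (css - 1))[0])
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def nga_step(position, vector_votes, step_size=0.01):
    """One round: sum the voters' vector votes, move a small distance
    in that direction, then project back into the simplex if needed."""
    direction = np.sum(vector_votes, axis=0)
    return project_to_simplex(position + step_size * direction)

# Example: two zero-sum vector votes that, on net, favor candidate 0.
position = np.array([1/3, 1/3, 1/3])     # start with all candidates even
votes = np.array([[0.8, -0.4, -0.4],
                  [0.6, -0.6, 0.0]])
for _ in range(500):
    position = nga_step(position, votes)  # drifts toward the vertex [1, 0, 0]
```

Because each vector vote has components summing to zero, every step stays on the plane where the notional probabilities sum to one; the projection only has to handle steps that would push a candidate’s share below zero.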

It is important to note that we would not need or expect all voters to understand this triangular representation of the voting mechanism. Our focus is on designing a survey that lets individuals easily provide the information needed to calculate the direction a particular voter would most like to go, without them having to know this representation of their vote explicitly.  

The voting process is a matter of giving a rating to each candidate on a scale from 0 to 100, where 0 is the rating for the least favored candidate and 100 is the rating for the most favored candidate. Giving a rating to each candidate allows a voter the options of:

  • a straight “positive” vote, by rating the most favored candidate 100 and all other candidates 0,

  • a straight “negative” vote, by rating the least favored candidate 0 and all other candidates 100,

  • anything in between a straight positive and a straight negative vote, by rating the least favored candidate 0, the most favored candidate 100 and other candidates in between.

Data Collection

In order to illustrate the process of having voters rate candidates, and to investigate what type of votes people wanted to cast, we collected data in the University of Southern California’s Understanding America Study, March 18–21, 2016, on preferences over the last five major-party candidates standing at the time (Hillary Clinton, Ted Cruz, John Kasich, Bernie Sanders, and Donald Trump).

We asked participants who they believed would make the best President of the United States out of the five candidates, and then asked them who would make the worst. We set their “best” candidate at a rating of 100 and their “worst” candidate at a rating of 0. We had two different approaches for having each individual rate candidates after this point. 

In our first approach, we simply asked participants to “rate the other candidates using a special scale, where [worst candidate] is a 0 and [best candidate] is a 100”, with no other instructions. Let’s refer to this approach as “unstructured ratings.”

In our second approach, we sought to elicit participants’ expected utilities for each candidate. That is, we wanted to identify how much each participant would value having each candidate as president compared to the other candidates. To do so, we explained that choosing a rating X on the scale indicates that the participant feels indifferent between the following two situations: (1) knowing for sure that the candidate they are rating will be president, and (2) waking up on election day with their favorite candidate having an X% chance of winning and their most disliked candidate having a (100−X)% chance of winning. Figure 3 is a screenshot of the directions each participant received in this approach, including two examples for clarity, in which the voter had chosen Donald Trump as the “worst” candidate and Hillary Clinton as the “best” candidate.

Figure 3: Instructions for Expected-Utility Ratings
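In utility terms, the indifference condition behind a rating of X can be written out as follows (our own sketch of the logic implied by the instructions, with U denoting the respondent’s utility):

```latex
U(\text{candidate}) \;=\; \frac{X}{100}\,U(\text{best}) \;+\; \left(1-\frac{X}{100}\right)U(\text{worst}).
```

On a scale normalized so that U(worst) = 0 and U(best) = 1, this reduces to U(candidate) = X/100: the rating is the respondent’s expected utility for that candidate, expressed in percentage points.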

A priori we favor the expected-utility ratings over the unstructured ratings, but we will report results using the unstructured ratings for those who don’t share that view and to show that it matters what instructions were given regarding how to use the scale.  

Converting the Ratings Into Votes

In the simplest, most straightforward implementation of the NGA mechanism, we construct each individual’s vector vote from their ratings as follows:

  • Calculate the individual’s mean rating across all five candidates and the standard deviation of the individual’s ratings.

  • For each candidate, starting with the individual’s rating of that candidate, subtract the individual’s mean and divide by the individual’s standard deviation.

This procedure normalizes an individual’s candidate ratings to have mean zero and variance one. That way, the vector vote of every individual is guaranteed to have the same length. Although there are other strategic voting issues we will return to below, the normalization prevents anyone from having more influence than other voters simply by giving all extreme ratings (all 0’s or 100’s). We refer to this restriction—equivalent to the vector in the triangle, tetrahedron or simplex representation having a maximum length of 1—as the “variance budget.” That is, each voter has a restricted amount of variance in their normalized vector, so in effect, voters cannot express a stronger opinion about one candidate without expressing less strong opinions about other candidates. Visually, this “budget” ensures that each voter’s preference vector is of the same length in Figures 1 and 2.

The normalized ratings having a mean of zero represents something even more basic: since only one candidate will win in the end, one cannot raise the chances of one candidate without lowering the chances of at least some other candidates.
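As a quick check of the “variance budget” logic, here is the two-step normalization in Python. The sample ratings are invented for illustration; the procedure assumes a voter does not rate all candidates identically (which would make the standard deviation zero):

```python
import numpy as np

def normalize_ratings(ratings):
    """Subtract the voter's mean rating and divide by the voter's
    (population) standard deviation, per the two-step procedure above.
    Assumes the ratings are not all identical."""
    r = np.asarray(ratings, dtype=float)
    return (r - r.mean()) / r.std()

# Two hypothetical voters with very different rating styles:
extreme = normalize_ratings([100, 0, 0, 0, 0])     # all 0's and 100's
nuanced = normalize_ratings([100, 80, 50, 20, 0])  # graded ratings

# Each normalized vote sums to zero, and both vectors have exactly the
# same Euclidean length, so extreme ratings buy no extra influence.
```

For five candidates that common length works out to √5, which is why the budget binds equally on every voter no matter how they spread their ratings across the 0–100 scale.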

To us, there is an intuitive attraction to focusing on normalized ratings, even apart from the NGA motivation that led us to that focus. So we will use the normalized ratings extensively in our empirical analysis of the data.

Analyzing the Data

Who Would Win? The first question to ask of the data is who would have won? First, let’s see who would have won in our sample using the current voting system. We assume that participants vote for the candidate that they chose as the “best” candidate. Tables 1 and 2 show these results, broken up by unstructured and expected utility ratings. We see that in both types of ratings, Hillary Clinton outperforms the other candidates. Note that at this stage in the survey, both types of ratings ask the same question (“who would make the best candidate”), so it is expected that the results would be similar.  

Table 1: Number of “best” candidate ratings using unstructured ratings

Table 2: Number of “best” candidate ratings using expected utility ratings

From these results, we see that Hillary Clinton would be the nominated Democrat in both rating types, and Donald Trump would be the nominated Republican in our sample. Of those two remaining candidates, our sample of participants would elect Hillary Clinton, with 459 participants who prefer her, over Donald Trump, with 325 participants who prefer him.

Now, let’s look at how these results would change if we used NGA as a multicandidate voting mechanism, as previously described. In the simplest, most straightforward implementation of NGA for a multicandidate election, the victor is the candidate with the greatest sum of normalized ratings across voters. (Note that it is possible to repeat the process of adding a small vector based on the same information. Typically, this will lead first to a side or edge—one candidate being eliminated—and then to a vertex—one candidate being victorious.)
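Concretely, this tally stage can be sketched as follows. The candidate names and the toy ballots are ours, purely for illustration of how a consensus candidate can win under NGA despite few first-place votes:

```python
import numpy as np

def nga_winner(names, ratings_matrix):
    """Winner = candidate with the greatest sum of normalized ratings
    across voters (the simplest implementation described above).
    Assumes no voter rates all candidates identically."""
    r = np.asarray(ratings_matrix, dtype=float)
    # Normalize each voter's row to mean zero, standard deviation one.
    z = (r - r.mean(axis=1, keepdims=True)) / r.std(axis=1, keepdims=True)
    totals = z.sum(axis=0)   # sum of normalized ratings per candidate
    return names[int(np.argmax(totals))]

# A toy electorate: two polarized voters and one moderate.
names = ["A", "B", "C"]
ballots = [[100, 60, 0],    # loves A, hates C
           [0, 60, 100],    # loves C, hates A
           [20, 100, 0]]    # prefers the consensus candidate B
```

In this toy example the two polarized votes against A and C largely cancel, so the broadly acceptable candidate B comes out ahead, even though B is the first choice of only one voter.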

As a prediction of what would happen in an actual multicandidate election using NGA, the results from our data need to be taken with a large grain of salt for at least three reasons. First, our survey was conducted months before November 8, when voters’ knowledge of the five candidates was still relatively limited—not to mention in an election cycle with lots of dramatic “October surprises.” Second, the total number of survey respondents is relatively small, and our survey respondents are not fully representative of the actual population of voters, though every effort was made to make the UAS survey as representative as possible of the adult US population overall. And third, our survey respondents knew that their answers to our survey would not determine who would become president, and so they were not subject to incentives for strategic misreporting that would arise in a real-world multicandidate election using NGA. But that makes the data even more interesting as an indication of which candidate would have been most acceptable to a wide range of voters. Here are averages of the normalized ratings for both the sample that was asked to give unstructured ratings and the sample that was asked to give expected-utility ratings:

Table 3: NGA Results Using Unstructured Ratings

Table 4: NGA Results Using Expected Utility Ratings

Thus, leaving aside any effects from strategic voting (and ignoring for the moment the timing of our survey and the non-representativeness of our sample), our data point to John Kasich as most likely to have won the election using NGA to resolve the multicandidate choice over all of these five candidates. While his mediocre performance under our current voting system suggests that he was not the favorite candidate of all that many voters, our respondents overall found him relatively acceptable.  

Bernie Sanders has the second-highest average rating, despite not performing very well in the primaries. Donald Trump has the lowest average rating by far, with Ted Cruz second-lowest using the unstructured ratings and Hillary Clinton second-lowest using the expected-utility ratings. The most interesting point to take away is that, by the expected-utility ratings, out of these five candidates, the current general election has come down to the two candidates with the lowest average ratings. (This is in line with the low approval ratings for both Donald Trump and Hillary Clinton.)

Expected-Utility Ratings vs. Unstructured Ratings. A striking difference between the expected utility ratings and the unstructured ratings is the greater prevalence of tendencies toward lower normalized ratings with the expected utility ratings.

One way to illustrate this difference is to look at scatterplots of the most extreme rating (in absolute value) in the normalized ratings vs. the second most extreme rating in the normalized ratings. In figures 4 and 5 below, we can see whether participants’ most extreme preferences were for a certain candidate (indicated by points with a positive x value) or against a certain candidate (indicated by points with a negative x value).  
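For concreteness, here is how the coordinates of each point in those scatterplots can be computed (a sketch; the sample ballot is invented, and the function name is ours):

```python
import numpy as np

def extreme_pair(ratings):
    """Return (most extreme, second most extreme) of a voter's
    normalized ratings, ordered by absolute value, keeping signs."""
    r = np.asarray(ratings, dtype=float)
    z = (r - r.mean()) / r.std()
    order = np.argsort(-np.abs(z))   # indices from most to least extreme
    return float(z[order[0]]), float(z[order[1]])

# A ballot dominated by dislike of one candidate plots at a negative x:
x, y = extreme_pair([100, 95, 95, 90, 0])
```

Here the lone near-zero rating produces a large negative normalized rating, so the voter’s most extreme preference is against a candidate and the point lands on the negative-x side of the plot.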

Figure 4: Most Extreme vs. Second Most Extreme Ratings Using Unstructured Ratings


Figure 5: Most Extreme vs. Second Most Extreme Ratings Using Expected Utility Ratings

Out of the expected-utility vector votes, 345 have the most extreme normalized rating negative, compared to 133 that have the most extreme normalized rating positive. By contrast, out of the unstructured vector votes, 211 have the most extreme normalized rating positive, compared to 120 that have the most extreme normalized rating negative. This trend suggests that participants emphasize their negative feelings toward candidates more in the expected utility ratings as compared to in the unstructured ratings.

This stark contrast between the expected-utility ratings and the unstructured ratings can also be seen in the notably different shapes of the two distributions. Skewness describes a respondent’s tendency to rate some candidates much higher than their average rating (skewness > 0) or much lower than their average rating (skewness < 0), relative to the standard deviation of 1. Intuitively, a set of ratings with positive skewness is somewhat closer to being a “positive” vote, while a set of ratings with negative skewness is somewhat closer to being a “negative” vote. Figure 6 shows that skewness tends to be more positive in the unstructured ratings than in the expected-utility ratings; Table 5 gives the corresponding summary statistics. This indicates that respondents are closer to casting “positive” votes in the unstructured ratings, while the expected-utility ratings tend to have a more negative skew and are thus closer to being “negative” votes. Table 5 emphasizes this point by showing that the average skew for the unstructured ratings is indeed positive, while the average skew for the expected-utility ratings is strongly negative.
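The skewness statistic used here can be computed directly from a voter’s normalized ratings (a sketch; the sample ballots are invented):

```python
import numpy as np

def skewness(ratings):
    """Skewness of one voter's ratings: the mean cubed deviation from
    the voter's mean, after scaling to a standard deviation of one."""
    r = np.asarray(ratings, dtype=float)
    z = (r - r.mean()) / r.std()
    return float(np.mean(z ** 3))

# One standout favorite pulls skewness positive (closer to a "positive" vote);
positive_style = skewness([100, 10, 5, 5, 0])
# one standout pariah pulls it negative (closer to a "negative" vote).
negative_style = skewness([100, 95, 95, 90, 0])
```

Because the normalized ratings already have mean zero and standard deviation one, the skewness is simply the average of their cubes.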

Figure 6: Skewness of Unstructured vs. Expected Utility Ratings

Table 5: Skewness of Ratings

Thus, by both this measure of skewness and by the extreme ratings plots, the expected-utility ratings look closer to being negative votes (votes against a candidate) while the unstructured ratings look closer to being positive votes (votes for a candidate).
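The skewness measure can be computed directly from a voter’s ratings. A short sketch (our own illustrative code with made-up ratings, not the exact computation behind Table 5):

```python
import numpy as np

def skewness(ratings):
    """Sample skewness of one voter's ratings after normalization: positive
    when a few candidates are rated far above the voter's average (closer to
    a 'positive' vote), negative when a few are rated far below it."""
    z = np.asarray(ratings, dtype=float)
    z = z - z.mean()
    sd = z.std()
    if sd == 0:
        return 0.0
    return np.mean((z / sd) ** 3)

print(skewness([50, 60, 55, 45, 5]) < 0)   # True: one strong dislike -> negative skew
print(skewness([50, 40, 45, 55, 95]) > 0)  # True: one strong favorite -> positive skew
```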

Why Are the Expected-Utility Ratings So Different from the Unstructured Ratings?

A solid answer to the question of why the expected-utility ratings are so different from the unstructured ratings (and the related question of whether our a priori preference for the expected-utility ratings is justified empirically) would require additional data in another multicandidate election. But we are able to provide one hypothesis. Because our data were collected in the heat of the primaries, our respondents may have wanted to use the ratings to express their opinions about those primary battles, using a substantial portion of the 0 to 100 scale to express those opinions, and consequently squeezing down the amount of the scale left to express their opinions about the candidates in the party they favored less. The structure of expected-utility ratings would have pushed back against this tendency, asking the respondents, in effect, “Are you really willing to accept a substantial chance of your least favorite candidate winning in order to get your favorite candidate instead of your second- or third-choice?”

To see if this hypothesis is at all consistent with the data, consider the variance among an individual’s two or three ratings within the party of that individual’s favorite candidate. Tables 6 and 7 show that the within-party, within-voter variance is substantially greater for the unstructured ratings than for the expected utility ratings. This lends some support to the idea that those answering the unstructured ratings were more focused on the primaries, overstating their dislike for the “other” candidate(s) in the party, whereas in the expected utility ratings, participants were more likely to think about the general election and save more of the unit variance in normalized ratings for candidates in the other party.

Table 6: Among those whose top candidate was a Democrat, what was the average variance between Clinton and Sanders ratings?

Table 7: Among those whose top candidate was a Republican, what was the average variance between Cruz, Kasich, and Trump ratings?
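The within-party, within-voter variance behind Tables 6 and 7 can be sketched as follows (the candidate ordering, ratings, and function name here are our own hypothetical illustration):

```python
import numpy as np

def within_party_variance(voter_ratings, party_candidates):
    """Variance of one voter's normalized ratings across the candidates in
    the party of that voter's favorite. Higher variance suggests the voter
    is spending more of the unit-variance budget on the primary battle.
    Assumes the voter does not rate all candidates identically."""
    z = np.asarray(voter_ratings, dtype=float)
    z = (z - z.mean()) / z.std()
    return np.var(z[party_candidates])

# Candidates ordered [Clinton, Sanders, Cruz, Kasich, Trump]. A Democratic-
# leaning voter whose unstructured ratings spread Clinton and Sanders far
# apart, vs. expected-utility ratings that keep them close:
unstructured = [95, 40, 10, 20, 5]
expected_utility = [90, 80, 10, 20, 5]
dem = [0, 1]
print(within_party_variance(unstructured, dem) >
      within_party_variance(expected_utility, dem))  # True
```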

Multiple-Stage NGA Voting

In the current voting system, strategic voting for someone other than one’s most preferred choice is commonplace. So there is no reason to dismiss a new voting system for having some degree of strategic misreporting. But to allow voters the simplicity of truthful reporting in their ratings without hurting themselves too much, we view it as desirable for the incentives for strategic misreporting to be relatively small. Given the issues taken care of by the normalization of the ratings, the incentive for strategic misreporting we have worried most about is the incentive to avoid giving a strong negative rating to a candidate who is going to be eliminated anyway, since doing so would dilute the ratings assigned to other candidates. That is, there is an incentive to free ride on the elimination of widely disliked candidates. Fortunately, modifications of the NGA mechanism can help reduce this incentive or help ensure reasonable results despite some degree of strategic voting.

One modification of the NGA mechanism helpful in dealing with free riding in the elimination of widely disliked candidates is to vote in stages. Rather than taking ratings at one point in time to guide movement all the way to a vertex with one candidate winning, one can have a series of nonpartisan “open primaries” in which the notional probabilities of a candidate winning if things were ended prematurely are adjusted some distance, but not all the way to one candidate winning. This gives voters a chance to see if a candidate many thought would be quickly eliminated is doing well, making it worthwhile spending some of one’s variance budget voting against them in the next stage. On the other hand, taking the ending point of the adjustments in notional probabilities from the nonpartisan open primary as the starting point for the next stage ensures that all voters have some reward for the voting efforts they make, even in the first stage. 
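A stylized sketch of one such stage follows. We assume a simple additive adjustment rule and an illustrative step size; the actual NGA adjustment is more involved, so this is meant only to convey the “partway, then re-elicit” idea:

```python
import numpy as np

def stage_update(probs, vector_votes, step):
    """One nonpartisan 'open primary' stage: move the notional win
    probabilities partway in the direction of the summed normalized vector
    votes, then renormalize back onto the probability simplex.
    (A stylized sketch, not the exact NGA adjustment rule.)"""
    direction = np.sum(vector_votes, axis=0)
    p = probs + step * direction
    p = np.clip(p, 1e-9, None)  # keep every candidate's probability positive
    return p / p.sum()

# Three voters' normalized (demeaned) vector votes over three candidates,
# starting from equal notional probabilities:
votes = np.array([[ 1.2, -0.2, -1.0],
                  [ 0.9,  0.5, -1.4],
                  [-0.3,  1.3, -1.0]])
p = np.ones(3) / 3
p = stage_update(p, votes, step=0.05)
print(p[2] < 1/3)  # True: the widely disliked third candidate loses ground
```

Because the stage moves probabilities only partway, voters can see in the next stage whether a candidate they expected to be eliminated is in fact still viable, and redirect some of their variance budget accordingly.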

Having multiple stages also serves other purposes. There could easily be candidates in an initially crowded field that voters simply don’t know much about and don’t want to invest in learning about because it seems those candidates have no chance. A nonpartisan open primary helps voters and journalists know which candidates are worth learning more about.

(Also, one practical issue with the early “primaries” is the large number of candidates a voter might be asked to rate. One way to handle this is to include an option for casting a straight positive or straight negative vote that effectively fills in 0’s and 100’s for all the candidates accordingly.) 
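One reading of that straight-vote option, as a sketch (the party-line interpretation, names, and data here are our own assumption, not a worked-out ballot design):

```python
def straight_vote(candidates, favored_party, positive=True):
    """Expand a one-bit 'straight positive' or 'straight negative' vote into
    a full ratings vector of 100's and 0's, as a convenience for voters
    facing a crowded early field."""
    if positive:
        return {c: (100 if party == favored_party else 0)
                for c, party in candidates.items()}
    return {c: (0 if party == favored_party else 100)
            for c, party in candidates.items()}

field = {"A": "D", "B": "D", "C": "R", "D": "R"}
print(straight_vote(field, "D"))  # {'A': 100, 'B': 100, 'C': 0, 'D': 0}
```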

A Smoothed-Instant-Runoff Version of NGA for Multicandidate Elections

The NGA perspective from which we are looking at things suggests another, more technical way to reduce the incentive for strategic misreporting: using exactly the same kind of survey to elicit expected-utility ratings, but modifying the mechanism so that it automatically deemphasizes the ratings of candidates who are on their way out. This involves (a) demeaning using a weighted average that gives a low weight to candidates that have a currently low notional probability of winning, (b) slowing down (without stopping) the adjustment of notional probabilities that are already low, and (c) steering vector votes toward focusing on candidates that still have a relatively high notional probability. A parameter determines whether these three modifications kick in only when the notional probability of a candidate is very low, or instead phase in more gradually. If they kick in only when the notional probability of a candidate is very low, the mechanism becomes a combination of the simplest implementation of NGA and the idea behind instant-runoff voting, where voters re-optimize once a candidate is eliminated. With less extreme values of the parameter, the spirit of instant-runoff voting is smoothed out. Regardless of that parameter, the basic NGA idea is preserved.
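Modification (a) can be sketched as follows; the weighting function and the `sharpness` parameter are our own illustrative choices, not a worked-out specification:

```python
import numpy as np

def weighted_demean(ratings, probs, sharpness=1.0):
    """Demean one voter's ratings using a weighted average that gives low
    weight to candidates with low notional win probabilities, so the rating
    of a candidate on the way out matters less. `sharpness` controls how
    gradually the down-weighting phases in (a sketch of modification (a))."""
    w = np.asarray(probs, dtype=float) ** sharpness
    w = w / w.sum()
    r = np.asarray(ratings, dtype=float)
    return r - np.dot(w, r)

ratings = np.array([80.0, 60.0, 5.0])
even = weighted_demean(ratings, [1/3, 1/3, 1/3])        # third candidate fully weighted
tilted = weighted_demean(ratings, [0.49, 0.49, 0.02])   # third candidate nearly out

# With the dying candidate down-weighted, the two viable candidates' demeaned
# ratings roughly offset each other instead of both being inflated by the
# strong negative rating of the candidate on the way out:
print(abs(tilted[0] + tilted[1]) < abs(even[0] + even[1]))  # True
```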

A downside of the smoothed-instant-runoff version of NGA for multicandidate elections is its complexity. It would still be fully verifiable, but those who do not fully understand it might be suspicious of it. Nevertheless, to the extent it makes one aspect of strategic voting happen automatically without strategic misreporting, it would put less sophisticated voters more on a par with the more sophisticated voters. 

Incentives for Politicians

A great deal of research is needed to fully understand incentives for politicians under an NGA or Smoothed-Instant-Runoff NGA multicandidate voting system with multiple stages. However, we are willing to make some conjectures. If people view certain important candidates of an opposing party as “the devil,” the strong negative ratings for those “diabolical” candidates would open up an opportunity for centrist candidates like John Kasich whom few voters see as “diabolical.” It could even open up space for new centrist parties. 

Undoubtedly there are other effects that are harder to foresee, but a system that allows people to express strong negative views about a candidate should help avoid many possible bad outcomes. And the NGA system still allows people to express strong positive views about a candidate if they so choose. 

NOTE: Please consider this post the equivalent of a very-early-stage working paper. We would love to get comments. And just as for any other early-stage working paper, we reserve the right to copy wholesale any of the text above into more final versions of the paper. Because it is also a blog post, feel free to cite and quote. We want to thank Becky Royer for outstanding research and editorial assistance.

From Charters of Liberty Granted by Power to Charters of Power Granted by Liberty

IN 1792, in a short essay called ‘Charters,’ James Madison succinctly explained what he thought was the essential difference between the United States Constitution and the constitutions of every other nation in history. ‘In Europe,’ he wrote, ‘charters of liberty have been granted by power. America has set the example … of charters of power granted by liberty. This revolution in the practice of the world may, with an honest praise, be pronounced the most triumphant epoch of its history.’ The ‘charters of liberty … granted by power’ that Madison had in mind were the celebrated documents of freedom that kings and parliaments had issued throughout the ages, many still honored today: Magna Carta of 1215, the English Petition of Right of 1628, the English Bill of Rights of 1689. Documents like these had made the British constitution – unwritten though it was – the freest in the world prior to the American Revolution. A British subject enjoyed more room to express his opinions, more liberty to do as he liked with his property, more security against government intrusion, and greater religious toleration than the subject of any other monarchy in the known world. Yet for Madison and his contemporaries, that was not enough. He and his fellow patriots considered “charters of liberty … granted by power” a poor substitute for actual freedom because however noble their words, such charters were still nothing more than pledges by those in power not to invade a subject’s freedom. And because those pledges were ‘granted by power,’ they could also be revoked by the same power. If freedom was only a privilege the king gave subjects out of his own magnanimity, then freedom could also be taken away whenever the king saw fit.
— Timothy Sandefur, The Permission Society: How the Ruling Class Turns Our Freedoms into Privileges and What We Can Do About It

Henry George: Morality is the Heart of Economics

Political economy is the simplest of the sciences. It is but the intellectual recognition, as related to social life, of laws which in their moral aspect men instinctively recognize, and which are embodied in the simple teachings of him whom the common people heard gladly. But, like Christianity, political economy has been warped by institutions which, denying the equality and brotherhood of man, have enlisted authority, silenced objection, and ingrained themselves in custom and habit of thought.
— Henry George, Protection or Free Trade.

Sun Balcony


↟ “Imagination will often carry us to worlds that never were, but without it we go nowhere.” — Carl Sagan; astrophysicist, awesomist.

My favorite place in NYC is of course, The Rose Center for Earth and Space. In the center of this building is an object called the “Hayden Sphere” which serves as the museum’s planetarium and Sun (Sol) replica. I always imagined what this object would look like as an actual star—in the center of everything, which inspired this Cinemagraph.  


(Source: @deejayforte)

Sun Balcony

I love this picture. To me it looks like the observation deck from a hotel orbiting close to the Sun. – Miles

The Political Perils of Not Using Deep Negative Rates When Called For

Link to Jon Hilsenrath’s Wall Street Journal special report, updated August 26, 2016, “Years of Fed Missteps Fueled Disillusion With the Economy and Washington”

How well has what you have been doing been working for you?

People are quick to think that the political costs to a central bank of deep negative rates are substantial. But it is worth considering the political costs of not doing deep negative rates when the economic situation calls for them. Take as a case in point the failure of the Fed to do deep negative rates in 2009. Whatever the reason for that failure, one can see what the depth of the Great Recession and the slowness of the recovery did to the Fed’s popularity.

In his Wall Street Journal special report “Years of Fed Missteps Fueled Disillusion With the Economy and Washington,” Jon Hilsenrath tells the story of the Fed’s decline in popularity, and presents the following graphic: 

How Americans rate federal agencies

Share of respondents who said each agency was doing either a ‘good’ or ‘excellent’ job, for the eight agencies for which consistent numbers were available

The Alternative

There is no question that the Fed’s failure to foresee the financial crisis and its role in the bailouts contributed to its decline in popularity. But consider the popularity of the Fed by 2014 in two alternative scenarios: 

Scenario 1: The actual path of history in which the economy was anemic, leading to a zero rate policy through the end of 2014.

Scenario 2: An alternate history in which a vigorous negative interest rate policy met a firestorm of protest in 2009, but in which the economy recovered quickly and was on a strong footing by early 2010, allowing rates to rise back to 1% by the end of 2010 and to 2% in 2011.   

In Scenario 2, the deep negative rates in 2009 would have seemed like old news even by the time of the presidential election in 2012, let alone in 2014. In the actual history, Scenario 1, low rates are still an issue during the 2016 presidential campaign, because the recovery has been so slow. 

It Looks Good to Get the Job Done

At the end of my paper “Negative Interest Rate Policy as Conventional Monetary Policy” (ungated pdf download) published in the National Institute Economic Review, I discuss the politics of deep negative interest rates–not just for the United States, but also for other currency regions that needed them. My eighth and final point there is this:

Finally, the benefits of economic stabilisation should be emphasised. The Great Recession was no picnic. Deep negative interest rates throughout 2009 – somewhere in the –4 per cent to –7 per cent range – could have brought robust recovery by early to mid 2010. The output gaps the world suffered in later years were all part of the cost of the zero lower bound. These output gaps not only had large direct costs, they also distracted policymakers from attending to other important issues. For example, the later part of the Great Recession that could have been avoided by negative interest rate policy led to a relatively sterile debate in Europe between fiscal stimulus and austerity, with supply-side reform getting relatively little attention. And the later part of the Great Recession that could have been avoided by negative interest rate policy brought down many governments for whom the political benefits of negative interest rate policy would have been immense. And for central banks, it looks good to get the job done.

Dan Bobkoff and Akin Oyedele: Economists Never Imagined Negative Interest Rates Would Reach the Real World--Now They’re Rewriting Textbooks

Link to Dan Bobkoff’s and Akin Oyedele’s October 23, 2016 Business Insider article “Economists never imagined negative interest rates — now they’re rewriting textbooks”

An October 23, 2016 Business Insider article emphasizes just how far negative interest rate policy has come in the last four years since I published “How Subordinating Paper Currency to Electronic Money Can End Recessions and End Inflation” (originally titled “How paper currency is holding the US recovery back”) and started following negative interest rate discussions closely. 

One of the big advances in fostering understanding of negative interest rate policy is the publication of Ken Rogoff’s book The Curse of Cash, which has a thorough discussion of the full-bore negative interest rate policy I distinguished from current negative interest rate policy in “If a Central Bank Cuts All of Its Interest Rates, Including the Paper Currency Interest Rate, Negative Interest Rates are a Much Fiercer Animal.” (See my post “Ana Swanson Interviews Ken Rogoff about The Curse of Cash” for more about the book.) Ken has been on the hustings promoting his book, and in the process greatly raising journalists’ and their readers’ understanding of negative interest rate policy. This article has some audio of Ken explaining negative interest rates.

Here is what Dan Bobkoff and Akin Oyedele write about the remarkable progress of negative interest rate practice:

The policy has evolved from radical idea to mainstream policy of postrecession governments in Europe and Asia. And in the US, Federal Reserve Board Chair Janet Yellen has said the US will not rule out using them if it needs to. …

In textbooks like Mishkin’s, a 0% interest rate was known as the “zero lower bound.” It just didn’t seem to make sense to go below that.

Now economists have to rename it. …

Today, countries with negative policy rates make up almost a quarter of global gross domestic product, according to the World Bank.

One element of Dan and Akin’s article deserves further discussion. They touch on the difficulty of passing negative rates through to household depositors:

“It’s very hard to obviously get depositors to accept negative interest rates for putting their money in there,” said Marc Bushallow, managing director of fixed income at Manning and Napier, which manages $35 billion in assets.

What’s much more likely is that only big banks will be forced to pay to lend money to one another. That would exempt small depositors from paying, but still have some of the stimulus effects that the central banks intend to have.

Something I emphasize in my talks to central banks is that a central bank is better off letting private banks handle much of the pass-through: the negative rates on regular people’s deposit and savings accounts that would be a political problem for a central bank are instead a customer-relations problem for private banks, and private banks are likely to handle that problem relatively carefully.

I think of negative deposit rates for small household checking and savings accounts as a big enough political problem for central banks that I have been strongly recommending to central banks that they use a tiered interest-on-reserves formula that actively subsidizes zero rates for small household checking and savings accounts. If a central bank can announce that it is trying to keep regular people with modest balances from facing negative rates on their checking or savings accounts, it should dramatically mitigate the political costs to a central bank of a vigorous negative interest rate policy.
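To illustrate the idea with purely hypothetical numbers (this is a sketch of the tiering concept, not any central bank’s actual formula):

```python
def interest_on_reserves(reserves, policy_rate, tier_size, subsidy_rate=0.0):
    """Illustrative tiered interest-on-reserves schedule: a bank's reserves
    up to `tier_size` (meant to back small household accounts) earn
    `subsidy_rate` (here zero) even when the policy rate is negative;
    reserves beyond the tier earn the (negative) policy rate."""
    subsidized = min(reserves, tier_size)
    rest = reserves - subsidized
    return subsidized * subsidy_rate + rest * policy_rate

# A bank holding 150 units of reserves, with a 100-unit subsidized tier and
# a -4% policy rate: only the excess 50 units pay the negative rate.
print(interest_on_reserves(150, -0.04, 100))
# A bank entirely inside the tier pays nothing:
print(interest_on_reserves(80, -0.04, 100))
```

The subsidized tier lets private banks offer zero rates on small household accounts without losing money on the corresponding reserves, while the marginal incentive from the negative policy rate is preserved for reserves above the tier.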

I have written about subsidizing zero rates for small household accounts in a number of posts:

Courage on the part of central bankers plus smart efforts to mitigate the political costs of a vigorous negative rate policy can do a great deal to advance negative interest rate policy as an element of the monetary policy toolkit. Nations that have such courageous and shrewd central bankers can then return to the Great Moderation, while maintaining low inflation targets. 

On Consent Beginning from a Free and Equal Condition

The assertion in Article 1 of the Universal Declaration of Human Rights that “All human beings are born free and equal in dignity and rights” still sounds radical when applied to undocumented immigrants and members of small sexual minorities. To back up this assertion, it is hard to do better than John Locke in section 4 of his 2d Treatise on Government: “On Civil Government”:

To understand political power right, and derive it from its original, we must consider, what state all men are naturally in, and that is, a state of perfect freedom to order their actions, and dispose of their possessions and persons, as they think fit, within the bounds of the law of nature, without asking leave, or depending upon the will of any other man.

A state also of equality, wherein all the power and jurisdiction is reciprocal, no one having more than another; there being nothing more evident, than that creatures of the same species and rank, promiscuously born to all the same advantages of nature, and the use of the same faculties, should also be equal one amongst another without subordination or subjection, unless the lord and master of them all should, by any manifest declaration of his will, set one above another, and confer on him, by an evident and clear appointment, an undoubted right to dominion and sovereignty.

When I read this, I see an image of two human beings meeting in the middle of a trackless wilderness. They may have come from civilized territories, but that is all far away. One might be bigger and stronger than the other, and so able to take advantage of the other, but there is no good and just reason why one should rule over the other. They both deserve to be free and equal in relation to each other.

Writing when he did, it is not surprising that John Locke refers to God, but he suggests a very high burden of proof if someone claims that God has put one human being above another.

John Locke’s picture of people starting out free and equal, without any hierarchy, as we typically think of our literal neighbors next door, is very powerful. Thinking about the morality that applies between neighbors from this angle has generated some of the most persuasive Libertarian writing. I am thinking particularly of Michael Huemer’s book The Problem of Political Authority: An Examination of the Right to Coerce and the Duty to Obey. My reading of that book generated several posts:

I have thought that the starting point of being free and equal doesn’t absolutely have to point to a minimalist state. In particular, if someone would freely choose to belong to a state rather than stay in a separate, free and equal condition, then the state may be just. But there are some important considerations.

First, human beings are social creatures. It is not fair to imagine someone’s “free and equal” alternative as being alone. Rather, imagine the “free and equal” alternative as being in a highly social group of a few friends and family. Unless a state is better than that, it is not just.

Second, while the provision of the basic justice of safety and protection from violence may be enough to justify requiring a contribution to the resources necessary to provide that safety and protection (if the individual would choose that protection over the state of nature even at the cost of the taxation), it seems unfair to use the surplus from providing protection at that cost to justify a government that goes beyond that protection. That is, think of two steps. First, a government provides basic physical protection and justice at some cost; this just gets people up to their basic rights, at a cost that someone has to bear. Then a government that goes beyond that had better provide a surplus from the things that go beyond the basic provision of justice.

Let me give an example. A government might provide a commercial code and roads to make it easier to carry on commerce. Consider someone who does not have to worry about basic security, thanks to the minimal justice activities of a state, and who is allowed to stop with those basic security benefits. If that person would still choose to join a state that also provided a commercial code and roads to ease commerce, then a state providing infrastructure and a commercial code as well as basic physical safety might be just.

Some of these functions–such as roads–might be provided by private parties rather than by the state, but in this way of thinking about things, the state is viewed as if it were a species of private organization. As long as the people subject to it would voluntarily choose to belong to it, even when they still had basic physical security when not belonging, then the demands of the state can be seen as like those of a private club. 

Looking at things this way, a basic right has to always be to leave the club if one wants to. And one should be able to continue to associate easily with others who have decided to leave the club, and even form an alternative club. Thus, from this point of view, for existing states to be just, it is crucial that there be spots on the earth where people can buy land along with the associated political rights to start a new nation on that land.

If one wants to justify redistributive taxation, there is a twist one can put on this notion that free and equal individuals would have to voluntarily want to belong to a state for that state to be just. That is to change the question to whether someone would voluntarily choose to belong to a state over remaining free and equal with all others outside a state behind a Rawlsian veil of ignorance, not knowing if one would be talented or not and therefore not knowing if one was likely to be rich or poor. I think John Locke himself was more in the spirit of asking whether one would agree after knowing one’s level of talent to belong to a society. But would-be willingness to consent if one were behind a Rawlsian veil of ignorance might count for something in the justice of a state existing. 

Thinking of consent to belong to a state as compared to a free and equal state of nature, there is one very tough minimal requirement of justice that is not always noted: a state must not be dominated in attractiveness by another state that is willing to accept more members. And if State B is more attractive than State A for reasons that State A could imitate, that calls into serious question the justice of State A as it is. Further, even if State B in this story doesn’t actually exist, but truly could exist in all practicality, much of the force of the argument remains.

That is, the logic of consent from the free and equal state of nature means this: a state is unjust if it is doing things in a suboptimal way that would make people want to migrate away to a better-run state. The reason is that no one would consent to be part of a state doing things in a suboptimal way if they could instead be part of a state doing things the optimal way. In other words, public policy bad enough that people would want to migrate away from it is not just bad, it is unjust.

One can combine this idea (that a suboptimal policy is unjust because no one starting from a free and equal condition would consent to it if they could instead choose a similar state with a better policy) with the idea of consent from behind a Rawlsian veil of ignorance. If one would shift one’s choice of which state to join away from a state whose policy looks less attractive behind the veil of ignorance toward one whose policy looks more attractive, the justice of the state with the less attractive policy stands in question.

John Locke’s perspective of people beginning free and equal is very refreshing in a world still filled with domineering states. The world still has a long way to go on the way to freedom.