John Locke: Freedom is Life; Slavery Can Be Justified Only as a Reprieve from Deserved Death

Marketplace for selling innocent individuals who were enslaved. John Locke's account of the "Law of Nature" suggests that those who did the enslaving deserved death or slavery themselves.

In section 23 of his 2d Treatise on Government: “On Civil Government” (in Chapter IV, "Of Slavery"), John Locke makes an argument that I consider to contain two logical errors. Taking as given the religious condemnation of suicide in his cultural milieu, he argues:

  • Since I do not have the right to kill myself, I also cannot give someone else the right to kill me.

  • Since freedom is so crucial to the preservation of my life, I also cannot give away my own freedom.

However, John Locke also suggests:

  • If I commit a crime worthy of death, the individual or group I have harmed can choose to commute a sentence of death to a sentence of slavery.

  • Being enslaved is no worse a punishment than death because, as a practical matter, it is very difficult to prevent me from killing myself if I view slavery as worse.

Here is the exact text:

This freedom from absolute, arbitrary power, is so necessary to, and closely joined with a man’s preservation, that he cannot part with it, but by what forfeits his preservation and life together: for a man, not having the power of his own life, cannot, by compact, or his own consent, enslave himself to any one, nor put himself under the absolute, arbitrary power of another, to take away his life, when he pleases. No body can give more power than he has himself; and he that cannot take away his own life, cannot give another power over it. Indeed, having by his fault forfeited his own life, by some act that deserves death; he, to whom he has forfeited it, may (when he has him in his power) delay to take it, and make use of him to his service, and he does him no injury by it: for, whenever he finds the hardship of his slavery outweigh the value of his life, it is in his power, by resisting the will of his master, to draw on himself the death he desires.

The lesser of the two logical problems is that John Locke effectively allows me to give away my freedom by committing a serious crime. John Locke could answer that since I do not have the right to commit the crime, I also do not have the right to give away my freedom in this way. And few people are eager to give away their freedom, so allowing such a loophole for giving away one's freedom is unlikely to be a practical problem. The usual temptations to give away one's freedom involve selling one's freedom in some way for something else one wants. And the usual temptations for crime are the hope of getting something one wants from the crime without losing one's freedom or suffering any other penalty.

The bigger logical disjunction here is that, for some reason, John Locke regards suicide as a legitimate choice when it is an alternative to slavery, while regarding suicide under other circumstances as illegitimate. But if suicide as an alternative to slavery is legitimate, why wouldn't suicide as an alternative to an extraordinarily painful and lingering terminal disease be legitimate? (Suppose everyone agreed that enduring the extraordinarily painful and lingering terminal disease was worse than enduring slavery.) Or if it is illegitimate to commit suicide as an alternative to suffering under an extraordinarily painful and lingering terminal disease, shouldn't suicide as an alternative to a situation of bondage more bearable than that disease also be illegitimate?

For links to other John Locke posts, see these John Locke aggregator posts: 

 

  

Why GDP Can Grow Forever

Robert Gordon's argument that economic growth will slow down in the future made a big splash in 2012. He laid out his views in the Wall Street Journal op-ed shown above, as well as in other venues. His book The Rise and Fall of American Growth will come out on September 5, 2017.

A key part of Robert Gordon's argument is that dramatic changes in people's lives from past economic growth are unlikely to be repeated. He writes in his 2012 op-ed:

The growth of the past century wasn't built on manna from heaven. It resulted in large part from a remarkable set of inventions between 1875 and 1900. These started with Edison's electric light bulb (1879) and power station (1882), making possible everything from elevator buildings to consumer appliances. Karl Benz invented the first workable internal-combustion engine the same year as Edison's light bulb.

This narrow time frame saw the introduction of running water and indoor plumbing, the greatest event in the history of female liberation, as women were freed from carrying literally tons of water each year. The telephone, phonograph, motion picture and radio also sprang into existence. The period after World War II saw another great spurt of invention, with the development of television, air conditioning, the jet plane and the interstate highway system.

The profound boost that these innovations gave to economic growth would be difficult to repeat. Only once could transport speed be increased from the horse (6 miles per hour) to the Boeing 707 (550 mph). Only once could outhouses be replaced by running water and indoor plumbing. Only once could indoor temperatures, thanks to central heating and air conditioning, be converted from cold in winter and hot in summer to a uniform year-round climate of 68 to 72 degrees Fahrenheit.

The main claim a typical reader would take from this passage is that it will be hard to make as big a difference to people's lives with the next 150 years of technological progress as with the last 150 years of technological progress. Robert Gordon may turn out to be wrong on all counts. Some believe a technological "singularity" will come within the next 150 years that will dramatically change human existence to something "transhuman."

But even if Robert Gordon is right that the next 150 years of technological progress will not make anywhere near as big a difference to people's lives as the last 150 years of technological progress, I have a highly technical criticism of his transition from that claim to his statement

The profound boost that these innovations gave to economic growth would be difficult to repeat. 

Based on other things Robert Gordon has written, I interpret that as a statement about the effect of technological progress on real GDP growth. So interpreted, the statement "The profound boost that these innovations gave to economic growth would be difficult to repeat" may be true, but it does not follow logically from the claim that the difference technological progress has made to people's lives will be hard to repeat.

The gap between "it is hard to repeat that improvement in people's lives" and "it is hard to repeat that boost to real GDP growth" has to do with what a weak reed real GDP growth is for understanding economic improvements. Let me leave aside all the many ways in which GDP can go up even if people's lives worsen. (See "Restoring American Growth: The Video" for a discussion of that.) If all non-market goods and parameters of income distribution stay the same and real GDP increases, people are indeed better off. But how much better off? What does it mean to say that GDP is 1% higher?

GDP was conceived with increases in quantity in mind. If people get more goods and services, it is clear what an x% increase in GDP means. But more of exactly the same good or service becomes less useful very fast. If more people are getting goods that others already had, what is going on is also relatively clear. But what if enough people already have all they want of exactly the same goods and services that entrepreneurs introduce a good that is in some respect new? It may be an entirely new good or something easy to see as an improvement in the quality of an existing good. In either case, the way government agencies factor this into GDP is that the value of the new good is measured by how much more of the same people are willing to give up to get something new. Given how boring more of the same might be to at least some people, the amount some fraction of people are willing to give up of more of the same to get something new might be substantial. Therefore, the production of something wholly new or something seen as a higher-quality modification of something old can count as a substantial addition to GDP growth.

In the extreme, if people became bored enough with more of the same, a set of truly tiny quality improvements could be counted as 3% growth in GDP, because a marginal 3% of boring products one doesn't need that much of is hardly any sacrifice at all.    
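Here is a minimal sketch, with made-up numbers, of how that kind of valuation can feed into measured GDP. This is only meant to illustrate the willingness-to-give-up logic above, not official statistical-agency methodology:

    # Hypothetical numbers only: how a small quality improvement or novelty,
    # valued by how much of the same old goods people would give up to get it,
    # can register as substantial measured real GDP growth.

    old_quantity = 100.0   # units of the same old good produced last year
    price = 1.0            # price per unit of the old good

    # Suppose consumers would give up 3% of the old bundle to get the new variety,
    # so the new variety is valued at 3% of the old bundle.
    value_of_novelty = 0.03 * old_quantity * price

    gdp_last_year = old_quantity * price
    gdp_this_year = old_quantity * price + value_of_novelty  # physical quantities unchanged

    growth = gdp_this_year / gdp_last_year - 1
    print(f"Measured real GDP growth: {growth:.1%}")  # -> 3.0%

In this toy example, physical output of the old good does not change at all; the entire 3% of measured growth comes from how highly the novelty is valued relative to more of the same.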

My technical point is relevant not only to Robert Gordon's argument, but also to Tom Murphy's arguments in his posts "Can Economic Growth Last?" and "Exponential Economist Meets Finite Physicist." To his statement "economic growth cannot continue indefinitely," I say,

It depends what you mean by economic growth. If you mean GDP growth, all it takes for it to grow forever at a rate always above a positive x% per year is for tiny quality improvements or novelties to be valued extremely highly relative to a higher quantity of the same old things. 

And it is not clear that what are seen as tiny quality improvements require any violations of the laws of physics, since quality improvements are all in the eye of the beholder. 

Despite my framing of this post as a correction to Robert Gordon's and Tom Murphy's arguments, the real moral of this post is the imperfections of real GDP growth as a measure of "economic growth" in the broader sense of people getting more of what they want. GDP is a quantity-metric measure of economic welfare. If quantity is no longer very valuable, a quantity-metric measure shows small improvements in quality or novelty as equivalent to large increases in quantity. 

Another way to look at things is that Robert Gordon is implicitly bringing declining marginal utility into his argument. It is quite possible for economic growth to continue to be rapid by the conventional measure of GDP growth without it making as big a difference in people's lives as that rate of GDP growth made in the past.

 

Note: People's intuitions about declining marginal utility have other potential implications as well. See "Inequality Aversion Utility Functions: Would $1000 Mean More to a Poorer Family than $4000 to One Twice as Rich?"

 

Let's Set Half a Percent as the Standard for Statistical Significance


My many-times-over coauthor Dan Benjamin is the lead author on a very interesting short paper, "Redefine Statistical Significance." He gathered luminaries from many disciplines to jointly advocate tightening the standard for using the words "statistically significant": the label would be reserved for results that have less than half a percent probability of occurring by chance when nothing is really there, rather than covering all results that—on their face—have less than a 5% probability of occurring by chance. Results with more than a half-percent probability of occurring by chance could be called "statistically suggestive" at most.

In my view, this is a marvelous idea. It (a) could help enormously and (b) can really happen. It can really happen because it is at heart a linguistic rule. Even if rigorously enforced, it just means that editors would require authors to say "statistically suggestive" for a p-value a little less than .05, and would allow the phrase "statistically significant" in a paper only if the p-value is .005 or less. As a well-defined policy, it is nothing more than that. Everything else is general equilibrium effects.

I previewed the paper and some of the reasons why tightening the standards for statistical significance could help enormously in "Does the Journal System Distort Scientific Research?" In the last few years, discipline after discipline has faced a "replication crisis" as results that were considered important could not be backed up by independent researchers. For example, here are links about the replication crisis in five disciplines:

Here is a key part of the argument in "Redefine Statistical Significance":

Multiple hypothesis testing, P-hacking, and publication bias all reduce the credibility of evidence. Some of these practices reduce the prior odds of [the alternative hypothesis] relative to [the null hypothesis] by changing the population of hypothesis tests that are reported. Prediction markets and analyses of replication results both suggest that for psychology experiments, the prior odds of [the alternative hypothesis] relative to [the null hypothesis] may be only about 1:10. A similar number has been suggested in cancer clinical trials, and the number is likely to be much lower in preclinical biomedical research. ...

A two-sided P-value of 0.05 corresponds to Bayes factors in favor of [the alternative hypothesis] that range from about 2.5 to 3.4 under reasonable assumptions about [the alternative hypothesis] (Fig. 1). This is weak evidence from at least three perspectives. First, conventional Bayes factor categorizations characterize this range as “weak” or “very weak.” Second, we suspect many scientists would guess that P ≈ 0.05 implies stronger support for [the alternative hypothesis] than a Bayes factor of 2.5 to 3.4. Third, using equation (1) and prior odds of 1:10, a P-value of 0.05 corresponds to at least 3:1 odds (i.e., the reciprocal of the product 1/10 × 3.4) in favor of the null hypothesis!

... In biomedical research, 96% of a sample of recent papers claim statistically significant results with the P < 0.05 threshold. However, replication rates were very low for these studies, suggesting a potential for gains by adopting this new standard in these fields as well.

In other words, as things are now, something declared "statistically significant" at the 5% level is much more likely to be false than to be true. 
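For anyone who wants to see the quoted back-of-the-envelope arithmetic spelled out, here it is in a few lines of Python, using only the illustrative numbers from the passage above:

    # Posterior odds for the alternative = prior odds x Bayes factor.
    prior_odds_alt = 1 / 10          # alternative : null, the paper's illustrative prior odds

    for bayes_factor in (2.5, 3.4):  # range quoted for a two-sided P-value of 0.05
        posterior_odds_alt = prior_odds_alt * bayes_factor
        odds_for_null = 1 / posterior_odds_alt
        print(f"Bayes factor {bayes_factor}: about {odds_for_null:.1f}:1 in favor of the null")
    # -> roughly 4:1 and 2.9:1, i.e., "at least 3:1" even at the most favorable end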

By contrast, the authors argue, results declared significant at the half-percent level are at least as likely to be true as false, in the sense of being replicable about 50% of the time in psychology and about 85% of the time in experimental economics:

Empirical evidence from recent replication projects in psychology and experimental economics provide insights into the prior odds in favor of [the alternative hypothesis]. In both projects, the rate of replication (i.e., significance at P < 0.05 in the replication in a consistent direction) was roughly double for initial studies with P < 0.005 relative to initial studies with 0.005 < P < 0.05: 50% versus 24% for psychology, and 85% versus 44% for experimental economics.

What about the costs of a stricter standard for declaring statistical significance? The authors of "Redefine Statistical Significance" write:

For a wide range of common statistical tests, transitioning from a P-value threshold of [0.05] to [0.005] while maintaining 80% power would require an increase in sample sizes of about 70%. Such an increase means that fewer studies can be conducted using current experimental designs and budgets. But Figure 2 shows the benefit: false positive rates would typically fall by factors greater than two. Hence, considerable resources would be saved by not performing future studies based on false premises. Increasing sample sizes is also desirable because studies with small sample sizes tend to yield inflated effect size estimates, and publication and other biases may be more likely in an environment of small studies. We believe that efficiency gains would far outweigh losses.
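The roughly 70% figure can be checked with the standard normal-approximation power formula. This is my own back-of-the-envelope check, not the authors' exact calculation, and it assumes scipy is available:

    # Required sample size is proportional to (z_{1 - alpha/2} + z_{power})^2
    # for a two-sided test at a given power, holding the effect size fixed.
    from scipy.stats import norm

    def relative_sample_size(alpha, power):
        return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) ** 2

    ratio = relative_sample_size(0.005, 0.80) / relative_sample_size(0.05, 0.80)
    print(f"Required sample size ratio: {ratio:.2f}")  # -> about 1.70, i.e., ~70% larger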

They are careful to say that in some disciplines, even the half-percent standard for statistical significance is not strict enough:

For exploratory research with very low prior odds (well outside the range in Figure 2), even lower significance thresholds than 0.005 are needed. Recognition of this issue led the genetics research community to move to a “genome-wide significance threshold” of 5×10^{-8} over a decade ago. And in high-energy physics, the tradition has long been to define significance by a “5-sigma” rule (roughly a P-value threshold of 3×10^{-7} ). We are essentially suggesting a move from a 2-sigma rule to a 3-sigma rule.

Our recommendation applies to disciplines with prior odds broadly in the range depicted in Figure 2, where use of P < 0.05 as a default is widespread. Within those disciplines, it is helpful for consumers of research to have a consistent benchmark. We feel the default should be shifted.

To me, one of the biggest benefits of this shift might be a greater ability for people to publish results that do not reject the null hypothesis at conventional levels. These results, too, are an important part of the evidence base. The authors of "Redefine Statistical Significance" are careful to say that people should be able to publish papers that have no statistically significant results:

We emphasize that this proposal is about standards of evidence, not standards for policy action nor standards for publication. Results that do not reach the threshold for statistical significance (whatever it is) can still be important and merit publication in leading journals if they address important research questions with rigorous methods. This proposal should not be used to reject publications of novel findings with 0.005 < P < 0.05 properly labeled as suggestive evidence. We should reward quality and transparency of research as we impose these more stringent standards, and we should monitor how researchers’ behaviors are affected by this change. Otherwise, science runs the risk that the more demanding threshold for statistical significance will be met to the detriment of quality and transparency.

I myself was shocked when I read my own words above on the screen:

... people should be able to publish papers that have no statistically significant results: ...

That it seems shocking to say a paper should be publishable with no statistically significant results is a symptom of how corrupt the system has become. A stronger standard of statistical significance is needed in order to fight that corruption, both by making results that are declared statistically significant more likely to be true and by making results that are not declared statistically significant more publishable.

 

Update: Also useful is this article by Valentin Amrhein, Fränzi Korner-Nievergelt and Tobias Roth on "significance thresholds and the crisis of unreplicable research."

Western Values, According to Stephen Miller and Donald Trump

Toward the end of the period when I attended the Mormon Church (late 1999 and early 2000), I was still occasionally teaching Sunday School classes and more frequently teaching "Priesthood Meeting Elder's Quorum" classes. Despite views that varied significantly from Mormon orthodoxy at that point, I had no trouble teaching lessons in good faith. Assigned to teach from the text of a top Mormon leader's sermon, I would simply cross out the parts I disagreed with and teach the lesson based on what remained. And there was always something important that remained. I suppose some people might think what was remaining was trite, but I never did. The basics that people with diverging views agree on are often the deepest and meatiest truths of all. 

My reaction to Donald Trump's speech in Poland on July 6, 2017 is similar. My disagreements with Donald Trump are profound—particularly on immigration: see for example

But I agree that what Donald Trump called "The West" deserves to be protected and defended, once one insists that anyone who accepts the values and principles of "The West" thereby becomes part of "The West," regardless of their national origin. (See my evocation of the principle of openness to newcomers in "'Keep the Riffraff Out!'")

And what are those values? Donald Trump's chief speechwriter Stephen Miller wrote a beautiful passage that was delivered in the Remarks by President Trump to the People of Poland on July 6, 2017:

We write symphonies.  We pursue innovation.  We celebrate our ancient heroes, embrace our timeless traditions and customs, and always seek to explore and discover brand-new frontiers.

We reward brilliance.  We strive for excellence, and cherish inspiring works of art that honor God.  We treasure the rule of law and protect the right to free speech and free expression.

We empower women as pillars of our society and of our success.  We put faith and family, not government and bureaucracy, at the center of our lives.  And we debate everything.  We challenge everything.  We seek to know everything so that we can better know ourselves.

And above all, we value the dignity of every human life, protect the rights of every person, and share the hope of every soul to live in freedom.  That is who we are.  Those are the priceless ties that bind us together as nations, as allies, and as a civilization.

Where "God" is mentioned, I need to interpret the passage according to my own view of God. (See "Teleotheism and the Purpose of Life.") And those with Western values are much more divided about the role of government than this passage recognizes. But otherwise I agree. And I hope you do, too, whatever your view of the man who wrote those words and the man who spoke them. 

 

See also John O'Sullivan's thoughtful National Review article "Trump Defends the West in Warsaw."

Japan Shows How to Do Interest Rate Targets for Long-Term Bonds Instead of Quantity Targets

When the Fed began making large purchases of long-term Treasury bonds and mortgage-backed bonds—"QE"—I wondered why the Fed didn't announce an interest rate target for these bonds instead of a quantity target. An interest rate target for long-term bonds is the same thing as a price target, since there is a mechanical one-to-one relationship between prices and reported interest rates for bonds: by the present-value formula, higher prices mean lower interest rates and lower prices mean higher interest rates. One advantage of an interest rate target rather than a quantity target for long-term bonds is that it would have given a better sense of the modest magnitude of stimulus provided by QE.
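To see that mechanical relationship concretely, here is a minimal sketch in Python for a hypothetical 10-year bond with a 2% annual coupon and a face value of 100 (illustrative numbers only, not any particular security discussed here):

    # Present-value formula for a coupon bond: the price falls as the yield rises,
    # so targeting the yield is the same as targeting the price.

    def bond_price(face, coupon_rate, years, yield_rate):
        coupon = face * coupon_rate
        pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
        pv_face = face / (1 + yield_rate) ** years
        return pv_coupons + pv_face

    for y in (0.01, 0.02, 0.03):
        print(f"yield {y:.1%}: price {bond_price(100, 0.02, 10, y):.2f}")
    # yield 1.0%: price 109.47; yield 2.0%: price 100.00; yield 3.0%: price 91.47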

In his July 6, 2017 Wall Street Journal article, Mike Bird points out in his title another possible benefit of a price target for long-term bonds rather than a quantity target: "Japan Shows Europe How to Dial Back Stimulus Without Spooking Investors." The Bank of Japan calls these price/interest rate targets for long-term bonds "yield curve control." Mike's argument is to point to the 2013 US "Taper Tantrum" and to the European Central Bank's current communications difficulties:

“Draghi is discovering that narratives contrary to the one you want to get across can take hold in the market,” said Grant Lewis, head of research at Daiwa Capital Markets Europe. ...

Germany’s 10-year bund yields rose by 0.2 of a percentage point in five days, the largest jump since 2015’s “bund tantrum” when investors dumped bonds as they also anticipated less stimulus. ...

The BOJ can keep its markets stable by setting a clear limit on what it will tolerate, analysts say. In early February, when 10-year yields rose as high as 0.15%, the central bank offered to buy an unlimited volume of bonds at a yield of 0.11%, pushing yields back down.

“It’s clearly been easier for (BOJ chief Haruhiko) Kuroda. He’s stood up and said yields will be held at these levels. Try and beat me, I’ve got infinite resources,” Mr. Lewis added. “That’s actually allowed them to start purchasing less.”

One important consideration for an interest rate target for long-term bonds is that, along with the target for safe short-term rates that all major central banks continue to set, it would effectively set a target for the spread between long-term bonds and the safe short rate. Unlike the short-term safe rate, which can be set in a very wide range (a range that in fact should be wider than current custom: see my paper "Next Generation Monetary Policy"), there are likely to be real limits on where the spread between short-term and long-term rates can be set before the central bank ends up holding either none or all of a category of long-term bonds. (It would be interesting if a central bank ever chose to do a big short on long-term government bonds.) Thus, an interest rate target for long-term bonds needs to be kept in a range that implies a reasonable spread between safe short-term rates and long-term interest rates of a given category. But even if a central bank explicitly said it would revise its interest rate target if it ended up holding none or more than 90% of a category of bonds, that target would still be quite powerful in its effects on markets.

 

The Scientific Approach to Monetary Rules

Nick Timiraos reported in the July 7, 2017 Wall Street Journal article shown above:

The Federal Reserve defended having the flexibility to set interest rates without new scrutiny from Capitol Hill in its semiannual report to Congress on Friday, warning of potential hazards if it were required to adopt a rule to guide monetary policy.

I think there is another approach the Fed could take in response to Congressional emphasis on monetary policy rules. Here is what I wrote in my new paper "Next Generation Monetary Policy," in the Journal of Macroeconomics:

Because optimal monetary policy is still a work in progress, legislation that tied monetary policy to a specific rule would be a bad idea. But legislation requiring a central bank to choose some rule and to explain actions that deviate from that rule could be useful. To be precise, being required to choose a rule and explain deviations from it would be very helpful if the central bank did not hesitate to depart from the rule. In such an approach, the emphasis is on the central bank explaining its actions. The point is not to directly constrain policy, but to force the central bank to approach monetary policy scientifically by noticing when it is departing from the rule it set itself and why.

I earnestly hope that any of you interested in monetary policy will read "Next Generation Monetary Policy." It distills all of my thoughts about monetary policy aside from my thoughts about negative interest rate policy (for which you should read the papers linked in my bibliographic post "How and Why to Eliminate the Zero Lower Bound: A Reader’s Guide"), relating them where appropriate to the potential for negative interest rate policy. To whet your appetite, here is the abstract:

Abstract: This paper argues there is still a great deal of room for improvement in monetary policy. Sticking to interest rate rules, potential improvements include (1) eliminating any effective lower bound on interest rates, (2) tripling the coefficients in the Taylor rule, (3) reducing the penalty for changing directions, (4) reducing interest rate smoothing, (5) more attention to the output gap relative to the inflation gap, (6) more attention to durables prices, (7) mechanically adjusting for risk premia, (8) strengthening macroprudential measures to reduce the financial stability burden on interest rate policy, (9) providing more of a nominal anchor.  
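As a point of reference for item (2) in that list, the textbook Taylor rule (my paraphrase of the standard formula, not a quotation from the paper) sets the policy interest rate as

    i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,\tilde{y}_t

where \pi_t is inflation, \pi^* is the inflation target, r^* is the long-run equilibrium real interest rate, and \tilde{y}_t is the output gap. "Tripling the coefficients" presumably refers to the two 0.5 response coefficients, so that the policy rate reacts roughly three times as strongly to the inflation gap and the output gap.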

Freedom Under Law Means All Are Subject to the Same Laws

What does it mean to be free? If it means to have no legal restraints at all, then only one person at the apex of society can be free. If, instead, "freedom under law" is possible, it means to have the maximum amount of freedom that anyone in society has. That is, one can think of "freedom under law" as like a "most-favored-nation" clause:  "freedom under law" is facing only the restrictions on one's behavior that everyone faces.

Interestingly, this definition of "freedom under law" works for both "freedom under natural law" and "freedom under civil law." Here is John Locke's explanation of freedom under law in section 22 of his 2d Treatise on Government: “On Civil Government” (in Chapter IV "Of Slavery"):

THE natural liberty of man is to be free from any superior power on earth, and not to be under the will or legislative authority of man, but to have only the law of nature for his rule. The liberty of man, in society, is to be under no other legislative power, but that established, by consent, in the commonwealth; nor under the dominion of any will, or restraint of any law, but what that legislative shall enact, according to the trust put in it. Freedom then is not what Sir Robert Filmer tells us, Observations, A. 55. “a liberty for every one to do what he lists, to live as he pleases, and not to be tied by any laws:” but freedom of men under government is, to have a standing rule to live by, common to every one of that society, and made by the legislative power erected in it; a liberty to follow my own will in all things, where the rule prescribes not; and not to be subject to the inconstant, uncertain, unknown, arbitrary will of another man: as freedom of nature is, to be under no other restraint but the law of nature.

For freedom under civil law, the key clause is Locke's rejoinder to Robert Filmer: "to have a standing rule to live by, common to every one of that society ...". This idea of everyone being subject to the same laws was taken seriously in late-19th- and early-20th-century US constitutional law as the prohibition against "class legislation." A prohibition against "class legislation" has the potential to put a barrier in the way of special interests lobbying for laws that will inhibit competitors.

Among US Supreme Court decisions, Lochner v. New York is one of the most famous, and one of the most criticized. Going into why would take this post too far afield, but I want to quote the discussion of "class legislation" in David Bernstein's book "Rehabilitating Lochner." The principle against class legislation came up in that litigation because the limitation on bakers' hours at the heart of the case was in important measure an attempt to benefit other bakers by disadvantaging newly immigrant bakers. Here is David:

The liberty of contract doctrine arose from two ideas prominent in late-nineteenth-century jurisprudence. First, courts stated that so-called "class legislation"—legislation that arbitrarily singled out a particular class for unfavorable treatment or regulation—was unconstitutional. Courts used both the Due Process and the Equal Protection clauses as textual hooks for reviewing class legislation claims. Indeed, the opinions were often unclear as to whether the operative constitutional provision was due process, equal protection, both, or neither. Second, courts used the Due Process Clause to enforce natural rights against the states. Judicially enforceable natural rights were not defined by reference to abstract philosophic constructs. Rather, they were the rights that history had shown were crucial to the development of Anglo-American liberty.

CLASS LEGISLATION ANALYSIS AND THE DUE PROCESS CLAUSE

Opposition to class legislation had deep roots in pre-Civil War American thought. After the Civil War and through the end of the Gilded Age, leading jurists believed that the ban on class legislation was the crux of the Fourteenth Amendment, including both the Equal Protection and Due Process clauses. Justice Stephen Field wrote in 1883 that the Fourteenth Amendment was "designed to prevent all discriminating legislation for the benefit of some to the disparagement of others." Each American, Field continued, had the right to "pursue his [or her] happiness unrestrained, except by just, equal, and impartial laws." Justice Joseph Bradley, writing for the Court the same year, declared that "what is called class legislation" is "obnoxious to the prohibitions of the Fourteenth Amendment." In Dent v. West Virginia, the Court even declared that no equal protection or due process claim could succeed absent an arbitrary classification. Influential dictum from Leeper v. Texas suggested that the Fourteenth Amendment's due process guarantee is secured "by laws operating on all alike."

The Supreme Court, however, interpreted the prohibition on class legislation quite narrowly. In 1884 it unanimously rejected a challenge to a San Francisco ordinance that prohibited night work only in laundries. Justice Field explained that the law seemed like a reasonable fire prevention measure, and that it applied equally to all laundries. The following year, a Chinese plaintiff challenged the same laundry ordinance, alleging that its purpose was to force Chinese-owned laundries out of business. Field, writing again for a unanimous Court, announced that—consistent with centuries of Anglo-American judicial tradition and prior Supreme Court cases—the Court would not "inquire into the motives of the legislators in passing [legislation], except as they may be disclosed on the face of the acts, or inferable from their operation. ..." The Court's refusal to consider legislative motive severely limited its ability to police class legislation.

To my mind, the unwillingness to inquire into the motives of the legislators was a mistake. Looking for a motive to help one group even at the expense of another seems like one of the easiest common-sense ways to figure out whether something is class legislation. Nowadays, we recognize laws that are designed with the motive of disadvantaging African Americans as unconstitutional. This is the same principle applied to many more classes of people. And even if the prohibition against class legislation were limited to a prohibition on legislation that would disadvantage the poorest of the poor, in line with John Rawls's recommendations in A Theory of Justice, it would be an extremely valuable principle. (For more thoughts on that score, see "Inequality Is About the Poor, Not About the Rich.")

Though defining "equality before the law" in particular cases is difficult, it seems to me that one way or another in all countries that believe in freedom and the rule of law, these ideas have a proper role in constitutional law:

  • "to have a standing rule to live by, common to every one of that society"

  • "the right to 'pursue his [or her] happiness unrestrained, except by just, equal, and impartial laws'"

  • "laws operating on all alike"

Addendum, August 4, 2019: As John L. Davidson points out, in general, inquiring into legislators’ motives is unworkable. The only time I think legislators’ motives should come into play in jurisprudence is when legislators’ motives were to do something constitutionally impermissible.

 

For links to John Locke posts on the previous 3 chapters of the 2d Treatise, see "John Locke's State of Nature and State of War."

Jason Fung: Dietary Fat is Innocent of the Charges Leveled Against It

See also "Sugar as a Slow Poison"

I highly recommend Jason Fung's book "The Obesity Code." Jason Fung lays out what I consider the most credible theory for what causes obesity—and implicitly what has led to the continuing dramatic rise in obesity across the developed world over the last century. In order to understand the overall argument, which I will discuss in a future post, it is important to know certain facts that the nutritional establishment is loath to communicate because they run counter to the message they have been giving for so long. Among these, one of the key facts is that there is no evidence that dietary fat is bad for health. In saying this, I leave aside the trans-fats, which with good reason are close to being banned. The use of dangerous trans-fats was encouraged by the suspicion cast on more time-tested dietary fats. 

The lack of evidence that dietary fat is bad for health is an important and credible null result because so much effort was expended looking for proof that dietary fat is bad, by researchers who believed that it is. Here are three passages that give the core of Jason Fung's account of that research, from Chapter 18, "Fat Phobia":  


In the 1950s, it was imagined that cholesterol circulated and deposited on the arteries much like sludge in a pipe (hence the popular image of dietary fat clogging up the arteries). It was believed that eating saturated fats caused high cholesterol levels, and high cholesterol levels caused heart attacks. This series of conjectures became known as the diet-heart hypothesis. Diets high in saturated fats caused high blood cholesterol levels, which caused heart disease.

The liver manufactures the overwhelming majority—80 percent—of the blood cholesterol, with only 20 percent coming from diet. Cholesterol is often portrayed as some harmful poisonous substance that must be eliminated, but nothing could be farther from the truth. Cholesterol is a key building block in the membranes that surround all the cells in our body. In fact, it’s so vital that every cell in the body except the brain has the ability to make it. If you reduce cholesterol in your diet, your body will simply make more.

The Seven Countries Study had two major problems, although neither was very obvious at the time. First, it was a correlation study. As such, its findings could not prove causation. Correlation studies are dangerous because it is very easy to mistakenly draw causal conclusions. However, they are often the only source of long-term data available. It is always important to remember that they can only generate hypotheses to be tested in more rigorous trials. The heart benefit of the low-fat diet was not proven false until 2006 with the publication of the Women’s Health Initiative Dietary Modification Trial and the Low-Fat Dietary Pattern and Risk of Cardiovascular Disease study, some thirty years after the low-fat approach became enshrined in nutritional lore. By that time, like a supertanker, the low-fat movement had gained so much momentum that it was impossible to turn it aside.

The association of heart disease and saturated fat intake is not proof that saturated fat causes heart disease. Some recognized this fatal flaw immediately and argued against making dramatic dietary recommendations based on such flimsy evidence. The seemingly strong link between heart disease and saturated fat consumption was forged with quotation and repetition, not with scientifically sound evidence. There were many possible interpretations of the Seven Countries Study. Animal protein, saturated fats and sugar were all correlated to heart disease. Higher sucrose intake could just as easily have explained the correlation to heart disease, as Dr. Keys himself had acknowledged.

It is also possible that higher intakes of animal protein, saturated fats and sugar are all merely markers of industrialization. Counties with higher levels of industrialization tended to eat more animal products (meat and dairy) and also tended to have higher rates of heart disease. Perhaps it was the processed foods. All of these hypotheses could have been generated from the same data. But what we got was the diet-heart hypothesis and the resulting low-fat crusade.


IN 1948, HARVARD University began a decades-long community-wide prospective study of the diets and habits of the town of Framingham, Massachusetts. Every two years, all residents would undergo screening with blood work and questionnaires. High cholesterol levels in the blood had been associated with heart disease. But what caused this increase? A leading hypothesis was that high dietary fat was a prime factor in raising cholesterol levels. By the early 1960s, the results of the Framingham Diet Study were available. Hoping to find a definitive link between saturated-fat intake, blood cholesterol and heart disease, the study instead found... nothing at all.

There was absolutely no correlation. Saturated fats did not increase blood cholesterol. The study concluded, “No association between percent of calories from fat and serum cholesterol level was shown; nor between ratio of plant fat to animal fat intake and serum cholesterol level.”

Did saturated fat intake increase risk of heart disease? In a word, no. Here are the final conclusions of this forgotten jewel: “There is, in short, no suggestion of any relation between diet and the subsequent development of CHD [coronary heart disease] in the study group.” 

This negative result would be repeatedly confirmed over the next half century. No matter how hard we looked, there was no discernible relationship between dietary fat and blood cholesterol. Some trials, such as the Puerto Rico Heart Health Program, were huge, boasting more than 10,000 patients. Other trials lasted more than twenty years. The results were always the same. Saturated-fat intake could not be linked to heart disease.

But researchers had drunk the Kool-Aid. They believed their hypothesis so completely that they were willing to ignore the results of their own study. For example, in the widely cited Western Electric Study, the authors note that “the amount of saturated fatty acids in the diet was not significantly associated with the risk of death from CHD.” This lack of association, however, did not dissuade the authors from concluding “the results support the conclusion that lipid composition of the diet affects serum cholesterol concentration and risk of coronary death.”

All these findings should have buried the diet-heart hypothesis. But no amount of data could dissuade the diehards that dietary fat caused heart disease. Researchers saw what they wanted to see. Instead, researchers saved the hypothesis and buried the results. Despite the massive effort and expense, the Framingham Diet Study was never published in a peer-reviewed journal. Instead, results were tabulated and quietly put away in a dusty corner—which condemned us to fifty years of a low-fat future that included an epidemic of diabetes and obesity.


Once the skewing effect of trans fats was taken into account, the studies consistently showed that high dietary fat intake was not harmful. The enormous Nurses’ Health Study followed 80,082 nurses over fourteen years. After removing the effect of trans fats, this study concluded that “total fat intake was not significantly related to the risk of coronary disease.” Dietary cholesterol was also safe. The Swedish Malmo Diet and Cancer Study and a 2014 meta-analysis published in the Annals of Internal Medicine reached similar conclusions.

And the good news for saturated fats kept rolling in. Dr. R. Krauss published a careful analysis of twenty-one studies covering 347,747 patients and found “no significant evidence for concluding that dietary saturated fat is associated with an increased risk of CHD.” [Siri-Tarino PW et al. Meta-analysis of prospective cohort studies evaluating the association of saturated fat with cardiovascular disease. Am J Clin Nutr. 2010 Mar; 91(3):535–46.] In fact, there was even a small protective effect on stroke. The protective effects of saturated fats were also found in the fourteen-year, 58,543-person Japan Collaborative Cohort Study for Evaluation of Cancer and the ten-year Health Professionals Follow-up Study of 43,757 men.


To back up Jason Fung's interpretation of the Framingham Diet Study, take a look at Michael Eades's blog post "Framingham follies," which Jason Fung cites. And certainly don't dismiss the idea that "dietary fat is innocent of the charges leveled against it" without reading "Framingham follies" as an indication of the kind of scientific conduct one needs to be at least alert for in the area of nutritional research.

Also see the Wikipedia article "Saturated fat and cardiovascular disease controversy." 

When checking out all the links, remember the problem of multiple hypothesis testing. There are many possible health outcomes. If a researcher tests 20 of them, then even if there is no real relationship between a variable and any of the outcomes, there should on average be an apparent association with one of them by chance that can be reported as "significant at the 5% level." And the number of tests actually done can easily exceed the number of tests reported. 
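Here is the simple arithmetic behind that statement, assuming for simplicity that the 20 tests are independent:

    # With 20 outcomes and no true effects, a 5% threshold per test gives about
    # one false positive on average and roughly a 64% chance of at least one.
    n_outcomes, alpha = 20, 0.05

    expected_false_positives = n_outcomes * alpha              # 1.0
    prob_at_least_one = 1 - (1 - alpha) ** n_outcomes          # ~0.64 if tests are independent

    print(f"Expected false positives: {expected_false_positives:.1f}")
    print(f"P(at least one false positive): {prob_at_least_one:.0%}")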

 

Don't miss these other posts on diet and health and on fighting obesity:

Also see the last section of "Five Books That Have Changed My Life" and the podcast "Miles Kimball Explains to Tracy Alloway and Joe Weisenthal Why Losing Weight Is Like Defeating Inflation." If you want to know how I got interested in diet and health and fighting obesity and a little more about my own experience with weight gain and weight loss, see my post "A Barycentric Autobiography."