Exoplanets and Faith

I am pleased to see half of the Nobel Prize in Physics this year go to the first confirmed discovery of a planet orbiting a star like our sun. Since then, evidence for thousands of planets circling other stars has been gathered, including a kind of census conducted by the Kepler orbiting telescope, from which scientists drew this estimate:

There is at least one planet on average per star.[See abstract below.] About 1 in 5 Sun-like stars[a] have an "Earth-sized"[b] planet in the habitable zone.

I have had a longstanding interest in discoveries of planets around other stars. What I remember is how many false starts there were and the period when some scientists said that the lack of confirmed discoveries of planets around other stars meant that there might not be any. In hindsight, excessive optimism about the accuracy of detection methods led to a period of excessive pessimism about the existence of exoplanets.

To me, then, the eventual confirmed discoveries of exoplanets were a triumph of faith over doubt. By faith I simply mean a belief that influences action that, at the time, is based on inadequate evidence. In this sense, we all have to make decisions based on faith very frequently. I emphasize this point in my post “The Unavoidability of Faith.”

I’ll save any discussion of other intelligent life in the universe for another post, but I want to point out something very interesting about exoplanets from the standpoint of popular culture: because exoplanets are literally light-years away, sending probes to them is dauntingly difficult and might require not only key technological advances, but also enormous patience. But imaging exoplanets, while quite difficult, is something we can hope to do within the lifetime of those who are now young graduate students, and perhaps even within mine. There is now a growing list of exoplanets with officially agreed-upon proper names; there is hope that, as the list of their known properties grows, some exoplanets will become familiar even to elementary school students.

It is hard to keep up with the onrushing discoveries about exoplanets, but I hope someone will put together a high-quality children’s book on exoplanets that reflects at least everything we know today. Both exoplanets themselves and their discovery are inspiring to me, and I think would be inspiring to many youngsters.

Adding a Variable Measured with Error to a Regression Only Partially Controls for that Variable

The Partitioned-Matrix Inversion Formula. This image first appeared in the post “The Partitioned Matrix Inversion Formula.” Image created by Miles Spencer Kimball. I hereby give permission to use this image for anything whatsoever, as long as that use includes a link to this blog. For example, t-shirts with this picture (among other things) and supplysideliberal.com on them would be great! :) Here is a link to the Wikipedia article “Block Matrix,” which talks about the partitioned matrix inversion formula.


In “Eating Highly Processed Food is Correlated with Death” I observe:

In observational studies in epidemiology and the social sciences, variables that authors say have been “controlled for” are typically only partially controlled for. The reason is that almost all variables in epidemiological and social science data are measured with substantial error.

In the comments, someDude asks:

"If the coefficient of interest is knocked down substantially by partial controlling for a variable Z, it would be knocked down a lot more by fully controlling for a variable Z. "

Does this assume that the error is randomly distributed? If the error is biased (i.e. by a third underlying factor), I would think it could be the case that a "fully controlled Z" could either increase or decrease the change in the coefficient of interest.

This post is meant to give a clear mathematical answer to that question. The answer, which I will back up in the rest of the post, is this:

Compare the coefficient estimates in a large-sample, ordinary-least-squares multiple regression (a) with an accurately measured statistical control variable, (b) with only that statistical control variable measured with error, and (c) without the statistical control variable at all. Then every coefficient estimate in case (b), with the control variable measured with error, will be a weighted average of the corresponding estimates in case (a), with the control variable measured accurately, and case (c), with the control variable excluded. The weight showing how far inclusion of the error-ridden control variable moves the results toward what they would be with an accurate measure of that variable is equal to the fraction signal/(signal + noise), where “signal” is the variance of the accurately measured control variable that is not explained by the variables already in the regression, and “noise” is the variance of the measurement error.
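This weighted-average result is easy to check by simulation. Here is a minimal sketch (all numbers and variable names are illustrative, not drawn from any study): with the unique variance of Z and the noise variance both set to 1, the weight is 1/2, so the estimate using the noisy proxy should land halfway between the estimate with Z measured accurately and the estimate with Z omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# True model: Y = 1.0*X + 1.0*Z + noise, with Z correlated with X.
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)       # unique variance of Z (net of X) is 1
y = x + z + rng.normal(size=n)
proxy = z + rng.normal(size=n)         # Z measured with error, Var(v) = 1

def coef_on_x(*controls):
    """OLS coefficient on x from regressing y on a constant, x, and any controls."""
    design = np.column_stack([np.ones(n), x, *controls])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

b_omitted = coef_on_x()        # (c) Z left out: picks up omitted-variable bias
b_accurate = coef_on_x(z)      # (a) Z measured accurately
b_proxy = coef_on_x(proxy)     # (b) only the noisy proxy available

# Predicted weight = signal/(signal + noise) = 1/(1 + 1) = 0.5
blend = 0.5 * b_accurate + 0.5 * b_omitted
print(b_omitted, b_accurate, b_proxy, blend)
```

With these illustrative numbers, the omitted-Z estimate is near 1.8, the accurate-Z estimate is near 1.0, and the noisy-proxy estimate sits near their midpoint, 1.4, as the weighted-average result predicts.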

To show this mathematically, define:

Y: dependent variable

X: vector of right-hand-side variables other than the control variable being added to the regression

Z: scalar control variable, accurately measured

v: scalar noise added to the control variable to get the observed proxy for the control variable. Assumed uncorrelated with X, Y and Z.

Then, as the sample size gets large:

Define the following notation for the parts of the variance of Z and of the variance of Z+v that are orthogonal to X (that is, the parts that are unpredictable by X, and so represent additional signal from Z not already contained in X, plus the variance of the noise in the case of Z+v). One can call this “the unique variance of Z”:

I put a reminder of the partitioned matrix inversion formula at the top of this post. Using that formula, and the fact that the unique variance of Z is a scalar, one finds:

Thus, the OLS estimates are given by:

When only a noisy proxy for the statistical control variable is available (which is the situation 95% of the time), the formula becomes:

I claimed at the beginning of this post that the coefficients when using the noisy proxy for the statistical control variable were a weighted average of what one would get using only X on the right-hand side and what one would get using accurately measured data on Z. Note that what one would get using only X on the right-hand side of the equation is exactly what one would get in the limit as the variance of the noise added to Z (which is Var(v)) goes to infinity. So adding a very noisy proxy for Z is almost like leaving Z out of the equation entirely.

The weight can be interpreted from this equation:

weight = (unique variance of Z) / (unique variance of Z + Var(v))

As noted at the beginning of the post, the right notion of the signal variance is the unique variance of the accurately measured statistical control variable. The noise variance is exactly what one would expect: the variance of v.
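The signal/(signal + noise) interpretation of the weight can itself be checked by simulation. The sketch below (illustrative numbers and variable names, not from any dataset) backs out the implied weight from three regressions, one with Z, one with only X, and one with a noisy proxy, and compares it to the predicted signal/(signal + noise) for several noise variances:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)   # unique (X-orthogonal) variance of Z is 1
y = x + z + rng.normal(size=n)

def coef_on_x(control=None):
    """OLS coefficient on x, optionally with one control variable added."""
    cols = [np.ones(n), x] + ([] if control is None else [control])
    return np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0][1]

b_none, b_true = coef_on_x(), coef_on_x(z)

results = []
for var_v in [0.25, 1.0, 4.0]:
    proxy = z + rng.normal(scale=np.sqrt(var_v), size=n)
    # How far did the proxy move the estimate toward the accurate-Z estimate?
    implied = (coef_on_x(proxy) - b_none) / (b_true - b_none)
    predicted = 1.0 / (1.0 + var_v)   # signal / (signal + noise), signal = 1
    results.append((implied, predicted))
    print(f"Var(v) = {var_v}: implied weight {implied:.3f}, predicted {predicted:.3f}")
```

As the noise variance grows, the implied weight falls toward zero, matching the point above that a very noisy proxy is almost like leaving Z out entirely.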

I have established what I claimed at the beginning of the post.

Some readers may feel that the limitation of Z to a single scalar variable is a big one. One can generalize the results to more statistical control variables. First, the results apply when adding many statistical control variables or their proxies one at a time, sequentially. Second, one can show that if the parts of Z1 and Z2 that are orthogonal to X are themselves orthogonal to each other, then the effects of adding Z1 and Z2 are additive. Third, if one has a set of correlated statistical control variables or their proxies to add, one can (A) transform units so the noise variance looks the same for each of these additional variables or their proxies (sphericalizing the noise), (B) orthogonalize relative to X, then (C) find the principal components of the remainder of these variables or their proxies (which will have the same eigenvectors because of the sphericalization of the noise), and note that the effects of each of the principal components are now additive.

Conclusion: Almost always, one has only a noisy proxy for a statistical control variable. Unless you use a measurement error model with this proxy, you will not be fully controlling for the underlying statistical control variable; you will only be partially controlling for it. Even if you do not have enough information to fully identify the measurement error model, you must think about that measurement error model and report a range of possible estimates based on different assumptions about the variance of the noise.
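Reporting such a range takes only a few lines of arithmetic. The sketch below uses purely hypothetical numbers: given the estimate with the noisy proxy included, the estimate with the control omitted, and the proxy's unique variance (all observable), each assumed noise variance implies an estimate of what fully controlling would give.

```python
# Hypothetical regression output for the coefficient of interest:
b_proxy = 1.4   # estimate with the noisy proxy included
b_none = 1.8    # estimate with the control variable omitted entirely
u_proxy = 2.0   # unique variance of the proxy net of the other regressors
                # (observable: residual variance from regressing the proxy on X)

# Under classical measurement error, signal = u_proxy - Var(v), so the
# weight is (u_proxy - var_v) / u_proxy.  Inverting the weighted average
# gives the fully-controlled estimate implied by each assumed Var(v).
for var_v in [0.0, 0.5, 1.0, 1.5]:
    weight = (u_proxy - var_v) / u_proxy
    b_implied = b_none + (b_proxy - b_none) / weight
    print(f"assumed Var(v) = {var_v}: implied fully-controlled estimate {b_implied:.2f}")
```

The larger the assumed noise variance, the farther the implied fully-controlled estimate lies from the naive proxy estimate; reporting the whole range makes the sensitivity to the measurement-error assumption transparent.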

Remember that any departure from the absolutely correct theoretical construct can count as noise. For example, one might think one has a totally accurate measure of income, but income is really acting as a proxy for a broader range of household resources. In that case, income is a noisy proxy for the household resources that were the correct theoretical construct.

I strongly encourage everyone reading this to vigorously criticize any researcher who claims to be statistically controlling for something simply by putting a noisy proxy for that thing in a regression. This is wrong. Anyone doing it should be called out, so that we can get better statistical practice and get scientific activities to better serve our quest for the truth about how the world works.

Here are links to other posts that touch on statistical issues:

The Carbohydrate-Insulin Model Wars

Writing on diet and health, I have been bemused to see how much scientific heat has been generated over whether a lowcarb diet leads people to burn more calories, other things equal. It is an interesting question, because it speaks to whether, in the energy balance equation

weight gain (in calorie equivalents) = calories in - calories out

the calories out are endogenous to what is eaten rather than simply being determined directly by conscious choices about how much to exercise.

My own view is that, in practice, the endogeneity of calories in to what is eaten is likely to be a much more powerful effect than the endogeneity of calories out to what is eaten. Metabolic ward studies are good at analyzing the endogeneity of calories out, but by their construction, abstract from any endogeneity of calories in that would occur in the wild by tightly controlling exactly what the subjects of the metabolic ward study eat.

The paper flagged at the top by David Ludwig, Paul Lakin, William Wong and Cara Ebeling is the latest salvo in an ongoing debate about a metabolic ward study done by folks associated with David Ludwig (including David himself). Much of the discussion is highly technical and difficult for an outsider to fully understand. But here is what I did manage to glean:

  1. Much of the debate is arising because the sample sizes in this and similar experiments are too small. I feel the studies that have been done so far amply justify funding for larger experiments. I would be glad to give input on my opinions about how such experiments could be tweaked to deliver more powerful and more illuminating results.

  2. One of the biggest technical issues beyond lower-than-optimal power involves how to control statistically for weight changes. Again, it is not so easy to fully understand all the issues in the time it is appropriate for me to devote to a single blog post, but I think weight changes need to be treated as an indicator of the amount of fat burned, with a large measurement error due to what I have called “mass-in/mass-out” effects. (See “Mass In/Mass Out: A Satire of Calories In/Calories Out.”) Whenever a right-hand-side variable is measured with error relative to the exactly appropriate theoretical concept, a measurement error model is needed in order to get a consistent statistical estimate of the parameters of interest. I’ll write more (see “Adding a Variable Measured with Error to a Regression Only Partially Controls for that Variable”) about what happens when you try to control for something by using a variable afflicted with measurement error. (In brief, you will only be partially controlling for what you want to control for.)

  3. David Ludwig, Paul Lakin, William Wong and Cara Ebeling are totally correct in specifying what one should focus on as the hypothesis of interest:

Hall et al. set a high bar for the Carbohydrate-Insulin Model by stating that “[p]roponents of low-carbohydrate diets have claimed that such diets result in a substantial increase in … [TEE] amounting to 400–600 kcal/day”. However, the original source for this assertion, Fein and Feinman [18], characterized this estimate as a “hypothesis that would need to be tested” based on extreme assumptions about gluconeogenesis, with the additional qualification that “we [do not] know the magnitude of the effect.” An estimate derived from experimental data—and one that would still hold major implications for obesity treatment if true—is in the range of 200 kcal/day [3]. At the same time, they set a low bar for themselves, citing a 6-day trial [16] (confounded by transient adaptive responses to macronutrient change [3]) and a nonrandomized pilot study [5] (confounded by weight loss [8]) as a basis for questioning DLW methodology. Elsewhere, Hall interpreted these studies as sufficient to “falsify” the Carbohydrate-Insulin Model [19]—but they do nothing of the kind. Indeed, a recent reanalysis of that pilot study suggests an effect similar to ours (≈250 kcal/day) [20].

Translated, this says that a 200-calorie-a-day difference is enough to be interesting. (Technically, the authors say “kilocalories,” but dieters always call kilocalories somewhat inaccurately by the nickname “calories.”) That should be obvious. For many people, 200 calories would be around 10% of the total calories they would consume and expend in a day. If a 200-calorie-a-day difference isn’t obvious beyond statistical noise, a metabolic ward study is definitely underpowered and needs a bigger sample!

Conclusion. In conclusion, let me emphasize again that the big issue with the worst carbs is that they make people hungry again relatively quickly, so that they eat more. (See “Forget Calorie Counting; It's the Insulin Index, Stupid” for which carbs are the worst.) Endogeneity of calories in might be a bigger deal than endogeneity of calories out. Moreover, because it is difficult for the body to switch back and forth between burning carbs and burning fat, a highcarb diet makes it painful to fast, while a lowcarb, highfat diet makes it relatively easy to fast. And fasting (substantial periods of time with no food, and only water or unsweetened coffee and tea as drinks) is powerful both for weight loss and for many other health-enhancing effects.

Update: David Ludwig comments on Twitter:

Perhaps: “endogeneity of calories in to what is eaten is likely to be a much more powerful effect than the endogeneity of calories out to what is eaten.” But the latter is a unique effect predicted by CIM. And if CIM is true, both arise from excess calorie storage in fat cells.

For annotated links to other posts on diet and health, see:

Here are some diet and health posts on authors involved in the Carbohydrate-Insulin Model Wars:

John Locke Against Tyranny

The last five chapters of John Locke’s 2d Treatise on Government: Of Civil Government (XV–XIX) are an extended argument that the rule of tyrants is illegitimate and that the people are justified in overthrowing tyrants. The three chapters right before that (XII–XIV) lay out some of the things a ruler can appropriately do, providing a contrast to tyranny. The titles of my blog posts on these chapters provide a good outline of John Locke’s argument here. Take a look.

Chapter XII: Of the Legislative, Executive, and Federative Power of the Commonwealth

Chapter XIII: Of the Subordination of the Powers of the Commonwealth

Chapter XIV: Of Prerogative

Chapter XV: Of Paternal, Political, and Despotical Power, considered together

Chapter XVI: Of Conquest

Chapter XVII: Of Usurpation

Chapter XVIII: Of Tyranny

Chapter XIX: Of the Dissolution of Government

Links to posts on the earlier chapters of John Locke's 2d Treatise can be found here:

Posts on Chapters I–III: John Locke's State of Nature and State of War

Posts on Chapters IV–V: On the Achilles Heel of John Locke's Second Treatise: Slavery and Land Ownership

Posts on Chapters VI–VII: John Locke Against Natural Hierarchy

Posts on Chapters VIII–XI: John Locke's Argument for Limited Government

How Negative Interest Rates Affect the Economy

Recently, I had an email query from a journalist about negative interest rates—asking in particular about how they would affect the economy. In answering, I was mindful of some of the criticisms that have been made of negative interest rates as a policy tool. I thought my readers might be interested in what I wrote, even though it didn’t make it into the newspaper article. Here it is:

Other countries have cut rates to as low as -.75%. From that experience, we know that going to negative rates as low as -.75% works just like any other cut in the Fed's target rate. Potential issues such as strains on bank profits or large-scale paper currency storage may arise at rates below -.75%, but not at mildly negative rates.

Rate cuts work in every corner of the economy to encourage investment and consumption spending both by shifting the balance of power in favor of those most apt to spend and by giving an incentive to spend. In the case of negative rates, the carrot for those who spend is coupled with a stick for those sitting on a pile of cash they resist putting to good use.

Other than banks that worry about things that haven't yet happened anywhere, those who have higher-rates-are-good ideologies, and those who simply don't understand negative rates, complaints about negative interest rates are likely to come from those who don't want to spend.

Feel free to quote this.

My sentence

Rate cuts work in every corner of the economy to encourage investment and consumption spending both by shifting the balance of power in favor of those most apt to spend and by giving an incentive to spend.

is shorthand for what I say in these posts about the transmission mechanism for negative interest rates:

Many of the details I give about the experience with negative interest rates so far are taken from my new IMF Working Paper with Ruchir Agarwal: “Breaking Through the Zero Lower Bound” (pdf) (or on the IMF website).

I have an annotated bibliography of what I have written on negative interest rate policy at this link.

The Four Food Groups Revisited

Image created by Miles Spencer Kimball. I hereby give permission to use this image for anything whatsoever, as long as that use includes a link to this blog. In this blog post I question my assertion of half a century ago that what is depicted above makes for a good diet.


In elementary school, back in the 1960’s, I drew an illustration of what were then called “The Four Food Groups” as a school assignment. Historically, the formulation of the recommendation to eat a substantial amount from each food group each day may have owed as much to agricultural and broader food-business lobbying as to nutrition science. But those recommendations were not as much at variance with reasonably informed nutritional views back then as they are now. Let me give you my view on these four food groups.

Milk Group

I consume quite a bit of milk and cheese, but only because I love dairy. I think of milk and cheese as being somewhat unhealthy. There are two issues. One is the issue that animal protein might be especially good fuel for cancer cells. I wrote about that in these posts:

The other is that the majority of milk sold is from cows with a mutation that makes a structurally weak protein from which a truly nasty 7-amino-acid peptide breaks off. Fortunately, that issue can be largely avoided by eating goat and sheep cheese rather than cow cheese and by drinking A2 milk (which I just saw at Costco yesterday; I have seen it for a while at Whole Foods and, in my area, at Safeway). I wrote about that in these posts:

If you do consume milk, I have some advice here to drink whole milk. (100 calories worth of whole milk will be more satiating than 100 calories of skim milk.)

As for cream and butter, since they have relatively little milk protein, and are quite satiating, I think of them as being some of the healthiest dairy products, though their calories do count in these circumstances:

To preview what I will say again below, in bread and butter, it is the bread that is unhealthy, not the butter. And, in the extreme, eating butter straight is a lot better than the many ways we find to almost eat sugar straight. Eating sugar will make you want more and more and more. At least eating butter straight is self-limiting because butter is relatively satiating.

Meat Group

Meat has the same problem milk does: animal protein typically being abundant in amino acids such as glycine that are especially easy for even metabolically damaged cancer cells to burn as fuel.

Also, because of the protein content, many types of meat ramp up insulin somewhat, as you can see from the tables in “Forget Calorie Counting; It's the Insulin Index, Stupid.” David Ludwig points out that meat often also raises glucagon, which is a little like an anti-insulin hormone, but in my own experience, eating beef, for example, tends to leave me somewhat hungry afterwards, which is consistent with the insulin effect being significantly stronger. (Of course, since almost all the meat I eat is at restaurant meals once—or occasionally twice—a week, it might be something other than the meat stimulating my insulin.)

I do regularly put one egg in “My Giant Salad.” That at least doesn’t ramp up my insulin levels too much. For why that matters, see “Obesity Is Always and Everywhere an Insulin Phenomenon.” However, the reason I don’t put in two eggs is that I am worried about too much animal protein.

Sometimes nuts are included in the meat group. I view true nuts as very healthy—an ideal snack on the go if you are within your eating window. See “Our Delusions about 'Healthy' Snacks—Nuts to That!”

I haven’t made up my mind about beans—which are also sometimes included in the meat group. They are often medium-high on the insulin index, just as beef is. And there are worries based on Steven Gundry’s hypotheses about us and our microbiome not being fully adapted to New World foods. See:

As the Wikipedia article “Beans” currently says:

Most of the kinds commonly eaten fresh or dried, those of the genus Phaseolus, come originally from the Americas, being first seen by a European when Christopher Columbus, during his exploration of what may have been the Bahamas, found them growing in fields.

Fruit and Vegetable Group

Nutritionally, the fruit and vegetable group is really at least five very different types of food:

1. Vegetables with easily digested starches: Think potatoes here. Avoid them like the nutritional plague they are. Easily digested starches turn into sugar quite readily. Also think of peas—and corn, if you count it as a vegetable rather than as a quasi-grain.

2. Vegetables with resistant starch: A reasonable amount of these is OK. I am thinking of green bananas and sweet potatoes. (Beans I discussed above.)

3. Nonstarchy vegetables: Very healthy. Here is a list of nonstarchy vegetables from Wikipedia:

4. Botanical fruits: Tomatoes, cucumbers, eggplant, squash and zucchini are botanically fruits that we call vegetables for culinary purposes. Many of these botanical fruits that we eat are New World foods to which Steven Gundry’s worrisome hypotheses about our inadequate adaptation to New World foods would apply. So I try to eat these only sparingly. However, as I write in “Reexamining Steve Gundry's ‘The Plant Paradox’,” the evidence for tomatoes—though, perhaps strangely to you, more positive for cooked tomatoes than raw tomatoes—is so positive that it is probably good to continue eating them freely.

On both identifying botanical fruits and identifying good vegetables with resistant starches, Steven Gundry’s lists of good and bad foods according to his lights (which include other particular slants he has on things as well) are quite helpful. You might want to take a more positive attitude toward botanical fruits than Steven Gundry, but it is good to know which vegetables are really botanical fruits to see if you notice any reaction when you eat them. A lot of the clinical experience on which Steven Gundry bases his advice is experience with patients who have autoimmune problems, so I would advise adhering to Steven Gundry’s theories more closely if you have autoimmune problems. It is a worthy experiment, in which you are exactly the relevant guinea pig.

5. True fruits: For true fruits, the problem is that sugar is still sugar, even if it is the fructose in fruit that would be extremely healthy if only it were sugar-free. Because of their sugar content, true fruits should be eaten only sparingly. I discuss “The Conundrum of Fruit” in a section of “Forget Calorie Counting; It's the Insulin Index, Stupid.”

The bottom line is that even vegetables and fruit—which have gotten a very good reputation—have both good and bad and borderline foods among them.

Breads and Cereals

Avoid this group. Just look at the tables in “Forget Calorie Counting; It's the Insulin Index, Stupid” and “Using the Glycemic Index as a Supplement to the Insulin Index.” Also, as an additional mark against ready-to-eat breakfast cereal, see what I say in “The Problem with Processed Food.” Cutting out sugar and foods in this category, along with starchy vegetables, is the key first step to weight loss and better health. On that see:

If you avoid all processed foods made with grains and avoid corn and rice (including brown rice), there may be some other whole grains that are OK. Based on the insulin kicks indicated in “Forget Calorie Counting; It's the Insulin Index, Stupid,” I consider steel-cut plain oatmeal as one of the best whole foods to risk, and the only one I trust that is a reasonably common food in the US.

There is substantial debate here. Some experts are more positive about whole grains. But, given the current state of the evidence, I think it is much safer to lean toward the nonstarchy vegetables that almost all experts think are quite healthy (if one leaves aside the botanical fruits).

Ideas Missing from the ‘Four Food Groups’ Advice

Some key bits of advice are simply missing from the discussion of the four food groups. For example, there is fairly wide agreement that high quality olive oil is quite healthy. It goes well with nonstarchy vegetables! Many people like their olive oil with a little vinegar in it, which is good too.

The biggest idea missing from the ‘Four Food Groups’ advice is that evidence is rolling in that when you eat is, if anything, even more important than what you eat for good health. If you are an adult in good health and not pregnant, you should try to restrict your eating to no more than an 8-hour eating window each day. (That probably means skipping breakfast, which is just as well, since most of the typical American breakfast foods these days are quite unhealthy.) But you can ease into that by working first at getting things down to a 12-hour eating window. We simply aren’t designed to have food all the time; that was a pretty rare situation for our distant ancestors. Our bodies need substantial breaks from food in order to refurbish everything. Here are just a few of my posts on that:

For annotated links to other posts on diet and health, see:

Will Your Uploaded Mind Still Be You? —Michael Graziano

On August 18, 2019, I posted “On Being a Copy of Someone's Mind.” I was intrigued to see from the teaser for Michael Graziano’s new book Rethinking Consciousness: A Scientific Theory of Subjective Experience, published as an op-ed in the Wall Street Journal on September 13, 2019, that Michael Graziano has been thinking along similar lines.

Michael explains the process of copying someone’s mind (obviously not doable by human technology yet!) this way:

To upload a person’s mind, at least two technical challenges would need to be solved. First, we would need to build an artificial brain made of simulated neurons. Second, we would need to scan a person’s actual, biological brain and measure exactly how its neurons are connected to each other, to be able to copy that pattern in the artificial brain. Nobody knows if those two steps would really re-create a person’s mind or if other, subtler aspects of the biology of the brain must be copied as well, but it is a good starting place.

Michael nicely describes the experience of the copy, which, following Robin Hanson, I call an “em” (short for “brain emulation”) in “On Being a Copy of Someone's Mind”:

Suppose I decide to have my brain scanned and my mind uploaded. Obviously, nobody knows what the process will really entail, but here’s one scenario: A conscious mind wakes up. It has my personality, memories, wisdom and emotions. It thinks it’s me. It can continue to learn and remember, because adaptability is the essence of an artificial neural network. Its synaptic connections continue to change with experience.

Sim-me (that is, simulated me) looks around and finds himself in a simulated, videogame environment. If that world is rendered well, it will look pretty much like the real world, and his virtual body will look like a real body. Maybe sim-me is assigned an apartment in a simulated version of Manhattan, where he lives with a whole population of other uploaded people in digital bodies. Sim-me can enjoy a stroll through the digitally rendered city on a beautiful day with always perfect weather. Smell, taste and touch might be muted because of the overwhelming bandwidth required to handle that type of information. By and large, however, sim-me can think to himself, “Ah, that upload was worth the money. I’ve reached the digital afterlife, and it’s a safe and pleasant place to live. May the computing cloud last indefinitely!”

Then Michael goes on to meditate on the fact that there can then be two of me and whether that makes the copy not-you. Here is what I said on that, in “On Being a Copy of Someone's Mind”:

On the assumption that experience comes from particles and fields known to physics (or of the same sort as those known to physics now), and that the emulation is truly faithful, there is nothing hidden. An em that is a copy of you will feel that it is a you. Of course, if you consented to the copying process, an em that is a copy of you will have that memory, which is likely to make it aware that there is now more than one of you. But that does NOT make it not-you.

You might object that the lack of physical continuity makes the em copy of you not-you. But our sense of physical continuity with our past selves is largely an illusion. There is substantial turnover in the particular particles in us. Similarity of memory—memory now being a superset of memory earlier, minus some forgetting—is the main thing that makes me think I am the same person as a particular human being earlier in time.

… after the copying event these two lines of conscious experience are isolated from one another as any two human beings are mentally isolated from one another. But these two consciousnesses that don’t have the same experience after the split are both me, with a full experience of continuity of consciousness from the past me. If one of these consciousnesses ends permanently, then one me ends but the other me continues. It is possible to both die and not die.

The fact that there can be many lines of subjectively continuous consciousness that are all me may seem strange, but it may be happening all the time anyway given the implication of quantum equations taken at face value that all kinds of quantum possibilities all happen. (This is the “Many-Worlds Interpretation of Quantum Mechanics.”)

In other words, it may seem strange that there could be two of you, but that doesn’t make either of them not-you.

Michael points out that the digital world the copy lives in (if the copy is not embedded in a physical robot) will interact with the physical world we live in:

He may live in the cloud, with a simulated instead of a physical body, but his leverage on the real world would be as good as anyone else’s. We already live in a world where almost everything we do flows through cyberspace. We keep up with friends and family through text and Twitter, Facebook and Skype. We keep informed about the world through social media and internet news. Even our jobs, some of them at least, increasingly exist in an electronic space. As a university professor, for example, everything I do, including teaching lecture courses, writing articles and mentoring young scientists, could be done remotely, without my physical presence in a room.

The same could be said of many other jobs—librarian, CEO, novelist, artist, architect, member of Congress, President. So a digital afterlife, it seems to me, wouldn’t become a separate place, utopian or otherwise. Instead, it would become merely another sector of the world, inhabited by an ever-growing population of citizens just as professionally, socially and economically connected to social media as anyone else.

He goes on to think about whether ems, derived from copying of minds, would have a power advantage over flesh-and-blood humans, and to wonder whether a digital afterlife has the same kind of motivational consequences as telling people about heaven and hell. What he misses is Robin Hanson’s incisive economic analysis in The Age of Em: because of the low cost of copying and running ems, there could easily be trillions of ems and only billions of flesh-and-blood humans once copying of minds gets going. There could be many copies derived from a single flesh-and-blood human, each with different experiences after the copying event. (There would be the most copies of those people who are the most productive.) I think Robin’s analysis is right. That means that ems would be by far the most common type of human being. Fortunately, they are likely to have a lot of affection for the flesh-and-blood human beings they were copied from and for the other flesh-and-blood human beings they remember from their past as flesh-and-blood human beings. (However, some ems might be copied from infants and spend almost all of their remembered life in cyberspace.)

Michael also writes about how brain emulation could make interstellar travel possible. He talks about many ems keeping each other company on an interstellar journey, but the equivalent of suspended animation is a breeze for ems, so there is no need for ems to be awake during the journey at all. Having many backups of each type of em can make the journey safer, as well. The other thing that makes interstellar travel easier is that, upon arrival in another solar system or other faraway destination, the robots that some of the ems are embedded in to take any necessary physical actions can be quite small.

But while interstellar travel becomes much easier with ems, Robin Hanson argues that the bulk of em history would take place before ems had a chance to get to other solar systems: it is likely to be easy and economically advantageous to speed up many ems to a thousand or even a million times the speed of flesh-and-blood human beings. At those speeds, the time required for interstellar travel seems subjectively much longer. Ems would want to go to the stars for the adventure and for the honor of being founders and progenitors in those new environments, but in terms of subjective time, a huge amount of time would have passed for the ems back home on Earth before any report from the interstellar travelers could ever be received.
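The magnitudes here are striking. Here is a minimal sketch of the subjective-time arithmetic, using illustrative numbers entirely of my own choosing (a star roughly 4.25 light-years away, travel at a tenth the speed of light, ems sped up 1,000-fold), not figures from The Age of Em:

```python
# All three numbers below are illustrative assumptions, not from the book.
LIGHT_YEARS_TO_STAR = 4.25     # roughly the distance to Proxima Centauri
SPEED_AS_FRACTION_OF_C = 0.1   # travel at one tenth the speed of light
EM_SPEEDUP = 1_000             # ems running 1,000x human subjective speed

# Objective (calendar) years in transit, one way, ignoring acceleration.
objective_years = LIGHT_YEARS_TO_STAR / SPEED_AS_FRACTION_OF_C

# Subjective years that pass for sped-up ems back on Earth while waiting
# just for the travelers to arrive (not even counting the report back).
subjective_years_at_home = objective_years * EM_SPEEDUP

print(f"{objective_years:.1f} objective years in transit")
print(f"{subjective_years_at_home:,.0f} subjective years for ems at home")
```

At a thousandfold speedup, a few decades of travel correspond to tens of thousands of subjective years for the ems who stay behind, which is why most em history would unfold before any interstellar report came in.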

Robin Hanson argues that, while many things would be strange to us in The Age of Em, most ems would think “It’s good to be an em.” (Of course, their word for it would likely be different from “em.”) I agree. I think few ems would want to be physical human beings living in our age. Just as the olden days look like the bad old days from our perspective now (at least if that perspective is enlightened by a reasonably accurate knowledge of history), our age would look like the bad old days to ems. I, for one, would love to experience The Age of Em as an em. It opens up so many possibilities for life!

I know that an accurate copy of my mind would feel it was me, and I, the flesh-and-blood Miles, consider any such being that comes to exist to be me. In Robin’s economic analysis, the ease of copying ems leads to stiff competition among workers, so even if a copy of me were there in The Age of Em, I wouldn’t expect there to be many copies of me. I very much doubt that I would be among the select few who had billions of copies, or even a thousand copies. But I figure that, at worst, a handful of copies could make a living as guinea pigs for social science research, where diversity in the human beings that subjects are copied from makes things interesting. And if the interest rates in The Age of Em are as high as Robin thinks they would be, then by strenuous saving, the ems that are me might be able to save up enough money after a time of working to not have to work any more.
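The saving logic in that last sentence is just compound interest. A minimal sketch with purely hypothetical numbers of my own (a 10% annual return, saving one unit of income per year while working, needing half a unit per year to live on), not Robin Hanson's figures:

```python
# Purely hypothetical assumptions for illustration:
ANNUAL_RETURN = 0.10    # assumed high Age-of-Em interest rate
ANNUAL_SAVING = 1.0     # units of income saved per year while working
ANNUAL_SPENDING = 0.5   # units of income needed per year to live on

wealth = 0.0
years_worked = 0
# Work and save until investment income alone covers annual spending.
while wealth * ANNUAL_RETURN < ANNUAL_SPENDING:
    wealth = wealth * (1 + ANNUAL_RETURN) + ANNUAL_SAVING
    years_worked += 1

print(years_worked)      # years of work needed under these assumptions
print(round(wealth, 2))  # wealth accumulated by then
```

The higher the interest rate, the sooner investment income overtakes spending, which is why high Age-of-Em interest rates would make early retirement feasible even for hard-pressed em workers.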

Measuring Learning Outcomes from Getting an Economics Degree

This post both gives a rundown of what I view as key concepts in economics for undergraduates and discusses how to measure whether students have learned them.

I am proud of my Economics Department here at the University of Colorado Boulder. We have been talking about measuring learning outcomes quite seriously. We have three measurement tools in mind:

  1. A department-wide quiz that will be something like 15 multiple choice questions administered in every economics class. Although we are still deciding exactly what it will look like, here are some possible details we have talked about so far:

    • We need to do it in classes for logistical reasons: we have no other proctoring machinery at that scale.

    • We want it to be a low-stakes test because we can better measure what the students have in long-term memory if we give no incentive to study for it. Therefore, there will be no consequences for getting the wrong answer.

    • We are deferring any differentiation of the department-wide quiz by which class a student is taking. Initially at least, it will be the same in all economics classes and a mandated part of all economics classes. (One exception: we have not yet thought hard about whether the department-wide quiz should be given in the Principles classes. The argument for is that it would be a learning experience for the students. Also, it would provide some baseline data. But our primary data-collection interest is in learning about our majors.)

    • We haven’t discussed this, but the door is open to having some penalty for not taking a certain number of these quizzes before graduation; that is basically a penalty for low attendance at class. The penalty could simply be going to a special administration of all the quizzes that were given during their years in the major. This make-up quiz would provide valuable evidence about selection in who attends classes.

    • We are OK with students taking the same quiz more than once because taking a test is, itself, a learning experience. See “The Most Effective Memory Methods are Difficult—and That's Why They Work.”

    • The questions will change each semester to rotate through different concepts, so that by the time a student is finished with an Economics degree, we will know how they fared on a reasonably wide range of concepts. Of course, how their knowledge deepens from year to year is also of interest, so some questions are likely to be asked in a higher fraction of semesters.

  2. An exit survey will ask students about their experience and measure things that a quiz can’t.

    • In addition to asking about students’ experience with their classes, we can ask about what job they have next.

    • We also talked about the possibility of having at least one essay question on the substance of economics on this survey. (We would then pay some of our graduate students to grade it.) As we are thinking of things now, this, too, would be no-stakes. Nothing would happen to the student if they had a bad essay.

    • In order to get a good response rate, we hope to make completing the exit survey a requirement for graduation.

  3. Instructors will report students’ involvement in activities that exercise integrative skills. We hope to get data at the student level. For example, with each of these subdivided into “individually” and “in a group”:

    • writing

    • giving presentations

    • analyzing data

    • interpreting data analysis done by others

For the quiz, since I am the only macroeconomist on the committee, I have been thinking about macroeconomic concepts I would want students to know, as well as microeconomic concepts that are especially important to macro where there is some variance in what students are taught. A department-wide quiz meant to measure students’ long-run learning has two somewhat distinct purposes:

  • to see if students are learning what we are trying to teach them.

  • to nudge faculty to try to teach students important concepts that may be neglected or distorted.

If there is a reason for concern that a concept might be getting neglected in the overall curriculum, it doesn’t need to be quite as important a concept to warrant inclusion in the quiz. If it is a concept to which we know a lot of teaching effort is being devoted, then it has to be a very important concept to be included.

Here are some of the concepts on my wishlist to test students’ knowledge of in the department-wide quizzes. First, here are some concepts I worry may be getting neglected or distorted:

  1. Positive interest rates are when, overall, the borrower is paying the lender for the use of funds. Negative interest rates are when, overall, the lender is paying the borrower for taking care of funds.

  2. Central banks like the Fed change interest rates not only by changing the money supply, but also by directly changing the interest rates they pay and the interest rates at which they lend.

  3. Ordinarily, the main job of central banks like the Fed is to keep the economy at the level of output and employment that leaves inflation steady.

  4. When central banks cut interest rates, it stimulates the economy (a) by shifting the balance of power in terms of what they can afford toward those who want to borrow and spend, relative to those who want to save instead of spend, and (b) by giving everyone, both borrowers and lenders, an extra incentive to spend if they can afford to. Raising interest rates does the reverse.

  5. Measures other than interest rate changes that affect consumption, investment or government purchases can substitute for interest rate changes in stimulating or reining in spending. These may be important instruments of policy if either (a) they act faster than interest rate changes or (b) interest rates are a matter of concern for reasons beyond their effect on overall spending.

  6. A key determinant—many economists argue the key determinant—of the balance of trade is the decisions of the domestic government, firms and households about whether and how much to buy of foreign assets (assets denominated in another currency) and the decisions of foreign governments, firms and households about whether and how much to buy of domestic assets.

  7. “Capital requirements” require banks to be getting a certain minimum fraction of their funding from stockholders as opposed to from borrowing. Thus, they could also be called “equity requirements.” Many of the economists concerned about financial stability argue that high levels of bank borrowing that led to low fractions of stockholder equity contributed to the Financial Crisis in 2008 and that higher capital (equity) requirements are an important measure to reduce the chances of another serious financial crisis.

  8. The replication argument highlights the fact that any claim of decreasing returns to scale can be seen as one of three things: (a) a factor of production that is being held fixed, or is not scaling up along with everything else, (b) the price of a factor of production going up as production is expanded or (c) something like an integer constraint. See “There Is No Such Thing as Decreasing Returns to Scale.”

I have good reason to worry about the last, number 8. Both in formal classroom visits and informally as I glance in classrooms with open doors, I know enough about what is being taught in the microeconomics classes to think that (in large measure because of the nature of most micro textbooks) they may talk about decreasing returns to scale without being clear about the replication argument. I won’t belabor this here because I have made my case in “There Is No Such Thing as Decreasing Returns to Scale.” But it is something I feel strongly about.

As for important concepts that I have reason to think we put a lot of effort into teaching, but need to see if our efforts are working, let me go with the ten concepts Greg Mankiw lays out in the first chapter of his Principles of Economics textbook. Greg’s words are in bold, my commentary on each one follows.

People face tradeoffs. This is truly fundamental to economics. I can’t tell you how many times it has helped me think through an issue for a blog post to say “The pluses of this policy are …. The minuses are ….” Besides helping students understand economics, the principle that there are pluses and minuses to almost everything will help them be fair-minded. It means you should listen respectfully to others, since even if you ultimately decide they are wrong on a decision overall, they may help you identify a minus to the decision you wanted to recommend, which may help you identify what you think is a better choice than your initial idea. (That better choice still may not be the choice they want.)

The cost of something is what you give up to get it. I taught this as thinking clearly of two different, mutually exclusive choices and laying out every aspect in which the two situations are different. The cost of one choice is not being able to make other mutually exclusive choices. Part of the art of economics is identifying which other choices should be compared to any given choice.

Rational people think at the margin. I am not altogether happy with Greg’s use of the word “rational” here. The trouble with the word “rational” is that it has too many meanings. It is fine in context. But in a very broad-ranging discussion, “rational” needs to be replaced by “obeys this axiom.” In economics, there are at least as many meanings to the word “rational” as there are attractive axioms for decision-making. For the students, I think it would be much better to say: “For any choice, often some of the most important alternative choices to compare it to are choices that are just a little different. This is called ‘thinking at the margin.’ Thinking at the margin allows us to use the power of calculus, though the basic ideas can be shown graphically, without calculus.”

People respond to incentives. In practical policy discussions, this principle is a great part of the value-added from having an economist in the room. A big share of the practical value of this principle is in identifying side-effects of policies. Economists are good at pointing out the (often unintended) incentives of a policy and the side effects those incentives will create. Intended incentives of policies are also important, but economists love thinking about incentives so much, they may overestimate the size of the effects of those intended incentives in the interval before solid evidence about the effect sizes for those incentives is available.

Trade can make everyone better off. Here, the economic concept is every bit as much about transactions within a country—trade between individuals—as about transactions between countries. Logically, those are very similar. The good side of trade definitely needs to be taught. But pecuniary externalities are real. A and B trading can make it so I get a worse deal in trading with A. This doesn’t take away the principle that trade is vital for one’s welfare. (Barring a divided self) I always want to be able to trade freely myself. But to get a better deal in my own trading, I might want to interfere with other people trading. An interesting example of this insight is the minimum wage. It doesn’t serve any purpose for me to be subject to the minimum wage myself; I can already reject job offers whose wage is lower than I am willing to accept. But it might benefit me personally for other people to be subject to the minimum wage, so that they don’t compete with me and bid down the wage I can get.

Markets are usually a good way to organize economic activity. Here is my take on that, drawn from a cutout from my column “America's Big Monetary Policy Mistake: How Negative Interest Rates Could Have Stopped the Great Recession in Its Tracks.”

John von Neumann, who revolutionized economics by inventing game theory (before going on to help design the first atom bomb and lay out the fundamental architecture for nearly all modern computers), left an unfinished book when he died in 1957: The Computer and the Brain. In the years since, von Neumann’s analogy of the brain to a computer has become commonplace. The first modern economist, Adam Smith, was unable to make a similarly apt comparison between a market economy and a computer in his books The Theory of Moral Sentiments and The Wealth of Nations, because they were published, respectively, in 1759 and 1776—more than 40 years before Charles Babbage designed his early computer in 1822. Instead, Smith wrote in The Theory of Moral Sentiments:

“Every individual … neither intends to promote the public interest, nor knows how much he is promoting it … he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention.”

Now, writing in the 21st century, I can make the analogy between a market economy and a computer that Adam Smith could not. Instead of transistors modifying electronic signals, a market economy has individuals making countless decisions of when and how much to buy, and what jobs to take, and companies making countless decisions of what to buy and what to sell on what terms. And in place of a computer’s electronic signals, a market economy has price signals. Prices, in a market economy, are what bring everything into balance.

Governments can sometimes improve market outcomes. Here, one of the key practical points is that a blanket statement or attitude that government regulations are bad or that government regulations are good is unlikely to hold water. The devil is in the details. Enforcing property rights is fundamental to a free-market economy—and what it takes to enforce property rights can be viewed as a form of regulation. On the other hand, it is easy to find examples of bad regulations. One good way to identify bad regulations is to look for things that both (a) tell people they can’t do something they want to do, and therefore reduce freedom (most regulations do this), and (b) reduce the welfare of the vast majority of people. Many regulations are like this. They can exist because they benefit a small slice of people who influence the government to impose those regulations. (And it is easy to find regulations for which any reasonable way of totaling up the benefit to that slice minus the cost to the vast majority of people will leave the regulation looking bad. For example, the dollar benefits and costs may make the regulation look bad, and it may benefit relatively rich people at the expense of poorer people.)

A country’s standard of living depends on its ability to produce goods and services. Here the thing I want to add as a key concept for the students is that the ability to produce goods and services has increased dramatically in the last 200 to 250 years, and that technological progress continues now at a rate that rivals the rate of improvement during the Industrial Revolution. Many voters are grumpy in part because the ability to produce goods and services is not improving as rapidly as it did in the immediate postwar era from 1947–1973 and in the brief period from 1995–2003, but it is still improving substantially each decade. The late Hans Rosling has a wonderful four-minute video I run in class for my students showing the dramatic improvements in per capita income and health across the world in the last century.

Prices rise when the government prints too much money. My main perspective on this is that I want students to know that central banks are especially responsible for the unit of account function of money. If they do a bad job, money serves as a bad unit of account. I have a blog post giving my views on “The Costs of Inflation.” One small point: I always stress to students that the Fed doesn’t literally print money. What it does is create money electronically—as a higher number in an account on a computer. That higher number in an account on a computer then gives the account holder the right to ask for more paper currency. That paper currency is printed by the Bureau of Engraving and Printing. Each regional Federal Reserve Bank gets enough paper currency from the Bureau of Engraving and Printing to take care of any anticipated requests for paper currency by those whose “reserve accounts” or other accounts with the Fed give them the right to ask for paper currency.

Society faces a short-run tradeoff between inflation and unemployment. Under this heading, I would include basic concepts about aggregate demand—how much people, firms and the government want to spend. When prices or wages are sticky, people wanting to spend more leads to more getting produced. When more is being produced, unemployment goes down. In a theoretical world unlike the one we live in, in which prices and wages were perfectly flexible, people wanting to spend more would lead to higher prices, with no change in what gets produced. That theoretical world is relevant because it is probably a reasonable description of what happens in the long run. The transition from what happens in the short run to what happens in the long run means that higher aggregate demand (higher desired spending) will raise output in the short run and raise prices in the long run, relative to what they would have been otherwise. How fast prices rise is an area of debate.

I am excited about measuring what students have learned in a department-wide way, and firmly of the view that what we should most hunger to measure is how much stays learned even years after students have taken a class. I am confident that the truth about how much students have learned in the long-term sense will be bracing and will lead to a more realistic sense of how many concepts can be taught and a greater interest in designing and implementing more engaging ways to help students learn the very most important concepts in economics.

I have two columns very closely related to this post:

You also might be interested in my move to the University of Colorado Boulder:

Increasing Returns to Duration in Fasting

I am in the middle of my annual anti-cancer fast. (See “My Annual Anti-Cancer Fast.”) That has led me to think about returns to scale—in this case, returns to duration—in fasting.

Cautions about Fasting.

Before I dive into the technical details, let me repeat some cautions about fasting. I am not going to get into any trouble for telling people to cut out added sugar from their diet, but there are some legitimate worries about fasting. Here are my cautions in “Don't Tar Fasting by those of Normal or High Weight with the Brush of Anorexia”:

  • People definitely should not fast for more than 48 hours without first reading Jason Fung’s two books The Obesity Code (see “Obesity Is Always and Everywhere an Insulin Phenomenon” and “Five Books That Have Changed My Life”) and The Complete Guide to Fasting.

  • Those under 20, pregnant or seriously ill should indeed consult a doctor before trying to do any big amount of fasting.

  • Those on medication need to consult their doctor before doing much fasting. My personal nightmare as someone recommending fasting is that a reader who is already under the care of a doctor who is prescribing medicine might fail to consult their doctor about adjusting the dosage of that medicine in view of the fasting they are doing. Please, please, please, if you want to try fasting and are on medication, you must tell your doctor. That may involve the burden of educating your doctor about fasting. But it could save your life from a medication overdose.

  • Those who find fasting extremely difficult should not do lengthy fasts.

  • But, quoting again from “4 Propositions on Weight Loss”: “For healthy, nonpregnant, nonanorexic adults who find it relatively easy, fasting for up to 48 hours is not dangerous—as long as the dosage of any medication they are taking is adjusted for the fact that they are fasting.”

Let me add to these cautions: If you read The Complete Guide to Fasting you will learn that fasting more than two weeks (which I have never done and never intend to do) can lead to discomfort in readjusting to eating food when the fast is over. Also, for extended fasts, you need to take in some minerals/electrolytes; if you don’t, you might get muscle cramps. These are not that dangerous, but they are very unpleasant. What I do is simply take one SaltStick capsule each day.

Health Benefits of Fasting.

That said, appropriate fasting is a very powerful boost to health. See for example,

Let me be clear that I am talking about fasting by not eating food, but continuing to drink water (or tea or coffee without sugar). I haven’t come across any claim of a health benefit from not drinking water during fasting, as is urged for religious reasons in both Islamic and Mormon fasting. And there are many reasons to think that drinking a lot of water is good for health.

Fasting is also the central element in losing weight and keeping it off. I have often emphasized the importance of eating a low-insulin-index diet; the most important reason to eat a low-insulin-index diet is that it makes fasting easier. Indeed a famous experiment during World War II involved feeding conscientious objectors a small amount of calories overall in a couple of meals a day that were high on the insulin index. This caused enormous suffering. Don’t do this! In my experience, it is much easier to eat nothing than to eat a few high-insulin-index calories a day. The current top six posts in my bibliographic post “Miles Kimball on Diet and Health: A Reader's Guide” are clear about the importance of fasting:

Also, there is reason to think fasting can prevent cancer:

Increasing Returns to Duration in Fasting.

People differ in their tolerance for fasting. In “Forget Calorie Counting; It's the Insulin Index, Stupid” I talk in some detail about how to do a modified fast. Everything I say here about returns to duration applies to modified fasts as well as more complete fasts. The main difference is that a modified fast involves consuming some calories, whereas a complete fast involves consuming zero calories. While calories in/calories out thinking is quite unhelpful to people making incremental changes to the typical American diet, when your insulin level is as low as it is during an extended fast or a modified fast, calories in/calories out thinking is a better guide. (See “Maintaining Weight Loss” and “How Low Insulin Opens a Way to Escape Dieting Hell.”)

The reason I think fasting should have increasing returns to duration is all about glycogen. Glycogen is an energy-storage molecule in your muscles and liver. It is the quick-in, quick-out energy storage molecule, while body fat is slow-in, slow-out energy storage. In the first day or two of fasting, much of the energy you need will be drawn from your glycogen stores. It is only as your glycogen stores are run down that the majority of the energy you need will be drawn from your body fat.

Moreover, based on my own experience, I theorize that when you end your fast and resume eating, you will have an enhanced appetite in order to replenish your glycogen stores. By contrast, I think of the amount of body fat having a weaker effect on appetite. (Though likely weaker, it is an important effect. See “Kevin D. Hall and Juen Guo: Why it is so Hard to Lose Weight and so Hard to Keep it Off.”)

The bottom line of this view is that, for weight loss or maintenance of weight loss, the time it takes to deplete glycogen by fasting is like a fixed cost. Then, after the first couple of days, you should be able to burn something like .6 pounds of body fat per day while fasting. I got that figure by approximating the number of calories in a pound of body fat by 3600, and assuming 2160 calories burned per day, because those who regularly fast get an efficient metabolism (which I think has good anti-cancer properties). This online article says that the average American woman and man, who likely have relatively inefficient metabolisms, burn respectively 2400 and 3100 calories per day.
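The arithmetic behind that .6 pounds per day is straightforward; here it is spelled out, using the two figures just given (3600 calories per pound of body fat, 2160 calories burned per day):

```python
CALORIES_PER_POUND_OF_FAT = 3600  # approximation used above
CALORIES_BURNED_PER_DAY = 2160    # assumed daily burn with an efficient metabolism

pounds_of_fat_per_day = CALORIES_BURNED_PER_DAY / CALORIES_PER_POUND_OF_FAT
print(pounds_of_fat_per_day)  # 0.6
```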

The “fixed cost” of glycogen burning that substitutes at the beginning for body-fat burning may not literally happen all at the beginning. There is probably some fat-burning early on, but every calorie drawn from glycogen is a calorie not drawn from fat, so fat burned in the first couple of days is less than it is during later days. After a few days, glycogen reserves will be gone, so that all calories will have to come from fat burning. The theory I am propounding is that the glycogen reserves will bounce back fast when you resume eating, but the reduction in body fat will be a relatively long-lasting effect.
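The fixed-cost logic above can be made concrete with a stylized model. Suppose (these numbers are mine, purely for illustration) that the first two days of a fast burn mostly glycogen and contribute nothing to lasting fat loss, and that each day after that burns .6 pounds of fat. Then the average fat lost per day of fasting rises with the length of the fast, which is exactly what increasing returns to duration means:

```python
GLYCOGEN_DAYS = 2      # assumed "fixed cost": days spent mostly burning glycogen
FAT_LBS_PER_DAY = 0.6  # assumed pounds of fat burned per day after that

def lasting_fat_loss(fast_days):
    """Stylized model: no lasting fat loss until glycogen is depleted."""
    return max(0, fast_days - GLYCOGEN_DAYS) * FAT_LBS_PER_DAY

for days in (2, 4, 7, 14):
    total = lasting_fat_loss(days)
    print(days, round(total, 2), round(total / days, 3))
```

Under these assumptions a 2-day fast yields no lasting fat loss, a 4-day fast averages 0.3 pounds per day, a 7-day fast about 0.43, and a 14-day fast about 0.51: the longer the fast, the more the fixed glycogen cost is spread out.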

When your body is primarily burning fat rather than glycogen, you are said to be in a state of “ketosis.” Those on “keto” diets try to get to fat burning faster by eating so few carbs and so much dietary fat that it is hard for their bodies to replenish glycogen stores—which replenish most easily from carbs. To me, keto diets are a little extreme on what you eat. I would rather eat a wider variety of foods and rely on fasting to get me into ketosis.

Let me say a little more about “keto.” First, many “keto” products are very useful for those on a low-insulin-index diet. And in terms of explaining what you are doing, “keto” may communicate a reasonable approximation to people who don’t know the term “low-insulin-index diet.” Of course, there are people who don’t know “keto” either. For them, “lowcarb” will have to do, despite all of its inaccuracy. I have this pair of blog posts comparing a low-insulin-index diet (which is what I recommend to complement fasting), a keto diet and a lowcarb diet:

The bottom line of this post is that, for those who can tolerate either fasting or modified fasting, fasting is a magic bullet for weight loss. (If you want a wonkish discussion of this, see “Magic Bullets vs. Multifaceted Interventions for Economic Stimulus, Economic Development and Weight Loss.”)

For annotated links to other posts on diet and health, see: