Consensual, Non-Solipsistic Experience Machines
Experience machines get a bad rap. People often use thought experiments involving a machine that gives an ultra-high-quality illusion of life experience to argue that people want things to be “real.” There is a grain of truth to this, but I want to narrow down what I think people most want to be real: the other human beings one seems to be interacting with.
Being in an experience machine alone doesn’t sound at all attractive to me on a long-term basis, even if the experiences themselves are extremely pleasant and engaging. But if the possible experiences go far beyond what is otherwise possible, it seems great to be in an experience machine in which I genuinely interact with billions of other people as real as me—including all the people I care most about.
The movie “The Matrix” seems to take the opposite view: that there is something wrong with being embedded in an experience machine along with a huge number of other human beings one is genuinely interacting with. But “The Matrix” brings in another element that is detestable: being put in the experience machine without one’s own consent. It may be that we irreducibly hate things being done to us without our consent, or it may be that being put in a situation without one’s consent is such a reliable indicator of something being wrong with that situation that we are justifiably suspicious. In any case, being put into an experience machine without one’s own consent is not a logically necessary element of the experience machine experience.
Rather than the situation in “The Matrix,” the type of experience machine that I think of first is the situation of the trillions of human beings in Robin Hanson’s fascinating book The Age of Em.
The reason Robin Hanson predicts there will be trillions of digital humans if certain technological conditions hold is that it takes a lot fewer resources to support a digital human than a flesh-and-blood human. I don’t think the digital humans would pine for being flesh-and-blood humans if the quality of their experience were at least as good as the experience of flesh-and-blood humans, if they shared that kind of experience with trillions of other digital humans, and if they knew the truth about everything in the sense that no one was lying to them about the big picture.
Postscript: In “The Matrix,” the motivation for keeping humans in the matrix without their consent is lame: in violation of the laws of thermodynamics, the humans plugged in give off more usable, low-entropy energy than they take in. A much more plausible motivation could have been borrowed from Dan Simmons’s Hyperion Cantos: using the spare brain power of a huge number of humans for hypercomputing. In the Hyperion Cantos, though, this is done by an organ giving them physical immortality rather than by putting them in virtual reality, but one could easily have stipulated in “The Matrix” that using spare brain cycles is easier if someone is embedded in virtual reality.
To avoid giving too much credit to Dan Simmons, however, let me mention that he makes a huge ecological blunder in one of the novels in the Hyperion Cantos, by having a planet where creatures eat humans and humans eat those creatures, but no new energy for the ecosystem is brought in from the outside. This could last for only a short time before the ecosystem shrank away to nothing. (Despite this complaint, I think often about the Hyperion Cantos and highly recommend them.)