Christof Koch: Will Machines Ever Become Conscious?
In his Scientific American article “Will Machines Ever Become Conscious?” Christof Koch questions whether a brain emulation would be truly conscious:
Let us assume that in the future it will be possible to scan an entire human brain, with its roughly 100 billion neurons and quadrillion synapses, at the ultrastructural level after its owner has died and then simulate the organ on some advanced computer, maybe a quantum machine. If the model is faithful enough, this simulation will wake up and behave like a digital simulacrum of the deceased person—speaking and accessing his or her memories, cravings, fears and other traits.
If mimicking the functionality of the brain is all that is needed to create consciousness, as postulated by GNW theory, the simulated person will be conscious, reincarnated inside a computer. Indeed, uploading the connectome to the cloud so people can live on in the digital afterlife is a common science-fiction trope.
IIT posits a radically different interpretation of this situation: the simulacrum will feel as much as the software running on a fancy Japanese toilet—nothing. It will act like a person but without any innate feelings, a zombie (but without any desire to eat human flesh)—the ultimate deepfake.
To create consciousness, the intrinsic causal powers of the brain are needed. And those powers cannot be simulated but must be part and parcel of the physics of the underlying mechanism.
He seems to have written an entire book on this theme: The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. However, my comments here are only on the Scientific American article, not the book (which I haven’t read).
Christof bases his skepticism that a brain emulation would be truly conscious on Integrated Information Theory. He writes:
Giulio Tononi, a psychiatrist and neuroscientist at the University of Wisconsin–Madison, is the chief architect of IIT, with others, myself included, contributing. The theory starts with experience and proceeds from there to the activation of synaptic circuits that determine the “feeling” of this experience. Integrated information is a mathematical measure quantifying how much “intrinsic causal power” some mechanism possesses. Neurons firing action potentials that affect the downstream cells they are wired to (via synapses) are one type of mechanism, as are electronic circuits, made of transistors, capacitances, resistances and wires.
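The "mathematical measure" mentioned in that quote can be illustrated with a toy calculation. To be clear: IIT's actual formalism (Φ computed over cause-effect repertoires, searched across all candidate partitions) is far more involved, and everything below — the function names `mi_whole`, `mi_part`, `toy_phi`, the two-node "swap" network, and the crude noising-by-averaging scheme — is my own illustrative assumption, not code from Tononi, Koch, or the article. The sketch only shows the basic intuition: a system is "irreducible" when the whole carries more information about its own next state than its cut-apart parts do.

```python
import itertools
import math
from collections import Counter, defaultdict

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mi_whole(update, n):
    """I(X_t; X_{t+1}) for the whole n-node binary system under a
    uniform prior; for a deterministic update this equals H(X_{t+1})."""
    states = list(itertools.product([0, 1], repeat=n))
    counts = Counter(update(s) for s in states)
    return entropy([c / len(states) for c in counts.values()])

def mi_part(update, n, part):
    """Information a part carries about its own next state once the
    connections crossing the partition are cut -- modeled crudely here
    by averaging over all states of the nodes outside the part."""
    states = list(itertools.product([0, 1], repeat=n))
    cond = defaultdict(Counter)  # unnormalized P(part_next | part_now)
    for s in states:
        now = tuple(s[i] for i in part)
        nxt = tuple(update(s)[i] for i in part)
        cond[now][nxt] += 1
    total = len(states)
    marg = Counter()
    for c in cond.values():
        marg.update(c)
    h_next = entropy([v / total for v in marg.values()])
    h_next_given_now = sum(
        (sum(c.values()) / total)
        * entropy([v / sum(c.values()) for v in c.values()])
        for c in cond.values()
    )
    return h_next - h_next_given_now

# Two nodes that each copy the other's previous value: the whole system
# predicts its next state perfectly, but each cut-off half predicts nothing.
def swap(state):
    a, b = state
    return (b, a)

whole = mi_whole(swap, 2)                                # 2.0 bits
parts = mi_part(swap, 2, (0,)) + mi_part(swap, 2, (1,))  # 0.0 bits
toy_phi = whole - parts                                  # 2.0 bits: irreducible
```

By contrast, two nodes that each merely copy themselves (the identity update) give `toy_phi = 0`: the whole is fully reducible to its parts. Note that nothing in this toy measure distinguishes neurons from transistors — any mechanism with the right transition structure scores the same, which is exactly the point at issue below.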
But when I read the Wikipedia article “Integrated information theory,” it seems like a synapse by synapse simulation of a human brain would satisfy perfectly well all of the claimed requirements of consciousness. Take a look for yourself and see what you think.
I am totally willing to concede that general artificial intelligence, built on principles other than copying the workings of the human brain, might or might not be conscious, depending on the particular design.
Christof seems to get awfully close to agreeing with me about the consciousness of very, very detailed brain emulations copied from a particular individual when he writes:
In principle, however, it would be possible to achieve human-level consciousness by going beyond a simulation to build so-called neuromorphic hardware, based on an architecture built in the image of the nervous system.
I don’t see why a detailed logical image of the nervous system can’t do just as much. After all, nothing can be computed in an electronic computer without one electron causally affecting another at every step, so there is causal power there. In any case, in his Scientific American article, Christof has failed to make clear how a very, very, very detailed synapse by synapse brain emulation differs from “neuromorphic hardware” in terms of causal powers. Computer programs have “intrinsic causal power” too.
Update, December 22, 2019: Robin Hanson comments on Twitter:
"But [by] 'Integrated information theory,' it seems like a synapse by synapse simulation of a human brain would satisfy perfectly well all of the claimed requirements of consciousness." Yes, that's completely obvious. Quite crazy to think otherwise.