CHAPTER EIGHT

IS YOUR MIND A SOFTWARE PROGRAM?

I think the brain is like a programme … so it’s theoretically possible to copy the brain onto a computer and so provide a form of life after death.

STEPHEN HAWKING1

One morning I awoke to a call from a New York Times reporter. She wanted to talk about Kim Suozzi, a 23-year-old who had died of brain cancer. A cognitive-science major, Kim was eagerly planning for graduate school in neuroscience. But the day she learned she had an exciting new internship, she also learned she had a brain tumor. She posted on Facebook: “Good news: got into The Center for Behavioral Neurosciences’ BRAIN summer program.… Bad news: a tumor got into my BRAIN.”2

In college, Kim and her boyfriend, Josh, shared a passion for transhumanism. When conventional treatments failed, they turned to cryonics, a medical technique that uses ultracold temperatures to preserve the brain upon death. Kim and Josh hoped to make the specter of death a temporary visitor. They were banking on the possibility that her brain could be revived at some point in the distant future, when there was a cure for her cancer and a means to revive cryonically frozen brains.

Kim Suozzi during an interview at Alcor, next to the containers where she and others are now frozen. (Alcor)

So Kim contacted Alcor, a nonprofit cryopreservation center in Scottsdale, Arizona. She launched a successful online campaign to get the eighty thousand dollars needed for the cryopreservation of her head. To facilitate the best possible cryopreservation, Kim was advised to spend the last weeks of her life near Alcor. So Kim and Josh moved to a hospice in Scottsdale. In her last weeks, she denied herself food and water to hasten her death, so the tumor would not further ravage her brain.3

Cryonics is controversial. Cryopreservation is employed in medicine to maintain human embryos and animal cells for as long as three decades.4 But when it comes to the brain, cryopreservation is still in its infancy, and it is unknown whether someone cryopreserved using today’s incipient technology could ever be revived. Still, Kim and Josh had weighed the pros and cons carefully.

Sadly, although Kim would never know this, her cryopreservation did not go smoothly. When the medical scans of her brain arrived, they revealed that the cryoprotectant only reached the outer portion of her brain, possibly due to vascular impairment from ischemia, leaving the remainder vulnerable to ice damage.5 Given the damage, the author of the New York Times article, Amy Harmon, considered the suggestion that once uploading technology becomes available, Kim’s brain be uploaded into a computer program. As she noted, certain cryopreservation efforts are turning to uploading as a means of digitally preserving the brain’s neural circuits.6

Harmon’s point was that uploading technology might benefit Kim and, more generally, those patients whose cryopreservation and illness may have damaged too much of the brain for a biological revival. The idea was that, in Kim’s case, the damaged parts of the biological brain could be repaired digitally. That is, the program that her brain was uploaded to could include algorithms carrying out computations that the missing parts were supposed to achieve. And this computer program—this was supposed to be Kim.7

Oh boy, I thought. As a mother of a daughter only a few years younger than Kim, I had trouble sleeping that night. I kept dreaming of Kim. It was bad enough that cancer stole her life. It is one thing to cryopreserve and revive someone; there are scientific obstacles here, and Kim knew the risks. But uploading is another issue entirely. Why see uploading as a means of “revival”?

Kim’s case makes all our abstract talk of radical brain enhancements so much more real. Transhumanism, fusion-optimism, artificial consciousness, postbiological extraterrestrials—it all sounds so science fiction–like. But Kim’s example illustrates that, even here on Earth, these ideas are altering lives. Stephen Hawking’s remarks voice an understanding of the mind that is in the air nowadays: the view that mind is a program. The New York Times piece reported that Kim herself had this view of the mind, in fact.8

Chapters Five and Six urged that uploading is far-fetched, however. It seems to lack clear support from theories of personal identity. Even modified patternism didn’t support uploading. To survive uploading, your mind would have to transfer to a new location, outside your brain, through an unusual process in which information about every molecule in your brain is sent to a computer and converted into a software program. Objects that we ordinarily encounter do not “jump” across spacetime to new locations in this way. No single molecule in your brain moves to the computer, but somehow, as if by magic, your mind is supposed to transfer there.9

This is perplexing. For the transfer to happen, the mind must be radically unlike ordinary physical objects. My coffee cup is here, next to my laptop; when it moves, it follows a path through spacetime. It isn’t dismantled, measured, and then, across the globe somewhere, configured with new components that mirror its measurements. And if it were, we wouldn’t think it was the same cup, but a replica.

Furthermore, recall the reduplication problem (see Chapter Six). For instance, suppose you try to upload, and consider a scenario in which your brain and body survive the scan, as may be the case with more sophisticated uploading procedures. Suppose your upload is downloaded into an android body that looks just like you, seeming human. Feeling curious, you decide to meet your upload in a bar. As you sip a glass of wine with your android double, the two of you debate who is truly the original—who is truly you. The android argues convincingly that it is the real you, for it has all your memories and even remembers the beginning of the surgical procedure in which you were scanned. Your doppelgänger even asserts that it is conscious. This may be true, for we saw that if the upload is extremely precise, it may very well have a conscious mental life. But that doesn’t mean it is you, for you are sitting right across from it in the bar.

In addition, if you really uploaded, you would be in principle downloadable to multiple locations at once. Suppose a hundred copies of you were downloaded. You would be multiply located, that is, you would be in multiple places at the same time. This is an unusual view of the self. Physical objects can be located in different places at different times, but not at the same time. We seem to be objects, albeit of a special kind: We are living, conscious beings. For us to be an exception to the generality about the behavior of macroscopic objects would be stupendous metaphysical luck.10

THE MIND AS THE SOFTWARE OF THE BRAIN

Such considerations motivate me to resist the siren song of digital immortality, despite my broadly transhumanist views. But what if Hawking and the others are right? What if we are lucky, because the mind truly is a kind of software program?

Suppose that Will Castor, the scientist who develops uploading in the movie Transcendence and becomes the first test case, was presented with the doubts raised in the last section. We tell him that the copy is not the same as the original. It is unlikely a mere information stream, running on various computers, would truly be him. He might offer the following reply:

The Software Response. Uploading the mind is like uploading software. Software can be uploaded and downloaded across great distances in seconds, and can even be downloaded to multiple locations at once. We are not like ordinary physical objects at all—our minds are instead programs. So if your brain is scanned under ideal conditions, the scanning process copies your neural configuration (your “program” or “informational pattern”). You can survive uploading insofar as your pattern survives.

The software response draws from a currently influential view of the nature of the mind in cognitive science and philosophy of mind that regards the mind as being a software program—a program that the brain runs.11 Let’s call this position “the Software View.” Many fusion-optimists appeal to the Software View, along with their patternism. For instance, the computer scientist Keith Wiley writes, in response to my view:

The mind is not a physical object at all and therefore properties of physical objects (continual path through space and time) need not apply. The mind is akin to what mathematicians and computer scientists call ‘information,’ for brevity a nonrandom pattern of data.12

If that is right, your mind can be uploaded and then downloaded into a series of different kinds of bodies. This is colorfully depicted in Rudy Rucker’s dystopian novel Software, where the character runs out of money to pay for decent downloads and, out of desperation, dumps his consciousness into a truck. Indeed, perhaps an upload wouldn’t even need to be downloaded at all. Perhaps it can just reside somewhere in a computer simulation, as in the classic film The Matrix, in which the notorious villain Agent Smith has no body at all, residing solely in the Matrix—a massive computer simulation. Smith is a particularly powerful software program. Not only can he appear anywhere in the Matrix in pursuit of the good guys, he can be in multiple locations at once. At various points in the movie, Neo even finds himself fighting hundreds of Smiths.

As these science fiction stories illustrate, the Software View seems natural in the age of the Internet. Indeed, elaborations of it even describe the mind with expressions like “downloads,” “apps,” and “files.” As Steven Mazie at Big Think puts it:

Presumably you’d want to Dropbox your brain file (yes, you’ll need to buy more storage) to avoid death by hard-drive crash. But with suitable backups, you, or an electronic version of you, could go on living forever, or at least for a very, very long time, “untethered,” as Dr. Schneider puts it, “from a body that’s inevitably going to die.”13

Another proponent of patternism is the neuroscientist and head of the Brain Preservation Foundation, Ken Hayworth, who is irked by my critique of patternism. To him it is apparently really obvious that the mind is a program:

It always boggles my mind that smart people continue to fall into this philosophical trap. If we were discussing copying the software and memory of one robot (say R2D2) and putting it into a new robot body would we be philosophically concerned about whether it was the ‘same’ robot? Of course not, just as we don’t worry about copying our data and programs from an old laptop to a new one. If we have two laptops with the same data and software do we ask if one can ‘magically’ access the other’s RAM? Of course not.14

So, is the Software View correct? No. The software approach to the mind is deeply mistaken. It is one thing to say that the brain is computational; this is a research paradigm in cognitive science that I am quite fond of (see, for instance, my earlier book, The Language of Thought). Although the Software View is often taken as being part and parcel of the computational approach to the brain, many metaphysical approaches to the nature of mind are compatible with a computational approach to the brain.15 And, as I’ll explain shortly, the view that the mind or self is software is one we should do without.

Before I launch into my critique, let me say a bit more about the significance of the Software View. There are at least two reasons the issue is important. First, if the Software View is correct, patternism is more plausible than Chapters Five and Six indicated. My objections involving spatiotemporal discontinuity and reduplication can be dismissed, although other problems remain, such as deciding when an alteration in a pattern is compatible with survival and when it is not.

Second, if the Software View is correct, it would be an exciting discovery, because it would provide an account of the nature of the mind. In particular, it might solve a central philosophical puzzle known as the Mind-Body Problem.

THE MIND-BODY PROBLEM

Suppose that you are sitting in a cafe studying right before a big presentation. All in one moment, you taste the espresso you sip, feel a pang of anxiety, consider an idea, and hear the scream of the espresso machine. What is the nature of these thoughts? Are they just a matter of physical states of your brain, or are they something more? Relatedly, what is the nature of your mind? Is your mind just a physical thing, or is it something above and beyond the configuration of particles in your brain?

These questions pose the Mind-Body Problem. The problem is where to situate mentality within the world that science investigates. The Mind-Body Problem is closely related to the aforementioned Hard Problem of Consciousness, the puzzle of why physical processes are accompanied by subjective feeling. But the focus of the Hard Problem is consciousness, whereas the Mind-Body Problem focuses on mental states more generally, even nonconscious mental states. And instead of asking why these states must exist, it seeks to determine how they relate to what science investigates.

Contemporary debates over the Mind-Body Problem were launched more than 50 years ago, but some classic positions began to emerge as early as the pre-Socratic Greeks. The problem is not getting any easier. There are some fascinating solutions, to be sure. But as with the debate over personal identity, there are no uncontroversial ones in sight. So, does the Software View solve this classic philosophical problem? Let’s consider some influential positions on the problem and see how the Software View compares.

Panpsychism

Recall that panpsychism holds that even the smallest layers of reality have experience. Fundamental particles have minute levels of consciousness, and in a watered-down sense, they are subjects of experience. When particles are in extremely sophisticated configurations—such as when they are in nervous systems—more sophisticated, recognizable forms of consciousness arise. Panpsychism may seem outlandish, but the panpsychist would respond that their theory actually meshes with fundamental physics, because experience is the underlying nature of the properties that physics identifies.

Substance Dualism

According to this classic view, reality consists of two kinds of substances, physical things (e.g., brains, rocks, bodies) and nonphysical ones (i.e., minds, selves, or souls). Although you personally may reject the view that there’s an immaterial mind or soul, science alone cannot rule it out. The most influential philosophical substance dualist, René Descartes, thought that the workings of the nonphysical mind corresponded with the workings of the brain, at least during one’s lifetime.16 Contemporary substance dualists offer sophisticated nontheistic positions, as well as intriguing and equally sophisticated theistic ones.

Physicalism (or Materialism)

We discussed physicalism briefly in Chapter Five. According to physicalism, the mind, like the rest of reality, is physical. Everything is either made up of something that physics describes or is a fundamental property, law, or substance figuring in a physical theory. (Here, by “physical theory,” physicalists tend to gesture toward the content of the final theory of everything that a completed physics uncovers, whatever that is.) There are no immaterial minds or souls, and all of our thoughts are ultimately just physical phenomena. This position has been called “materialism,” but it is now more commonly called “physicalism.” Because there is no second immaterial realm, as substance dualism claimed, physicalism is generally regarded as a form of monism—the claim that there is one fundamental category of reality—in this case, the category of physical entities.

Property Dualism

The point of departure for this position is the hard problem of consciousness. Proponents of property dualism believe that the best answer to the question “Why does consciousness need to exist?” is that consciousness is a fundamental feature of certain complex systems. (Paradigmatically, such features emerge from the biological brain, but perhaps one day, synthetic intelligences will have such features as well.) Property dualists, like substance dualists, claim that reality divides into two distinct realms. But property dualists reject the existence of souls and immaterial minds. Thinking systems are physical things, but they have nonphysical properties (or features). These nonphysical features are basic building blocks of reality, alongside fundamental physical properties, but unlike panpsychism, these basic features are not microscopic—they are features of complex systems.

Idealism

Idealism is less popular than the other views, but it has been historically significant. Idealists hold that fundamental reality is mind-like. Some advocates of this view are panpsychists, although a panpsychist can also reject idealism, claiming that there is more to reality than just minds or experiences.17


There are many intriguing approaches to the nature of mind, but I’ve focused on the most influential. Should the reader wish to consider solutions to the Mind-Body Problem in more detail, there are several excellent introductions available.18 Now that we’ve considered these positions, let us turn back to the Software View and see how it fares.

ASSESSING THE SOFTWARE VIEW

The Software View has two initial flaws, both of which can be remedied, I believe. First, not all programs are the sorts of things that have minds. The Amazon or Facebook app on your smartphone doesn’t have a mind, at least if we think of minds in the normal sense (i.e., as something only highly complex systems, such as brains, have). If minds are programs, they are programs of a very special sort, having layers of complexity that fields like psychology and neuroscience find challenging to describe. A second issue is that, as we’ve seen, consciousness is at the heart of our mental lives. A zombie program—a program incapable of having experience—just isn’t the sort of thing that has a mind.

Yet these points are not decisive objections, for if proponents of the Software View agree with one or both of these criticisms, they can qualify their view. For instance, if they agree with both criticisms, they can restrict the Software View in the following way:

Minds are programs of a highly sophisticated sort, which are capable of having conscious experiences.

But adding some fine print doesn’t fix the deeper problems I will raise.

To determine whether the Software View is plausible, let us ask: What is a program? A program is a list of instructions in lines of computer code. The lines of code are instructions in a programming language that tell the computer what tasks to do. Most computers can execute several programs, and in this way, new capacities can be added to or deleted from the computer.

A line of code is like a mathematical equation. It is highly abstract, standing in stark contrast with the concrete physical world around you. You can throw a rock. You can lift a coffee cup. But just try to throw an equation. Equations are abstract entities; they are not situated in space or time.

Now that we appreciate that a program is abstract, we can locate a serious flaw in the Software View. If your mind is a program, then it is just a long sequence of instructions in a programming code. The Software View is saying the mind is an abstract entity. But think about what this means. The field of philosophy of mathematics studies the nature of abstract entities like equations, sets, and programs. Abstract entities are said to be nonconcrete: They are nonspatial, nontemporal, nonphysical, and acausal. The inscription “5” is here on this page, but the actual number, as opposed to the inscription, isn’t located anywhere. Abstract entities are not located in space or time, they are not physical objects, and they do not cause events to occur in the spatiotemporal manifold.

How can the mind be an abstract entity, like an equation or the number 2? This seems to be a category mistake. We are spatial beings and causal agents; our minds have states that cause us to act in the concrete world. And moments pass for us—we are temporal beings. So, your mind is not an abstract entity like a program. Here, you may suspect that programs are able to act in the world. What about the last time your computer crashed, for instance? Didn’t the program cause the crash? But this confuses the program with one of its instantiations. In a given instance, say, when Windows is running, the Windows program is implemented by physical states within a particular machine. The machine and its related process are what crashes. We might speak of a program crashing, but on reflection, the algorithm or lines of code (i.e., the program) do not literally crash or cause a crash. The electronic states of a particular machine cause the crash.
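
To make the program/instantiation distinction vivid, here is a minimal sketch in Python (an illustration of my own, with invented variable names; it is not drawn from the uploading literature). The program is just a string of instructions, an abstract recipe; each run of it is a separate concrete state of a machine:

# A hypothetical illustration: the "program" is an abstract string of
# instructions; each exec() call is a distinct concrete instantiation.
program = """
counter = counter + 1
print(f"run {label}: counter is now {counter}")
"""

# Two separate runs of the same program, each with its own state.
state_a = {"counter": 0, "label": "A"}
state_b = {"counter": 100, "label": "B"}

exec(program, state_a)  # prints: run A: counter is now 1
exec(program, state_b)  # prints: run B: counter is now 101

# Discarding run A's state leaves run B and the program text unaffected:
# a particular instantiation "crashes," never the abstract sequence of
# instructions itself.
del state_a
exec(program, state_b)  # prints: run B: counter is now 102

Deleting one run’s state leaves the other run, and the instruction string itself, untouched; only particular instantiations occupy time and undergo change.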

So the mind is not a program. And there is still reason to doubt that uploading the mind is a true means for Kim Suozzi, or others, to survive. As I’ve been stressing throughout the second part of this book, assuming that there are selves that persist over time, biologically based enhancements that gradually restore and cautiously enhance the functioning of the biological brain are a safer route to longevity and enhanced mental abilities, even if the technology to upload a complete human brain is developed. Fusion-optimists tend to endorse both rapid alterations in psychological continuity and radical changes in one’s substrate. Both types of enhancements seem risky, at least if one believes that there is such a thing as a persisting self.

I emphasized this in Chapters Five and Six as well, although there, my rationale did not concern the abstract nature of the Software View. There, my caution stemmed from the controversy in metaphysics over which, if any, competing theories of the nature of the person are correct. This left us adrift concerning whether radical, or even moderate, enhancements are compatible with survival. We can now see that just as patternism about the survival of the person is flawed, so too, the related Software View is problematic. The former runs afoul of our understanding of the nature of personhood, while the latter ascribes a physical significance to abstractions that they do not possess.

I’d like to caution against drawing a certain conclusion from my rejection of the Software View, however. As I’ve indicated, the computational approach to the mind in cognitive science is an excellent explanatory framework.19 But it does not entail the view that the mind is a program. Consider Ned Block’s canonical paper, “The Mind as the Software of the Brain.”20 Aside from its title, which I obviously disagree with, it astutely details many key facets of the view that the brain is computational. Cognitive capacities, such as intelligence and working memory, are explainable via the method of functional decomposition; mental states are multiply realizable; and the brain is a syntactic engine driving a semantic one. Block is accurately describing an explanatory framework in cognitive science by isolating key features of the computational approach to the mind. None of this entails the metaphysical position that the mind is a program, however.

The Software View isn’t a viable position, then. But you might wonder whether the transhumanist or fusion-optimist could provide a more feasible computationalist approach to the nature of the mind. As it happens, I do have a further suggestion. I think we can formulate a transhumanist-inspired view in which minds are not programs per se, but program instantiations—a given run of a program. We will then need to ask whether this modified view is any better than the standard Software View.

COULD LIEUTENANT COMMANDER DATA BE IMMORTAL?

Consider Lieutenant Commander Data, the android from Star Trek: The Next Generation. Suppose he finds himself in an unlucky predicament, on a hostile planet, surrounded by aliens that are about to dismantle him. In a last-ditch act of desperation, he quickly uploads his artificial brain onto a computer on the Enterprise. Does he survive? And could he, in principle, do this every time he’s in a jam, so that he’d be immortal?

If I’m correct that neither Data’s nor anyone else’s mind is a software program, this has bearing on the question of whether AIs, including uploads, could achieve immortality or, rather, whether they could achieve what we might call “functional immortality.” (I write “functional immortality,” because the universe may eventually undergo a heat death that no life can escape. But I’ll ignore this technicality in what follows.)

It is common to believe that an AI could achieve functional immortality by creating backup copies of itself, transferring its consciousness from one computer to the next when an accident happens. This view is encouraged by science fiction stories, but I suspect it is mistaken. Just as it is questionable whether a human could achieve functional immortality by uploading and downloading herself, so, too, we can question whether an AI would genuinely survive. Insofar as a particular mind is not a program or abstraction but a concrete entity, a particular AI mind is vulnerable to destruction by accident or the slow decay of its parts, just as we are.

This is hardly an obvious point. It helps to notice that there is an ambiguity as to whether “AI” refers to a particular AI (an individual being) or to a type of AI system (which is an abstract entity). By analogy, “the Chevy Impala” could mean the beat-up car you bought after college, or it could mean the type of car (i.e., the make and model), which would endure even after you scrapped your car and sold it for parts. So, it is important to disambiguate claims about survival. Perhaps, if one wants to speak of types of programs as “types of mind,” the types could be said to “survive” uploading, according to two watered-down notions of survival. First, at least in principle, a machine that contains a high-fidelity copy of an uploaded human brain can run the same program as that brain did before it was destroyed by the uploading procedure. The type of mind “survives,” although no single conscious being persists. Second, a program, as an abstract entity, is timeless. It does not cease to exist, because it is not a temporal being. But this is not “survival” in a serious sense. Particular selves or minds do not survive in either of these two senses.

This is all highly abstract. Let’s return to the example of Lieutenant Commander Data. Data is a particular AI, and as such, he is vulnerable to destruction. There may be other androids of this type (individual AIs themselves), but their survival does not ensure the survival of Data, it just ensures the “survival” of Data’s type of mind. (I write “survival” in scare quotes to indicate that I am referring to the aforementioned watered-down sense of survival.)

So there Data is, on a hostile planet, surrounded by aliens that are about to destroy him. He quickly uploads his artificial brain onto a computer on the Enterprise. Does he survive or not? On my view, we now have a distinct instance (or, as philosophers say, a “token”) of the type of mind Data being run by that particular computer. We could ask: Can that token survive the destruction of the computer by uploading again (i.e., transferring the mind of that token to a different computer)? No. Again, uploading would merely create a different token of the same type. An individual’s survival depends on where things stand at the token level, not at the level of types.
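
The type/token distinction at work here can be made concrete with a toy example in Python (again, a hypothetical sketch of my own; the class and attribute names are invented for illustration). The class plays the role of the type of mind; each object is a token of it:

import copy

# Hypothetical illustration: the class is the "type of mind";
# each object built from it is a particular token.
class Android:
    def __init__(self, name, memories):
        self.name = name
        self.memories = memories

data = Android("Data", ["first activation", "joining the Enterprise"])

# "Uploading" copies the full informational pattern into a new token.
upload = copy.deepcopy(data)

print(upload.memories == data.memories)  # True: same pattern, same type
print(upload is data)                    # False: a numerically distinct token

# Destroying the original does not relocate it into the copy; one token
# ends, and a different token of the same type carries on.
del data

On the view I am defending, questions of survival are settled at the level of these tokens, not at the level of the class they share.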

It is also worth underscoring that a particular AI could still live a very long time, insofar as its parts are extremely durable. Perhaps Data could achieve functional immortality by avoiding accidents and having his parts replaced as they wear out. My view is compatible with this scenario, because Data’s survival in this case does not happen by transferring his program from one physical object to another. On the assumption that one is willing to grant that humans survive the gradual replacement of their parts over time, why not also grant it in the case of AIs? Of course, in Chapter Five, I emphasized that it is controversial whether persons survive the replacement of parts of their brains; perhaps the self is an illusion, as Derek Parfit, Friedrich Nietzsche, and the Buddha have suggested.

IS YOUR MIND A PROGRAM INSTANTIATION?

The central claim of my discussion of Data is that survival is at the token level. But how far can we push this observation? We’ve seen that the mind is not a program, but could it be the instantiation of a program—the thing that runs the program or stores its informational pattern? Something that instantiates a program is a concrete entity—paradigmatically, a computer, although technically, a program instantiation involves not just the computer’s circuitry but also the physical events that occur in the computer when the program is running. The pattern of matter and energy in the system corresponds, in possibly nontrivial ways, to elements of the program (e.g., variables, constants).21 Let us call this position the Software Instantiation View of the Mind:

The Software Instantiation View of the Mind (SIM)

The mind is the entity running the program (where a program is the algorithm that the brain implements, something in principle discoverable by cognitive science).

This new position does not serve the fusion-optimist well, however. This is not a view that is accurately expressed by the slogan: “The mind is the software of the brain.” Rather, this view is claiming that mind is the entity running the program. To see how different SIM is from the Software View, notice that SIM doesn’t suggest that Kim Suozzi can survive uploading; my earlier concerns involving spatiotemporal discontinuities still apply. As with modified patternism, each upload or download is not the same person as the original, although it has the same program.

The above definition specifies that the program runs on a brain, but we can easily broaden it to other substrates, such as silicon-based computers:

The Software Instantiation Approach to the Mind (SIM*)

The mind is the entity running the program (where a program is the algorithm that the brain or other cognitive system implements, something in principle discoverable by cognitive science).

SIM*, unlike the original Software View, avoids the category mistake of viewing the mind as abstract. But like the original view and the related patternist position, it draws from the computational approach to the brain in cognitive science.

Does SIM* provide a substantive approach to the Mind-Body Problem? Consider that it hasn’t told us about the underlying metaphysical nature of the thing that runs the program (i.e., the mind). So it is uninformative. For the Software Instantiation approach to serve as an informative theory of the nature of the mind, it needs to take a stand on each of the aforementioned positions on the nature of the mind.

Consider panpsychism, for instance. Does the system that instantiates the program consist of fundamental elements that have their own experiences? SIM* leaves this question open. Furthermore, SIM* is also compatible with physicalism, the view that everything either is made of something that physics describes or is a fundamental property, law, or substance figuring in a physical theory.

Property dualism is also compatible with the mind being a program instantiation. For instance, consider the most popular version of the view, David Chalmers’s naturalistic property dualism. According to Chalmers, features like seeing the rich hues of a sunset or smelling the aroma of the espresso are properties that emerge from complex structures. Unlike panpsychism, these fundamental consciousness properties are not found in fundamental particles (or strings)—they are at a higher level, inhering in highly complex systems. Nevertheless, these properties are basic features of reality.22 So physics will be incomplete, no matter how sophisticated it gets, because in addition to physical properties, there are novel fundamental properties that are nonphysical. Notice that SIM* is compatible with this view, because the system that runs the program could have certain properties that are nonphysical and are basic features of reality in their own right.

Recall that substance dualism claims that reality consists of two kinds of substances: physical entities (e.g., brains, rocks, bodies) and nonphysical ones (i.e., minds, selves, or souls). Nonphysical substances could be the sort of entities that run a program, so SIM* is compatible with substance dualism. This may sound strange, so it pays to consider how such a story would go; the details depend on the kind of substance dualism that is in play.

Suppose a substance dualist says, as Descartes did, that the mind is wholly outside of spacetime. According to Descartes, although the mind isn’t in spacetime, throughout a person’s life, one’s mind is still capable of causing states in one’s brain, and vice versa.23 (How does it do this? I’m afraid Descartes never gave a viable account, claiming, implausibly, that mind-brain interactions happened in the pineal gland.)

How would a program implementation view be compatible with Cartesian dualism? In this case, the mind, if a program instantiation, would be a nonphysical entity that is outside of spacetime. During a person’s worldly life, the mind causes states of the brain. (Notice that the nonphysical mind is not an abstract entity, however, as it has causal and temporal properties. Being nonspatial is a necessary condition of being abstract, but it is not a sufficient condition.) We might call this view Computational Cartesianism. This may sound odd, but experts on functionalism, like the philosopher Hilary Putnam, have long recognized that computations of a Turing machine can be implemented in a Cartesian soul.24

The picture that Computational Cartesianism offers of mind-body causation is perplexing, but so was the original Cartesian view that the mind, although nonspatiotemporal, somehow stands in a causal relationship with the physical world.

Not all substance dualisms are this radical, in any case. For instance, consider non-Cartesian substance dualism, a view held by E. J. Lowe. Lowe held that the self is distinct from the body. But in contrast to Cartesian dualism, Lowe’s dualism doesn’t claim either that the mind is separable from the body or that it is nonspatial. It allows that the mind may not be able to exist without a body and that, being spatiotemporal, it possesses properties, such as shape and location.25

Why did Lowe hold this position? Lowe believed that the self is capable of survival across different kinds of physical substrates, so it has different persistence conditions than the body. We’ve seen that such claims about persistence are controversial. But you do not need to share Lowe’s intuitions about persistence; the point here is simply to raise a different, non-Cartesian, substance dualist position. SIM* is compatible with non-Cartesian substance dualism, because a program instantiation could be this sort of nonphysical mind as well. This position is harder to dismiss than Cartesianism, as, on this view, minds are part of the natural world. Yet again, the Software Instantiation View remains silent.

In essence, although SIM* does not venture the implausible claim that the mind is abstract, it tells us little about the nature of mind, except that it is something that runs a program. Anything could do that, in principle—Cartesian minds, systems made of fundamental experiential properties, and so on. This isn’t much of a position on the Mind-Body Problem then.

At this point, perhaps a proponent of SIM* would say that they intend to make a different sort of metaphysically substantive claim, one that concerns the persistence of the mind over time. Perhaps they hold the following view:

Being a program instantiation of type T is an essential property of one’s mind, without which the mind couldn’t persist.

Recall that your contingent properties are ones that you can cease to have and still continue to exist. For instance, you may change your hair color. Your essential properties, in contrast, are those you cannot lose without ceasing to exist. Earlier, we considered the debate over the persistence of persons; in a similar vein, the proponent of SIM* can say that being an instantiation of program T is essential to one’s continuing to have the mind one has, and one’s mind would cease to exist if T changed to a different program, P.

Is this a plausible position? One problem is that a program is just an algorithm, so if any lines of the algorithm change, the program changes. The brain’s synaptic connections are constantly altered to reflect new learning, and when you learn something, such as a new skill, this leads to changes in your “program.” But if the program changes, the mind ceases to exist, and a new one begins. Ordinary learning shouldn’t result in the death of your mind.

Proponents of the program instantiation view can respond to this objection, however. They could say that a program can exhibit historical development: It can be expressed by an algorithm of type T1 at one time and, at a later time, by an algorithm T2 that is a modified version of T1. Although, technically, T1 and T2 are distinct, consisting of at least some different instructions, T1 is an ancestor of T2. So the program continues. On this view, the person is the instantiation of a certain program, and the program can change in certain ways but still remain the same program.
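
This ancestry reply can be given a simple formal gloss (my own sketch, not one the fusion-optimists themselves offer), on which two algorithms count as stages of one ongoing program when a chain of modest edits links them:

# Hypothetical sketch: "same program over time" defined via ancestry.
# history maps each version to its immediate ancestor (None for the first).
def is_descendant(t_new, t_old, history):
    version = t_new
    while version is not None:
        if version == t_old:
            return True
        version = history.get(version)
    return False

# T1 is edited into T2, then into T3; each step changes only a few lines.
history = {"T1": None, "T2": "T1", "T3": "T2"}

print(is_descendant("T3", "T1", history))  # True: T1 is an ancestor of T3
print(is_descendant("T1", "T3", history))  # False: ancestry runs one way

Even granting this gloss, the hard question remains which edits are modest enough to preserve the program, and that is just the boundary problem about patterns all over again.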

RIVERS, STREAMS, AND SELVES

Notice that this is just the aforementioned transhumanist “patternist” view, modified to hold that the person is not the pattern but the instantiation of the pattern. Earlier, we discussed a similar view, called “modified patternism.” So we’ve come full circle. Recall Kurzweil’s remark:

I am rather like the pattern that water makes in a stream that rushes past the rocks in its path. The actual molecules of water change every millisecond, but the pattern persists for hours or even years.26

Of course, Kurzweil knows that over time, the pattern will change. After all, this is a passage from his book on becoming posthuman during the singularity. Kurzweil’s remark may strike a chord with you: In an important sense, you seem to be the same person you were a year ago, despite the changes to your brain, and perhaps you even could survive the loss of many of your memories or the addition of some new neural circuitry, such as an enhancement to your working memory system. Perhaps, then, you are like the river or stream.

The irony is that the metaphor of a river was used by the pre-Socratic philosopher Heraclitus to express the view that reality is flux. Persisting things are an illusion, including the permanence of a persisting self or mind. Thousands of years ago, Heraclitus wrote: “No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”27

Yet Kurzweil is saying that the self survives the flux of change. The challenge for modified patternists is to resist Heraclitus’s move: to show that there is a permanent self, against the backdrop of continual change, rather than the mere illusion of permanence. Can the modified patternists impose the permanence of the self on the Heraclitean flux of ever-changing molecules in the body?

Here, we run up against a familiar problem. Without a firm handle on when a pattern implementation does or does not continue, we do not have good reason to appeal to the Software Instantiation View either. In Chapter Five, we asked: If you are the instantiation of a certain pattern, what if your pattern shifts? Will you die? The extreme cases, like uploading, seemed clear, and mere everyday cellular maintenance by nanobots to overcome the slow effects of aging would perhaps not affect the identity of the person. But we saw that the middle-range cases are unclear. Remember, the path to superintelligence may very well be a path through middle-range enhancements that add up, over time, to major changes to one’s cognitive and perceptual makeup. Further, as we observed in Chapter Five, selecting a boundary seems arbitrary, for once a boundary is selected, an example can be provided that suggests the boundary should be pushed outward.

So then, if proponents of the Software Instantiation View have a position about persistence in mind, the same old issue rears its ugly head. We have indeed come full circle, and we are left with an appreciation of how perplexing and controversial these mysteries about the nature of the mind and self are. And this, dear reader, is where I want to leave you. For the future of the mind requires appreciating the metaphysical depth of these problems.

Now let’s bring our discussion home, summing things up by returning to the Suozzi case.

RETURNING TO ALCOR

Three years after Kim’s death, Josh gathered her special belongings and returned to Alcor. He was making good on a promise to her, delivering her things where she can find them if she is brought back to life.28 Frankly, I wish the conclusion of this chapter had been different. If the Software View were correct, then at least in principle, minds could be the sort of thing that can be uploaded, downloaded, and rebooted. This would allow an afterlife for the brain, if you will—a way in which a mind, like Kim’s, could survive the death of the brain. Yet our reflections revealed that the Software View turned minds into abstract objects. So we considered a related view, one in which the mind is a program instantiation. We then saw that the Program Instantiation View does not support uploading either, and although it is an interesting approach, it is too uninformative, from a metaphysical standpoint, to be much of an approach to the nature of mind.

Although I do not have access to the medical details of Kim’s cryopreservation, I find hope in the New York Times report that there was imaging evidence that the outer layers of Kim’s brain were successfully cryopreserved. As Harmon notes, the brain’s neocortex seems central to who we are, being key to memory and language.29 So perhaps a biologically based reconstruction of the damaged parts would be compatible with the survival of the person. For instance, I’ve observed that, even today, there is active work on hippocampal prosthetics; perhaps parts of the brain like the hippocampus are rather generic, and replacing them with biological or even AI-based prosthetics doesn’t change who one is.

Of course, I’ve stressed throughout this book that there is tremendous uncertainty here, due to the perplexing and controversial nature of the personal identity debate. But Kim’s predicament is not like that of someone seeking an optional brain enhancement, such as a shopper strolling into our hypothetical Mind Design Center. A shopper browsing a menu of enhancements can comfortably reject an enhancement because it strikes them as too risky, but a patient on the brink of death, or who requires a prosthetic to be cryogenically revived, may have little to lose, and everything to gain, in pursuing a high-risk cure.

Desperate times call for desperate measures. A decision to use one, or even several, neural prosthetics, to facilitate Kim’s revival seems rational, if the technology is perfected. In contrast, I have no confidence that resorting to uploading her brain would be a form of revival. Uploading, at least as a means of survival, rests on flawed conceptual foundations.

Should uploading projects be scrapped then? Even if uploading technology doesn’t fulfill its original promise of digital immortality, perhaps it can nevertheless benefit our species. For instance, a global catastrophe may make Earth inhospitable to biological life forms, and uploading may be a way to preserve the human way of life and thinking, if not the actual humans themselves. And if these uploads are indeed conscious, this could be something that members of our species come to value, when confronted with their own extinction. Furthermore, even if uploads aren’t conscious, the use of simulated human minds for space travel could be a safer, more efficient way of sending intelligent beings into space than sending a biological human. The public tends to find manned missions to space exciting, even when robotic missions seem more efficient. Perhaps the use of uploaded minds would excite the public. Perhaps these uploads could even run terraforming operations on inhospitable worlds, readying the terrain for biological humans. You never know.

In addition, brain uploading could facilitate the development of brain therapies and enhancements that could benefit humans or nonhuman animals, because uploading part or all of a brain could help generate a working emulation of a biological brain that we could learn from. AI researchers who aim to build AIs that rival human-level intelligence may find it a useful means of AI development. Who knows, perhaps AI that is descended from us will have a greater chance of being benevolent toward us.

Finally, some humans will understandably want digital doubles of themselves. If you found out that you were going to die soon, you may wish to leave a copy of yourself to communicate with your children or complete projects that you care about. Indeed, the personal assistants—the Siris and Alexas of the future—might be uploaded copies of deceased humans we have loved deeply. Perhaps our friends will be copies of ourselves, tweaked in ways we find insightful. And perhaps we will find that these digital copies are themselves sentient beings, deserving to be treated with dignity and respect.
