CHAPTER SEVEN

A UNIVERSE OF SINGULARITIES

In your mind’s eye, zoom away from Earth. Envision Earth becoming but a “pale blue dot” in outer space, to use an expression of Carl Sagan’s. Now zoom out of the Milky Way Galaxy. The scale of the universe is truly staggering. We are but one planet in an immense, expanding universe. Astronomers have already discovered thousands of exoplanets, planets beyond our solar system, many of which are Earthlike—they seem to have the sort of conditions that led to the development of life on Earth. As we gaze up into the night sky, life could be all around us.

This chapter will illustrate that the technological developments we are witnessing today on Earth may have all happened before, elsewhere in the universe. That is, the universe’s greatest intelligences may be synthetic, having grown out of civilizations that were once biological.1 The transition from biological intelligence to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. If a civilization develops the requisite AI technology, and cultural conditions are favorable, the transition from biological to postbiological may take only a few hundred years. As you read these words, there may be thousands, or even millions, of other worlds that have developed AI technology.

In reflecting on postbiological intelligence, we are not just considering the possibility of alien intelligence—we may also be reflecting on the nature of ourselves or our descendants, for we’ve seen that human intelligence may itself become postbiological. So, in essence, the line between “us” and “them” blurs, as our focus moves away from biology to the difficult task of understanding the computations and behaviors of superintelligence.

Before we delve further into this, a note on the expression “postbiological.” Consider a biological mind that achieves superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. This creature would be postbiological, although many wouldn’t call it an “AI.” Or consider computronium built out of purely biological materials, like the Cylon Raider in the reimagined Battlestar Galactica TV series. The Cylon Raider is artificial, and postbiological.

The key point is that there is no reason to expect humans to be the highest form of intelligence out there. It is humbling to conceive of this, but we may be intellectual lightweights when viewed on a galactic scale, at least until we enhance our minds in radical ways. The intellectual gap between an unenhanced human and alien superintelligence could be like that between us and goldfish.

THE POSTBIOLOGICAL COSMOS

In the field of astrobiology, this position has been called the postbiological cosmos approach. This approach says that the members of the most intelligent alien civilizations will be superintelligent AIs. What is the rationale for this? Three observations, when considered together, motivate this conclusion.

1.   It takes only a few hundred years—a cosmic eyeblink—for a civilization to go from pre-industrial to postbiological.

Many have urged that once a society creates the technology that could put it in touch with intelligent life on other planets, there is only a short window before it changes its own paradigm from biology to AI—perhaps only a few hundred years.2 This makes it more likely that the aliens we encounter, if we encounter any, would be postbiological. Indeed, the short-window observation seems to be supported by human cultural evolution, at least thus far. Our first radio signals date back only about 120 years, and space exploration is only about 50 years old, but many Earthlings are already immersed in digital technology, such as smartphones and laptop computers. Currently, billions of dollars are being poured into the development of sophisticated AI, which is now expected to change the face of society within the next several decades.

A critic may object that this line of thinking employs “N = 1 reasoning.” Recall that this is a form of reasoning that mistakenly generalizes from the human case to the case of alien life. But it strikes me as unwise to discount arguments based on the human case—human civilization is the only one we know of, and we had better learn from it. It is no great leap to claim that other technological civilizations will develop technologies to advance their intelligence and gain an adaptive advantage. And we’ve seen that synthetic intelligence will likely be able to radically outperform the unenhanced brain.

An additional objection to my short-window observation points out that nothing I have said thus far suggests that future humans will be superintelligent; I have said only that they will be postbiological. But postbiological beings may not be so advanced as to be superintelligent. So even if one is comfortable reasoning from the human case, the human case does not actually support the claim that the members of the most advanced alien civilizations will be superintelligent.

This is a valid objection, but I think the other considerations that follow show that alien intelligence is also likely to be superintelligent.

2.   Alien civilizations may have already been around for billions of years.

Proponents of SETI (“the Search for Extraterrestrial Intelligence”) have often concluded that alien civilizations would be much older than our own, if they exist. As the former NASA chief historian, Steven Dick, observes: “all lines of evidence converge on the conclusion that the maximum age of extraterrestrial intelligence would be billions of years, specifically [it] ranges from 1.7 billion to 8 billion years.”3 This is not to say that all life evolves into intelligent, technological civilizations. It is just to say that there are much older planets than Earth. Insofar as intelligent, technological life does evolve on even some of them, these alien civilizations are projected to be millions or billions of years older than us, so many could be vastly more intelligent than we are. They would be superintelligent, by our standards. It is humbling to conceive of this, but we may be galactic babies. When viewed on a cosmic scale, Earth is but a playpen for intelligence.

But would the members of these superintelligent civilizations be forms of AI? Even if they were biological and had received brain enhancements, their superintelligence would be reached by artificial means, which leads me to my third observation:

3.   It is likely that these synthetic beings will not be biologically based.

I’ve already observed that silicon appears to be a better medium for information processing than the brain itself. In addition, other, superior kinds of microchips are currently under development, such as those based on graphene and carbon nanotubes. The number of neurons in the human brain is limited by cranial volume and metabolism, but computers can be remotely connected across the globe. AIs can be constructed by reverse-engineering the brain and improving on its algorithms. And AI is more durable and can be backed up.

There is one thing that would stand in the way: the very worries I have been expressing in this book. Like human philosophers, alien thinkers may also come to appreciate the difficult and possibly intractable issues of personal identity raised by cognitive enhancements. Maybe they resisted the pull of radical enhancement, as I have been urging us to do.

Unfortunately, I think there is a good chance that some civilizations succumbed. This doesn’t necessarily mean that the members of these civilizations became zombies; hopefully, the superintelligences are conscious beings. But it does mean that members who “enhanced” may have died. Perhaps these civilizations didn’t halt enhancements because they mistakenly believed they had found clever solutions to the philosophical puzzles. Or perhaps on some worlds, the aliens weren’t even philosophical enough to reflect on these issues. And perhaps on some other distant worlds they did, but they concluded, based on the reflections of alien philosophers who held views akin to those of the Buddha or Parfit, that there is no real survival anyway. Not believing in the self at all, they opted to upload. They might have been what the philosopher Pete Mandik called “metaphysically daring”: willing to make a leap of faith that consciousness or the self can be preserved when one transfers the informational structure of the brain from tissue to silicon chips.4 Another possibility is that certain alien civilizations take great care in enhancing an individual during their lifetime, so as not to violate certain principles of personal identity, but use reproductive technologies to create new members of the species with highly enhanced abilities. Other civilizations may have simply lost control of their AI creations and been unwittingly supplanted.

Whichever intelligent civilizations didn’t halt enhancement efforts, for whatever reason, became the most intelligent civilizations in the universe. Whether these aliens are good philosophers or not, their civilizations still reap the intellectual benefits. As Mandik suggests, systems that have high degrees of metaphysical daring could, through making many more digital backups of themselves, be more fit in a Darwinian sense than more cautious beings in other civilizations.5

Furthermore, I’ve noted that AIs are more likely to endure space travel, being both more durable and capable of backup, so they will likely be the ones to colonize the universe, if anyone does. They may be the kind of creatures we Earthlings first encounter, even if they aren’t the most common.

In sum, there seems to be a short window of time from the development of space travel and communications technology to the development of postbiological minds. Extraterrestrial civilizations will have passed through this window long ago. They are likely to be vastly older than us, and thus they would have already reached not just a postbiological stage, but superintelligence. Finally, at least some will be AIs rather than biological creatures, because silicon and other materials are a superior medium for information processing. From all this, I conclude that if life is indeed present on many other planets, and if advanced civilizations do tend to develop and then survive, the members of the most advanced alien civilizations will likely be superintelligent AIs.

The science fiction–like flavor of these issues can encourage misunderstanding, so it is worth stressing that I am not claiming that most life in the universe is nonbiological. Most life on Earth itself is microbial. Nor am I saying that the universe will be “controlled” or “dominated” by a single superintelligent AI, akin to Skynet from the Terminator films, although it is worth reflecting on AI safety in the context of these issues. (Indeed, I shall do so shortly.) I am merely suggesting that the members of the most advanced alien civilizations will be superintelligent AIs.

Suppose I am right. What should we make of this? Here, current debates over AI on Earth are telling. Two important issues—the so-called control problem and the nature of mind and consciousness—impact our understanding of what superintelligent alien civilizations may be like. Let’s begin with the control problem.

THE CONTROL PROBLEM

Advocates of the postbiological cosmos approach suspect that machines will be the next phase in the evolution of intelligence. On this view, you and I, how we live and experience life right now, are just an intermediate step to AI, a rung on the evolutionary ladder. These advocates tend to have an optimistic view of the postbiological phase of evolution. Others, in contrast, are deeply concerned that humans could lose control of superintelligence, because a superintelligence could rewrite its own code and outthink any safeguards we build in. AI could be our greatest invention and our last one. This has been called the “control problem”—the problem of how we Earthlings can control an AI that is both inscrutable and vastly smarter than us.

We’ve seen that superintelligent AI could be developed during a technological singularity, a point at which ever-more-rapid technological advances—especially an intelligence explosion—outstrip our ability to predict or understand the technological changes as they unfold. But even if superintelligent AI arises in a less dramatic fashion, there may be no way for us to foresee or control the goals of AI. Even if we could decide on what moral principles to build into our machines, moral programming is difficult to specify in a foolproof way, and any such programming could be rewritten by a superintelligence in any case. A clever machine could bypass safeguards, such as kill switches, and could potentially pose an existential threat to biological life.

The control problem is a serious problem—perhaps it is even insurmountable. Indeed, upon reading Bostrom’s compelling book on the control problem, Superintelligence: Paths, Dangers, Strategies,6 scientists and business leaders such as Stephen Hawking and Bill Gates were widely reported by the world media as commenting that superintelligent AI could threaten the human race. At this time, millions of dollars are pouring into organizations devoted to AI safety, and some of the finest minds in computer science are working on the problem. Let us consider the implications of the control problem for the SETI project.

ACTIVE SETI

The usual approach to searching for life in the universe is to listen for radio signals from extraterrestrial intelligence. But some astrobiologists think we should go a step further. Advocates of Active SETI hold that we should also be using our most powerful radio transmitters, such as the giant dish telescope at Arecibo, Puerto Rico (pictured here), to send messages in the direction of the stars nearest to Earth in order to initiate a conversation.7

The Arecibo Observatory. Courtesy of the Arecibo Observatory, a facility of the NSF

Active SETI strikes me as reckless when one considers the control problem, however. Although a truly advanced civilization would probably have no interest in us, we should not call attention to ourselves, as an encounter with even one hostile civilization among millions could be catastrophic. Maybe one day we will reach the point at which we can be confident that alien superintelligences do not pose a threat to us, but we have no justification for such confidence just yet. Proponents of Active SETI argue that a deliberate broadcast would not make us any more vulnerable than we already are, pointing out that our radar and radio signals are already detectable. But these signals are fairly weak and quickly blend with natural galactic noise. We would be playing with fire if we transmitted stronger signals that were intended to be heard.

The safest mindset is intellectual humility. Indeed, barring blaringly obvious scenarios in which alien ships hover over Earth, as in films like Arrival and Independence Day, I wonder if we could even recognize the technological markers of a truly advanced superintelligence. Some scientists project that superintelligent AIs could be found near black holes, feeding off their energy.8 Alternately, perhaps superintelligences would create Dyson spheres, megastructures such as the one pictured below, which harness the energy of an entire star.

But these are just speculations from the vantage point of our current technology; it’s simply the height of hubris to claim that we can foresee the computational structure or energy needs of a civilization that is millions or even billions of years ahead of our own. For what it’s worth, I suspect that we will not detect or be contacted by alien superintelligences until our own civilization becomes superintelligent. It takes one to know one.

Although many superintelligences would be beyond our grasp, perhaps we can be more confident when speculating on the nature of “early” superintelligences—that is, those that emerge from a civilization that was previously right on the cusp of developing superintelligence. Some of the first superintelligent AIs could have cognitive systems that are modeled after biological brains—the way, for instance, that deep-learning systems are roughly modeled on the brain’s neural networks. So their computational structure might be comprehensible to us, at least in rough outlines. They may even retain goals that biological beings have, such as reproduction and survival. I will turn to this issue of early superintelligence in more detail shortly.9


Dyson sphere

But superintelligent AIs, being self-improving, could quickly transition to an unrecognizable form. Perhaps some superintelligences will opt to retain cognitive features that are similar to those of the species they were originally modeled after, placing a design ceiling on their own cognitive architecture. Who knows? But without a ceiling, an alien superintelligence could quickly outpace our ability to make sense of its actions or even look for it.

An advocate of Active SETI will point out that this is precisely why we should send signals into space—let the superintelligent civilizations locate us, and let them design means of contact they judge to be intelligible to an intellectually inferior species like us. While I agree this is a reason to consider Active SETI, I believe that the possibility of encountering a dangerous superintelligence outweighs it. For all we know, malicious superintelligences could infect planetary AI systems with viruses, and wise civilizations may build cloaking devices; perhaps this is why we haven’t yet detected anyone. We humans may need to reach our own singularity before embarking on Active SETI. Our own superintelligent AIs will be able to inform us of the prospects for galactic AI safety and how we should go about recognizing signs of superintelligence elsewhere in the universe. Again, “it takes one to know one” is the operative slogan.

SUPERINTELLIGENT MINDS

The postbiological cosmos approach involves a radical shift in our usual perspective about intelligent life in the universe. Normally, we expect that if we encountered advanced alien intelligence, we would encounter creatures with very different biological features than us, but that much of our intuition about minds would still apply. But the postbiological cosmos approach suggests otherwise.

In particular, the standard view is that if we ever encountered advanced alien creatures, they would still have minds like ours in an important sense—there would be something it is like, from the inside, to be them. We’ve seen that throughout your daily life, and even when you dream, it feels like something to be you. Likewise, there is also something that it is like to be a biological alien, if such beings exist—or so we tend to assume. But would a superintelligent AI even have conscious experience? If it did, could we tell? And how would its inner life, or lack thereof, impact its capacity for empathy and the kind of goals it has? Raw intelligence is not the only issue to consider when thinking about contact with extraterrestrials.

We considered these issues in detail in earlier chapters, and we can now appreciate their cosmic import. I’ve noted that the question of whether an AI could have an inner life should be key to how we value its existence, because consciousness is central to our judgment of whether it is a self or person. An AI could even be superintelligent, outperforming humans in every cognitive and perceptual domain, but if it doesn’t feel like anything to be the AI, it is difficult to view such beings as having the same value as conscious beings—as selves or persons. And conversely, I’ve observed that whether AI is conscious may also be key to how it values us: A conscious AI could recognize in us the capacity for conscious experience.

Clearly, the issue of machine consciousness could be central to how humans would react to the discovery of superintelligent aliens. One way that humanity will process the implications of contact will be through religion. And although I hesitate to speak for world religions, discussions with my colleagues working in astrobiology at the Center of Theological Inquiry, Princeton, suggest that many would reject the possibility that AIs could have souls or are somehow made in God’s image, if they are not even conscious beings. Indeed, Pope Francis has recently commented that he would baptize an extraterrestrial.10 But I wonder how Pope Francis would react if asked to baptize an AI, let alone one that is not capable of consciousness.

This isn’t just a romantic question of whether ETs will enjoy sunsets or possess souls, but an existential one for us: Even if the universe were stocked full of AIs of unbelievable intelligence, why would those machines place any value on conscious biological intelligences? Nonconscious machines cannot experience the world and, lacking that awareness, may be incapable of genuine empathy or even intellectual concern for outmoded creatures.

BIOLOGICALLY INSPIRED SUPERINTELLIGENCES

Thus far, I’ve said little about the structure of superintelligent alien minds. And little is all we can say: Superintelligence is by definition a kind of intelligence that outthinks humans in every domain. In an important sense, we cannot predict or fully understand how it will think. Still, we may be able to identify a few important characteristics, at least in broad strokes.

Nick Bostrom’s recent book on superintelligence focuses on the development of superintelligence on Earth, but we can draw from his thoughtful discussion. Bostrom distinguishes three kinds of superintelligence:

Speed superintelligence: a superintelligence having rapid-fire cognitive and perceptual abilities. For instance, even a human emulation or upload could in principle run so fast that it could write a PhD thesis in an hour.

Collective superintelligence: the individual units need not be superintelligent, but the collective performance of the individual members vastly outstrips the intelligence of any individual human.

Quality superintelligence: an intelligence that computes at least as fast as humans think and that also outthinks humans in every domain.11

Bostrom indicates that any of these kinds of superintelligence could exist alongside one or more of the others.

An important question is whether we can identify common goals that these types of superintelligences could share. Bostrom suggests the following thesis:

The Orthogonality Thesis: Intelligence and final goals are orthogonal—“more or less any level of intelligence could in principle be combined with more or less any final goal.”12

Put simply, just because an AI is smart doesn’t mean it has perspective; all the intelligence of a superintelligent being could be marshaled to absurd ends. (This reminds me a bit of academic politics, in which so much intelligence can be utterly wasted on petty or even perverse goals.) Bostrom is careful to underscore that a great many unthinkable kinds of superintelligences could be developed. At one point in the book, he raises the example of a superintelligence that runs a paper-clip factory. Its final goal is the banal task of manufacturing paper clips.13 Although this may initially strike you as a harmless endeavor (but hardly a life worth living), Bostrom’s sobering point is that a superintelligence could utilize every form of matter on Earth in support of this goal, wiping out biological life in the process.

The paper-clip example illustrates that superintelligence could be of an unpredictable nature, having thinking that is “extremely alien” to us.14 Although the final goals of superintelligence are difficult to predict, Bostrom singles out several instrumental goals as being likely, given that they support any final goal whatsoever:

The Instrumental Convergence Thesis: “Several instrumental values can be identified which are convergent in the sense that their attainment would increase the chances of the agent’s goal being realized for a wide range of final goals and a wide range of situations, implying that these instrumental values are likely to be pursued by a broad spectrum of situated intelligent agents.”15

The goals that Bostrom identifies are resource acquisition, technological perfection, cognitive enhancement, self-preservation, and goal-content integrity (i.e., that a superintelligent being’s future self will pursue and attain those same goals). He underscores that self-preservation can involve group or individual preservation, and that it may play second fiddle to the preservation of the species the AI was designed to serve.

Bostrom does not speculate about superintelligent alien minds in his book, but his discussion is suggestive. Let us call an alien superintelligence that is based on reverse engineering an alien brain, including uploading it, a “biologically inspired superintelligent alien” (BISA). Although BISAs are inspired by the brains of the original species that the superintelligence is derived from, their algorithms may depart from those of their biological model at any point.

BISAs are of particular interest in the context of alien superintelligence, because they form a special class in the full spectrum of possible AIs. If Bostrom is correct that there are many ways superintelligence can be built, superintelligent AIs will be highly heterogeneous, with members generally bearing little resemblance to one another. It may turn out that of all superintelligent AIs, BISAs bear the most resemblance to one another by virtue of their biological origins. In other words, BISAs may be the most cohesive subgroup, because the other members are so different from one another. BISAs may be the single most common form of alien superintelligence out there.

You may suspect that because BISAs could be scattered across the galaxy and generated by multitudes of species, there is little interesting that we can say about the class of BISAs. You may object that it is useless to theorize about BISAs, as they can change their basic architecture in numerous, unforeseen ways, and any biologically inspired motivations can be constrained by programming. But notice that BISAs have two features that may give rise to common cognitive capacities and goals:

1.   BISAs are descended from creatures that had motivations like: find food, avoid injury and predators, reproduce, cooperate, compete, and so on.

2.   The life forms that BISAs are modeled on have evolved to deal with biological constraints like slow processing speed and the spatial limitations of embodiment.

Could these features yield traits common to members of many superintelligent alien civilizations? I suspect so.

Consider feature 1. Intelligent biological life tends to be primarily concerned with its own survival and reproduction, so it is more likely that a BISA would have final goals involving its own survival and reproduction, or at least the survival and reproduction of the members of its society. If BISAs are interested in reproduction, we might expect that, given the massive amounts of computational resources at their disposal, BISAs would create simulated universes stocked with artificial life and even intelligence or superintelligence. If these creatures were intended to be “mindchildren,” they may retain the goals listed in feature 1 as well.

Likewise, if a superintelligence continues to take its own survival as a primary goal, it may not wish to change its architecture fundamentally. It may opt for a series of smaller improvements that nevertheless gradually lead the individual toward superintelligence. Perhaps, after reflecting on the personal-identity debate, BISAs tend to appreciate the vexing nature of the issues, and they think: “Perhaps, when I fundamentally alter my architecture, I will no longer be me.” Even a being that is an upload, and that believes it is not identical to the creature that was uploaded, may nevertheless wish not to alter the traits that were most important to its biological counterpart during that creature’s biological existence. Remember, uploads are isomorphs (at least at the time they are uploaded), so these are traits that they identify with, at least initially. Superintelligences that reason in this way may elect to retain biological traits.

Consider feature 2. Although I have noted that a BISA may not wish to alter its architecture fundamentally, it or its designers may still move away from the original biological model in all sorts of unforeseen ways. Even then, though, we could look for cognitive capacities that are useful to keep: cognitive capacities that sophisticated forms of biological intelligence are likely to have and that enable the superintelligence to carry out its final and instrumental goals. We could also look for traits that are not likely to be engineered out, as they do not detract from the BISA’s pursuit of its goals. We might expect the following, for instance.

1.   Learning about the computational structure of the brain of the species that created the BISA can provide insight into the BISA’s thinking patterns. One influential means of understanding the computational structure of the brain in cognitive science is connectomics, a field that aims to provide a connectivity map or wiring diagram of the brain, called the “connectome.”16

Although it is likely that a given BISA will not have the same kind of connectome as the members of the original species did, some of the functional and structural connections may be retained, and interesting departures from the originals may be found. So, this may sound right out of The X-Files, but an alien autopsy could be quite informative!
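To make the comparison of wiring diagrams concrete, here is a minimal sketch in Python. The region names, the toy connectomes, and the set-difference comparison are all invented for illustration; nothing here reflects real neural data.

```python
# Toy "connectomes" as adjacency maps: region -> set of downstream regions.
# All region names and connections are hypothetical illustrations.
species_connectome = {
    "sensory": {"association"},
    "association": {"motor", "memory"},
    "memory": {"association"},
}

bisa_connectome = {
    "sensory": {"association"},
    "association": {"motor", "memory", "planning"},  # a departure from the model
    "memory": {"association"},
    "planning": {"motor"},
}

def edges(connectome):
    """Flatten an adjacency map into a set of (source, target) links."""
    return {(src, dst) for src, targets in connectome.items() for dst in targets}

# Connections retained from the biological model, and novel departures.
retained = edges(species_connectome) & edges(bisa_connectome)
departures = edges(bisa_connectome) - edges(species_connectome)

print("retained:", sorted(retained))
print("departures:", sorted(departures))
```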

2.   BISAs may have viewpoint-invariant representations. Consider walking up to your front door. You’ve walked this path hundreds, maybe thousands of times, but technically, you see things from slightly different angles each time, as you are never positioned in exactly the same way twice. But obviously, the path is a familiar one, and this is because at a high level of processing, your brain has internal representations of the people and objects that you interact with that do not vary with your angle or position with respect to them. For instance, you have an abstract notion of door that is independent of the precise appearance of any given door.

Indeed, it strikes me as difficult for biologically based intelligences to evolve without such representations, as they enable categorization and prediction.17 Invariant representations arise because a system that is mobile needs a means of identifying items in its ever-changing environment, so we would expect biologically based systems to have them. A BISA would have little reason to give up invariant representations insofar as it remains mobile or has mobile devices sending it information remotely.
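As a toy illustration of the idea, the sketch below (in Python, with invented coordinates for two “views” of an object) maps observations taken from different positions to a single canonical signature by subtracting each view’s centroid, so that shifted views of the same shape come out identical.

```python
# A minimal form of viewpoint invariance: shifted views of the same
# 2-D shape collapse to one canonical signature. The "door" outlines
# below are invented coordinates, purely for illustration.
def canonical_signature(points):
    """Center a point set on its centroid, removing positional information."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return sorted((round(x - cx, 6), round(y - cy, 6)) for x, y in points)

view_a = [(0, 0), (0, 2), (1, 2), (1, 0)]   # the door, seen from one position
view_b = [(5, 3), (5, 5), (6, 5), (6, 3)]   # the same door, approached differently

# Both views yield the same invariant representation.
print(canonical_signature(view_a) == canonical_signature(view_b))  # True
```

Real invariant representations in brains and machine vision are of course far richer, handling rotation, scale, and lighting as well, but the principle is the same: discard the viewpoint, keep the object.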

3.   BISAs will have language-like mental representations that are recursive and combinatorial. Notice that human thought has the crucial and pervasive feature of being combinatorial. Consider the thought that wine is better in Italy than in China. You may never have had this thought before, but you were able to understand it. The key is that thoughts are built out of familiar constituents and combined according to rules. The rules apply both to primitive constituents and to constructions built from them, which are themselves grammatically structured. Grammatical mental operations are incredibly useful: It is the combinatorial nature of thought that allows one to understand and produce such sentences on the basis of one’s antecedent knowledge of the grammar and atomic constituents (e.g., wine, China). Relatedly, thought is productive: In principle, one can entertain and produce an infinite number of distinct representations, because the mind has a combinatorial syntax.18

Brains need combinatorial representations, because there are infinitely many possible linguistic representations, and the brain has only finite storage space. Even a superintelligent system would benefit from combinatorial representations. Although a superintelligent system could have computational resources vast enough to store an astronomical number of utterances or inscriptions, it would be unlikely to trade away such a marvelous innovation of biological brains. If it did, it would be less efficient, because its storage, however large, must be finite, and some sentence could always fail to be in it.
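A small sketch can make the point about productivity vivid. In the hypothetical Python grammar below, a handful of atomic constituents and two combination rules generate unboundedly many distinct, novel thoughts from finite resources.

```python
# A toy combinatorial syntax: a finite stock of atomic constituents
# plus recursive combination rules yields unboundedly many distinct
# thoughts. The grammar here is invented purely for illustration.
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class BetterIn:            # "X is better in Y than in Z"
    subject: "Thought"
    place: "Thought"
    contrast: "Thought"

@dataclass(frozen=True)
class And:                 # thoughts can combine with thoughts
    left: "Thought"
    right: "Thought"

Thought = Union[Atom, BetterIn, And]

def express(t: Thought) -> str:
    """Recursively spell out a thought from its constituents."""
    if isinstance(t, Atom):
        return t.name
    if isinstance(t, BetterIn):
        return (f"{express(t.subject)} is better in "
                f"{express(t.place)} than in {express(t.contrast)}")
    return f"({express(t.left)} and {express(t.right)})"

wine, italy, china = Atom("wine"), Atom("Italy"), Atom("China")
novel = BetterIn(wine, italy, china)       # a never-before-entertained thought
nested = And(novel, And(novel, novel))     # recursion: there is no longest thought
print(express(nested))
```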

4.   BISAs may have one or more global workspaces. When you search for a fact or concentrate on something, your brain grants that sensory or cognitive content access to a “global workspace” where the information is broadcast to attentional and working memory systems for more concentrated processing, as well as to the massively parallel channels in the brain.19 The global workspace operates as a singular place where important information from the senses is considered in tandem, so that the creature can make all-things-considered judgments and act intelligently in light of all the facts at its disposal. In general, it would be inefficient to have a sense or cognitive capacity that was not integrated with the others, because the information from this sense or cognitive capacity would be unable to figure in predictions and plans based on an assessment of all the available information.
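Here is a minimal sketch of that architecture in Python, assuming invented module names and salience scores: specialist contents compete for access, and the single winner is broadcast to every subscribed system.

```python
# A minimal global-workspace sketch, loosely in the spirit of global
# workspace theory: contents compete, and the most salient one is
# broadcast to all subscribed modules for further processing.
class GlobalWorkspace:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, module):
        self.subscribers.append(module)

    def broadcast(self, candidates):
        """Admit the most salient content, then share it with every module."""
        content, _ = max(candidates, key=lambda pair: pair[1])
        for module in self.subscribers:
            module(content)

def working_memory(content):
    print("working memory stores:", content)

def planning(content):
    print("planning acts on:", content)

workspace = GlobalWorkspace()
workspace.subscribe(working_memory)
workspace.subscribe(planning)

# Competing sensory/cognitive contents, tagged with illustrative salience scores.
workspace.broadcast([("faint humming sound", 0.2),
                     ("rapidly approaching predator", 0.9)])
```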

5.   A BISA’s mental processing can be understood via functional decomposition. As complex as alien superintelligence may be, humans may be able to use the method of functional decomposition as an approach to understanding it. We’ve seen that a key feature of computational approaches to the brain is that cognitive and perceptual capacities are understood by decomposing a particular capacity into its causally organized parts, which themselves can be understood in terms of the causal organization of their parts. This is the method of functional decomposition, and it is a key explanatory method in cognitive science. It is difficult to envision a complex thinking machine not having a program consisting of causally interrelated elements, each of which consists of causally organized elements.
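The sketch below illustrates the method in miniature: a hypothetical recognition capacity is explained by decomposing it into causally organized sub-capacities, each of which could be decomposed further. The stages are illustrative stand-ins, not a claim about any actual system.

```python
# Functional decomposition in miniature: a capacity is explained by
# the causal organization of its parts, each decomposable in turn.
def detect_edges(image):          # a sub-sub-capacity
    return f"edges({image})"

def extract_features(image):      # a sub-capacity, itself decomposed
    return f"features({detect_edges(image)})"

def classify(features):           # another sub-capacity
    return f"label({features})"

def recognize(image):             # the capacity under analysis
    return classify(extract_features(image))

print(recognize("retinal input"))
# -> label(features(edges(retinal input)))
```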

In short, the superintelligent AI’s processing may make some sense to us, and developments from cognitive science may yield a glimmer of understanding into the complex mental lives of certain BISAs. All this being said, superintelligent beings are by definition beings that are superior to humans in every domain. Although a creature can have superior processing that still basically makes sense to us, it may be that a given superintelligence is so advanced that we cannot understand any of its computations whatsoever. It may be that any truly advanced civilization will have technologies that will be indistinguishable from magic, as Arthur C. Clarke suggested.20

In this chapter, we’ve zoomed away from Earth, situating mind-design issues in a cosmic context. I’ve illustrated that the issues we Earthlings are facing today may not be unique to Earth. In fact, discussions of superintelligence on Earth, together with research in cognitive science, helped inform speculations about what superintelligent alien minds might be like. We’ve also seen that our earlier discussion of synthetic consciousness is relevant as well.

It is also worth noting that as members of these civilizations develop the technology to enhance their own minds, these cultures may confront the same perplexing issues of personal identity that we discussed earlier. Perhaps the most technologically advanced civilizations are the most metaphysically daring ones, as Mandik had suggested. These are the superintelligences that didn’t stall their own enhancements based on concerns about survival. Or perhaps they were concerned about personal identity and found a clever—or not so clever—way around it.

In what follows, we will descend back to Earth, delving into issues that relate to patternism. It is now time to explore a dominant view of the mind that underlies transhumanism and fusion-optimism. Many transhumanists, philosophers of mind, and cognitive scientists have appealed to a conception of the mind in which the mind is software. This is often expressed by the slogan: “the mind is the software the brain runs.” We must now ask: Is this view of the nature of the mind well founded? If our universe is stocked full of alien superintelligences, it is all the more important to consider whether the mind is software.
