
The Future of New Media

Embodying Kurzweil's Singularity in Dollhouse, Battlestar Galactica, and Gamer

David Golumbia

ABSTRACT

In this chapter, David Golumbia argues that computationalist ideology – the view that everything in reality is ultimately made up of computation – can be seen with particular clarity in what he calls “the uploading story.” According to this story, we human beings are on the verge of losing our human embodiment and our contact with everyday reality, and moving into a world where our minds merge with computers and we inhabit virtual realities much like those seen in videogames. A prominent exemplar of this story is found in the work of inventor and futurist Ray Kurzweil, who claims that we are heading toward a profound transformation he and others call “The Singularity.” Golumbia argues that the premise of Kurzweil's “Singularity” reads very much like the fantastic versions of “uploading” that have been widespread in computational thought for decades. To expose the conceptual flaws in the uploading story and its incarnation in Kurzweil's “Singularity,” Golumbia examines a selection of recent television series and films that use this story as their narrative foundation. Dollhouse, Battlestar Galactica, and Gamer constitute a kind of demonstrative interrogation of the uploading fantasy, Golumbia argues, revealing its central contradiction – namely, that it relies not on actual technological advances but on distortions beyond any recognizable limits of our conceptions of the human, the mind, and the body.

New Media After the Singularity

Computationalist ideology boasts no more technically accomplished, influential, or persuasive a public advocate than Ray Kurzweil.1 It also has no influential advocate whose views jettison (often without considering them at all) so many of the hard-won insights of the centuries-long project of humanistic inquiry, including parts that intersect with pillars of the scientific enterprise. Kurzweil does not just paint what might reasonably be called the standard computationalist picture according to which mind itself and therefore human society operate almost exclusively via algorithms (a view explicated at length in both Kurzweil 1990 and 1999); he just as often takes the most extreme position possible, insisting that there truly can be nothing in the universe at any level other than computation.2 Nearly inseparable from this perspective is the view that the “biological understanding” of human being is both misguided and “pessimistic” (see especially The Singularity Is Near, 2005, p. 12). To Kurzweil this old-fashioned understanding is something we will soon shed, overcoming the “inherent limits” of our human bodies, at which point “we will be able to reengineer all of the organs and systems in our biological bodies and brains to be vastly more capable” than they are today (Kurzweil, 2005, p. 27).

In Kurzweil's future, once we understand the deep ubiquity of computation, it will become impossible to talk about something distinct called media: “as virtual reality from within the nervous system becomes competitive with real reality in terms of resolution and believability, our experiences will increasingly take place in virtual environments” (Kurzweil, 2005, p. 29), in a vision not far away from the one articulated by early advocates of Virtual Reality (hereafter VR).3 According to this view, in a manner more profound than anything we see today, we will live in media when we become machines. Kurzweil sometimes portrays this transformation as a discrete event and sometimes as an ongoing process of which it is but one crucial step. Following the practice of a range of technological-eschatological thinkers, Kurzweil names it “the Singularity.”4

In part a kind of ne plus ultra of media convergence, the Singularity will have changed the world so much by 2099 that:

There is no longer any clear distinction between humans and computers.

Most conscious entities do not have a permanent physical presence.

Machine-based intelligences derived from extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic equivalents. (Kurzweil, 1999, p. 280)

One of the cardinal reasons for approaching computationalism as an ideology rather than a philosophical framework is that computationalist thought is more interested in using rhetorical constructions to keep a certain power in place than it is in fidelity to its principles or to the facts. While Kurzweil often writes in a mode that rejects the significance of almost everything we recognize today as “human,” declaring with near-certainty and in the present tense that in 2099, “most conscious entities do not have a permanent physical presence,” he just as often retreats into truisms and tautologies that contradict these most extreme pronouncements, or that provide rhetorical catachreses just where argument is required: “future machines will be human, even if they are not biological. This will be the next step in evolution [. . .] Most of the intelligence of our civilization will ultimately be nonbiological” (Kurzweil, 2005, p. 30). Today, in our definition of these words, the phrase “non-biological human” is a contradiction – an oxymoron on the order of “round square” or “living dead” – which does nothing to dispel its power, but much to open the question of its function. It is not a logical contradiction so much as a conceptual one: there are many things that human beings are not which would be similarly (but perhaps less centrally) meaningful if appended to the word human: non-terrestrial human; non-Homo sapiens human; and so on. It also shows how critical it is to examine closely the uses and functions of the term human. In fact, Kurzweil uses human much the way he uses mind (in the latter case conforming fully to the “Cartesian myth” thoroughly discredited in Anglo-American philosophy no later than 1949 by Ryle's Concept of Mind), collapsing fully distinct meanings to blur the most critical elements of his story.

Each of Kurzweil's books carries a version of the same future timeline, stretching out millions of years beyond us; indeed, this is the feature of his work that he has emphasized more and more (it takes up a good part of the latter third of The Age of Spiritual Machines, 1999, but is the overt subject of the twice-as-long Singularity, 2005). Wikipedia maintains a web page of about 8,000 words, apart from his main biographical entry, devoted to these predictions. It seems critical to Kurzweil not merely to prove that a merger of human and computer is logically or physically possible, but to convince us that it must happen this one way – along the way, then, to show us that mastering the future is possible, desirable, legible. Otherwise, his thesis comes down to the view that “things will change a lot in the future” – typically, the thesis Kurzweil claims his opponents reject – rather than that this particular set of future events will come about. On the contrary, what is put at stake by Kurzweil's predictive calculus is unpredictability, openness to possibility, and the understanding that we can't know what will come, even if we can fantasize about it.

The Future of Cartesian Dualism

Katherine Hayles (1999) famously begins How We Became Posthuman by retelling what she calls “a roboticist's dream that struck [her] as a nightmare”:

Reading Hans Moravec's Mind Children [. . .] I happened upon the passage where he argues it will soon be possible to download human consciousness into a computer. To illustrate, he invents a fantasy scenario in which a robot surgeon purees the human brain in a kind of cranial liposuction, reading the information in each molecular layer as it is stripped away and transferring the information to a computer. At the end of the operation, the cranial cavity is empty, and the patient, now inhabiting the metallic body of the computer, wakens to find his consciousness exactly the same as it was before. (p. 1)

Hayles's interrogation of the “downloading” fantasy (which now seems more often referred to as an “uploading” fantasy) explored the interstices between human and machine being. In this endeavor it is complete enough, and casts enough doubt on the cybernetic paradigm itself, that it may have obscured the need for analyses of some of the fantasy's most recent manifestations.5 As Kurzweil's writing and recent media both show, though, there is a fuller narrative to the uploading story, one involving commitment to a particular picture of the world that is much more detailed and specific than this brief rehearsal of Moravec (1988) suggests.

The dream Hayles reads in Moravec is not entirely original to him, as Hayles is quick to point out; it has sources in at least two sets of familiar literary-philosophical tropes. The first, visible in Moravec's story to some extent, is commonly traced back at least to Descartes: the idea that despite a commitment to secular materialism – which in the wake of scientific investigation seems to entail the physical materiality of everything in the world – mind and body remain not just conceptually but ideologically distinguishable. (To some, that is, Descartes holds on to the fundamental religious idea of the soul, even as he appears to take all of empirical science and “reason” as his starting points. This association persists to the present day.)

The second source for Moravec's story, somewhat buried but still visible in it and more prominent in other versions, is the story usually referred to as the Allegory of the Cave, found in Plato's Republic (514a–520a; 2007, pp. 240–248): namely that our everyday reality might somehow be a projection, representation, or dream, so that the pursuit of wisdom or knowledge can also be seen as seeing through an illusory veil to a new, transcendent reality.6 (We should also not lose sight of the fact that this Allegory is one of the main presentations of Plato's theory of Forms, a theory that today holds little currency among philosophers precisely because of its apparent commitment to a defined and transcendent space – a Heaven-like space – where problems of word–world reference do not occur.) These two streams have merged into a single fantasy that we find repeated endlessly in contemporary media and thought: scenarios in which we are able to remove our consciousnesses from our bodies and then put them into computers, where they (we) live “inside” the computer's representations as we see them today from the outside: that is, scenarios in which we can upload ourselves into something like a character in World of Warcraft (hereafter WoW), complete with some sort of virtual body relative to that world.

A rough version of the elements and background assumptions typically found in the canonical version of the uploading fantasy goes something like this:

  1. Computers will reach a point where not just unexpected, emergent, or conscious (in general) behaviors appear to develop, but where the phenomenon we call human consciousness spontaneously develops within the computer.
  2. This will coincide with an algorithmic analysis of the human brain (perhaps performed by the spontaneous consciousnesses that emerged of their own accord in the first trope) that describes the entire mind as such, so that a “human mind” per se can be developed as an explicit software program. By “running” this program, the computer is able to emulate (or become?) a “real” human mind.
  3. Human beings will also develop the means to transfer or copy their (individual) minds from their bodies into computers capable of running “human mind software.”
  4. When human beings upload themselves into computers, they will be able to seamlessly slip between the “worlds” that we today perceive from outside the computer (such as games and movies) and what we call reality now. The distinction between “fiction” and “reality” will turn out to have been illusory. What today we perceive using our bodies (such as WoW avatars) will be “real” inside the computer.

Despite the number of features demanded by the fantasy, and despite the strange improbability of many of them happening as part of a single event or process, it is remarkable how many narratives share or respond to them, much as most vampire narratives play off or refer to, even if they do not accept, the many canonical details of the vampire story. In the uploading story, there is a strong conceptual basis for the connection between the parts, in part because they engage in the same rhetorical sleight-of-hand, subtly putting figure where ground should be, and allowing language to make logically impossible constructions appear tenable.

Accepting that there is some fantasy sense in which the uploading story could be true, programs like Battlestar Galactica and Dollhouse exploit a logical crux built into the story that is hard to see in very schematic depictions. Proposals for uploading our minds require one of two things to be possible: either (a) it is possible to remove our minds from our bodies, to separate out exactly that part of us that is “mind” and the part that is not, and to transfer that part exclusively to a machine, thus leaving the body an empty, un-minded husk which it seems hard to call “alive” (an aporia explored with great effectiveness in Dollhouse, discussed below); or (b) the human mind is or is very much like a software program, and like any software program, it can be perfectly copied in exactly its current state from computer to computer and medium to medium (the premise on which the Battlestar Galactica/Caprica franchise is built).

Read closely, the arguments Kurzweil and other uploading advocates present directly address the possibility of copying rather than moving the mind, and Kurzweil is often explicit that this is what computers will enable. But this presents a philosophical conundrum that Kurzweil's critics seem unable to make him conceptualize.7 It is fundamental to our sense of human consciousness that it presents itself to itself as unique: that is to say that the experience of embodied awareness and mental experience I am having right now is mine and mine alone, and in fact defines me, and is only accessible to me; philosophers sometimes call this “qualia.” Were it in fact possible to duplicate that entire experience somewhere else, to make a copy of me in such a way that I were aware of it, I would lose the core notion of uniqueness of self that is arguably constitutive of the experience of selfhood. I am not going to speculate on what my sense of self would be, were the self something that could be copied; my point is simply that the uniqueness of selfhood is a part of consciousness deeply embedded within us, and so that if the uploader's thesis is that consciousness can be copied, he ends up reasoning in a circular fashion. Whatever “I” am, it cannot be copied; something that can be copied cannot be “I.” If the singularity enables the copying of consciousness, it is copying something other than what we are, either because we cannot be uploaded, or because what can upload itself is not us.

While the copying version of the fantasy violates its initial premise – that it is possible exactly to copy ourselves to a machine – the moving fantasy is far more violent, one that even Kurzweil rarely presents in a serious fashion. One can reflect on some of the details of Moravec's story a bit more to see this. In what state of consciousness is the patient while his brain is being pureed, even if sedated? Where is his mind if he wakes up mid-operation? Where would “he” be? When he wakes up “inside” a machine, by definition he no longer has contact with the body and bodily environment that have shaped him to that point; at the same time his body no longer has a mind. Here again, it is the apparently scientific, materialist proposal that turns out to violate mind–brain identity: my consciousness is acutely aware of things I call “my hand” and “my foot,” and while it can sustain a certain amount of challenge to these presumptions (though typically only at great pain, e.g., by amputation), it seems absurd on its face to claim that the unique version of “me” that wakes up inside the computer would “be” me, since every one of my critical representations of my body to myself would have become illusory. It is also hard to see how it could consider itself to be anything but dead, given that my body would be deprived of those brain functions that occur at sub- or unconscious levels while mediating autonomic body functions.8

In his own words, Kurzweil proposes that “there is a specific game plan for achieving human-level intelligence in a machine”:

having tracked the progress being made in accumulating all of the (yes, exponentially increasing) knowledge about the human brain and its algorithms, I believe that it is a conservative scenario to expect that within thirty years we will have detailed models of the several hundred information processing organs we collectively call the human brain. (Kurzweil, 2001)

The “several hundred information processing organs we collectively call the human brain” – while he will often enough backtrack from such sentiments, here it is clear that to Kurzweil the brain, intelligence, mind, and information processing are all the same thing (as are “human consciousness” and “human-level intelligence”): information processing. Nevertheless, he is certain: “there will be no distinction, post-Singularity, between human and machine or between physical and virtual reality” (Kurzweil, 2005, p. 9). This is not a serious scientific or philosophical thesis but instead a powerful rhetorical construction, precisely the one we see deployed over and over again in media, regardless of its coherence.

Kurzweil's proposals repeatedly fall victim to the kind of category error that, according to the philosopher Hilary Putnam, informs the construction of what are known in contemporary Anglo-American philosophy (in large part because of Putnam's own work) as “brain in a vat” thought problems. Most famously, in Reason, Truth, and History (Putnam, 1982), he asks us to consider the now-traditional scenario, in which a person's brain

has been removed from the body and placed in a vat of nutrients which keeps the brain alive. The nerve endings have been connected to a super-scientific computer which causes the person whose brain it is to have the illusion that everything is perfectly normal. There seem to be people, objects, the sky, etc.; but really, all the person (you) is experiencing is the result of electronic impulses travelling from the computer to the nerve endings. The computer is so clever that if the person tries to raise his hand, the feedback from the computer will cause him to “see” and “feel” the hand being raised. (Putnam, 1982, p. 6)

Here Putnam reiterates the uploading story to demonstrate its incoherence. Putnam imaginatively grants what Kurzweil seems not quite to have imagined: suppose all human beings are brains in vats, we are all connected, and our hallucinatory world – our VR space – is made so as to perfectly mimic the real world that exists outside the vat. “Suppose this whole story were actually true. Could we, if we were brains in a vat in this way, say or think that we were?” Putnam asks, and then answers, “although it violates no physical law [. . .] It cannot possibly be true” (Putnam, 1982, p. 7, emphasis in original).

Putnam attacks the problem through logical proof, but his argument also translates, usefully for this discussion, into problems of frame and reference: although the brains in the vat “can think and ‘say’ any words we can think and say, they cannot (I claim) refer to what we can refer to. In particular, they cannot think or say that they are brains in a vat (even by thinking ‘we are brains in a vat’)” (p. 8). Another way of saying this is to reverse perspective: if we are brains in a vat, what do we mean by “reality” – the place in which the vat sits – in the first place? Is it just like the world our brains collectively imagine, or the world in which we think of ourselves as a brain in a vat? If it is what we usually think of as the “real” world in this scenario – where we can see the brain in its vat – how do we know anything about it at all? How do we know it is of such a nature to have “brains” and “vats”?9 This paradox applies even more strongly to versions of the story in which we see humans attached from birth to machines that generate virtual worlds, as in The Matrix (Wachowski & Wachowski, 1999) – how and where, we have to ask, did those human bodies, raised from embryos by machines, gain the bodily experience with which to create the imagined environment they appear to inhabit? Part of Putnam's point is that the only way they could have is if mind and body were two separate substances, just as there would be two different “levels” of reality to which different (but oddly duplicative) versions of ourselves have access. This view is not just bizarre; it is incoherent. Much simpler is the pragmatic and materialist picture: mind and body are one, “mind” develops as body develops, and all of its “representations,” “pictures,” and “thoughts” are constructed relative to its embodied experience.

Why, if Kurzweil's story is correct, would our disembodied selves “inside” the computer want, as he always insists they will, to have sex, or to eat, or even to play games like World of Warcraft? Do we not do each of these things precisely because of the direct connections on which they rely between our “minds” and our “bodies,” unified so that we need not even think about them? Now, when humans interact with media of any sort, they do so largely because of somatic systems that may or may not rise to anything like the level of thought. Today, when my WoW avatar reaches a cliff edge, I experience vertigo because my somatic and perceptual systems are activating memories of and responses to such a scene, all over my body, not just in my brain case: it feels almost as if I am standing on the cliff edge, and if my avatar jumps, I will feel very distinctly as if I have jumped too – in my body. It is because my body is playing that game that I experience it at all. A simulated and disembodied “intelligence” version of me – the “mind” that supposedly can be extracted out of my body and uploaded to the computer world – kicks out the somatic supports that constitute the mind in the first place. The problem for Kurzweil is that he proposes that something like me can be uploaded to the computer – but in every version of the scenario we are able to describe (and this is also Putnam's insight), though we start from the premise that we can separate our minds from our bodies, that premise is legible only if we also covertly assume we cannot.

Kurzweil never fully posits that emergent forms of artificial intelligence (AI) will recreate human embodiment, that this embodiment might be required for human consciousness, or that human brains will be raised from nascence inside a computer (indeed, to Kurzweil, “consciousness” is a relatively mystical thing that is the same regardless of where it exists; this leads to some of his wildest conclusions: after the Singularity, “our civilization will then expand outward, turning all the dumb matter and energy we encounter into sublimely intelligent – transcendent – matter and energy. So in a sense, we can say that the Singularity will ultimately infuse the universe with spirit”; Kurzweil, 2005, p. 389). He posits instead that the human brain currently simulates its own embodiment, as if it were dreaming all the time and as if that dream had no relation to embodied reality, and as if dreaming and reality pointed for us at the same thing. On this view, if we suddenly found ourselves to be brains in a vat – that is, found ourselves conscious of having been disconnected from our bodies and reduced to brains, and now were able to see that brain and that vat and its artificial hormones and its machinic embodiment – we would also inherently know that the “brain in a vat” reality was the “real” reality. Despite our memory of having had a real human body, we would pass into this virtual world without recognizing that it is virtual, without any consciousness of having just given up our material existence. Again, for Kurzweil, “there will be no distinction, post-Singularity, between [. . .] physical and virtual reality” (Kurzweil, 2005, p. 9); this can only be true if the distinction between physical reality and VR has been false all along, because we are about to find out that we have always had a kind of spirit-existence, a transcendent and disembodied “intelligence” – or, in more traditional terms, a soul.

The Future of “Human Biology”

Media representations of the uploading fantasy do not just portray in exquisite detail what the fantasy entails and what it silently pushes into the background; they are themselves part of the discourse that constitutes the fantasy, so much so that the “brain in a vat” arguably occurs in science fiction narratives like Donovan's Brain (novel and radio in the early 1940s, and film in 1953) before it does in philosophy or computer theory. One of the most elaborate analytical media constructions of this sort is the figure of the “humanoid Cylons” depicted in the reimagined television series Battlestar Galactica (Moore & Eick, 2004–2009; hereafter BSG) and its prequel, Caprica (Moore, Eick, Espenson, Murphy, & George, 2009–present). There are fantastically rich narratives in these programs that deserve sustained attention, but here I will highlight just two major representational strategies in the overall architecture of the BSG/Caprica world. The first is found in what is probably the central premise of the rebooted BSG. In the original (late 1970s) version of Battlestar Galactica, the robotic Cylons, while they possessed some crudely humanoid characteristics (e.g., bipedalism), were unmistakably machines. The premise of the new BSG, on the other hand, is that in the distant past the Cylons found a way to duplicate the biological structure of the human body and so to create infinite copies of any person's body they choose: rather than avoiding the difficult question of whether we upload via move or copy, BSG embraces the copy wholeheartedly. Thus a great deal of the length of the series is devoted to the consequences of the Cylons apparently being able to create “perfect copies” of themselves, and no less to their fundamental antagonism to human life, which as in most versions of the fantasy is thematically tied to their simulative origin.

As the series progresses, the use of the copy trope itself turns out to multiply rather than reduce the mise-en-abyme generated by the uploading story. It turns out that each one of the “copies,” since it is indistinguishable – perhaps entirely indistinguishable – from a human being with a human consciousness, must by definition feel itself to be unique and to undergo unique experiences. Since mind and body are separate, if the body of that copy dies, its mind can then be “re-uploaded” into a new body; but this new second-generation copy has memories that no other version of itself has. It has become unique, and rather than being copied it is actually being moved. This raises a bizarre question of origin on which the series to my knowledge never reflects. Did the Cylons learn how to “build human beings” (which we do not see), or did they learn how to separately build human bodies and human brains? Even though the latter is not the overt story, it is in fact what we see repeatedly in the series: a “dumb” body (presumably without a brain) being born in an incubator tub, into which the Number 6 consciousness, with all its particular memories intact, is downloaded. So, can the Cylons simply create as many “Number 6” models as they choose (this is demonstrated several times), and if they do so, which current Number 6 do they copy? Is there an “original” version of Number 6 that is ordinarily “used” to create a new copy? If so, why can't the current “individual” Number 6 also be copied?

This representational abyss extends to the foundational premise of the program and eventually to its entire plot: if the Cylons are able to perfectly replicate human beings, what does it mean to call them “Cylons”? What does it mean to say that an entity is a perfect copy of a human being and yet is not a human being – that is, that it both is X and is not X? The series never attempts to mark the distinction between human and Cylon at any level other than the verbal, and the need to use human beings in their bodies to portray the Cylons continually militates against the overt insistence that they are machines. The need for embodiment in serial television thus exposes the rhetorical trick Kurzweil tries to play with catachrestic phrases like “machines will be human”; one can say anything, but trying to realize logical and conceptual contradictions is easier said than done. The Cylons simply are human beings in every meaningful sense; eventually the substance of the series becomes devoted to just this conundrum, and it becomes nearly impossible for the program to offer the characters themselves any way to make the distinction, so that they too begin to wonder not merely whether they are human or Cylon, but also whether there is in fact any difference between the two.

The second articulation of the uploading story in these programs is found in the new Caprica series, a prequel series to BSG set in something like a near future similar to our present day, in which we see the discovery of the technology that led to the creation of the Cylons-that-are-also-humans. We learn in Caprica much that had not been clear in BSG, not least that the Cylon technology stems from the desire of its creator to reanimate the dead, and in particular a dead loved one – his daughter. (Again, where we are told we will find science, we find spiritual belief instead, and a profound enmeshment in bodily death and the persistence of the soul.) In fact, the story amplifies the aporetic qualities of its impossible central figure, Zoe: her avatar inside the VR space in Caprica, called “V-World,” turns out to be unique in critical ways and to violate the rules stipulated by the dramatic scenario. In the first episode of the series, “Pilot,” we meet Zoe in the flesh briefly, and we see her playing with her peers in a V-World club (called V-Club). Here avatars indulge in precisely the activities our bodies are (generally) barred from doing in physical reality: they pursue somatic pleasure in its most extreme forms, including sex, drugs, and violence. Among the very first sequences of the series is one including not one but two virtual Zoes, both inside V-Club. Through perspective shots we are made to identify with the Zoe watching the scene from a balcony as the focal character, as the other Zoe becomes the center of attention for the sybaritic and, it appears, murderous crowd. Just as they are about to kill her, the “second” Zoe disappears, and the “real” Zoe and her friend begin to talk about the copy “eventually becoming perfect.” By the end of the Pilot the physically real (outside of V-World) Zoe dies, but the second digital copy persists inside V-World. 
Because she is embodied for us by the same person who plays the “real” Zoe (Alessandra Torresani), we are led to believe that this Zoe is exactly the same as the “real” Zoe, so much so that she hardly seems to feel that she is a copy, despite knowing this to be the case.

It turns out that this copy is the first instance of the “mind” or “soul” that animates the “human” Cylons, under the watchful eye of Zoe's grieving father Daniel Graystone (played by Eric Stoltz). As the first season progresses we learn that Daniel is the founder of the company that created V-World, and that Zoe was no less gifted a programmer than he is, but was even more driven by apparently eccentric and extreme views. The viewer knows, as Daniel does not, at least at first, that Zoe was part of the radical sect that caused the explosion in which she died (possibly by design). As we learn about it in retrospect, Zoe's work seems very familiar; it appears to have been lifted almost directly out of one of Kurzweil's texts:

Zoe designed a program that allowed her to create a virtual duplicate of herself using a compilation of various personal records, a feat that impressed even her own computer-gifted father. Zoe believed that the creation of her holographic avatar was the next step towards a divinely-inspired plan, and designed a biofeedback subroutine that allowed her avatar to feel what she felt in the real world. (“Zoe Graystone,” 2010)

When we see Zoe inside the VR system of Caprica, both we and she are entirely unaware of the physically embodied Zoe; indeed, we are introduced to her specifically so as to confuse the distinction, because all sorts of bodily sensations are provided to the VR user as if this were feasible in any system like the one described in the series. Furthermore, we are told that this fantasy world, like a contemporary computer game, exists on a shared central server running computer software, so that what happens when the user plays the game sits exactly atop the blurred line between copy and move, a feint the program plays on from the beginning. Where most users' interaction with the game is portrayed as a “movement” of consciousness – that is, consciousness is a unique thing that can be moved out of the body, but then exists only “in” the game – Zoe is from the beginning portrayed as a “copy”: her VR “self” appears to exist whether or not the “real” Zoe is communing with the doppelgänger. It is hard not to see almost all of the episodes in the first season, which repeatedly dwell on images of “digital Zoe” downloading herself from V-World into a robotic Cylon frame in “real” reality, as a kind of repetition compulsion, showing the impossible entry of the “digital human self” into our reality. A related scenario recurs inside V-World itself, where a condition is artificially imposed whereby if you “die” once inside V-World, you can never enter it again. Rather than avoiding death – pursuing immortality, following Kurzweil – all that many of the avatars appear able to do inside V-World is try to end their lives, while maintaining the hope that they will somehow be able to continue living inside V-World. This special, soul-like permanence of the imagined self is granted only to Zoe, who, rather than embodying the ability to move human consciousness from site to site, instead embodies the impossible exception.

Transcending Us, or, Ending the Future

In the BSG enterprise and especially in Caprica, we see all of the elements of the uploading story realized in human bodies, including the fundamental notion that there is a world “inside” the computer that can or should be anything but a description or projection of the material world we inhabit. All such depictions rely, as do Kurzweil's claims, on the idea of a VR system perfect enough that it fully replicates the entire human sensorium. Typically when such a scenario is fictionalized for us, only dream-like or impossible events make apparent to the viewer that this is anything but reality (contrary to “real-life” VR, where elements of physical reality have yet to be fully eliminated from the apparatus). Conceptually, it is vital to note how hard it is to depict such worlds without replicating so many features of our world that it begins to raise the question of whether a nonreal but fully immersive environment (immersive in every sense, so that we could remotely “touch” another human being using the system inside the VR world) is even imaginatively coherent. Where the uploading story relies on an outmoded notion of the mind–body connection, it is no less archaic in its sturdy insistence that we will be able to live inside our dreams. Arguably, it is just the fact that they are not real in the way our experience is real that makes the transformed representations of experience in dreams, novels, movies, and videogames valuable to begin with. Mind and body, according to most current scientific and philosophical theories, are one; fiction and reality, just as undeniably, are not.

Gamer (Neveldine & Taylor, 2009) confronts this potential aporia directly, by dispensing altogether with the idea that we might somehow synthesize experience. The movie is well known for its extended depiction of “Slayers,” a videogame (almost identical to a current first-person shooter) that is played out in our world but controlled by players, also in our world, at computer terminals, (partially) immersed in the screen-world of Slayers, which thus functions as a kind of fictional experience for them. (Perhaps due to its clever focus on the fiction/reality part of the uploading fantasy, the film does create an aporia of sorts in describing an interface that allows the in-game character [e.g., Kable, the Gerard Butler character] to retain near-complete control over his body while at the same time passing many of these controls to the player [the 17-year-old boy Simon]. The film attributes this connection to relatively generic “self-replicating nanites” that somehow allow two consciousnesses to “exist” in a single body: the structure is aporetic in that the film does not even attempt to make clear how this kind of shared consciousness could possibly “feel” to either participant.)

Nonetheless, this conceptual crux should not distract us from the starkly credible vision of future media forms found in Gamer, a vision that to my knowledge is unique to the film. This becomes especially clear when we look at the second videogame presented in the movie, a version of something like The Sims that in the movie is called, provocatively, “Society.” In Society, we see something that looks very close to traditional VR at first: the scene opens on a sparkling, brightly colored, playful world in which the only activities appear to be based on pleasure. Thanks to digital animation techniques, the live-action sequence of rollerbladers and skateboarders in a central city square appears clearly as part of our reality (like Slayers), yet also as if it is inside a videogame – a look that deliberately resembles (and extends to originally photographed moving images) current techniques of digital image “polishing.” In the brief scene from Society, we watch as two very attractive skaters approach each other, as if to dance, and eventually begin kissing and touching each other. The scene pulls back and we see a vibrant city square, for some reason bounded into just that area, filled with skaters, dancers, and so on. The viewer is disoriented because she has no subjective point from which to view the scene; the characters do not correspond to any presented yet in the film, and appear simply to be enjoying some kind of drug-induced hyper-awareness. Then the scene pulls back again; the viewer sees that the Society scenes are in fact being viewed on a computer screen, but that the players of both the attractive male and female in-game characters are middle-aged, very overweight, sedentary men, greedily eating food at their computer terminals as they “play” their respective characters.

All of these scenes of Society are in fact conveyed via a talking-heads current affairs discussion program on television within the “real” reality of the film. Slayers, we have already learned, is positioned as a way for prisoners to shorten or even commute their sentences. Society, on the other hand, is fully or at least partly consensual: not only do the players (the obese men at their computer terminals) pay for the privilege of role-playing the characters, but all the characters inside the game (in what appears as a VR world from the player's perspective) are paid for their “work” inside the game, so that the question of its consensual nature is paralleled directly with that of sex work. In fact, we learn that the woman in the main scene we see from Society, played by the model Amber Valletta, is Kable's wife, and eventually that both of them have been imprisoned in their game roles by a wide conspiracy headed by Castle, the man who invented both games. Then we learn that the “nanite” technology, foisted off with a few lines of jargon early in the film, entails exactly the same displacement of mind from body, the same untenable dualism, and the consequent paradoxical assertion of a mind that both does and does not inhabit the body in which it finds itself.

In Dollhouse (Whedon, 2009–2010), a serial television program developed for Fox by Joss Whedon and Eliza Dushku (who plays the show's focal character, Echo), we find what is arguably the fullest and most detailed portrayal of the uploading fantasy in recent media. The show, whose two seasons Whedon et al. were ultimately able to script into a single story arc, includes some of the same conceits found in Gamer, articulated in far more detail than even Kurzweil has managed. The series reveals all of the problematic and incoherent assumptions that underlie the fantasy. In Dollhouse, we begin again from something like our contemporary moment and place in history, in a world differing from ours only in the existence of some version of copy/move technology, which allows human consciousness as such to be copied intact from a living body to computer software and vice versa. In this version of the story, a human subject sits in a chair and has her brain “scanned” by electromagnetic probes; the result is an extracted computer program that contains the entirety of the individual's consciousness. Inversely, the “doll” sits in the chair and has someone's consciousness downloaded into her. Here, we are explicitly and necessarily told that the copy ability exists alongside the ability to move; but as I have argued, positing the availability of copy turns out to violate the boundaries of the self too much to sustain consistent representation. As if to emphasize the aporetic nature of this fundamental operation, the program suggests, without ever explicitly stating, that these software programs must be uniquely stored in containers that are not directly linked to a computer network. We are frequently presented with the relatively absurd spectacle of characters living sometime later than us in our world, carrying around an eight-track-tape-sized physical computer disk as if it were some kind of analog medium and there were no way to make infinite copies of it.
(Of course, it emerges at various points in the series that these files can be transmitted over computer networks and stored like any other files, putting more emphasis on the fantastic – impossible and archaic – nature of the copy operation itself.)

We learn about the possibility of moving consciousness, however, via the presentation of the show's main figure, the “doll,” a human being who has voluntarily submitted to (a) having her consciousness copied into software; (b) having the consciousness in her body deleted; (c) making herself available to have other consciousnesses (and pieces of consciousnesses) downloaded into her “empty” body, which is then used to execute “client engagements” (often of a sexual nature). Sometimes the program suggests that (a) and (b) are one operation, and at other times two.

While these abilities are presented as simple and technical in the program's initial premise, this technological innovation turns out to invoke almost inevitably the entire suite of tropes in the uploading fantasy. To begin with, the idea of a “doll” – a fully functioning body that exists “without consciousness” – is impossible to comprehend and almost as hard to represent; we see at least half-a-dozen such “dolls” in their “empty” state over the course of the series, and it is never clear how the line has been drawn between the parts of the consciousness that make up the self (e.g., Echo's prior human self, Caroline) and the “consciousness of body” that allows her instead to walk around as a “mindless” body named Echo. In fact, representation of the dolls becomes an insoluble problem for the show's version of the uploading story, which it eventually uses as a way to resolve the series as a whole.

As an example of the dolls' impossible status, consider first that the “empty” dolls in Dollhouse in their uninhabited state routinely speak their normal language with complete fluency; yet when a new personality is downloaded into Echo, we see that she has learned its language(s) to perfection – while retaining Echo's own original English fluency, regardless of whether the new character speaks English. Is language part of consciousness or isn't it? It seems to exist in both places, yet few serious accounts of consciousness would exclude language fluency from it entirely. Are we to believe that it is doubled, existing as a whole both in the mind and in the body? And if so, what exactly are we to understand as the nature of mind and body? Second, the dolls have full awareness and control of their bodies, while at the same time the downloaded consciousnesses too have control of their new “home” bodies; so who is it that experiences hunger, intimacy, sociality, interaction – the “mind” left in the doll, or the “mind” taken out of her? (That is, if Echo has control of her body, how can control of body be part of what has been transferred or copied to her “mind file”?) Third, skills and abilities work both for empty and inhabited dolls as if “written down” in abstract and disembodied form, so that, for example, a recorded personality will know how to do karate, and then this skill will be present in Echo on “assignment”; yet surely the knowledge of doing karate relies in large part on what have become automatic, unconscious body motions established through extensive practice and training – specific to that body, to that body's nervous system, to its history, to the making-automatic of what must be thought that characterizes learning and practice.
Despite Echo's consciousness having been “removed” from her body, it seems as if a lot of it remains there – and it is no accident that this is very much how it looks in the show, as there is almost no other way to depict this, unless the body is completely deactivated, as in the brain-in-a-vat story.

All of these contradictions emerge from the simple attempt to depict a realistic scenario in which mind and body actually could be separated. However seductive the trope, performances of human beings in extended media like Dollhouse remind us to think very carefully about what we mean by the fantasies we propose and imagine. Abruptly, at the end of the first season, an epilogue (“Epitaph One”) shifts the story just 10 years into the future, showing the consequences of this technology being available – that is, of the Singularity having happened, in Kurzweil's terms. The results are almost immediately apocalyptic. The inability of each of us to locate and understand who and what we are, and most especially what we are to each other, and what our bodies are, results in total social breakdown. The few remaining “Actuals” (personalities still inhabiting their original bodies) hold on to their identities with near-religious fervor, and even they do not seem entirely sure of those identities.

This projection of the future eventually twists again: toward the end of the second, final season, we revisit this 2019 scenario. This time, though, it takes place inside the VR world of a body in a vat whose brain has somehow been displaced: that is, inside Echo's comatose body, pumped with fluids and wrapped into a plastic chrysalis. We learn that many such empty bodies are kept in a facility called “The Attic,” where the linked bodies are stimulated with fear hormones to generate cerebral activity, effectively causing the brains to imagine their worst nightmares over and over again. That is the scenario in which the VR world becomes “real.” Inside “The Attic” what appear to be the dolls' original personalities exist in the apocalyptic future seen in “Epitaph One,” linking together the singularity story as the series concludes. Somehow their comatose bodies and consciousnesses, living in constant fear, generate cerebral activity that can be linked, so that the series' villain, the Rossum Corporation, can build “the world's first supercomputer,” allowing it to seize control of and exploit the uploading technology.

Inside Dollhouse's own act of telling the future, inside the VR, the technology's original developer loops through history repeatedly in his own nightmare, obsessively calculating the chance that the mind-transfer technology leads to the apocalypse, which he puts at 97% (“The Attic”). This programmed, perfectly predicted future makes the character, Clyde, stand in very closely for something like Kurzweil's position. Now, to restore order and bring narrative closure, every other aporia must be exploited, so that Caroline/Echo is able to force herself and several other dolls back into the Attic reality and reawaken their bodies, all in violation of stipulations the series itself has established. At the end of the series, after the inevitable (but likely incomplete) victory of the non-copied humans (“Actuals” in the show's terminology), we find an underlying narrative that pits interactive sociality against prediction and control as modes for imagining and controlling the future. Despite the “human” victory, the show thus ends with a number of powerful images of solipsistic self-enclosure, suggesting that the original premise of the series, because it posits mind as a kind of body-independent, mobile medium, ultimately makes impossible both human identity and human freedom.

So among the inevitable aporetic elements of the series' end is the fact that Echo, the “body without consciousness” portrayed by Eliza Dushku, turns out to have developed consciousness after all – insisting, rightly in my view, that consciousness is something like an effect of the body rather than an intact program that can be separated from it. This Echo becomes a kind of meta-consciousness, remembering and incorporating all of the selves downloaded into her body before, which Dollhouse can only represent as a series of discrete scenes and voices “inside” the one head of Echo. At the series' end, this singular society of minds is depicted living alone in a bunker, secreted away from apocalyptic war for an unknown future, as if in a kind of living tomb, an archive of (parts of?) the selves that are now lost. As if to emphasize the point, the series' final scenes depict Echo/Caroline uploading into herself the “self” of her boyfriend, Paul. This finale shows us a new “imagined” reality apparently inside Echo's head where the two of them meet, in an eternal space of undifferentiated closeness with someone who is not entirely an other, and so cannot supply exactly what intimate relationships supply: the other. There are few depictions of the continued existence of the human race that so clearly depict that future as always archived, self-enclosed, and unable to refer as the final scenes of Dollhouse.

The Futur of New Media

Throughout his writings, and especially in those of his later career, Jacques Derrida draws our attention to the linguistic and conceptual means by which we all envision, construct, plan, and inhabit the future. Most famously, Derrida persistently reflects on the distinction in French between the nouns l'avenir and futur, both of which are typically translated as the noun “the future” in English, but the first of which, a nominalized form of the standard verb “to come,” suggests for Derrida a different range of meanings from those of le futur:

In general, I try to distinguish between what one calls le futur and l'avenir. Futur is that which – tomorrow, later, next century – will be. There's a future which is predictable, programmed, scheduled, foreseeable [Derrida pronounces each of these words in English after saying them in French]. But there is a future, l'avenir, to come [in English], which refers to someone who comes whose arrival is totally unexpected. For me, that is the real future. That which is totally unpredictable. The Other who comes without my being able to anticipate their arrival. So if there is a real future beyond this other known future, it's l'avenir in that it's the coming of the Other, when I am completely unable to foresee their arrival. (Derrida, quoted in Dick & Ziering, 2002)10

Despite Derrida's well-known but undeserved reputation for obliterating all differences, such passages are characteristic of him: he is just as likely to discover a hidden or covert difference to which we have failed to attend as he is to suggest that a specific division (e.g., speech/writing) is in some way less than it appears. Indeed, if there are core features of Derridean “doctrine,” one would be exactly this Levinasian insistence on the radical necessity of an openness toward the unknown, especially the unknown other.

The future we see in the singularity story stands out for its emphasis on programmability, predictability, calculation – all the qualities to which Derrida suggests we remain alert. In a wide sense, his point is that a programmed future is altogether dystopic: if we know with certainty what is going to happen there is almost no reason to deploy our wills toward those certainties; or put differently, it is hard to see what there is for our wills to do without a future open to chance and to the exercise of will.11 Like the ideal world of Plato's forms, the programmed future is a limit case that ends in perfect stasis: it is a vision that is at its core unthinkable, the full satisfaction of desire that nevertheless remains unsatisfied. The limit case of programmability is much like the limit case of immortality Hägglund (2008) sees at the heart of Derrida's writing: what we use as a telos for so many positive practices establishes itself as a nightmare limit. “We,” in this critical sense, cannot become immortal, because we are not immortal; when we become immortal, we lose one of the fundamental supports on which our self-definition rests. As with the rest of the uploading fantasy, whether or not there could (or do) exist immortal beings, they cannot be us.12

It is surely no coincidence that one site for the opposing sentiment – the view that we should and want to predict the future accurately – is found alongside the rest of the pieces of the singularity story. While futurism is found in every society, and varieties of scientific and/or scientistic futurism parallel the development of science itself, there can be no denying that part of Kurzweil's attractiveness to his constituency lies just in his repeated offering of specific, detailed accounts of future events. A standard feature of every Kurzweil text is the invocation of Moore's Law. While today it is referred to exclusively as a “Law,” meaning something like “Physical Law,” it was originally offered by Intel co-founder Gordon Moore as a description of the exponential growth of computer memory and processing power: roughly, that the total power doubles every year or two (depending on which version of the Law one accepts).13 To Kurzweil, like other computing enthusiasts, Moore's Law is a species of general physical laws of acceleration, which he considers vital to the coming of the Singularity:

During the 1990s, I gathered empirical data on the apparent acceleration of all information-related technologies and sought to refine the mathematical models underlying these observations. I developed a theory I call the law of accelerating returns, which explains why technology and evolutionary processes in general progress in an exponential fashion. (Kurzweil, 2005, p. 3)

Kurzweil is so sure of the inexorability of his conclusions that he confidently recounts telling James Watson (co-winner of the Nobel Prize for the discovery of the structure of DNA), in 2003, that Watson was wrong in thinking that within 50 years we would have drugs that enable us to eat all the food we want without consequence; as Kurzweil explains in his 2005 volume, he told the “shortsighted” Watson that “these will be available in five to ten years, not fifty” (Kurzweil, 2005, p. 12).

To control, predict, and program the future: one sees this goal promoted everywhere today, so much so that despite the near-universal horror which most audiences experienced at seeing people arrested for “precrimes” in Steven Spielberg's Minority Report (2002), such technologies are in fact emerging at an accelerating pace. Google and other corporations at the forefront of computational research routinely create “predictive algorithms” for a variety of domains, including human behavior.14 Such practices are a step up from, but related to, the kinds of statistical behavior prediction for which computers have long been used, primarily in military, commercial, and electoral contexts. The more one imagines oneself in the position of absolute leader, Hobbesian prince, the easier it is to see the desirability of managing the rest of the social sphere through every sort of rigid control. From such a position it follows with a certain brutal logic that the leader is entitled to domains of knowledge-power to which those below him on the hierarchy are not entitled. In this sense there is an interesting and suggestive connection between the early parts of the uploading story and its wildest predictive aspects. Rather than pursuing real transformation of our world – a kind of transformation of “the human” that we understand well in the name of politics, especially democratic politics – Kurzweil pursues a categorical shift that relies on the stability of the very categories it claims it can transcend. It only makes sense to “transcend the human” if there will still be humans there who have not experienced the transcendence; otherwise we would all still be human. This is one of the reasons why the central focus on individual will, power, and control is so prominent and so hard to jettison in new media narratives like Dollhouse.

At the pinnacle of the Singularity we find paradox atop paradox. Again and again we see that the most intense desire of the protagonists of almost every version of the story, from the Rossum Corporation to Kurzweil himself, is to cheat death – to become immortal. We separate the mind from the body to put it in a machine so that it can live forever. Such a desire suggests a profound failure to come to terms with one of the things that most fundamentally makes us human: death itself. Kurzweil's frequent references to the fragility and brokenness of the human body – surely emphasizing the negative rather than positive aspects of bodily experience – famously mask a deep wound, the death of Kurzweil's father, a loss that he seems particularly unable to accept (Vance, 2010). So does his relentless pursuit of longevity at all costs, as evidenced in his deep commitment to alternative treatments of many kinds (see, e.g., Kurzweil & Grossman, 2004). “We will gain power over our own fates,” he writes, and “our mortality will be in our own hands” (Kurzweil, 2005, p. 9), both of which suggest a strikingly negative view of our present-day life. So does his philosophical position on death:

Is death desirable? The “inevitability” of death is deeply ingrained in human thinking. If death seems unavoidable, we have little choice but to rationalize it as necessary, even ennobling. The technology of the Singularity will provide practical and accessible means for humans to evolve into something greater, so we will no longer need to rationalize death as a primary means of giving meaning to life. (Kurzweil, 2005, p. 326)

It has become commonplace to consider Derrida's relatively obscure writings nihilistic in some way – a quality the writings themselves do not display. However, computationalism (even of the high form articulated by Kurzweil), which has a much wider and more influential presence in our world than direct philosophical analysis ever could have, is profoundly nihilistic with regard to our own, current, physical existence. Derrida writes often about the role of death in our sense of self: “Everyone must assume his own death, that is to say the one thing in the world that no one else can either give or take: therein resides freedom and responsibility” (Derrida, 1995, p. 44).15 Words like “freedom” and “responsibility” occur infrequently in Kurzweil's lexicon; he writes instead with great passion that humans will “transcend biology,” and presumably transcend the freedom and responsibility that characterize our material existence. If such “deeply ingrained” parts of “human thinking” as death itself are to be jettisoned as part of that transcendence, in what sense will that new, evanescent, “sublimely intelligent” consciousness be human?

NOTES

1 I use “computationalism” here as I develop it in The Cultural Logic of Computation (2009), where I expand on the philosophical doctrine of computationalism, especially as it functioned rhetorically in Anglo-American analytic philosophy of the 1960s and 1970s, as an example of a larger conceptual, political, and rhetorical ideology committed to the view that “the mind is a computer” whatever that might turn out to mean – the words of the formula being more important than the particular facts. While computationalism was considered untenable for much of the 1980s and 1990s by many analytic philosophers, it is experiencing a resurgence (see Piccinini, 2010).

2 Something like the view that everything is computation – according to which many principles of fundamental physics can be understood as encoded transmissions of information – has more serious advocates, particularly among physicists. Piccinini (2010) is the leading philosophical advocate of such views, and provides some guidance to the work in physics as well as its consequences for the study of cognition, but is acutely aware of the many conceptual pitfalls Kurzweil ignores. Wolfram (2002) is the best-known popularization of this work, which, despite being better science than Kurzweil's, seems committed to building out a very similar rhetorical-ideological structure.

3 This is especially clear in some of the original pronouncements of the man to whom the coinage of the term VR is usually attributed, the computer scientist and entrepreneur Jaron Lanier. Yet both technological limitations and a clear view of some of the philosophical problems described here have led him to become an especially trenchant spokesperson for what we might call the anti-Singularity perspective. Lanier (2010) provides a complete discussion of both the VR story and the current culture of computing; but see in particular Lanier (2000) for a strong technical critique of Kurzweil's writings to that point, to which Kurzweil (2001) offers an especially contradictory and in many parts disingenuous response.

4 The word “singularity” occurs across discourses and with many different meanings. A detailed New York Times article about the “singulatarian movement” (Vance, 2010) repeatedly cites fellow travelers who do not subscribe to Kurzweil's vision but nevertheless take it as the central one to be contended with; the article identifies technologists no less prominent than Sergey Brin and Larry Page, co-founders of Google, as explicit followers of Kurzweil's ideas.

5 An even more critical take on some of the same material in Hayles's book is found in The Cybernetic Hypothesis (Tiqqun Collective, 2001), which has only recently been made widely available.

6 The frequent connection between mechanistic theories of mind and certain high forms of metaphysics is one of the main targets of Richard Rorty's critical analyses: “it is pictures rather than propositions, metaphors rather than statements, which determine our philosophical convictions. The picture which holds traditional philosophy captive is that of the mind as a great mirror, containing various representations – some accurate, some not – and capable of being studied by pure, nonempirical methods [e.g., introspection]” (Rorty, 1979, p. 12).

7 See, e.g., Kurzweil's remarkably disdainful and uncomprehending responses to his critics in Richards (2002), especially his discussion of Searle (Kurzweil, 2002), which focuses on some of the issues I raise here.

8 While some attention is paid to this image in Battlestar Galactica, the most sustained serious attempt of which I am aware to depict what it might feel like to wake up “in” a mind that has been separated from its body is Joseph McElroy's powerful 1977 novel Plus, in which a dying man allows his brain to be integrated into a satellite that is then sent into Earth orbit to monitor solar events; see Proietti (2004) for a brief but well-observed and original reading of the novel that demonstrates its engagement with issues similar to those discussed here.

9 See “Models and Reality” (1977) for Putnam's most detailed account of the logical problems raised by the brain-in-a-vat story.

10 This quote appears during roughly the first two minutes of the film. I have slightly modified the film's English subtitles to reflect Derrida's spoken words.

11 Readers of Derrida will know that the term singularity does occur in his writings, as has been brought out most fully by the literary critic Derek Attridge (2004). Consistent with the general difference in orientation, though, where Kurzweil uses the word to indicate a single, transcendent moment when reality shifts, in Derrida it usually occurs to gesture at the profound uniqueness of identity, to which he also attaches the term ipseity. Thus for Derrida we might call it part of the condition of being-in-the-world that there are singularities (and others) everywhere. The concept also occurs in Derrida's writings on messianism and “the event,” which deserve more sustained examination than space permits.

12 This is a prevalent theme in fictional portrayals of immortality across space, time, and genre. Almost invariably, when a human being becomes immortal in such fictions, their primary goal instantly shifts to the pursuit not just of mortality but even of death itself, as if to realize the limit of life-without-death is to suddenly come to value death above all human things.

13 The original source for Moore's Law is Gordon Moore (1965): “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. [. . .] Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years” (cited in “Moore's Law,” 2010). It was not Moore but later commentators who insisted on the lawlike nature of the description.

14 “Predictive analytics,” applied to any domain up to and including human behavior, is one of the fastest-growing fields of computer engineering (see “Predictive Analytics,” 2010); IBM is one of its foremost developers (as is Google). Recent work has even focused on predicting crime; marketing for such projects revels in rather than recoils at comparisons to Minority Report: one press release praises “Minority Report-style technology being tested by two British forces following success in the United States; the system [. . .] evaluates patterns of past and present incidents, then combines the information with a range of data including crime reports, intelligence briefings, offender behavior profiles, and even weather forecasts” (“Sophisticated Crime Software,” 2010).

15 In addition to The Gift of Death (1995), for close connections between identity/ipseity, the Other, death, and futurity, see Derrida (2001, 2002) and Derrida and Roudinesco (2004).

REFERENCES

Attridge, D. (2004). The singularity of literature. New York, NY: Routledge.

Aubuchon, R., & Moore, R. D. (Writers), & Reiner, J. (Director). (2010, January 22). Pilot [Television series episode]. In R. D. Moore, D. Eick, J. Espenson, K. Murphy, & C. George (Producers), Caprica. Vancouver, British Columbia, Canada: David Eick Productions/SyFy.

Derrida, J. (1995). The gift of death. Chicago, IL: University of Chicago Press.

Derrida, J. (2001). The work of mourning. Chicago, IL: University of Chicago Press.

Derrida, J. (2002). Negotiations: Interviews and interventions, 1971–2001. Stanford, CA: Stanford University Press.

Derrida, J., & Roudinesco, E. (2004). For what tomorrow: A dialogue. Stanford, CA: Stanford University Press.

Dick, K., & Ziering, A. (Directors). (2002). Derrida [Motion picture]. Los Angeles, CA: Jane Doe Films.

Golumbia, D. (2009). The cultural logic of computation. Cambridge, MA: Harvard University Press.

Hägglund, M. (2008). Radical atheism: Derrida and the time of life. Stanford, CA: Stanford University Press.

Hayles, K. N. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.

Kurzweil, R. (1990). The age of intelligent machines. Cambridge, MA: MIT Press.

Kurzweil, R. (1999). The age of spiritual machines: When computers exceed human intelligence. New York, NY: Penguin.

Kurzweil, R. (2001, July 13). One-half of an argument (Reply to Lanier) [Web log comment]. Retrieved July 1, 2010, from http://www.kurzweilai.net/meme/frame.html?main=/articles/art0236.html

Kurzweil, R. (2002). Locked in his Chinese room: Response to John Searle. In J. W. Richards (Ed.), Are we spiritual machines? Ray Kurzweil vs. the critics of strong AI (pp. 128–171). Seattle, WA: The Discovery Institute.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. New York, NY: Penguin.

Kurzweil, R., & Grossman, T. (2004). Fantastic voyage: Live long enough to live forever. New York, NY: Rodale Press.

Lanier, J. (2000, November 11). One half a manifesto. Edge. Retrieved August 16, 2010, from http://www.edge.org/3rd_culture/lanier/lanier_index.html

Lanier, J. (2010). You are not a gadget: A manifesto. New York, NY: Knopf.

McElroy, J. (1977). Plus: A novel. New York, NY: Carroll & Graf.

Moore, G. (1965). Cramming more components into integrated circuits. Electronics Magazine, 38(8). Retrieved from http://download.intel.com/museum/Moores.../Gordon_Moore_1965_Article.pdf

Moore, R. D., & Eick, D. (Producers). (2004–2009). Battlestar Galactica [Television series]. Vancouver, British Columbia, Canada: David Eick Productions/The Sci-Fi Channel.

Moore, R. D., Eick, D., Espenson, J., Murphy, K., & George, C. (Producers). (2009–present). Caprica [Television series]. Vancouver, British Columbia, Canada: David Eick Productions/SyFy.

Moore's Law. (2010). Retrieved June 8, 2010, from the Wikipedia Wiki: http://en.wikipedia.org/wiki/Moore's_law

Moravec, H. (1988). Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press.

Neveldine, M., & Taylor, B. (Directors). (2009). Gamer [Motion picture]. Los Angeles, CA: Lionsgate.

Piccinini, G. (2010). Computation in physical systems. Stanford Encyclopedia of Philosophy. Retrieved August 10, 2010, from http://plato.stanford.edu/entries/computation-physicalsystems/

Plato. (2007). The republic. New York, NY: Penguin.

Predictions made by Ray Kurzweil. (2010). Retrieved July 20, 2010, from the Wikipedia Wiki: http://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil

Predictive analytics. (2010). Retrieved August 16, 2010, from the Wikipedia Wiki: http://en.wikipedia.org/wiki/Predictive_analytics

Proietti, S. (2004, August). Joseph McElroy's cyborg Plus. Electronic Book Review. Retrieved June 10, 2010, from http://www.electronicbookreview.com/thread/criticalecologies/seeing

Putnam, H. (1977). Models and reality. In Realism and reason: Philosophical papers volume 3 (pp. 1–25). New York, NY: Cambridge University Press.

Putnam, H. (1982). Reason, truth, and history. New York, NY: Cambridge University Press.

Richards, J. W. (Ed.). (2002). Are we spiritual machines? Ray Kurzweil vs. the critics of strong AI. Seattle, WA: The Discovery Institute.

Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.

Ryle, G. (1949). The concept of mind. Chicago, IL: University of Chicago Press.

Searle, J. (2002). I married a computer. In J. W. Richards (Ed.), Are we spiritual machines? Ray Kurzweil vs. the critics of strong AI (pp. 56–77). Seattle, WA: The Discovery Institute.

Sophisticated crime software helps police predict violent offences. (2010, July 27). [Press release]. Retrieved August 9, 2010, from http://homelandsecuritynewswire.com/sophisticated-crime-software-helps-police-predict-violent-offences

Spielberg, S. (Director). (2002). Minority Report [Motion picture]. Los Angeles, CA: Twentieth Century Fox.

Tancharoen, M., & Whedon, J. (Writers), & Cassaday, J. (Director). (2009, December 2). The Attic [Television series episode]. In J. Whedon (Producer), Dollhouse. Los Angeles, CA: Twentieth Century Fox.

Tancharoen, M., Whedon, J., & Chambliss, A. (Writers), & Solomon, D. (Director). (2010, January 29). Epitaph Two: Return [Television series episode]. In J. Whedon (Producer), Dollhouse. Los Angeles, CA: Twentieth Century Fox.

Tancharoen, M., & Whedon, J. (Writers), & Solomon, D. (Director). (2009, June 17). Epitaph One [Television series episode]. In J. Whedon (Producer), Dollhouse. Los Angeles, CA: Twentieth Century Fox.

Thompson, B., & Weddle, D. (Writers), & Woolnough, J. (Director). (2006, February 24). Downloaded [Television series episode]. In R. D. Moore & D. Eick (Producers), Battlestar Galactica. Vancouver, British Columbia, Canada: David Eick Productions/The Sci-Fi Channel.

Tiqqun Collective. (2001). The cybernetic hypothesis. Tiqqun 2. Anonymous English translation made available in 2009 at http://cybernet.jottit.com/

Vance, A. (2010, June 11). Merely human? That's so yesterday. New York Times. Retrieved June 13, 2010, from http://www.nytimes.com/2010/06/13/business/13sing.html?pagewanted=all

Wachowski, A., & Wachowski, M. (Directors). (1999). The Matrix [Motion picture]. Los Angeles, CA: Warner Bros.

Whedon, J. (Producer). (2009–2010). Dollhouse [Television series]. Los Angeles, CA: Twentieth Century Fox.

Wolfram, S. (2002). A new kind of science. Champaign, IL: Wolfram Media.

Zoe Graystone. (2010). Retrieved May 7, 2010, from the Battlestar Wiki: http://en.battlestarwiki.org/wiki/Zoe_Graystone
