NOTES

INTRODUCTION: YOUR VISIT TO THE CENTER FOR MIND DESIGN

1. Contact, film directed by Robert Zemeckis, 1997.

2. See, for example, the open letter https://futureoflife.org/ai-open-letter/, Bostrom (2014), Cellan-Jones (2014), Anthony (2017), and Kohli (2017).

3. Bostrom (2014).

4. Solon (2017).

CHAPTER ONE: THE AGE OF AI

1. Müller and Bostrom (2016).

2. Giles (2018).

3. Bess (2015).

4. Information about some of this research can be found at clinicaltrials.gov, a database of privately and publicly funded clinical studies conducted around the world. See also publicly available discussions of some of the research conducted by the Defense Advanced Research Projects Agency (DARPA), which is the emerging technologies wing of the U.S. Department of Defense: DARPA (n.d. a); DARPA (2018); MeriTalk (2017). See also Cohen (2013).

5. Huxley (1957, pp. 13–17). For a variety of classic papers on transhumanism, see More and Vita-More (2013).

6. Roco and Bainbridge (2002); Garreau (2005).

7. Sandberg and Bostrom (2008).

8. DARPA (n.d. b).

9. Kurzweil (1999, 2005).

CHAPTER TWO: THE PROBLEM OF AI CONSCIOUSNESS

1. Kurzweil (2005).

2. Chalmers (1996, 2002, 2008).

3. The Problem of AI Consciousness is also distinct from a classic philosophical problem called “the problem of other minds.” Each of us can tell, by introspection, that we are conscious, but how can we really be sure that the other humans around us are? This problem is a well-known form of philosophical skepticism. A common reaction to the problem of other minds is to hold that although we cannot tell with certainty that the people around us are conscious, we can infer that other normal humans are conscious, because they have nervous systems like our own, and they exhibit the same basic kinds of behaviors, such as wincing when in pain, seeking friendships, and so on. The best explanation for the behaviors of other humans is that they are also conscious beings. After all, they have nervous systems like ours. The problem of other minds is different from the Problem of AI Consciousness, however. For one thing, it is posed in the context of human minds, not machine consciousness. Furthermore, the popular solution to the problem of other minds is ineffective in the context of the Problem of AI Consciousness. For AIs do not have nervous systems like our own, and they may behave in quite alien ways. Additionally, if they do behave like humans, it may be because they are programmed to behave as if they feel, so we can’t infer from their behavior that they are conscious.

4. Biological naturalism is often associated with the work of John Searle. But “biological naturalism,” as used here, doesn’t involve Searle’s broader position about physicalism and the metaphysics of mind. For this broader position, see Searle (2016, 2017). For our purposes, biological naturalism is just a generic position denying synthetic consciousness, as used in Blackmore (2004). It is worth noting that Searle himself seemed sympathetic to the possibility of neuromorphic computation being conscious; the target of his original paper is the symbol processing approach to computation, in which computation is the rule-governed manipulation of symbols (see his chapter in Schneider and Velmans, 2017).

5. Searle (1980).

6. See the discussion in Searle (1980), who raises the issue and responds to the reply.

7. Proponents of a view known as panpsychism suggest that there is a minuscule amount of consciousness in fundamental particles, but even they think higher-level consciousness involves the complex interaction and integration among various parts of the brain, such as the brainstem and thalamus. I reject panpsychism in any case (Schneider 2018b).

8. For influential depictions of this sort of techno-optimism, see Kurzweil (1999, 2005).

9. This leading explanatory approach in cognitive science has been called the method of functional decomposition, because it explains the properties of a system by decomposing it into the causal interaction between constituent parts, which are themselves often explained by the causal interaction between their own subsystems (Block 1995b).

10. In philosophical jargon, such a system would be called a “precise functional isomorph.”

11. I’m simplifying things by merely discussing neural replacement in the brain. For instance, perhaps neurons elsewhere in the nervous system, such as the gut, are relevant as well. Or perhaps more than just neurons (e.g., glial cells) are relevant. This kind of thought experiment could be modified to suppose that more than neurons in the brain are replaced.

12. Chalmers (1996).

13. Here I am assuming that biochemical properties could be included. In principle, if they are relevant to cognition, then an abstract characterization of the behavior of such features could be included in a functional characterization.

14. A complete, precise copy could occur in the context of brain uploading, however. Like the case of the isomorph of you, human brain uploading remains far in the future.

CHAPTER THREE: CONSCIOUSNESS ENGINEERING

1. Boly et al. (2017), Koch et al. (2016), Tononi et al. (2016).

2. My search was conducted on February 17, 2018.

3. Davies (2010); Spiegel and Turner (2011); Turner (n.d.).

4. For a gripping tale of one patient’s experience, see Lemonick (2017).

5. See McKelvey (2016); Hampson et al. (2018); Song et al. (2018).

6. Sacks (1985).

CHAPTER FOUR: HOW TO CATCH AN AI ZOMBIE

1. Axioms for functional consciousness in highly intelligent AI have been formulated by Bringsjord and Bello (2018). Ned Block (1995a) has discussed a related notion of “access consciousness.”

2. Bringsjord and Bello (2018).

3. See Schneider and Turner (2017); Schneider (forthcoming).

4. Of course, this is not to suggest that deaf people can’t appreciate music at all.

5. Those familiar with Frank Jackson’s Knowledge Argument will recognize that I am borrowing from his famous thought experiment involving Mary, a neuroscientist, who is supposed to know all the “physical facts” about color vision (i.e., facts about the neuroscience of vision) but who has never seen red. Jackson asks: What happens when she sees red for the first time? Does she learn something new—some fact that goes beyond the resources of neuroscience and physics? Philosophers have debated this case extensively, and some believe the example successfully challenges the idea that consciousness is a physical phenomenon (Jackson 1986).

6. Schneider (2016).

7. See Koch et al. (2016); Boly et al. (2017).

8. Zimmer (2010).

9. Tononi and Koch (2014, 2015).

10. Tononi and Koch (2015).

11. I will subsequently refer to this level of Φ, rather vaguely, as “high Φ,” because calculations of Φ for the biological brain are currently intractable.

12. See Aaronson (2014a, b).

13. Harremoes et al. (2001).

14. UNESCO/COMEST (2005).

15. Schwitzgebel and Garza (forthcoming).

CHAPTER FIVE: COULD YOU MERGE WITH AI?

1. It should be noted that transhumanism by no means endorses every sort of enhancement. For example, Nick Bostrom rejects positional enhancements (enhancements primarily employed to increase one’s social position) yet argues for enhancements that could allow humans to develop ways of exploring “the larger space of possible modes of being” (Bostrom 2005a, p. 11).

2. More and Vita-More (2013); Kurzweil (1999, 2005); Bostrom (2003, 2005b).

3. Bostrom (1998); Kurzweil (1999, 2005); Vinge (1993).

4. Moore (1965).

5. For mainstream anti-enhancement positions on this question, see, e.g., Annas (2000), Fukuyama (2002), and Kass et al. (2003).

6. For my earlier discussion, see Schneider (2009a, b, c). See also Stephen Cave’s intriguing book on immortality (2012).

7. Kurzweil (2005, p. 383).

8. There are different versions of the psychological continuity theory. One could, for instance, appeal to (a): the idea that memories are essential to a person. Alternatively, one could adopt (b): one’s overall psychological configuration is essential, including one’s memories. Herein, I shall work with one version of this latter conception—one that is inspired by cognitive science—although many of the criticisms of this view will apply to (a) and other versions of (b) as well.

9. Kurzweil (2005, p. 383). Brain-based Materialism, as discussed here, is more restrictive than physicalism in the philosophy of mind, for a certain kind of physicalist could hold that you could survive radical changes in your substrate, being brain-based at one time, and becoming an upload at a later time. For broader discussions of materialist positions in philosophy of mind, see Churchland (1988) and Kim (2005, 2006). Eric Olson has offered an influential materialist position on the identity of the self, arguing that one is not essentially a person at all; one is, instead, a human organism (Olson 1997). One is only a person for part of one’s life; for instance, if one is brain-dead, the human animal does not cease to exist, but the person has ceased to exist. We are not essentially persons. I’m not so sure we are human organisms, however. For the brain plays a distinctive role in one’s identity, and if the brain were transplanted, one would transfer with the brain. Olson’s position rejects this, as the brain is just one organ among many (see his comments in Marshall [2019]).

10. Sociologist James Hughes holds a transhumanist version of the no-self view. See Hughes (2004, 2013). For surveys of these four positions, see Olson (1997, 2017) and Conee and Sider (2005).

11. This is a version of a computational theory of mind that I criticize in Chapter Eight, however. It should also be noted that computational theories of mind can appeal to various computational theories of the format of thought: connectionism, dynamical systems theory (in its computational guise), the symbolic or language of thought approach, or some combination thereof. These differences will not matter for the purposes of our discussion. I’ve treated these issues extensively elsewhere. (See Schneider 2011).

12. Kurzweil (2005, p. 383).

13. Bostrom (2003).

14. Chapter Eight discusses the transhumanists’ computational approach to the mind in more detail.

CHAPTER SIX: GETTING A MINDSCAN

1. Sawyer (2005, pp. 44–45).

2. Sawyer (2005, p. 18).

3. Bostrom (2003).

4. Bostrom (2003, section 5.4).

CHAPTER SEVEN: A UNIVERSE OF SINGULARITIES

1. Here I am indebted to the groundbreaking work by Paul Davies (2010), Steven Dick (2015), Martin Rees (2003), and Seth Shostak (2009), among others.

2. Shostak (2009), Davies (2010), Dick (2013), Schneider (2015).

3. Dick (2013, p. 468).

4. Mandik (2015), Schneider and Mandik (2018).

5. Mandik (2015), Schneider and Mandik (2018).

6. Bostrom (2014).

7. Shostak (2015).

8. Dyson (1960).

9. Schneider, “Alien Minds,” in Dick (2015).

10. Consolmagno and Mueller (2014).

11. Bostrom (2014).

12. Bostrom (2014, p. 107).

13. Bostrom (2014, pp. 107–108, 123–125).

14. Bostrom (2014, p. 29).

15. Bostrom (2014, p. 109).

16. Seung (2012).

17. Hawkins and Blakeslee (2004).

18. Schneider (2011).

19. Baars (2008).

20. Clarke (1962).

CHAPTER EIGHT: IS YOUR MIND A SOFTWARE PROGRAM?

1. This quote is from The Guardian (2013).

2. Harmon (2015a, p. 1).

3. See Harmon (2015a); Alcor Life Extension Foundation (n.d.).

4. Crippen (2015).

5. I’m grateful to Kim Suozzi’s boyfriend, Josh Schisler, for a helpful email and telephone conversations about this (August 26, 2018).

6. Harmon (2015a).

7. Harmon (2015a).

8. Harmon (2015a).

9. Schneider (2014); Schneider and Corabi (2014). For an overview of different steps of uploading, see Harmon (2015b).

10. Schneider (2014); Schneider and Corabi (2014). We never observe physical objects to inhabit more than one location. This is true even for quantum objects, which collapse upon measurement. The supposed multiplicity is only indirectly observed, and it provokes intense debate among physicists and philosophers of physics.

11. For instance, Ned Block (1995b) wrote a canonical paper on this view, titled “The Mind Is the Software of the Brain.” Many academics appealing to the Software View are more interested in characterizing how the mind works rather than venturing claims about brain enhancement or uploading. I will focus on claims by fusion-optimists, as they are the ones making claims about radical enhancement.

12. Wiley (2014).

13. Mazie (2014).

14. Hayworth (2015).

15. Schneider (2011).

16. Descartes (2008).

17. For a helpful new collection on idealism, see Pearce and Goldschmidt (2018). For a discussion of why some versions of panpsychism are forms of idealism, see Schneider (2018a).

18. See, e.g., Heil (2005), Kim (2006).

19. See Schneider (2011b) for a defense.

20. Block (1995b).

21. The notion of an implementation has been problematic for a variety of reasons. For discussion, see Putnam (1967) and Piccinini (2010).

22. Chalmers (1996).

23. Descartes (2008).

24. Putnam (1967).

25. Lowe (1996, 2006). Lowe preferred to speak of the self, rather than the mind, so I am taking the liberty of using his position in the context of a discussion of the mind.

26. Kurzweil (2005, p. 383).

27. Graham (2010).

28. See Schipp (2016).

29. Harmon (2015a).

APPENDIX: TRANSHUMANISM

1. This document appears at the website of the transhumanist organization Humanity+ (Humanity+, n.d.). It also appears in More and Vita-More (2013), an informative volume that includes other classic transhumanist papers. See also Bostrom (2005a) for a history of transhumanist thought.

2. See Bostrom (2003) and Chislenko et al. (n.d.).
