CHAPTER 7

Permissionless Evolution of Ethics—Artificial Intelligence

Margaret A. Goralski and Krystyna Górniak-Kocikowska

When Darwin introduced the theory of evolution, a deep change occurred not only in science but also in many unanticipated areas, including Western ethical theory. Similarly, the development of computer science, along with other emerging and converging technologies, especially robotics and artificial intelligence (AI), is causing a profound change to ethical theories as well as to the critical decision-making process necessary to define the ethics and morals of killer robots, such as autonomous drones and missiles, and of the more ubiquitous autonomous cars.

One can look at AI as the result of the convergence of two approaches to knowledge that have dominated Western civilization since ancient times: one focused on humans, and the other on the world. AI is the result of a growing knowledge about the world (science and technology) as well as a deeper understanding of humans—not only human biology but also human values, including ethical values—and both areas of knowledge are necessary for the creation of human-like intelligence, which AI is supposed to be. Moreover, in order to interact with humans effectively, and also for commercial success, AI has to be human-like, at least for the near future.

However, the recent, very rapid rise in the importance of AI and AI-related research lends new plausibility to a theory that has been discussed for some time; namely, that AI could (some think it will) self-evolve and become autonomous. The authors would like to further explore the ethical consequences of such a possibility.

In the early days, computer scientists who worked on the creation of AI were not particularly interested in exploring ethical issues related to AI. They were focused mainly on the challenges posed by the attempts to approximate the way computer programs “think” to the way humans think. Their main interest was in what was possible. They questioned whether computers could be taught to think like humans.

In those days, ethical questions and concerns related to AI were voiced mainly by philosophers and sci-fi writers. Many of them worked under the assumption that anything and everything would eventually be possible in the AI arena. They were creating thought experiments on yet another assumption, namely, that AI could approximate human intelligence to the point of making the two indistinguishable. For instance, Philip K. Dick, author of the book Do Androids Dream of Electric Sheep?, on which the movie Blade Runner was based, wrote about androids exploiting the boundaries between human and machine. In his imagined future, androids were so sophisticated that they could look just like a human and could be programmed to believe that they were human, complete with fake childhood memories. People would wonder if their friends and loved ones were really human, but most of all they would wonder about themselves—whether they were really human. Identity confusion was a recurring theme in Dick’s work. Then, in 2005, David Hanson created a replica of Philip K. Dick with true-to-life artificial skin and the intelligence to think, feel, and build relationships with people as “he” understands their speech. The artificial Philip K. Dick can carry on a natural conversation while his computer brain gathers information about the questioner and evolves by constructing responses and formulating facial mannerisms that mimic the questioner (Goralski and Gorniak-Kocikowska 2014).

For quite a while, AI-related ethical issues were treated by philosophers and science fiction writers as being similar to, or sometimes even identical with, human-related ethical problems. Not only was AI dealt with from an anthropocentric point of view, it was also anthropomorphic. It is only recently that the discussion of AI has begun moving past the confines of human brains and therefore beyond the domain of human thought.

Recently, AI has reached the point of commercial viability, and its development and commercial applications have accelerated greatly. The presence of AI will soon be ubiquitous in a wide range of areas where it will interact with humans or otherwise have an impact on human lives. This causes understandable concern, voiced by some top scientists and AI experts such as Stephen Hawking and Elon Musk (Cuthbertson 2017; Shead 2017). Instilling ethics and morals into AI has become of critical importance.

Human ethics pertains to human actions and interactions being judged as moral or immoral, ethical or unethical. The process is difficult and often confusing for many reasons, one of them being that there are—in Western civilization alone—several different, often competing, ethical theories. Another difficulty is caused by the fact that there is no clear line separating the two terms, ethics and morals. They are frequently used as synonyms, which can significantly amplify the already difficult and often confusing issues under discussion. However, the majority of philosophers think of ethics as a theory of morality (moral theory), in which case morality is seen as the practical application—affirmative or negative—of one’s ethical beliefs and judgments. In this chapter, we will follow this distinction.

In general, there are two fundamental ethical questions: (1) How can one know for certain what is the right (good) thing to do? (2) How can one make other people, possibly all people, take the right (good) action? Obviously, these are questions not only of great importance but of great complexity and difficulty as well. Western philosophers (the authors purposefully omit non-Western traditions due to the space constraints of this essay) have been discussing these issues at least since the age of Socrates and Plato. Ethics also constitutes a huge component of most religious systems worldwide. Yet in “real life” there is still no agreement on how to answer the two questions mentioned above.

Ethics, as a theory, is usually meant to apply to all of humankind (to be universal). However, it is sometimes directed purposefully to just one particular group of people. Such is the case with professional ethics. The need to establish professional ethics is closely related to the fact that many activities particular to a profession, and not related to other areas of human life, are subject to ethical/moral judgment. With the development of the modern concept of “profession,” numerous branches of professional ethics were established, among them business ethics (Moriarty 2016) and computer ethics (ICT ethics), and now AI ethics.

Darwin’s theory generated ideas that Darwin himself most likely did not anticipate, among others the creation of environmental ethics and animal ethics. These are different from the earlier ethical concerns regarding animals, in which animal well-being was basically intended to serve the purpose of making humans into better beings (in an ethical sense). Environmental and animal ethics today are ethical theories in which anthropocentrism is significantly weakened. There seems to be a progressing emancipation of these new branches of ethics from the controlling power of a human-centered approach and a shift toward a more co-existential and possibly a dialogical one. That is probably one of the chief reasons why the anthropomorphic approach to the nonhuman environment remains quite strong.

Just as in the case of Darwin’s theory of evolution, with its unforeseen consequences for ethics, so today, too, the development of AI and its commercial applications change ethics, opening new fields of investigation and new vistas in the old ones.

The AI revolution leads to even greater challenges than those caused by Darwin’s theory, and this includes ethics. Assuming AI’s ability to self-evolve and to become autonomous, one should also accept the possibility of AI creating its own value system, ethical values being of greatest interest here. Therefore, a differentiation should be made between AI ethics, as a branch of human professional ethics meant for humans whose profession it is to deal with AI in a variety of ways, and the ethics of AI, as ethics created by AI for itself (at this point, let us treat it as an AI equivalent of universal human ethics). The problem of AI-related ethics (both AI ethics and the ethics of AI) is a tremendously important, but also tremendously complex, issue for a variety of reasons, but mainly because it enters truly uncharted territory.

In 2014, researchers were already exploring how they might create robots endowed with their own sense of morality:

A group of researchers from Tufts University, Brown University, and Rensselaer Polytechnic Institute were collaborating with the U.S. Navy in a multi-year effort to explore how they might create robots endowed with their own sense of morality. If the researchers were successful, then they would create an artificial intelligence that was able to autonomously assess a difficult situation and then make a complex ethical decision that could override the instructions that it had been given (Borghino 2014, para. 1).

The technology advanced by this project could be used to assist soldiers in battle via medical robots, but it could also be turned into a sophisticated war machine.

Joseph R. Carvalko (2014) discussed how easy it is to turn a machine that was made for humanitarian purposes into a machine made for war. “We trained our creation to recognize how to tell a river from a riverbank, how to tell a boat from a river, how to tell a sampan from a patrol boat, but unfortunately not how to tell a good guy, military or civilian, from a bad guy” (para. 17). He stated that it was ironic how technology that was so beneficial or neutral could, more often than not, degenerate into a weapon of war. (One should remember, though, that humans, too, do not always know “how to tell a good guy, military or civilian, from a bad guy.”)

John Markoff (2014) writes about an experimental missile launched from an Air Force B-1 bomber. The pilots would direct the missile, but halfway to its destination communication with the operators would be severed and, without human oversight, the missile would on its own determine which of three ships to attack—striking a 260-foot unmanned freighter. This is the future, and to some degree already the reality, of warfare—wars guided by software. Drones can be operated remotely by humans miles away, but increasingly wars will be carried out by weapons that rely on AI without human intervention. Weapons are becoming smarter and nimbler—increasingly difficult for humans to control or defend against. “Britain, Israel, and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks, and ships without direct human control” (Markoff 2014, para. 5).

Representatives from 87 nations, United Nations agencies, the International Committee of the Red Cross, and the Campaign to Stop Killer Robots met for the first multilateral meeting in May 2014 to confront the challenge of fully autonomous weapons that could select and attack targets without human control. The United Nations special rapporteur Christof Heyns asked for a moratorium on the development of these kinds of weapons, but government intervention and concerns will not deter the development of advanced autonomous weapons that can kill without oversight. Military analysts argue that autonomous smart weapons reduce casualties and indiscriminate killing (Markoff 2014). “We must be vigilant to spot those instances where scientific progress serves peace and reconciliation on the one hand and war on the other, or how technology fortifies effectiveness in a national vital endeavor, but weakens our cherished values” (Carvalko 2014, p. 22).

AI already supplants human decisions in a variety of fields: medical diagnostics (IBM’s famous Watson; APACHE medical systems), driverless trains in cities worldwide (Lin 2013), and Wall Street’s high-speed stock trading. Nick Bostrom (2008) of Oxford’s Future of Humanity Institute states:

We will have superhuman artificial intelligence within the first third of the next century. We can expect superintelligence to be developed once there is human-level artificial intelligence. By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. (para. 1 and 2)

Once AI reaches a point where it can improve itself and create other intelligent forms of AI, these more powerful superintelligences could prevail.

Many AI and robotic ethicists argue that Asimov’s “Three Laws of Robotics” are too simplistic for our contemporary world. Asimov’s laws are clearly directed to the creators of AI, that is, to computer scientists. He follows the idea that the main purpose of the creation and development of AI is to make it human-like as perfectly as possible. The new approach is that, with the sophistication of computers and the integration of AI into human lives, a new set of laws is required (Cuthbertson 2017). Stephen Hawking and Tesla CEO Elon Musk have backed a set of 23 principles—the Asilomar AI Principles, which establish guidelines to help self-thinking machines remain safe for humans and “act in humanity’s best interest” (Shead 2017). Musk believes that AI has the potential to be more damaging than nuclear weapons, and Hawking believes that AI has the potential to end humanity. On the other hand, they believe that AI could slow global warming or find a cure for cancer (Shead 2017).

According to Ronald Arkin, director of the mobile robot laboratory at Georgia Institute of Technology, “The issues of morality in general are very vague … We still argue as human beings about the correct moral framework we should use … consequentialist utilitarian or Kantian deontological” (Goldhill 2016, para. 5). Philosophers, computer scientists, cosmologists, and business people are all grappling with the future ethics and morality of robots and AI because the future is now.

Once confined to the minds of science fiction writers, a car that would drive itself was a vision of the future; one did not have to think about how to instill ethics and morals into autonomous cars. While a human driver might be forgiven for swerving into traffic rather than hitting a pedestrian, would an automated car have that same leeway, or should programmers and designers be held responsible for not programming in the most ethical decision? Will they be able to agree upon what kind of ethics to program into the car’s computer? Will an autonomous car know and be able to choose the lesser of two evils? Lin (2013) asks, “Is it better to save an adult or child? What about saving two (or three or ten) adults versus one child?” Hawking (2017) asks similar questions. Programmers will have to think about and factor in all of these difficult decisions. They will have to foresee a myriad of scenarios and set forth guiding principles for each scenario (Lin 2013). “It matters to the issue of responsibility and ethics whether an act was premeditated (as in the case of programming a robot car) or reflexively without any deliberation (as may be the case with human drivers in sudden crashes)” (Lin 2013, para. 17). Programmers and designers will have to foresee all kinds of mundane occurrences, for example, a dog running into traffic, as well as rarer scenarios, which might happen seldom but still be fatal. Lin brings up many future responsibilities not only of programmers and designers, but also of society itself and even of the car: should a car be loyal to its owner and value his or her life more than the lives of unknown drivers or pedestrians? Would the AI of a robot car with a highly developed sense of self-identity try to protect itself from destruction in a crash? Would robot cars be more or less susceptible to hacking? Could other drivers be tempted to “game” an autonomous car, forcing it to slow down or swerve? However, according to Gray (2014), “Autonomous cars don’t have to be perfect, they just need to be better than us. Human drivers kill 40,000 people a year with cars just in the United States.”
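To make concrete how such premeditated decisions might be reduced to code, the following is a minimal, purely hypothetical sketch in Python. The scenario, the Outcome fields, the choose_action helper, and the numeric weights are all assumptions introduced here for illustration; they do not describe Lin’s work or any real vehicle. The point of the sketch is that the ranking of harms is fixed by a programmer long before any crash occurs.

```python
# A purely illustrative, hypothetical sketch of a hard-coded "ethics module"
# for an autonomous car. All names and weights are assumptions made for
# illustration; no real vehicle is claimed to work this way.

from dataclasses import dataclass
from typing import List


@dataclass
class Outcome:
    """Predicted consequence of one possible maneuver."""
    action: str                       # e.g., "brake_only", "swerve_right"
    expected_human_harm: float        # crude 0..1 estimate of harm to people
    expected_animal_harm: float       # crude 0..1 estimate of harm to animals
    expected_property_damage: float   # crude 0..1 estimate of property damage


def choose_action(outcomes: List[Outcome]) -> Outcome:
    """Pick the 'least bad' maneuver under a fixed, premeditated ranking:
    harm to humans outweighs harm to animals, which outweighs property
    damage. The weights are the programmer's ethical judgment, made in
    advance for every future crash."""
    def badness(o: Outcome) -> float:
        return (100.0 * o.expected_human_harm
                + 10.0 * o.expected_animal_harm
                + 1.0 * o.expected_property_damage)
    return min(outcomes, key=badness)


if __name__ == "__main__":
    # Lin's mundane example: a dog runs into traffic. Braking in place hits
    # the dog; swerving scrapes a parked car and carries a small risk to
    # the occupant. The numbers are invented for illustration.
    options = [
        Outcome("brake_only", expected_human_harm=0.0,
                expected_animal_harm=0.9, expected_property_damage=0.0),
        Outcome("swerve_right", expected_human_harm=0.02,
                expected_animal_harm=0.0, expected_property_damage=0.5),
    ]
    print(choose_action(options).action)  # -> "swerve_right" under these weights
```

Even this toy weighting answers Lin’s questions in advance (whose safety counts, and for how much), and any such hard-coded answer is exactly the kind of ethical decision for which programmers and designers could be held responsible.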

Ethics and morals are difficult to reduce to algorithms (Lin 2016), but this is the contemporary world. There are profound changes in ethical theories as well as in the critical decision-making process that is necessary to define the ethics and morals of killer robots and missiles and—on the other end of the spectrum—robotic caregivers, doctors, and teachers, but especially in the imminent and increasingly ubiquitous use of autonomous cars. Since 2006, when the Pentagon’s research agency offered a $2 million prize to gather new ideas for the future of unmanned warfare by challenging entrepreneurs and universities to create a driverless car, there have been great strides in the development of autonomous cars. With almost all car manufacturers now competing for this market, autonomous cars are a reality.

Humankind is experiencing a profound, permissionless evolution of ethics as AI has moved from the minds of philosophers and science fiction writers into the reality of a world in which numerous writers from a variety of fields are stating that humans need no longer apply, since this new evolution of AI will make them obsolete (Gray 2014; Kolbert 2016). Ethics and morals have become more prominent in all walks of life since AI has become so ubiquitous.

Thus far, the consensus is that AI is purely rational, and there are strong indicators that it can already surpass the rational thinking of even exceptionally smart people. Therefore, there is a possibility of AI perfecting one of the rational ethical systems created by humans (for instance, Kant’s ethics or Utilitarianism) and delivering unquestionable proof of the superiority of that ethical system over all other ethical systems created by humans, thus eliminating them. It could also be possible that AI would develop a perfectly rational ethical system different from any of those created thus far by humans, simply because humans lack the capacity to do so; humans would have created such a system if they were able to. In both cases, this could happen if self-evolving AI remains anthropomorphic, the way it was first created. Also in both cases, there should not be a great dissonance, if any, between humans and AI over ethics, provided that humans agreed to act on the ethical principles worked out by AI. Would they? Ever since Asimov, all the way to Hawking, the approach that “We must make sure we control AI and not it us” and that AI should be engaged “in the service of humans” (Penn 2017) has gone unquestioned. Bill Gates, co-founder of Microsoft, and Hawking have warned of the dangers of AI becoming too powerful for humans to control (Ashrafian 2015). However, the authors believe that humans must think of ways for humankind to survive or even flourish when it is not in control of AI and when AI will not continue to serve humans, which is the situation that Gates and Hawking, among others, worry about. Such a situation is possible and perhaps inevitable. The authors also accept the possibility, or even likelihood, of self-evolving autonomous AI evolving away from its anthropomorphic origins and creating for itself (and acting according to) a non-anthropocentric ethics that would not acknowledge human superiority. It could even be incompatible with any human ethics. That, of course, would be the most serious and important problem for humans to solve. However, due to the limitations of this chapter, this option will not be discussed here; it should be examined very carefully and seriously in the near future.

References

Ashrafian, H. March 26, 2015. “Intelligent Robots Must Uphold Human Rights.” Nature 519, p. 391.

Borghino, D. May 13, 2014. “Scientists Try to Teach Robots Morality.” New Atlas. http://newatlas.com/machine-ethics-artificial-intelligence/32036/

Bostrom, N. 1997, 1998, 2000, 2005, 2008. “How Long Before Superintelligence?” https://nickbostrom.com/superintelligence.html

Carvalko, J.R. 2014. “Self Absorption.” Institute for Ethics and Emerging Technologies. https://ieet.org/index.php/IEET/more/carvalko20141219 (accessed December 14, 2014).

Cuthbertson, A. 2017. “Elon Musk and Stephen Hawking Warn of Artificial Intelligence Arms Race.” Newsweek. http://www.newsweek.com/ai-asilomar-principles-artificial-intelligence-elon-musk-550525 (accessed January 13, 2017).

Goldhill, O. April 3, 2016. “Can We Trust Robots to Make Moral Decisions?” Quartz Media. https://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/ (accessed January 3, 2016).

Goralski, M., and K. Górniak-Kocikowska. 2014. “A New Frontier in Ethics Education: Robotics.” Paper presented at the Academy of International Business—Northeast Chapter Special Conference, Tianjin, China, January 11, 2014.

Gray, C.G.P. 2014. “Humans Need Not Apply.” YouTube. https://www.youtube.com/watch?v=7Pq-S557XQU (accessed August 13, 2014).

Kolbert, E. 2016. “Our Automated Future—How long will it be before you lose your job to a robot?” The New Yorker. http://www.newyorker.com/magazine/2016/12/19/our-automated-future (accessed December 19 and 26, 2016).

Lin, P. 2016. “Why Ethics Matters for Autonomous Cars.” In Autonomous Driving, 69–85. Berlin, Heidelberg: Springer.

Lin, P. 2013. “The Ethics of Autonomous Cars.” The Atlantic. https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/ (accessed October 8, 2013).

Markoff, J. 2014. “Fearing Bombs That Can Pick Whom to Kill.” The New York Times. https://www.nytimes.com/2014/11/12/science/weapons-directed-by-robots-not-humans-raise-ethical-questions.html?mcubz=1 (accessed November 11, 2014).

Moriarty, J. 2016. “Business Ethics.” Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/ethics-business/ (accessed November 17, 2016).

Penn, F. 2017. “Stephen Hawking: Self-Evolving Artificial Intelligence (AI) Has a Free Will & May Destroy H.” YouTube. https://www.youtube.com/watch?v=U2yhVCTC4sg

Robin, M.H. “Death by Robot.” The New York Times Magazine, MM16 (accessed January 11, 2015).

Shead, S. 2017. “Stephen Hawking and Elon Musk Backed 23 Principles to Ensure Humanity Benefits from AI.” Business Insider. http://www.businessinsider.com/stephen-hawking-elon-musk-backed-asimolar-ai-principles-for-artificial-intelligence-2017-2 (accessed February 1, 2017).
