
1. Introduction to Artificial Intelligence

Donald J. Norris

Barrington, New Hampshire, USA

This chapter provides a straightforward introduction to artificial intelligence (AI), which in turn provides a framework for comprehending what AI is all about and why it is such an exciting and rapidly evolving field of study. Let’s start with some historical facts about the origins of AI.

AI Historical Origins

Remarkably, AI, or something akin to it, has been around for a very long time. It has been recorded that ancient Greek philosophers discussed automatons, or machines with inherent intelligence. In the late 16th century, according to legend, the Prague Golem was created; it is shown in Figure 1-1.

Figure 1-1. Prague Golem

The Golem is made of clay, but according to Jewish folklore, it could be animated to carry out various acts of vengeance and retribution against parties responsible for anti-Semitic acts.

René Descartes, a famous French philosopher, wrote in 1637 about the impossibility of machine intelligence in his Discourse on Method treatise. Descartes was not advocating AI, but the treatise does show it was on his mind.

A more fanciful AI example—or more appropriately stated, a hoax—was an automated chess player that made the rounds in Europe from the late 18th to the mid-19th centuries. It was known as The Turk. A lithograph of it on a modern stamp is shown in Figure 1-2.

Figure 1-2. Automated chess player

It was purported to be an intelligent machine that could play a game of chess against a human opponent. In reality, there was a human chess player jammed into the machine’s supporting box. He operated manipulators to move the machine’s chess pieces. I would suppose that there must have been a miniature periscope or peephole available to allow this hidden chess player the opportunity to surveil the chessboard. The odd name The Turk comes from the German word Schachtürke, meaning “chess Turk.” The typical human chess master hidden in the box was so skilled that he would often win matches against notable opponents, including Napoleon Bonaparte and Benjamin Franklin. It was not until many years later that a real machine was available to actually play a reasonable chess game.

The advent of a scientific AI approach waited until 1943, upon the publication of a paper by McCulloch and Pitts, in which they described a mathematical model of artificial neurons based on real biological brain cells; their model was a direct forerunner of the perceptron. In their paper, they accurately described how neuron cells fired in a binary fashion, similar to electronic binary circuits. They also went well beyond that simple comparison to show how such cells could dynamically change their function over time, essentially creating rudimentary behavioral actions. This seminal paper was the first in a long series that established an important AI research area concerned with neural networks. I discuss this topic in greater detail in a later chapter.
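
To make the model concrete, here is a minimal Python sketch of a binary threshold unit in the spirit of the McCulloch and Pitts model. The weights and threshold are illustrative values of my choosing, not taken from their paper; with a threshold of 2, the unit fires only when both inputs are active, which implements a logical AND.

```python
# A minimal sketch of a McCulloch-Pitts-style binary threshold neuron.
# The weights and threshold are illustrative, not from the 1943 paper.

def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mcp_neuron([a, b], [1, 1], threshold=2))
```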

In 1947, Alan M. Turing wrote:

In my opinion, this problem of making a large memory available at reasonably short notice is much more important than doing operations such as multiplication at high speed. Speed is necessary if the machine is to work fast enough for [it] to be commercially valuable, but a large storage is necessary if it is to be capable of anything more than rather trivial operations. The storage capability is therefore the more fundamental requirement.

Turing, whom many readers may recognize as the genius behind the effort to decode the German Enigma machine, an effort that considerably shortened the duration of WWII, also recognized in this short paragraph that any future machine “intelligence” would be predicated upon having sufficient machine memory available, and not solely reliant on computing speed. I have more to say about Turing a bit later in this chapter when the Turing test is discussed.

In 1951, a young mathematics PhD candidate named Marvin Minsky, along with Dean Edmonds, designed and built an analog computer based on the neural model described in the McCulloch and Pitts paper. This computer was named the Stochastic Neural Analog Reinforcement Computer (SNARC). It consisted of 40 vacuum tube neuron modules, which in turn controlled many additional valves, motors, gears, clutches, and actuators. This system was a randomly connected network of Hebb synapses that made up a neural network learning machine. The SNARC was possibly the first artificial self-learning machine. It successfully modeled the behavior of a rat traversing a maze in search of food, and it exhibited some rudimentary “learning” behaviors that allowed the simulated rat to eventually negotiate the maze.

A real turning point in AI progress happened in 1956, during an AI conference at Dartmouth College. This meeting was held at the behest of Minsky, John McCarthy, and Claude Shannon to explore the new field of AI. Claude Shannon has often been referred to as the “father of information theory” in recognition of his brilliant work at the prestigious Bell Telephone Laboratories in New Jersey.

John McCarthy was no slouch either: he was the first to use the phrase “artificial intelligence,” the creator of the Lisp programming language family, and a significant influence on the design of the ALGOL programming language. He also contributed significantly to the concept of computer timesharing, which makes modern computer networks possible. Minsky and McCarthy were also the founders of the MIT Artificial Intelligence Laboratory, now part of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).

Returning to the 1956 conference, McCarthy stated this now-classic definition of AI, which, as far as I know, remains the “gold standard” that most people use when asked to define AI:

It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

McCarthy used the phrase human intelligence in this definition, which I explore a little later in this chapter. Many other fundamental AI concepts were set forth at this conference; I cannot explain them further in this book, but I urge interested readers to explore them.

The 1960s was a very progressive decade in terms of AI research. Arguably, the work of Newell and Simon in detailing the General Problem Solver algorithm stands out. This approach used both computer and human problem-solving techniques. Unfortunately, computer development was still evolving, and the memory and speed capabilities needed to efficiently handle the algorithm’s requirements were simply not present. (Remember Turing’s warning, discussed earlier.) The General Problem Solver project was eventually abandoned, not because it was theoretically incorrect, but because the hardware needed to implement it was simply not available.

Another significant AI contribution during the 1960s was Lotfi Zadeh’s introduction of fuzzy sets and fuzzy logic, which became the foundation of the impressive AI branch known as fuzzy logic. Zadeh discussed how computers do not necessarily have to behave in a precise and discrete logical pattern, but can instead take a more human-like fuzzy logic approach. I present an interesting fuzzy logic project in Chapter 5.
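
As a small taste of what Chapter 5 covers, the following minimal Python sketch shows the core fuzzy-set idea: membership in a set such as “warm” is a degree between 0.0 and 1.0 rather than a strict true/false value. The temperature breakpoints are arbitrary values chosen purely for illustration.

```python
# A minimal illustration of Zadeh's fuzzy-set idea: membership is a
# degree between 0.0 and 1.0, not a crisp true/false value.
# The temperature breakpoints below are arbitrary illustrative choices.

def warm_membership(temp_c):
    """Degree to which a Celsius temperature belongs to the fuzzy set 'warm'."""
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c < 25:
        return (temp_c - 15) / 10   # ramp up between 15 and 25 degrees
    return (35 - temp_c) / 10       # ramp down between 25 and 35 degrees

for t in (10, 18, 25, 30, 35):
    print(f"{t} C is warm to degree {warm_membership(t):.2f}")
```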

One unfortunate outcome from the ongoing research in the 1960s was the prediction that a computer could mimic a human brain. Of course, the computing power available to do fundamental research on how a human brain realistically functions was simply not available at that time. This led to much disappointment and disillusionment in the AI community.

The process of mimicking or somehow copying how the human brain works, and placing that functionality into a machine, has been termed the classical AI approach. This has led to deep divisions within the AI community, where many researchers believe that machines should become intelligent in their own manner rather than by mimicking human intelligence. The latter approach has been termed modern AI.

There was considerable work in the late 1960s on how a computer could converse with a human by using natural language instead of computer code. One clever program created by Joseph Weizenbaum during this time was named ELIZA. While primitive by today’s standards, it was still able to fool some users into thinking that they were conversing with another human instead of a machine.

The ELIZA project brings up a very interesting topic regarding how one might determine whether a machine has reached some level of “intelligence.” One good answer lies in what is known as the Turing test, which I mentioned earlier. In his 1950 article “Computing Machinery and Intelligence,” published in the journal Mind, Alan Turing discussed what he felt were sufficient conditions for considering a machine to have reached an intelligent state. He essentially argued that if a machine could successfully fool a knowledgeable human observer into thinking that he was having a conversation with another human instead of a machine, then the machine could be considered intelligent. Of course, the conversation would have had to be conducted over a neutral communications channel, to avoid the obvious clues of voice or appearance giving away the machine. Teletypes were the communication devices used in the 1950s to implement the neutral channel. The Turing test is still a reasonable benchmark, even considering today’s technologies. One could even use highly effective modern voice recognition and synthesis technologies to further fool the observer. The Turing test remains controversial among philosophers and other interested parties who discuss the nature of intelligence.

In the 1970s, AI was slow to mature, due to the slow growth of computing technology. There was a lot of interest in natural language processing and image recognition and analysis, but unfortunately, the computers available to researchers were still quite limited and not up to these difficult tasks. It soon became apparent that there would have to be significant improvement in processing power before AI could really progress. In addition, there were significant philosophical arguments against AI, including the famous “Chinese room” argument postulated by John Searle. Minsky argued against Searle’s hypothesis, which only led to a lot of infighting and misdirection in ongoing research. Meanwhile, McCarthy argued for a modern AI approach, stating that human intelligence and machine intelligence are different and should be treated that way.

The 1980s showed considerable improvement in AI development, due to the onset of the PC and to many researchers adopting McCarthy’s pragmatic approach. The advent of expert systems happened in this timeframe; they showed great promise and found actual applications in the business and industrial/manufacturing sectors. I demonstrate several expert system applications in later chapters. The classical AI methodology continued; however, the modern approach was rapidly gaining acceptance and, perhaps more importantly, was being used in many real-world situations. Coincidentally, there was a lot being done with robotics and real robot development at this time. AI research naturally gravitated to this area, because the two fields seemed perfectly complementary. The age of practical AI had finally arrived, and further developments came quickly, as the age of modern computing was also underway. It was about this time that the real impact of Moore’s law became apparent. Moore’s law refers to Gordon Moore, one of Intel’s founders, who stated in 1965: “The number of transistors per square inch on integrated circuits has doubled every year since their invention.”

This exponential growth in density correlates nicely with the incredible improvement in computer performance that is so sorely needed for AI improvement and growth.

Significant milestones were reached in the 1990s, including the impressive win in 1997 by IBM’s Deep Blue computer system over world chess champion Garry Kasparov. Despite how impressive this win was, some cold water was thrown on the event. The stark reality of the win should be tempered by the following observation from McCarthy, made when he was asked specifically about a computer winning at Go, the traditional Chinese board game:

The Chinese and Japanese game of Go is also a board game in which the players take turns moving. Go exposes the weakness of our present understanding of the intellectual mechanisms involved in human game playing. Go programs are very bad players, in spite of considerable effort (not as much as for chess). The problem seems to be that a position in Go has to be divided mentally into a collection of subpositions which are first analyzed separately followed by an analysis of their interaction. Humans use this in chess also, but chess programs consider the position as a whole. Chess programs compensate for the lack of this intellectual mechanism by doing thousands or, in the case of Deep Blue, many millions of times as much computation.

This prescient analysis should assuage any reader’s fear that computers are anywhere near attaining the human-level intellect featured in many science fiction movies, including The Terminator series, 2001: A Space Odyssey, and the classic War Games. There is a long way to go and much more research to be completed before computing systems become truly intelligent. This is the subject of the next section.

Intelligence

Discussing the nature of intelligence is always a topic in beginning AI courses. Students most often wind up using circular reasoning when trying to come to grips with how to define what it is and how to recognize it. Exploring intelligence also usually ends in creating an almost endless list of questions, such as:

  • Are mice intelligent?

  • What does it mean for a machine to be intelligent?

  • Are dolphins the smartest mammals in the sea?

  • How would an extraterrestrial recognize intelligence on Earth?

One could continue ad infinitum with questions like these. Perhaps, in retrospect, just creating questions like these is a sure sign of intelligence. You can now see what I meant by circular reasoning. It turns out that agreeing on a common definition of intelligence is a difficult, if not impossible, task. There are dictionary definitions of intelligence, such as the following from Merriam-Webster online:

  1. a (1): the ability to learn or understand or to deal with new or trying situations: reason; also: the skilled use of reason (2): the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests); b Christian Science: the basic eternal quality of divine Mind; c: mental acuteness: shrewdness

  2. a: an intelligent entity; especially: angel; b: intelligent minds or mind <cosmic intelligence>

  3. the act of understanding: comprehension

  4. a: information, news; b: information concerning an enemy or possible enemy or an area; also: an agency engaged in obtaining such information

  5. the ability to perform computer functions

As you can readily see, the dictionary editors cast a wide net in trying to capture the definition of intelligence, covering human behaviors, spiritual aspects, and religion, and finally, and somewhat interestingly, offering a fifth-level definition about performing computer functions.

The online Macmillan dictionary offers a much more concise definition:

The ability to understand and think about things, and to gain and use knowledge

I am positive that if I went to other online dictionaries, I would find many other definitions, which is why trying to pin down intelligence is so hard. Consequently, without an agreed-upon standard for what intelligence is, it is nearly impossible to recognize it consistently when it occurs.

Intelligence is also related to both sensory inputs and motor or actuating outputs. Obviously, our brains are contained in our human bodies, which are also nicely equipped with five sensory systems—vision, hearing, taste, touch, and smell. These sensory systems are an integral part of our intelligence; however, it has been repeatedly demonstrated that there are still very intelligent human beings who have lost one or more of their sensory inputs. The human body is quite remarkable in its ability to compensate when a particular sensory system has been injured or destroyed. Likewise, human intelligence is also linked somewhat to our motor skills; however, I would argue, not as much as to the sensory inputs. Losing the ability to speak did not diminish the intellect and genius of Stephen Hawking. Having the ability to walk, run, drive a car, or pilot an airplane gives individuals the opportunity to explore and understand their environment, and consequently, to expand their knowledge and experiences, but not necessarily to improve or expand their intelligence—unless you subscribe to the notion that knowledge and intelligence are synonymous.

It is only a small leap to study animals and consider whether or not they possess intelligence. Birds can fly and typically have much better vision than humans have. Does this mean that they possess intelligence beyond the human species, at least in those two areas? The answer is obviously unknowable, which leads to the following reasonable conclusion: animal and machine intelligence should simply be accepted for what it is and not be compared to human intelligence. Trying to make the latter comparison is simply like comparing apples and oranges; it is truly meaningless.

My goal in the foregoing discussion is to reiterate the premise of the modern approach to AI, in that machine intelligence should be considered by itself and not be compared to human intelligence. This is the underlying premise of the book, where projects explore machine advantages, but are neither expected to nor even desired to emulate or simulate human intelligence.

Strong AI vs. Weak AI, Broad AI vs. Narrow AI

There are additional descriptors commonly applied to AI, as you may have inferred from this section’s title. AI work and research that attempts to simulate human reasoning to the maximum level possible is sometimes called strong AI. I would presume that proponents of the classical AI approach would heartily endorse this terminology. This strong adjective contrasts sharply with the weak AI adjective, which simply relates to getting practical AI systems to function effectively, without regard to the human analog. This approach is what I have referred to as the modern approach. I do not know how these strong and weak terms arose, but I suspect they exist to cast a pejorative shadow on the modern approach, which is unfortunate, because both approaches are equally valid and deserving of equal importance and recognition. I have only introduced these terms so that you understand their significance if you happen to read about AI applications or projects. I do not use either term; instead, I just focus on the AI applications, regardless of whether they are strong or weak.

The other pair of terms I used in the section title are broad AI and narrow AI. Broad AI is concerned with general reasoning and is not related to a specific task or application. I suppose that broad AI and strong AI have a natural bond, as both relate to the human context of reasoning and thinking. Narrow AI focuses on AI applied to specific tasks and does not generalize well. However, there are exceptions, which tend to blur the broad and narrow AI definitions. Google has developed systems that are excellent at predicting or characterizing how “things” should be described or arranged. Google applications exhibit both broad and narrow AI aspects, in their generalizations as well as their specific cataloging functions. Amazon, likewise, has intelligent agents that tend to be excellent both at generalizations and at making specific customer recommendations.

I close this section with Figure 1-3, which is a word cloud that I created using Mathematica running on a Raspberry Pi 3. This figure is simply a graphical representation of the many different words that are commonly used with AI. Wikipedia was the source for all the words shown in the figure.

Figure 1-3. A word cloud on artificial intelligence
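
The figure itself was produced with Mathematica. For readers who want to experiment in Python instead, the following rough sketch produces a similar image; it assumes the third-party wordcloud and matplotlib packages are installed (pip install wordcloud matplotlib), and the sample text is merely a short stand-in for the Wikipedia source material I used.

```python
# A rough Python equivalent of the Mathematica word cloud in Figure 1-3.
# Assumes: pip install wordcloud matplotlib
# The text below is a short stand-in for the Wikipedia AI article.

import matplotlib.pyplot as plt
from wordcloud import WordCloud

text = ("artificial intelligence machine learning neural network robotics "
        "reasoning knowledge representation fuzzy logic expert system agent")

cloud = WordCloud(width=640, height=480, background_color="white").generate(text)
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```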

Reasoning

I repeatedly used the words reason and reasoning in the previous discussions. But what do they really mean, and how are they related to AI? Reasoning is the process of creating or considering a reason. The word reason means to think about how things or ideas relate to what is known—or more simply, to knowledge. A few reasoning examples help clarify the thoughts that I am attempting to convey.

  • Learning is the process of building a new knowledge set based upon examining or discussing existing knowledge sets. Sets in this context are any data collections, whether or not based in reality.

  • Use of language is the conversion of words, whether written or spoken, into ideas and supportive relationships.

  • Inference based on logic means deciding whether something is true based upon logical relationships.

  • Inference based on evidence means deciding whether something is true based upon all the supportive available evidence.

  • Natural language generation exists to satisfy communication goals and objectives using a given language.

  • Problem solving is the process of determining how to achieve a set goal or an objective.

Any of these activities must necessarily involve reasoning to achieve a satisfactory end result. Please note that nowhere in the list do I limit reasoning to human beings. Some of these activities are certainly suitable for implementation by machines, and in some cases, even by animals. There have been endless experiments satisfactorily demonstrating that animals can solve problems, especially when getting to food is involved.

There is a recent proliferation of voice-activated Internet devices, including Amazon’s Alexa, Microsoft’s Cortana, Apple’s Siri, and Google’s Home. These are either standalone devices or applications installed on smartphones. In any case, they are well equipped to recognize voice inquiries, translate the inquiries into actionable Internet queries, and finally, relay the results to the user in a highly understandable format, usually as a well-spoken female voice. These devices and applications must use some level of reasoning to carry out their intended functions, even if only to reply that they do not understand the user’s request.
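
The following toy Python sketch illustrates only the recognize-interpret-respond control flow just described, including the fallback reply for an unrecognized request. Real assistants rely on statistical language models and cloud services; the keyword intents and canned replies here are invented solely for the demonstration.

```python
# A toy sketch of the recognize -> interpret -> respond loop used by
# voice assistants. The intents and replies are invented demo data;
# real systems use statistical language understanding, not keywords.

INTENTS = {
    "weather": "Today looks sunny with a high of 72 degrees.",
    "time": "It is currently 3:15 PM.",
}

def respond(utterance):
    """Return a canned reply for a matched intent, or a fallback."""
    for keyword, reply in INTENTS.items():
        if keyword in utterance.lower():
            return reply
    return "Sorry, I did not understand your request."

print(respond("What's the weather like?"))   # matched intent
print(respond("Play some jazz"))             # fallback reply
```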

AI Categories

Table 1-1 is a list that I created to show most of the categories that make up modern-day AI. I do not claim it is comprehensive; there are likely some categories that have been inadvertently omitted. I deliberately omitted a few categories, such as the history and philosophy of AI, because they are not directly pertinent to the intent of this table.

Table 1-1. Modern AI Categories

Affective computing: The study and development of systems and devices that can recognize, interpret, process, and simulate human affects.

Artificial immune systems: Intelligent, rule-based machine learning systems based primarily on the inherent principles and processes contained within vertebrate immune systems.

Chatterbot: A type of conversational agent or computer program designed to simulate an intelligent conversation with one or more human users through text or audio channels.

Cognitive architecture: A theory about the structure of the human mind. One of the main goals is to incorporate concepts from cognitive psychology into a comprehensive computer model.

Computer vision: An interdisciplinary field that deals with how computers can gain high-level understanding from digital images or videos.

Evolutionary computing: The use of evolutionary algorithms based on the Darwinian principles from which the name is derived. These algorithms belong to a family of trial-and-error problem solvers that use metaheuristic or stochastic global methods to determine many solutions.

Gaming AI: AI used in games to generate intelligent behaviors, primarily in non-player characters (NPCs), often simulating human-like intelligence.

Human-computer interface (HCI): HCI research covers the design and use of computer technology, focusing on the interfaces between people (users) and computers.

Intelligent soft assistant or intelligent personal assistant (IPA): A software agent that can perform tasks or services for an individual. These tasks or services are usually based on user input, location awareness, and the ability to access information from a variety of online sources. Examples of such agents are Apple’s Siri, Amazon’s Alexa, Amazon’s Evi, Google’s Home, Microsoft’s Cortana, the open source Lucida, Braina (an application developed by Brainasoft for Microsoft Windows), Samsung’s S Voice, and the LG G3’s Voice Mate.

Knowledge engineering: Refers to all technical, scientific, and social aspects involved in building, maintaining, and using knowledge-based systems.

Knowledge representation (KR): Dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks, such as diagnosing a medical condition or holding a dialog in a natural language.

Logic programming: A type of programming largely based on formal logic. Any program written in a logic programming language is a set of sentences in logical form, expressing facts and rules about some problem domain. Major logic programming language families include Prolog, answer set programming (ASP), and Datalog.

Machine learning (ML): ML in the AI context provides computers the ability to learn without being explicitly programmed. Shallow and deep learning are two major subfields.

Multi-agent system (MAS): A computerized system composed of multiple interacting intelligent agents within an environment.

Robotics: The interdisciplinary branch of engineering and science that includes mechanical engineering, electrical engineering, computer science, AI, and others.

Robots: A robot is a machine, especially one programmable by a computer, that is capable of carrying out a complex series of actions autonomously.

Rule engines or systems: Rule-based systems used to store and manipulate knowledge to interpret information in a useful way.

Turing test: A test, developed by Alan Turing in 1950, of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

I will repeat that this table does not cover all modern AI research and activities, but it certainly highlights most of the important ones. I only demonstrate a few of the listed AI categories in this book, but even those should provide reasonable insight on how AI may be implemented using relatively simple computer resources.

At this point, I believe it is appropriate to discuss AI as it affects modern society in ways well beyond the scope of this book. I provide this brief discussion in hopes of enhancing my readers’ knowledge and understanding of how AI affects us—one and all—in our daily lives.

AI and Big Data

Most readers have heard the term big data, but like most people, you may not have an appreciation of what it is and how it affects our modern society. There are many definitions of big data, just as there are many definitions of AI. The definition I like is rather simple: a data collection characterized by huge volumes, rapid velocity, and great variety.

The magnitude of the huge volumes can be characterized by noting that big data is typically measured in petabytes (PB), where one PB equals one million gigabytes (GB). That is truly a huge amount of data. The rapid velocity mentioned in the definition refers to how rapidly the data is generated or created. One need only look at Facebook to appreciate the rapidity of the new content constantly being created by hundreds of millions of online users. Finally, the great variety phrase in the definition refers to the various data types that make up the huge data flows, including pictures, video, and audio, as well as plain old text. An average photo uploaded to Facebook likely takes about four to five megabytes of storage. Multiply that by the multimillions of photos that are constantly uploaded, and you soon realize the nature of big data.

So how does AI affect big data? The answer is that an AI learning system, when applied to a big data set, allows users to extract useful information from a huge and noisy input. Typical computer systems that can handle big data are composed of thousands of processors working together in a parallel fashion to greatly speed up the data reduction process, often referred to as MapReduce. IBM’s Watson computer is a prime example of such a system. It has implemented expert medical systems by using a rules-based engine and processing many thousands, if not millions, of medical records. The end result is a computer system that assists doctors in diagnosing illnesses and related maladies whose symptoms do not relate in any obvious way to known diseases.
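
To illustrate the pattern, here is a minimal single-machine Python sketch of the map and reduce steps applied to a word count, the classic introductory example. Production systems such as Hadoop run the same two phases in parallel across thousands of processors; the records here are invented demonstration data.

```python
# A minimal single-machine sketch of the MapReduce pattern: map each
# record to (key, value) pairs, then reduce all values sharing a key.
# The records are invented demo data.

from collections import defaultdict

records = ["big data needs ai", "ai needs data", "data data data"]

# Map phase: emit a (word, 1) pair for every word in every record.
mapped = [(word, 1) for record in records for word in record.split()]

# Shuffle/reduce phase: group the pairs by key and sum the counts.
counts = defaultdict(int)
for word, n in mapped:
    counts[word] += n

print(dict(counts))   # {'big': 1, 'data': 5, 'needs': 2, 'ai': 2}
```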

Amazon’s website is integrated with an impressive AI system that easily compiles a detailed profile of each potential or actual customer who repeatedly visits its site. It matches the customer’s searches with those of other customers who have searched for or inquired about similar products. It further tries to predict what might interest a site visitor based on past searches and orders. All the data that the Amazon system uses is transactional, basically identifying what potentially interests its customers. This transactional data, which likely qualifies as big data, is the primary input into Amazon’s AI computer systems. The output is the profile that I mentioned, but it may also be considered a set of characterizations attached to the potential or actual customer; for example, a resulting website suggestion may look like the following:

“You may be interested in Robert Heinlein’s book The Moon is a Harsh Mistress because you have purchased the following books:”

  • Full Moon

  • Star Wars: The Empire Strikes Back

  • The Shawshank Redemption

This list of seemingly unconnected books likely shows that the customer has an interest in the Moon, conflict in outer space, or injustice in a prison, all of which are touched on in some fashion in Heinlein’s book. (Incidentally, Heinlein’s book received the Hugo Award for best science fiction novel in 1967.) Making this obscure connection between the customer’s past purchases and Heinlein’s book content requires a significant computer analysis effort, as well as access to a huge database.
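
Amazon’s actual systems are proprietary and vastly more sophisticated, but the following toy Python sketch illustrates the underlying “customers who bought X also bought Y” idea behind such suggestions. The purchase histories are invented demonstration data.

```python
# A toy sketch of co-purchase-based recommendation: suggest unowned
# items that appear most often in baskets sharing an item with the
# customer. The purchase histories are invented demo data.

from collections import Counter

histories = [
    {"Full Moon", "The Moon is a Harsh Mistress"},
    {"Star Wars: The Empire Strikes Back", "The Moon is a Harsh Mistress"},
    {"Full Moon", "Star Wars: The Empire Strikes Back"},
]

def recommend(owned, histories):
    """Rank items the customer does not own by co-purchase frequency."""
    scores = Counter()
    for basket in histories:
        if owned & basket:                  # basket shares an owned item
            scores.update(basket - owned)   # count the unowned items
    return [item for item, _ in scores.most_common()]

print(recommend({"Full Moon"}, histories))
```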

The biggest global user of big data analysis is the US government, in the execution of the Global War on Terrorism (GWOT). The US National Security Agency (NSA) is at the forefront of detecting possible or likely terrorist attacks on the homeland. Its annual classified budget has been estimated at more than $15 billion, with the vast majority spent on collecting and analyzing all sorts of big data as part of the GWOT. What the NSA collects and how it conducts big data analysis is ultrasecret, but it is quite reasonable to assume that all appropriate AI techniques are used by its experts, many of whom, I expect, are also experts at conducting secret AI research. This is not a conspiracy theory on my part, but simply what any reasonable layperson should expect.

This section concludes my introduction to AI, which, although somewhat abbreviated, hopefully contained sufficient information to provide you with a reasonable background to start the study of specific AI concepts. That study begins in the next chapter.

Summary

I began the chapter with a historical overview of AI that started in ancient times and proceeded to modern times. This shows that mankind has thought about making machines to accomplish intelligent actions for a very long time. It is only in very recent times that computers have been developed with the capabilities to implement intelligent actions.

There was a brief discussion on the differences between the classical and modern approaches to AI development. In brief, the classical approach attempts to have computers mimic or simulate the human brain, whereas the modern approach simply takes advantage of a computer’s inherent speed and processing power to implement AI. I also defined additional terms, such as broad AI and narrow AI, and strong AI and weak AI.

The brief inspection of the nature of intelligence was presented to pique your curiosity and to get you thinking about how you might recognize whether intelligence is present in machines or animals. A brief section on reasoning followed, which included some examples to help you recognize reasoning when it is incorporated into AI applications.

I next presented a list of AI categories to help explain important and current AI R&D efforts. Only a few of the AI categories can be demonstrated in this book.

The chapter finished with a discussion on how AI influences modern society, especially when dealing with big data.
