This chapter lays a foundation for one of the most influential forces of modern times, the computer. A computer is an electronic device, operating under the control of instructions, which tell the machine what to do. It is capable of accepting data (input), processing data arithmetically and logically, producing output from the processing, and storing the results for future use. The chapter begins with the characteristics, evolution and various generations of computers. The discussion also explores the classification of computers and their features. The chapter concludes with an overview of basic computer units and computer applications.
After reading this chapter, you will be able to understand:
The characteristics of computers that make them an essential part of every technology
The evolution of computers, from the abacus to supercomputers
The advancements in technology that have changed the way computers operate, leading to more powerful, efficient and cheaper computers
The classification of computers into micro, mini, mainframe and supercomputers
The computer system, which includes components such as the Central Processing Unit (CPU) and I/O devices
The application of computers in various fields, which increases efficiency, thus, resulting in proper utilization of resources
At the dawn of civilization, people used fingers and pebbles for computing purposes. In fact, the Latin word digitus means finger and calculus means pebble, which gives a clue to the origin of early computing concepts. As civilization developed, computing needs also grew. The need for a mechanism to perform lengthy calculations led to the invention of, first, the calculator and then the computer.
The term computer is derived from the word compute, which means to calculate. A computer is an electronic machine devised for performing calculations and controlling operations that can be expressed either in logical or in numerical terms. In simple words, a computer is an electronic device that performs diverse operations with the help of instructions to process the data in order to achieve desired results. Although the application domain of a computer depends totally on human creativity and imagination, it covers a huge area of applications including education, industries, government, medicine, scientific research, law, and even music and arts.
Computers are one of the most influential forces available in modern times. Harnessing the power of computers enables relatively limited and fallible human capacities for memory, logical decision making, reaction and precision to be extended to almost infinite levels. Millions of complex calculations can be done in a fraction of a second; difficult decisions can be made with unerring accuracy for comparatively little cost. Computers are widely seen as instruments for future progress and as tools to achieve sustainability through improved access to information, with the help of video-conferencing and e-mail. Indeed, computers have left such an impression on modern civilization that we call this era the “information age”.
The human race developed computers so that it could perform intricate operations, such as calculation and data processing, or simply for entertainment. Today, much of the world's infrastructure runs on computers and it has profoundly changed our lives, mostly for the better. Let us discuss some of the characteristics of computers, which make them an essential part of every emerging technology and such a desirable tool in human development.
Although processing has become less tedious with the development of computers, it is still a time-consuming and expensive job. Sometimes, a program works properly for some period and then suddenly produces an error. This happens because of a rare combination of events or due to an error in the instruction provided by the user. Therefore, computer parts require regular checking and maintenance in order to give correct results. Furthermore, computers need to be installed in a dust-free place. Generally, some parts of computers get heated up due to heavy processing. Therefore, the ambient temperature of the computer system should be maintained.
A computer can only perform what it is programmed to do.
The computer needs well-defined instructions to perform any operation. Hence, computers are unable to give any conclusion without going through intermediate steps.
A computer's use is limited in areas where qualitative considerations are important. For instance, it can make plans based on situations and information, but it cannot foresee whether they will succeed.
The need for a device to do calculations along with the growth in commerce and other human activities explains the evolution of computers. Having the right tool to perform these tasks has always been important for human beings. In their quest to develop efficient computing devices, humankind developed many apparatuses. However, many centuries elapsed before technology was adequately advanced to develop computers.
In the beginning, when the task was simply counting or adding, people used either their fingers or pebbles along lines in the sand. To avoid having to carry sand and pebbles around, people in Asia Minor built a counting device called the abacus. This device allowed users to do calculations using a system of sliding beads arranged on a rack. The abacus was simple to operate and was used worldwide for centuries. In fact, it is still used in many countries (Figure 1.1).
Figure 1.1 Abacus
With the passage of time, humankind invented many computing devices, such as Napier's bones and the slide rule. It took many centuries, however, for the next significant advancement in computing devices. In 1642, a French mathematician, Blaise Pascal, invented the first functional automatic calculator. This brass rectangular box, also called a Pascaline, used eight movable dials to add numbers up to eight figures long (Figure 1.2).
Figure 1.2 Pascaline
In 1694, a German mathematician, Gottfried Wilhelm von Leibniz, extended Pascal's design to perform multiplication and division and to find square roots. This machine is known as the Stepped Reckoner. It was the first mass-produced calculating device, which was designed to perform multiplication by repeated addition. Like its predecessor, Leibniz's mechanical multiplier worked by a system of gears and dials. The only problem with this device was that it lacked mechanical precision in its construction and was not very reliable.
In 1896, Hollerith founded the Tabulating Machine Company, which later became IBM (International Business Machines). IBM developed numerous mainframes and operating systems, many of which are still in use today. For example, IBM co-developed OS/2 with Microsoft, which laid the foundation for Windows operating systems.
The real beginning of computers as we know them today, however, lay with an English mathematics professor, Charles Babbage. In 1822, he proposed a machine to compute mathematical tables by the method of differences, called the Difference Engine. Powered by steam and as large as a locomotive, the machine would have a stored program and could perform calculations and print the results automatically. However, Babbage never quite completed a fully functional Difference Engine, and in 1833 he quit working on it to concentrate on the Analytical Engine. The basic design of this engine included input devices in the form of perforated cards containing operating instructions and a “store” for a memory of 1,000 numbers of up to 50 decimal digits each. It also contained a control unit to allow processing of instructions in any sequence and output devices to produce printed results. Babbage borrowed the idea of punched cards to encode the machine's instructions from Joseph-Marie Jacquard's loom. Although the Analytical Engine was never constructed, it outlined the basic elements of a modern computer.
In 1889, Herman Hollerith, who worked for the US Census Bureau, also applied Jacquard's loom concept to computing. Unlike Babbage's idea of using perforated cards to instruct the machine, Hollerith's method used cards to store the data, which he fed into a machine that compiled the results mechanically (Figure 1.3).
Figure 1.3 Hollerith's Tabulator
The start of World War II produced a substantial need for computer capacity, especially for military purposes. One early success was the Mark I, which was built as a partnership between Howard Aiken of Harvard University and IBM in 1944. This electromechanical calculating machine used relays and electromagnetic components to replace mechanical components. In 1946, John Eckert and John Mauchly of the Moore School of Engineering at the University of Pennsylvania developed the Electronic Numerical Integrator and Calculator (ENIAC). This computer used electronic vacuum tubes to make the internal parts of the computer. It embodied almost all the components and concepts of today's high-speed, electronic computers. Later on, Eckert and Mauchly also proposed the development of the Electronic Discrete Variable Automatic Computer (EDVAC). It was the first electronic computer to use the stored program concept introduced by John von Neumann. It also had the capability of conditional transfer of control, that is, the computer could stop at any time and then resume operations. In 1949, at Cambridge University, a team headed by Maurice Wilkes developed the Electronic Delay Storage Automatic Calculator (EDSAC), which was also based on John von Neumann's stored program concept. This machine used mercury delay lines for memory and vacuum tubes for logic. The Eckert–Mauchly Corporation manufactured the Universal Automatic Computer (UNIVAC) in 1951 and its implementation marked the real beginning of the computer era.
In the 1960s, efforts to design and develop the fastest possible computer with the greatest capacity reached a turning point with the Livermore Advanced Research Computer (LARC), which had an access time of less than 1 μs (microsecond) and a total capacity of 100,000,000 words. During this period, the major computer manufacturers began to offer a range of capabilities and prices, as well as accessories such as card feeders, page printers and cathode ray tube displays. During the 1970s, the trend shifted towards a larger range of applications for cheaper computer systems. During this period, many business organizations adopted computers for their offices. The vacuum deposition of transistors became the norm and entire computer assemblies became available on tiny “chips”.
In the 1980s, Very Large Scale Integration (VLSI) design, in which hundreds of thousands of transistors were placed on a single chip, became increasingly common. The “shrinking” trend continued with the introduction of personal computers (PCs), which are programmable machines small enough and inexpensive enough to be purchased and used by individuals. Microprocessors equipped with the read-only memory (ROM), which stores constantly used and unchanging programs, performed an increased number of functions. By the late 1980s, some PCs were run by microprocessors that were capable of handling 32 bits of data at a time and processing about 4,000,000 instructions per second. By the 1990s, PCs became part of everyday life. This transformation was the result of the invention of the microprocessor, a processor on a single integrated circuit (IC) chip. The trend continued leading to the development of smaller and smaller microprocessors with a proportionate increase in processing powers. The computer technology continues to experience huge growth. Computer networking, electronic mail and electronic publishing are just a few applications that have grown in recent years. Advances in technologies continue to produce cheaper and more powerful computers, offering the promise that in the near future, computers or terminals will reside in most, if not all, homes, offices and schools.
The history of computer development is often discussed with reference to the different generations of computing devices. In computer terminology, the word generation is described as a stage of technological development or innovation. A major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices, characterizes each generation of computers. According to the type of “processor” installed in a machine, there are five generations of computers.
First-generation computers were vacuum tubes/thermionic valve-based machines. These computers used vacuum tubes for circuitry and magnetic drums for memory. A magnetic drum is a metal cylinder coated with magnetic iron oxide material on which data and programs can be stored. The input was based on punched cards and paper tape, and the output was in the form of printouts (Figure 1.4).
Figure 1.4 Vacuum Tube
First-generation computers relied on binary-coded language, also called machine language (the language of 0s and 1s), to perform operations and were able to solve only one problem at a time. Each machine was fed a different binary code and hence was difficult to program. This resulted in a lack of versatility and speed. In addition, to run on a different type of computer, instructions had to be rewritten or recompiled.
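The idea of machine language can be illustrated with a minimal sketch (not tied to any particular first-generation machine): every number or instruction a computer handles ultimately reduces to a fixed-width pattern of 0s and 1s.

```python
# A minimal sketch of why "machine language" means patterns of 0s and 1s:
# every value a computer stores is ultimately a fixed-width binary pattern.

def to_binary(n, width=8):
    """Return the value n as a fixed-width string of 0s and 1s."""
    return format(n, f"0{width}b")

print(to_binary(5))    # 00000101
print(to_binary(42))   # 00101010
```

A first-generation programmer had to write such patterns by hand for every instruction, which is why programming was so error-prone and machine-specific.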
Examples: ENIAC, EDVAC and UNIVAC.
Second-generation computers used transistors, which were superior to vacuum tubes. A transistor is made up of semiconductor material like germanium and silicon. It usually has three leads (see Figure 1.5) and performs electrical functions such as voltage, current or power amplification with low power requirements. Since a transistor is a small device, the physical size of computers was greatly reduced. Computers became smaller, faster, cheaper, energy efficient and more reliable than their predecessors. In second-generation computers, magnetic cores were used as the primary memory and magnetic disks as the secondary storage devices. However, they still relied on punched cards for the input and printouts for the output.
Figure 1.5 Transistor
One of the major developments of this generation includes the progress from machine language to assembly language. Assembly language uses mnemonics (abbreviations) for instructions rather than numbers, for example, ADD for addition and MULT for multiplication. As a result, programming became less cumbersome. Early high-level programming languages such as COBOL and FORTRAN also came into existence in this period.
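The advantage of mnemonics can be sketched with a toy illustration (the mnemonics ADD and MULT are taken from the text above, but this lookup table is hypothetical, not a real assembler): the programmer writes a readable name, and a translation layer maps it to the actual operation.

```python
# A toy illustration of how assembly mnemonics spare the programmer
# from raw numeric opcodes: each readable name maps to an operation.

OPERATIONS = {
    "ADD": lambda a, b: a + b,
    "MULT": lambda a, b: a * b,
}

def execute(mnemonic, a, b):
    """Look up a mnemonic and apply the corresponding operation."""
    return OPERATIONS[mnemonic](a, b)

print(execute("ADD", 2, 3))   # 5
print(execute("MULT", 2, 3))  # 6
```

A real assembler translates each mnemonic into its binary opcode rather than executing it directly, but the mapping principle is the same.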
Examples: PDP-8, IBM 1401 and IBM 7090.
The development of the integrated circuit, also called an IC, was the trait of the third-generation computers. An IC consists of a single chip (usually silicon) with many components such as transistors and resistors fabricated on it. ICs replaced several individually wired transistors. This development made computers smaller in size, reliable and efficient (Figure 1.6).
Figure 1.6 Integrated Circuit
Instead of punched cards and printouts, users interacted with third-generation computers through keyboards and monitors, and interfaced with the operating system. This allowed the device to run many different applications simultaneously with a central program that monitored the memory. For the first time, computers became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Examples: NCR 395 and B6500.
The fourth generation is an extension of third-generation technology. Although the technology of this generation is still based on the IC, computers have become far more accessible because of the development of the microprocessor (a circuit containing millions of transistors). The Intel 4004 chip, which was developed in 1971, took the IC one step further by locating all the components of a computer (CPU, memory and I/O controls) on a minuscule chip. A microprocessor is built on a single piece of silicon, known as a chip. It is about 0.5 cm along one side and no more than 0.05 cm thick.
The fourth-generation computers led to an era of Large Scale Integration (LSI) and VLSI technology. LSI technology allowed thousands of transistors to be constructed on one small slice of silicon material, whereas VLSI squeezed hundreds of thousands of components on to a single chip. Ultra Large Scale Integration (ULSI) increased that number to the millions. This way computers became smaller and cheaper than ever before (Figure 1.7).
Figure 1.7 Microprocessor
The fourth-generation computers became more powerful, compact, reliable and affordable. As a result, they gave rise to the PC revolution. During this period, magnetic core memories were substituted by semiconductor memories, which resulted in faster random access main memories. Moreover, secondary memories such as hard disks became more economical, smaller in size and bigger in capacity. The other significant development of this era was that these computers could be linked together to form networks, which eventually led to the development of the Internet. This generation also saw the development of Graphical User Interfaces (GUIs), the mouse and handheld devices. Despite many advantages, this generation required complex and sophisticated technology for the manufacturing of the CPU and the other components.
Examples: Apple II, Altair 8800 and CRAY-1.
The dream of creating a human-like computer that would be capable of reasoning and reaching a decision through a series of “what-if-then” analyses has existed since the beginning of computer technology. Such a computer would learn from its mistakes and possess the skill of experts. These are the objectives for creating the fifth generation of computers. The starting point for the fifth generation of computers was set in the early 1990s. Fifth-generation computers are still in the development stage. However, the expert system concept is already in use. An expert system is defined as a computer system that attempts to mimic the thought process and reasoning of experts in specific areas.
These days, computers are available in many sizes and types. Some computers can fit in the palm of the hand, while some can occupy the entire room. Computers also differ based on their data-processing abilities. Based on the physical size, performance and application areas, we can generally divide computers into four major categories: micro, mini, mainframe and supercomputers (Figure 1.8).
A microcomputer is a small, low-cost digital computer, which usually consists of a microprocessor, a storage unit, an input channel and an output channel, all of which may be on one chip inserted into one or several PC boards. The addition of a power supply and connecting cables, appropriate peripherals (keyboard, monitor, printer, disk drives and others), an operating system and other software programs can provide a complete microcomputer system. The microcomputer is generally the smallest of the computer family. Originally, these were designed for individual users only, but nowadays they have become powerful tools for many businesses that, when networked together, can serve more than one user. IBM-PC Pentium 100, IBM-PC Pentium 200 and Apple Macintosh are some examples of microcomputers. Microcomputers include desktop, laptop and hand-held models such as Personal Digital Assistants (PDAs).
Desktop Computer: The desktop computer, also known as the PC, is principally intended for stand-alone use by an individual. These are the most common type of microcomputer. They typically consist of a system unit, a display monitor, a keyboard, an internal hard disk for storage and other peripheral devices. The main reason behind the importance of PCs is that they are not very expensive for individuals or small businesses. Some of the major PC manufacturers are Apple, IBM, Dell and Hewlett-Packard (Figure 1.9).
Figure 1.9 Desktop Computer
Laptop: A laptop is a portable computer that a user can carry around. Since the laptop resembles a notebook, it is also known as the notebook computer. Laptops are small computers enclosing all the basic features of a normal desktop computer. The biggest advantage of laptops is that they are lightweight and one can use them anywhere and at any time, especially when travelling. Moreover, they do not need any external power supply, as a rechargeable battery is completely self-contained in them. However, they are expensive compared to desktop computers (Figure 1.10).
Figure 1.10 Laptop
Hand-held Computers: A hand-held computer such as a PDA is a portable computer that can conveniently be stored in a pocket and used while the user is holding it. PDAs are essentially small portable computers, only slightly bigger than common calculators. A PDA user generally uses a pen or electronic stylus, instead of a keyboard, for input. As shown in Figure 1.11, the monitor is very small and is the only apparent form of output. Since these computers fit in the palm of the hand, they are also known as palmtop computers. Hand-held computers usually have no disk drive; rather, they use small cards to store programs and data. However, they can be connected to a printer or a disk drive to generate output or store data. They have limited memory and are less powerful than desktop computers. Some examples of hand-held computers are the Apple Newton, Casio Cassiopeia and Franklin eBookMan.
In the early 1960s, Digital Equipment Corporation (DEC) started shipping its PDP series computer, which the press described and referred to as minicomputers. A minicomputer is a small digital computer, which normally is able to process and store less data than a mainframe but more than a microcomputer, while doing so less rapidly than a mainframe but more rapidly than a microcomputer. It is about the size of a two-drawer filing cabinet. Generally, these computers are used as desktop devices that are often connected to a mainframe in order to perform the auxiliary operations (Figure 1.12).
Figure 1.12 Minicomputer
A minicomputer (sometimes called a mid-range computer) is designed to meet the computing needs of several people simultaneously in a small- to medium-sized business environment. It is capable of supporting from four to about 200 simultaneous users. It serves as a centralized storehouse for a cluster of workstations or as a network server. Minicomputers are usually multi-user systems, so they are used in interactive applications in industries, research organizations, colleges and universities. They are also used for real-time controls and engineering design work. Some of the widely used minicomputers are the PDP 11, IBM (8000 series) and VAX 7500.
A mainframe is an ultra-high performance computer made for high-volume, processor-intensive computing. It consists of a high-end computer processor, with related peripheral devices, capable of supporting large volumes of data processing, high-performance online transaction processing, and extensive data storage and retrieval. Normally, it is able to process and store more data than a minicomputer and far more than a microcomputer. Moreover, it is designed to perform at a faster rate than a minicomputer and at an even faster rate than a microcomputer. Mainframes are the second largest (in capability and size) of the computer family, the largest being the supercomputers. However, mainframes can usually execute many programs simultaneously at a high speed, whereas supercomputers are designed for a single process (Figure 1.13).
Figure 1.13 Mainframe
The mainframe allows its users to maintain a large amount of data storage at a centralized location and to access and process these data from different computers located at different locations. It is typically used by large businesses and for scientific purposes. Some examples of the mainframe are IBM's ES000, VAX 8000 and CDC 6600.
Supercomputers are special-purpose machines designed to maximize the number of floating point operations per second (FLOPS); a computer performing below one gigaFLOPS is generally not considered a supercomputer. A supercomputer has the highest processing speed available at a given time for solving scientific and engineering problems. Essentially, it contains a number of CPUs that operate in parallel to make it faster. Its processing speed typically lies in the range of 400–10,000 MFLOPS (millions of floating point operations per second). Due to this capability, supercomputers help in many applications, including information retrieval and computer-aided design (Figure 1.14).
Figure 1.14 Supercomputer
A supercomputer can process a great deal of data and make extensive calculations very quickly. It can resolve complex mathematical equations in a few hours, which would have taken many years when performed using paper and pencil or a hand calculator. It is the fastest, costliest and most powerful computer available today. Typically, supercomputers are used to solve multivariate mathematical models of physical processes, such as aerodynamics, meteorology and plasma physics. They are also required by military strategists to simulate defence scenarios. Cinematic specialists use them to produce sophisticated movie animations. Scientists build complex models and simulate them on supercomputers. However, a supercomputer has limited broad-spectrum use because of its price and limited market. The largest commercial uses of supercomputers are in the entertainment/advertising industry. CRAY-3, Cyber 205 and PARAM are some well-known supercomputers.
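The FLOPS arithmetic used above can be checked with a back-of-the-envelope sketch; the workload size below is purely illustrative, not a benchmark of any real machine.

```python
# A back-of-the-envelope sketch of FLOPS arithmetic: converting MFLOPS
# to GFLOPS and estimating how long a fixed workload takes at a given rate.

MFLOPS = 1_000_000          # floating point operations per second in 1 MFLOPS
GFLOPS = 1_000_000_000      # floating point operations per second in 1 GFLOPS

def seconds_for(operations, rate_flops):
    """Time (in seconds) to perform a given number of operations."""
    return operations / rate_flops

# 10,000 MFLOPS (the top of the range quoted above) is 10 GFLOPS.
rate = 10_000 * MFLOPS
print(rate / GFLOPS)                    # 10.0
print(seconds_for(5 * GFLOPS, rate))    # 0.5 (five billion operations)
```

This also makes the scale of the text's claim concrete: a problem needing trillions of operations, hopeless by hand, completes in seconds at supercomputer rates.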
In 2003, India developed the PARAM Padma supercomputer, which marks an important step towards high-performance computing. The PARAM Padma was developed by India's Centre for Development of Advanced Computing (C-DAC) and promises processing speeds of up to 1 teraflop (1 trillion floating-point operations per second).
A computer can be viewed as a system, which consists of a number of interrelated components that work together with the aim of converting data into information. In a computer system, processing is carried out electronically, usually with little or no intervention from the user.
The general perception of people regarding the computer is that it is an “intelligent thinking machine”. However, this is not true. Every computer needs to be instructed exactly what to do and how to do it. The instructions given to computers are called programs. Without programs, computers would be useless. The physical parts that make up a computer (the CPU, input, output and storage units) are known as hardware. Any hardware device connected to the computer, or any part of the computer outside the CPU and working memory, is known as a peripheral. Some examples of peripherals are keyboards, mice and monitors.
There are several computer systems in the market with a wide variety of makes, models and peripherals. In general, a computer system comprises the following components:
Figure 1.15 Components of a Computer System
Central Processing Unit: The CPU, also known as a processor, is the brain of the computer system that processes data (input) and converts it into meaningful information (output). It is referred to as the administrative section of the computer system that interprets the data and instructions, coordinates the operations, and supervises the instructions. The CPU works with data in discrete form, that is, either 1 or 0. It counts, lists, compares and rearranges the binary digits of data in accordance with the detailed program instructions stored within the memory. Eventually, the results of these operations are translated into characters, numbers and symbols that can be understood by the user. The CPU itself has three parts: the arithmetic logic unit (ALU), the control unit and the registers.
Note: The circuits necessary to create a CPU for a PC are fabricated on a microprocessor.
Input, Output and Storage Unit: The user must enter instructions and data into the computer system before any operation can be performed on the given data. Similarly, after processing the data, the information must go out from the computer system to the user. For this, every computer system incorporates the I/O unit that serves as a communication medium between the computer system and the user.
An input unit accepts instructions and data from the user with the help of input devices such as keyboard, mouse, light pen, etc. Since the data and instructions entered through different input devices will be in different form, the input unit converts them into the form that the computer can understand. After this, the input unit supplies the converted instructions and data to the computer for further processing.
The output unit performs just the opposite of the input unit. It accepts the output (which is in machine-coded form) produced by the computer, converts it into a user-understandable form and supplies the converted results to the user with the help of an output device such as a printer, monitor or plotter.
In addition, a computer system incorporates a storage unit to store the input entered through the input unit before processing starts and to store the results produced by the computer before supplying them to the output unit. The storage unit of a computer comprises two types of memory/storage: primary and secondary. The primary memory, also called the main memory, is the part of a computer that holds the instructions and data currently being processed by the CPU, the intermediate results produced during the course of calculations and the recently processed data. While the instructions and data remain in the main memory, the CPU can access them directly and quickly. However, the primary memory is quite expensive and has a limited storage capacity.
Due to the limited size of the primary memory, a computer employs the secondary memory, which is extensively used for storing data and instructions. It supplies the stored information to the other units of the computer as and when required. It is less expensive and has higher storage capacity than the primary memory. Some commonly used secondary storage devices are floppy disks, hard disks and tape drives (Figure 1.16).
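The trade-off between primary and secondary storage can be pictured with a simple analogy (this is an analogy, not a hardware model): an in-memory variable stands in for fast, volatile main memory, while a file on disk stands in for slower but larger and persistent secondary storage. The file name used here is illustrative.

```python
# An analogy for the storage hierarchy: a Python dict plays the role of
# primary (main) memory, a file on disk plays the role of secondary storage.
import os
import tempfile

primary = {"x": 42}              # "main memory": direct, fast access
print(primary["x"])              # 42

# "Secondary storage": write the value out, then read it back later.
path = os.path.join(tempfile.gettempdir(), "value.txt")  # illustrative name
with open(path, "w") as f:
    f.write(str(primary["x"]))
with open(path) as f:
    print(int(f.read()))         # 42, recovered from disk
```

The dictionary lookup is direct and fast but vanishes when the program ends, whereas the file survives but requires the slower write-then-read round trip, mirroring the roles the text assigns to primary and secondary memory.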
A computer performs three basic steps to complete any task: input, processing and output. A task is assigned to a computer in a set of step-by-step instructions, which is known as a program. These instructions tell the computer what to do with the input in order to produce the required output. A computer functions in the following manner:
Step 1: The computer accepts the input. The computer input is whatever is entered or fed into a computer system. The input can be supplied by the user (such as by using a keyboard) or by another computer or device (such as a diskette or CD-ROM). Some examples of input include the words and symbols in a document, numbers for a calculation, instructions for completing a process, and so on.

Step 2: The computer processes the data. During this stage, the computer follows the instructions using the data that have been input. Examples of processing include calculations, sorting lists of words or numbers and modifying documents according to user instructions.

Step 3: The computer produces output. Computer output is the information that has been produced by a computer. Some examples of computer output include reports, documents and graphs. Output can be in several formats, such as printouts, or displayed on the screen (Figure 1.17).
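The three steps above can be sketched as a minimal program (the choice of sorting and summing as the "processing" stage is purely illustrative):

```python
# A minimal sketch of the input -> processing -> output cycle,
# using an in-memory list in place of a keyboard or file.

def run(data):
    # Step 1: accept the input (here, a list of numbers).
    raw = list(data)
    # Step 2: process the data (here, sort it and compute a total).
    ordered = sorted(raw)
    total = sum(raw)
    # Step 3: produce the output.
    return ordered, total

print(run([3, 1, 2]))   # ([1, 2, 3], 6)
```

Real programs differ only in scale: the input may come from a keyboard or network, and the output may go to a printer or screen, but the same three-stage pattern holds.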
In the last few decades, computer technology has revolutionized the businesses and other aspects of human life all over the world. Practically, every company, large or small, is now directly or indirectly dependent on computers for data processing. Computer systems also help in the efficient operation of railway and airway reservation, hospital records, accounts, electronic banking and so on. Computers not only save time, but also save paper work. Some of the areas where computers are being used are listed below.