In the beginning of civilisation, people used fingers and pebbles for computing purposes. In fact, the Latin word digitus means finger and calculus means pebble, which gives a clue to the origin of early computing concepts. As civilisation developed, computing needs also grew, and the need for a mechanism to perform lengthy calculations led to the invention of the first calculator, and later of computers.
The term computer is derived from the word compute, which means to calculate. A computer is an electronic machine, devised for performing calculations and controlling operations that can be expressed either in logical or numerical terms. In simple words, a computer is an electronic device that performs diverse operations with the help of instructions to process the data in order to achieve desired results. Although the application domain of a computer depends totally on human creativity and imagination, it covers a huge area of applications including education, industries, government, medicine, scientific research, law, and even music and arts.
Computers are one of the most influential forces available in modern times. Harnessing the power of computers enables relatively limited and fallible human capacities for memory, logical decision-making, reaction, and perfection to be extended to almost infinite levels. Millions of complex calculations can be done in a fraction of the time previously required; difficult decisions can be made with unerring accuracy for comparatively little cost. Computers are widely seen as instruments for future progress and as tools to achieve sustainability by way of improved access to information with the help of video conferencing and e-mail. Indeed, computers have left such an impression on modern civilisation that we call this era the “information age”.
The human race developed computers so that it could perform intricate operations, such as calculation and data processing, or simply for entertainment. Today, much of the world's infrastructure runs on computers, and they have profoundly changed our lives, mostly for the better. Let us discuss some of the characteristics of computers, which make them an essential part of every emerging technology and such a desirable tool in human development.
Although processing has become less tedious with the development of computers, it is still a time-consuming and expensive job. Sometimes, a program works properly for some period and then suddenly produces an error. This happens because of a rare combination of events or due to an error in the instructions provided by the user. Therefore, computer parts require regular checking and maintenance in order to give correct results. Furthermore, computers need to be installed in a dust-free place. Generally, some parts of a computer heat up due to heavy processing, so the ambient temperature of the computer system should be maintained.
THINGS TO REMEMBER
Limitations of a Computer
The need for calculations, which grew with commerce and other human activities, explains the evolution of computers. Computers were preceded by many devices that mankind developed for its computing requirements. However, many centuries elapsed before technology was sufficiently advanced to develop computers. In order to understand the recent impact of computers, it is worthwhile to have a look at their evolution.
In ancient times, people used fingers to perform calculations such as addition and subtraction. Even today, simple calculations are done on fingers. Soon, mankind realised that it would be easier to do calculations with pebbles than with fingers. Consequently, pebbles were used to represent numbers, which led to the development of the sand table, known to be the earliest device for computation.
A sand table consisted of three grooves in the sand, with a maximum of 10 pebbles in each groove. To increase the count by one, a pebble was added to the right-hand groove. When ten pebbles collected in the right groove, they were removed and one pebble was added to the adjacent left groove. Later, sand tables were modified extensively, and these modifications resulted in a device known as the abacus.
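The carry rule described above is exactly positional, base-10 counting. A minimal sketch in Python (the function name and the list representation of grooves are my own, chosen for illustration only):

```python
def sand_table_increment(grooves):
    """Increment a sand-table count by one pebble.

    `grooves` is a list of pebble counts, least significant groove first
    (the right-hand groove of the text). When a groove reaches 10 pebbles,
    it is emptied and one pebble is carried to the next groove on the left.
    """
    grooves = list(grooves)
    i = 0
    grooves[i] += 1
    while grooves[i] == 10:          # groove full: empty it and carry
        grooves[i] = 0
        if i + 1 == len(grooves):
            grooves.append(0)        # grow a new groove if needed
        grooves[i + 1] += 1
        i += 1
    return grooves

# 99 pebbles (9 units, 9 tens) plus one pebble carries twice: 99 + 1 = 100
print(sand_table_increment([9, 9, 0]))  # -> [0, 0, 1]
```

The same carry mechanism, with beads instead of pebbles, survives in the abacus and, conceptually, in every positional number system a computer uses.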
Figure 1.1 Sand Table Showing 125
Figure 1.2 Abacus
The abacus emerged around 5000 years ago in Asia Minor, and in some parts of the world it is still in use. The word ‘abacus’ was derived from the Arabic word ‘abaq’, which means ‘dust’. The first abacus was simply a portable sand table: a board covered with dust. An abacus consists of a wooden frame divided into two parts: upper and lower. The upper part contains two beads per wire and the lower part contains five. In the upper part, a raised bead denotes 0, whereas a lowered bead denotes the digit 5. In the lower part, a raised bead stands for 1 and a lowered bead for 0. This device allows users to do computations using a system of sliding beads arranged on a rack; manipulating the beads on the wires carries out arithmetic operations.
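The bead values described above can be made concrete with a small sketch; the function below (the name is mine, not standard terminology) computes the digit shown on a single wire under the counting scheme the text describes:

```python
def rod_value(upper_lowered, lower_raised):
    """Digit shown on one abacus wire.

    Each of the two upper beads counts 5 when lowered; each of the
    five lower beads counts 1 when raised (as described in the text).
    """
    assert 0 <= upper_lowered <= 2 and 0 <= lower_raised <= 5
    return 5 * upper_lowered + lower_raised

# One upper bead lowered and three lower beads raised: 5 + 3 = 8
print(rod_value(1, 3))  # -> 8
```

Reading each wire this way, and treating adjacent wires as adjacent decimal places, reproduces ordinary positional notation.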
In 1614, a Scottish mathematician, John Napier, made a more sophisticated computing machine called Napier bones. This was a small instrument made of 10 rods on which the multiplication table was engraved. It was made of strips of ivory bone, hence the name Napier bones. The device enabled fast multiplication, provided one of the numbers had only one digit (for example, 6 × 6745). Incidentally, Napier also played a key role in the development of logarithms, which stimulated the invention of the ‘slide rule’, a device that substituted the addition of logarithms for multiplication. This was a remarkable invention, as it enabled the transformation of multiplication and division into simple addition and subtraction.
Figure 1.3 Napier Bones
The invention of logarithms influenced the development of another famous invention known as the slide rule. In AD 1620, the first slide rule came into existence. It was jointly devised by two British mathematicians, Edmund Gunter and William Oughtred. It was based on the principle that actual distances from the starting point of the rule are directly proportional to the logarithms of the numbers printed on the rule. The slide rule consists of two sets of scales joined together, with a marginal space between them. This space allows the free movement of the slide in the groove of the rule.
Figure 1.4 Slide Rule
The suitable alignment of the two scales enabled the slide rule to perform multiplication and division by a method of addition and subtraction.
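The principle is easy to verify numerically: since log(a × b) = log a + log b, adding lengths proportional to logarithms multiplies the underlying numbers. A brief sketch (the function names are mine):

```python
import math

def slide_rule_multiply(a, b):
    """Multiply the way a slide rule does: add the logarithms of the
    two factors, then take the antilogarithm of the sum."""
    return 10 ** (math.log10(a) + math.log10(b))

def slide_rule_divide(a, b):
    """Divide by subtracting logarithms instead of adding them."""
    return 10 ** (math.log10(a) - math.log10(b))

print(round(slide_rule_multiply(6, 7), 6))  # -> 42.0
print(round(slide_rule_divide(84, 2), 6))   # -> 42.0
```

On a physical rule, the two additions of scale lengths stand in for the log and antilog steps, which is why the device reads out products and quotients directly.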
In 1623, Wilhelm Schickard invented the ‘calculating clock’, which could add and subtract, and indicated overflow by ringing a bell. It later helped in the evolution of the Pascaline. In AD 1642, the French mathematician, scientist, and philosopher Blaise Pascal invented the first functional automatic calculator. It had a complex arrangement of wheels, gears, and windows for the display of numbers. It was operated by a series of dials attached to wheels that had the numbers zero to nine on their circumference. When a wheel made a complete turn, it advanced the wheel to its left by one position. Indicators above the dials displayed the correct answer. However, the usage of this device was limited to addition and subtraction only.
Figure 1.5 Pascaline
In 1694, the German mathematician Gottfried Wilhelm von Leibniz extended Pascal's design to perform multiplication, division, and the extraction of square roots. This machine is known as the stepped reckoner. It was the first mass-produced calculating device, designed to perform multiplication by repeated addition. The stepped reckoner did not make use of interconnected gears; instead, it was operated by a cylinder of stepped teeth. The only problem with this device was that it lacked mechanical precision in its construction and was not very reliable.
Figure 1.6 Stepped Reckoner
Joseph Marie Jacquard, a French textile weaver, used the principle of the weaving process to represent the two digits of the binary system. Jacquard took a large step in the development of computers when he developed punched cards to increase rug production. In 1801, Jacquard invented a power loom with an automatic card reader, known as the punched card machine. Jacquard's idea of using punched cards provided an effective means of communicating with machines. He automated the weaving process by placing punched cards between the needles and the thread. The presence or absence of a hole represented the two digits of the binary system, which is the base for all modern digital computers.
Figure 1.7 Punch Card
Charles Babbage, a professor of mathematics, devised a calculating machine known as the difference engine in 1822, which could be used to mechanically generate mathematical tables. The difference engine can be viewed as a hugely complex abacus. It was intended to solve differential equations as well. However, Babbage never quite made a fully functional difference engine and in 1833, he quit working on this machine to concentrate on the analytical engine.
The analytical engine is considered to be the first general-purpose programmable computer. Babbage's innovation in the design of the analytical engine made it possible to test the sign of a computed number and take one course of action if the sign was positive, and another if it was negative. Babbage also designed this device to advance or reverse the flow of punched cards to permit branching to any desired instruction within a program. This was the fundamental difference between the analytical engine and the difference engine. Lady Ada Lovelace helped him in the development of the analytical engine. She not only helped Babbage with financial aid, but, being a good mathematician, also wrote articles and programs for the proposed machine. Due to her contributions, she is known as the ‘first programmer’. Babbage never completed the analytical engine either, but his proposal for the device anticipated the basic elements of the modern computer: input/output, storage, processor, and control unit.
Herman Hollerith invented the punched-card tabulating machine to process the data collected in the United States' census. This electromechanical machine was able to read the information on the cards and process it electrically. It consisted of a tabulator, a sorter with compartments electrically controlled by the tabulator's counters, and a device used to punch data onto cards. The tabulator could read the presence or absence of holes in the cards by using spring-mounted pins that passed through the holes to make electrical connections. In 1896, Hollerith founded the Tabulating Machine Company, which was later renamed IBM (International Business Machines).
FACT FILE
What's in the Name?
In 1896, Hollerith founded the Tabulating Machine Company, which was later renamed IBM (International Business Machines). IBM developed numerous mainframes and operating systems, many of which are still in use today. For example, IBM co-developed OS/2 with Microsoft, which laid the foundation for Windows operating systems.
Figure 1.8 Hollerith's Tabulator
In the process of the development of computers, many scientists and engineers made significant advances.
Before discussing various generations of computers, let us discuss some well-known computers of the past, which are considered to be the predecessors of modern computers.
From 1937 to 1944, the American mathematician Howard Aiken, under the sponsorship of IBM, developed MARK-I. It was essentially a serial collection of electromechanical calculators and had many similarities to Babbage's analytical machine. This electromechanical calculating machine used relays and electromagnetic components to replace mechanical components. MARK-I was capable of performing addition, subtraction, division, multiplication, and table reference. However, it was extremely slow, noisy, and bulky (approximately 50 feet long, 8 feet high, and weighing 5 tons).
In 1939, John Vincent Atanasoff and Clifford Berry formulated the idea of using the binary number system to simplify the construction of an electronic calculator. By the end of 1939, they had built the first electronic computer, named ABC (Atanasoff-Berry Computer). It is considered the first computing machine to introduce the ideas of binary arithmetic, regenerative memory, and logic circuits. This computer used electronic vacuum tubes, and its circuitry was based on George Boole's Boolean algebra.
In 1944, the British mathematician Alan Mathison Turing, along with some colleagues, created a computer named Colossus, which comprised 1800 vacuum tubes. It was one of the world's earliest working programmable electronic digital computers. Colossus was a special-purpose machine suited to a narrow range of tasks (for example, it was not capable of performing decimal multiplications). Although Colossus was built as a special-purpose computer, it proved flexible enough to be programmed to execute a variety of different routines.
In 1946, John Eckert and John Mauchly of the Moore School of Engineering at the University of Pennsylvania developed ENIAC, or Electronic Numerical Integrator and Calculator. Like the ABC, this computer used electronic vacuum tubes for its internal parts. It embodied almost all the components and concepts of today's high-speed electronic digital computers. The machine could discriminate the sign of a number, compare quantities for equality, add, subtract, multiply, divide, and extract square roots. ENIAC consisted of 18000 vacuum tubes, required around 160 kW of electricity, and weighed nearly 30 tons. It could compute at speeds 1000 times that of MARK-I, but had a limited amount of space to store and manipulate information.
John Eckert and John Mauchly also proposed the development of the Electronic Discrete Variable Automatic Computer (EDVAC). Although the conceptual design for EDVAC was completed by 1946, it came into existence only in 1949. The EDVAC was the first electronic computer to use the stored program concept introduced by John von Neumann. It also had the capability of conditional transfer of control, that is, the computer could be stopped at any time and then resumed. EDVAC contained approximately 4000 vacuum tubes and 10000 crystal diodes.
EDSAC, or Electronic Delay Storage Automatic Calculator, was also based on John von Neumann's stored program concept. Work began on EDSAC in 1946 at Cambridge University, by a team headed by Maurice Wilkes. In 1949, the first successful program was run on this machine. EDSAC used mercury-filled delay lines for memory and vacuum tubes for logic; its 3000 vacuum valves were arranged on 12 racks. It could carry out only 650 instructions per second. A program was fed into the machine via a sequence of holes punched into a paper tape. The machine occupied a room measuring 5 metres by 4 metres.
UNIVAC, or Universal Automatic Computer, was the first commercially available electronic computer. It was also the first general-purpose computer, designed to handle both numeric and textual information. The Eckert-Mauchly Corporation manufactured it in 1951, and its introduction marked the real beginning of the computer era. UNIVAC performed its operations in 120 to 3600 microseconds. Magnetic tapes were used as input and output media, at a speed of around 13000 characters per second. The machine occupied an area of 25 feet by 50 feet and contained 5600 tubes, 18000 crystal diodes, and 300 relays. The UNIVAC was used for general-purpose computing with large amounts of input and output.
The history of computer development is often discussed with reference to the different generations of computing devices. In computer terminology, the word generation describes a stage of technological development or innovation. Each generation of computer is characterised by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, and more efficient and reliable devices. According to the technology used, there are five generations of computers, which are discussed in the following sections.
First generation computers were vacuum tubes/thermionic valve based machines. These computers used vacuum tubes for circuitry and magnetic drums for memory. A magnetic drum is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Input was based on punched cards and paper tape and output was displayed in the form of printouts.
First generation computers relied on binary-coded language (the language of 0s and 1s) to perform operations and were able to solve only one problem at a time. Each machine was fed different binary codes and hence was difficult to program. This resulted in a lack of versatility and speed. In addition, instructions had to be rewritten or recompiled to run on different types of computers.
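What "the language of 0s and 1s" means can be seen directly in Python, where any number or character reduces to a pattern of bits:

```python
# Everything a first generation machine handled had to be expressed
# as patterns of 0s and 1s. Python's built-ins make the idea visible:
n = 45
print(bin(n))            # -> 0b101101
print(format(n, '08b'))  # -> 00101101 (the same value, eight bits wide)

# Characters are numbers too: 'A' is 65, i.e. 01000001 in binary
print(format(ord('A'), '08b'))  # -> 01000001

# Converting back: interpret a string of bits as a decimal value
print(int('101101', 2))  # -> 45
```

Programmers of first generation machines had to write every instruction and every datum in such bit patterns by hand, which is exactly why programming them was so error-prone.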
Examples: ENIAC, EDVAC, and UNIVAC.
Figure 1.9 Vacuum Tube
Second generation computers used transistors, which were superior to vacuum tubes. A transistor is made of semiconductor material such as germanium or silicon. It usually has three leads (see Figure 1.10) and performs electrical functions such as voltage, current, or power amplification with low power requirements. Since the transistor is a small device, the physical size of computers was greatly reduced. Computers became smaller, faster, cheaper, more energy-efficient, and more reliable than their predecessors. In second generation computers, magnetic cores were used as primary memory and magnetic disks as secondary storage devices. However, they still relied on punched cards for input and printouts for output.
One of the major developments of this generation includes the progress from machine language to assembly language. Assembly language used mnemonics (abbreviations) for instructions rather than numbers, for example, ADD for addition and MULT for multiplication. As a result, programming became less cumbersome. Early high-level programming languages such as COBOL and FORTRAN also came into existence in this period.
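The relationship between mnemonics and machine code can be sketched with a toy assembler. Note that the opcode numbers and instruction format below are invented purely for illustration; they do not belong to any real machine:

```python
# A toy assembler: each mnemonic stands for a numeric opcode, so a
# human-readable program assembles into the raw numbers the machine runs.
# The opcode values and the (opcode << 8 | operand) word layout are
# hypothetical, chosen only to illustrate the idea.
OPCODES = {'LOAD': 0x01, 'ADD': 0x02, 'MULT': 0x03, 'STORE': 0x04}

def assemble(program):
    """Translate a list of (mnemonic, operand) pairs into machine words."""
    return [(OPCODES[op] << 8) | operand for op, operand in program]

source = [('LOAD', 10), ('ADD', 11), ('STORE', 12)]
print([hex(word) for word in assemble(source)])
# -> ['0x10a', '0x20b', '0x40c']
```

A real second generation assembler did essentially this translation, which is why writing ADD instead of its numeric opcode made programming so much less cumbersome without changing what the machine actually executed.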
Examples: PDP-8, IBM 1401 and IBM 7090.
Figure 1.10 Transistor
The development of the integrated circuit was the trait of the third generation computers. Also called an IC, an integrated circuit consists of a single chip (usually silicon) with many components such as transistors and resistors fabricated on it. Integrated circuits replaced several individually wired transistors. This development made computers smaller in size, reliable, and efficient.
Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors, and interfaced with an operating system. This allowed the device to run many different applications at one time, with a central program that monitored the memory. For the first time, computers became accessible to a mass audience because they were smaller and cheaper than their predecessors.
Examples: NCR 395 and B6500.
Figure 1.11 Integrated Circuit
The fourth generation is an extension of third generation technology. Although the technology of this generation is still based on the integrated circuit, computers have been made far more widely available by the development of the microprocessor (circuits containing millions of transistors). The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. A microprocessor is built onto a single piece of silicon, known as a chip. It is about 0.5 cm along one side and no more than 0.05 cm thick.
The fourth generation computers led to an era of Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) technology. LSI technology allowed thousands of transistors to be constructed on one small slice of silicon material whereas VLSI squeezed hundreds of thousands of components on to a single chip. Ultra-large scale integration (ULSI) increased that number into millions. This way computers became smaller and cheaper than ever before.
Fourth generation computers became more powerful, compact, reliable, and affordable. As a result, they gave rise to the personal computer (PC) revolution. During this period, magnetic core memories were replaced by semiconductor memories, which resulted in faster random access main memories. Moreover, secondary memories such as hard disks became economical, smaller, and larger in capacity. Another significant development of this era was that these computers could be linked together to form networks, which eventually led to the development of the Internet. This generation also saw the development of GUIs (graphical user interfaces), the mouse, and handheld devices. Despite many advantages, this generation required complex and sophisticated technology for the manufacturing of the CPU and other components.
Examples: Apple II, Altair 8800, and CRAY-1.
Figure 1.12 Microprocessor
The dream of creating a human-like computer capable of reasoning and reaching a decision through a series of “what-if-then” analyses has existed since the beginning of computer technology. Such a computer would learn from its mistakes and possess the skill of experts. These are the objectives for creating the fifth generation of computers, whose starting point is usually set in the early 1990s. Fifth generation computers are still under development. However, the expert system concept is already in use. An expert system is a computer information system that attempts to mimic the thought process and reasoning of experts in specific areas. Three characteristics can be identified with fifth generation computers.
These days, computers are available in many sizes and types. You can have a computer that fits in the palm of your hand or one that occupies an entire room; some computers serve a single user, while others can be used by hundreds of users simultaneously. Computers also differ in their data processing abilities. Hence, computers can be classified according to purpose, data handling, and functionality.
Computers are designed for different purposes. They can be used either for general purposes or for specific purposes.
Figure 1.13 Classification of Computers
A general-purpose computer, as the name suggests, is designed to perform a range of tasks. These computers have the ability to store numerous programs and can be used for various applications, ranging from scientific to business purposes. Even though such computers are versatile, they generally lack speed and efficiency. The computers that you use in schools and homes are general-purpose computers.
These computers are designed to handle a specific problem or to perform a single specific task. A set of instructions for the specific task is built into the machine. Hence, they cannot be used for other applications unless their circuits are redesigned, that is, they lack versatility. However, being designed for specific tasks, they can provide results very quickly and efficiently. Such computers are used for airline reservations, satellite tracking, and air traffic control.
Different types of computers process the data in a different manner. According to the basic data handling principle, computers can be classified into three categories: analog, digital, and hybrid.
A computing machine that works on the principle of measuring, in which the measurements obtained are translated into the desired data, is known as an analog computer. Modern analog computers usually employ electrical parameters, such as voltages, resistances, or currents, to represent the quantities being manipulated. Such computers do not deal directly with numbers. They measure continuous physical magnitudes (such as temperature, pressure, and voltage), which are analogous to the numbers under consideration. For example, a petrol pump may contain an analog computer that converts the flow of pumped petrol into two measurements: the quantity of petrol and the price for that quantity.
Analog computers are used for scientific and engineering purposes. One of the characteristics of these computers is that they give approximate results, since they deal with quantities that vary continuously. Their main strength is that they are very fast in operation, as all the calculations are done in ‘parallel mode’, and it is very easy to obtain graphical results directly. However, the accuracy of analog computers is limited.
A computer that operates with information, numerical or otherwise, represented in digital form is known as a digital computer. Such computers process data (including text, sound, graphics, and video) as digital values (0s and 1s). In digital computers, analog quantities must be converted into digital quantities before processing; in this case, the output will also be digital. If analog output is desired, the digital output has to be converted into an analog quantity. The components that perform these conversions are essential parts or peripherals of the digital computer.
Digital computers can give the results with more accuracy at a faster rate. The accuracy of such computers is limited only by the size of their registers and memory. The desktop PC at your home is a classic example of digital computer.
A hybrid computer incorporates the measuring feature of an analog computer and the counting feature of a digital computer. For computational purposes, these computers use analog components, and for the storage of intermediate results, digital memories are used. In order to combine the powers of analog and digital techniques, hybrid computers make extensive use of analog-to-digital and digital-to-analog converters. Such computers are broadly used for scientific applications, various fields of engineering, and industrial control processes.
Based on physical size, performance and application areas, we can divide computers generally into four major categories: micro, mini, mainframe, and super computers.
A micro computer is a small, low cost digital computer, which usually consists of a microprocessor, a storage unit, an input channel, and an output channel, all of which may be on one chip inserted into one or several PC boards. The addition of a power supply and connecting cables, appropriate peripherals (keyboard, monitor, printer, disk drives, etc.), an operating system and other software programs can provide a complete micro computer system. The micro computer is generally the smallest of the computer family. Originally, they were designed for individual users only, but nowadays they have become powerful tools for many businesses that, when networked together, can serve more than one user. IBM-PC Pentium 100, IBM-PC Pentium 200, and Apple Macintosh are some of the examples of micro computers. Micro computers include desktop, laptop, and hand-held computers.
Figure 1.14 Desktop Computer
Figure 1.15 Laptop
Figure 1.16 Personal Digital Assistant
In the early 1960s, Digital Equipment Corporation (DEC) started shipping its PDP series of computers, which the press described and referred to as mini computers. A mini computer is a small digital computer, which normally is able to process and store less data than a mainframe but more than a micro computer, while doing so less rapidly than a mainframe but more rapidly than a micro computer. Mini computers are about the size of a two-drawer filing cabinet. Generally, they are used as desktop devices that are often connected to a mainframe in order to perform auxiliary operations.
The mini computer (sometimes called a mid-range computer) is designed to meet the computing needs of several people simultaneously in a small to medium size business environment. It is capable of supporting from 4 to about 200 simultaneous users. It serves as a centralised storehouse for a cluster of workstations or as a network server. Mini computers are usually multi-user systems, and so they are used in interactive applications in industries, research organisations, colleges, and universities. They are also used for real-time controls and engineering design work. High-performance workstations with graphics I/O capability also use mini computers. Some of the widely used mini computers are the PDP 11, IBM (8000 series), and VAX 7500.
Figure 1.17 Mini Computer
A mainframe is an ultra-high performance computer made for high-volume, processor-intensive computing. It consists of a high-end computer processor, with related peripheral devices, capable of supporting large volumes of data processing, high performance on-line transaction processing systems, and extensive data storage and retrieval. Normally, it is able to process and store more data than a mini computer and far more than a micro computer. Moreover, it is designed to perform faster than a mini computer and much faster than a micro computer. Mainframes are the second largest (in capability and size) of the computer family. Unlike super computers, which are designed for single processes, a mainframe can usually execute many programs simultaneously at a high speed.
Figure 1.18 Mainframe Computer
A mainframe allows its users to maintain large information storage at a centralised location and to access and process this data from computers located at different places. Mainframes are typically used by large businesses and for scientific purposes. Examples of mainframe computers are IBM's ES000, the VAX 8000, and the CDC 6600.
Super computers are special-purpose machines, specially designed to maximise the number of FLOPS (floating point operations per second). Any computer performing below one gigaflop is not considered a super computer. A super computer has the highest processing speed at a given time for solving scientific and engineering problems. It basically contains a number of CPUs that operate in parallel to make it faster. Its processing speed lies in the range of 400–10,000 MFLOPS (millions of floating point operations per second). Due to this capability, super computers help in many applications, such as information retrieval and computer-aided design.
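The units used here are simple powers of ten: 1000 MFLOPS make one gigaflop, and 1000 gigaflops make one teraflop. A small conversion helper (the names are mine, for illustration):

```python
# FLOPS unit prefixes: mega = 10**6, giga = 10**9, tera = 10**12
MEGA, GIGA, TERA = 10**6, 10**9, 10**12

def mflops_to_gflops(mflops):
    """Convert millions of FLOPS to gigaflops."""
    return mflops * MEGA / GIGA

# The 400-10,000 MFLOPS range quoted above, expressed in gigaflops:
print(mflops_to_gflops(400))     # -> 0.4
print(mflops_to_gflops(10_000))  # -> 10.0
```

Expressed this way, the quoted range runs from 0.4 to 10 gigaflops, which puts the one-gigaflop threshold mentioned above in context.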
A super computer can process a great deal of information and make extensive calculations very, very quickly. It can resolve in a few hours complex mathematical equations that would take a scientist a lifetime with paper and pencil, or years with a hand calculator. Super computers are the fastest, costliest, and most powerful computers available today. Typically, they are used to solve multi-variant mathematical problems of existent physical processes, such as aerodynamics, meteorology, and plasma physics. They are also used by military strategists to simulate defence scenarios. Cinematic specialists use them to produce sophisticated movie animations. Scientists build complex models and simulate them on a super computer, for example to model the actions and reactions of literally millions of atoms as they interact. The super computer has limited use because of its price tag and limited market; the largest commercial uses of super computers are in the entertainment/advertising industry. Examples of super computers are the CRAY-3, Cyber 205, and PARAM.
FACT FILE
India's Super Achievement
Recently, India developed the PARAM Padma supercomputer, which marks an important step towards high-performance computing. The PARAM Padma was developed by India's Centre for Development of Advanced Computing (C-DAC) and promised processing speeds of up to one teraflop (one trillion floating point operations per second).
Figure 1.19 Super Computer
A computer can be viewed as a system consisting of a number of interrelated components that work together with the aim of converting data into information. In a computer system, processing is carried out electronically, usually with little or no intervention from the user. To obtain information, data is entered through the input unit, processed by the central processing unit (CPU), and displayed through the output unit. In addition, computers require memory to process data and store output. All these parts (the central processing unit, input, output, and memory unit) are referred to as the hardware of the computer.
The general perception of people regarding the computer is that it is an “intelligent thinking machine”. However, this is not true. Every computer needs precise instructions on what is to be done and how to do it. The instructions given to computers are called programs, which constitute the software. In the following section, we will discuss the hardware components of a computer in detail. Computer software is discussed in Chapter 3.
There are several computer systems in the market with a wide variety of makes, models, and peripherals. In general, a computer system comprises the following components:
- Input unit
- Central processing unit (CPU)
- Output unit
- Storage unit
Figure 1.20 Components of a Computer System
The input unit is formed by attaching various input devices such as keyboard, mouse, light pen, and so on to a computer. An input device is an electromechanical device that accepts instructions and data from the user. Since the data and instructions entered through different input devices will be in different forms, the input unit converts them into the form that the computer can understand. After this, the input unit supplies the converted instructions and data to the CPU for further processing.
The central processing unit (CPU) is referred to as the “brain” of a computer system; it converts data (input) into meaningful information (output). It is a highly complex, extensive set of electronic circuitry that executes stored program instructions. It controls all the internal and external devices, performs arithmetic and logic operations, and operates only on binary data, that is, data composed of 1s and 0s. In addition, it controls the use of main memory to store data and instructions and controls the sequence of operations.
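To make the idea of binary data concrete, the short sketch below (illustrative only; the helper name `to_binary` is not from the text) shows how both a number and a character reduce to the same kind of 1s-and-0s pattern inside the machine.

```python
# A minimal sketch of how a CPU sees data: every value is a pattern of 1s and 0s.
# The helper function below is purely illustrative.

def to_binary(value, bits=8):
    """Return the binary pattern of an integer as a string of 1s and 0s."""
    return format(value, "0{}b".format(bits))

# The number 65 and the letter 'A' share the same 8-bit pattern,
# because 65 is the character code of 'A'.
print(to_binary(65))        # 01000001
print(to_binary(ord("A")))  # 01000001
```

Whether a given pattern is treated as a number, a character, or an instruction depends entirely on how the program interprets it.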
The central processing unit consists of three main subsystems, the Arithmetic/Logic Unit (ALU), the Control Unit (CU), and the Registers. The three subsystems work together to provide operational capabilities to the computer.
The arithmetic/logic unit (ALU) contains the electronic circuitry that executes all arithmetic and logical operations on the data made available to it. The data required to perform the arithmetic and logical functions are inputs from the designated registers. The ALU comprises the following two units:
- Arithmetic unit, which performs arithmetic operations such as addition, subtraction, multiplication, and division
- Logic unit, which performs logical operations such as comparing two data items
Figure 1.21 CPU
The control unit of the CPU contains circuitry that uses electrical signals to direct the entire computer system to carry out, or execute, stored program instructions. This unit checks the correctness of sequence of operations. It fetches program instructions from the primary storage unit, interprets them, and ensures correct execution of the program. It also controls the input/output devices and directs the overall functioning of the other units of the computer.
Registers are special-purpose, high-speed temporary memory units that hold various types of information such as data, instructions, addresses, and the intermediate results of calculations. Essentially, they hold the information that the CPU is currently working on. Registers can be thought of as the CPU's working memory, a special additional storage location that offers the advantage of speed. They work under the direction of the control unit to accept, hold, and transfer instructions or data and perform arithmetic or logical comparisons at high speed.
The output unit is formed by attaching output devices such as printer, monitor, and plotter to the computer. An output device is used to present the processed data (results) to the user. The basic task of the output unit is just the opposite of that of the input unit. It takes the outputs (which are in machine-coded form) from the CPU, converts them into a user-understandable form such as characters, graphics, or audio-visual output, and supplies the converted results to the user with the help of output devices.
Figure 1.22 Control Unit
Figure 1.23 Registers in CPU
A computer system incorporates a storage unit to store the input entered through the input unit before processing starts and to store the results produced by the computer before supplying them to the output unit. The storage unit of a computer comprises two types of memory/storage: primary and secondary. The primary memory holds the instructions and data currently being processed by the CPU, the intermediate results produced during the course of calculations, and the recently processed data. While the instructions and data remain in primary memory, the CPU can access them directly and quickly. Due to the limited size of primary memory, a computer employs secondary memory, which is extensively used for storing data and instructions. It supplies the stored information to other units of the computer as and when required.
A computer accepts input in two ways, either manually or directly. In the case of manual data entry, the user enters the data into the computer by hand, for example, by using a keyboard and mouse. A user can also enter data directly by transferring information automatically from a source document (such as a cheque, using MICR) into the computer, without entering the information manually. Direct data entry is accomplished by using special direct data entry devices like a barcode reader. Some of the commonly used input devices are keyboard, pointing devices like mouse and joystick, digital camera, and scanners.
Figure 1.24 Keyboard
A keyboard is the most common data entry device. Using a keyboard, the user can type text and commands. A keyboard is designed to resemble a regular typewriter, with a few additional keys. Data is entered into the computer by simply pressing keys. The layout of the keyboard has changed very little since it was introduced; in fact, the most common change in its technology has simply been the natural evolution of adding more keys that provide additional functionality. The number of keys on a typical keyboard varies from 84 to 104.
Portable computers such as laptops quite often have custom keyboards that have slightly different key arrangements than a standard keyboard. In addition, many system manufacturers add special buttons to the standard layout. The keyboard is the easiest input device to use, as it does not require any special skill. Usually, it is supplied with a computer, so no additional cost is incurred, and its maintenance and operational costs are low. However, using a keyboard for data entry may be a slow process because the user has to manually type all the text. In addition, it can be difficult for people suffering from muscular disorders.
Most computers come with an alphanumeric keyboard, but in some applications a keyboard is not convenient. For example, if the user wants to select an item from a list, the user could identify and select it through the keyboard, but this action can be performed more quickly by pointing at the correct position. A pointing device is used to communicate with the computer by pointing to locations on the monitor screen. Such devices do not require keying of characters; instead, the user can move a cursor on the screen and perform move, click, or drag operations. Some of the commonly used pointing devices are mouse, trackball, joystick, light pen, and touch screen.
FACT FILE
Wireless Keyboard and Mouse
With the increasing use of wireless technology, wireless versions of the keyboard and mouse have also been developed. Rather than connecting through wires, they connect with the computer using one of three technologies, namely Bluetooth, infrared, or radio frequency. The wireless keyboard requires three AA batteries and the wireless mouse requires two AA lithium batteries to operate. They also have a power switch on the bottom to turn them ON or OFF. The use of wireless devices helps in eliminating the wiring tangles around the PC and gives the user the mobility and flexibility to position himself or herself relative to the computer.
A mouse is a small hand-held pointing device with a rubber ball embedded at its lower side and buttons on the top. Usually, a mouse contains two or three buttons, which can be used to input commands or information. It may be classified as a mechanical mouse or an optical mouse, based on the technology it uses. A mechanical mouse uses a rubber ball at the bottom surface, which rotates as the mouse is moved along a flat surface, to move the cursor. It is the most common and least expensive pointing device. An optical mouse uses a light beam instead of a rotating ball to detect movement across a specially patterned mouse pad. As the user rolls the mouse on a flat surface, the cursor on the screen also moves in the direction of the mouse's movement. Optical mice are pricier than their mechanical counterparts but are more accurate and often do not need a mouse pad.
A mouse allows us to create graphic elements on the screen such as lines, curves, and freehand shapes. Since it is an intuitive device, it is easier and more convenient to work with than the keyboard. Like a keyboard, it is also supplied with a computer; therefore, no additional cost is incurred. However, it needs a flat space close to the computer. The mouse cannot easily be used with laptop (notebook) or palmtop computers. These types of computers need a trackball or a touch-sensitive pad called a touchpad.
Figure 1.25 Mouse
A trackball is another pointing device that resembles a ball nestled in a square cradle and serves as an alternative to a mouse. In general, a trackball is like a mouse turned upside down. It has a ball which, when rotated by the fingers in any direction, moves the cursor accordingly. The size of the ball varies from as large as a cue ball to as small as a marble. Since it is a static device, instead of rolling the device on the table top, the ball on top is moved using the fingers, thumb, or palm. This pointing device comes in various shapes and forms but with the same functions. The three commonly used shapes are the ball, button, and square.
Like the mouse, a trackball is also used to control cursor movements and the actions on a computer screen. The cursor is activated when buttons on the device are pressed. However, the trackball itself remains stationary on the surface; only the ball is moved with the fingers or palm of the hand. By moving just the fingers and not the entire arm, the user gets more precision and accuracy, which is why many graphic designers and gamers choose a trackball instead of a mouse. In addition, since the whole device is not moved to move the graphic cursor, a trackball requires less space than a mouse for operation. Trackballs generally tend to have more buttons, and many gaming enthusiasts and graphic designers choose models with extra buttons to cut down on keyboard use. These extra buttons can also be re-programmed to suit whatever functions they require. Trackballs are normally not supplied with a computer, so an additional cost is always incurred. Moreover, before using one, a user has to learn how to operate it.
Figure 1.26 Trackball
A joystick is a device that moves in all directions and controls the movement of the cursor. The basic design of a joystick consists of a stick attached to a plastic base with a flexible rubber sheath. The base houses a circuit board that sits beneath the stick; the electronic circuitry measures the movement of the stick from its central position and sends the information for processing. A joystick also has buttons that can be programmed to indicate certain actions once a position on the screen has been selected using the stick. It offers three types of control: digital, glide, and direct. Digital control allows movement in a limited number of directions such as up, down, left, and right. Glide and direct control allow movement in all directions (360 degrees). Direct control joysticks have the added ability to respond to the distance and speed with which the user moves the stick.
A joystick is generally used to control the velocity of the screen cursor movement rather than its absolute position. It is used for computer games. The other applications in which it is used are flight simulators, training simulators, CAD/CAM systems, and for controlling industrial robots.
Figure 1.27 Joystick
A light pen (sometimes called a mouse pen) is a hand-held electro-optical pointing device which, when touched to or aimed closely at a connected computer monitor, allows the computer to determine where on that screen the pen is aimed. It facilitates drawing images and selecting objects on the display screen by directly pointing to them. It is a pen-like device, connected to the machine by a cable. Although named a light pen, it does not actually emit light; rather, its light-sensitive diode senses the light coming from the screen. The light coming from the screen causes the photocell to respond by generating a pulse. This electric response is transmitted to the processor, which identifies the position to which the light pen is pointing. As the light pen moves over the screen, lines or images are drawn.
Figure 1.28 Light Pen
Light pens give the user the full range of mouse capabilities without the use of a pad or any horizontal surface. Using light pens, users can interact more easily with applications, in such modes as drag and drop, or highlighting. A light pen is used directly on the monitor screen and does not require any special hand/eye coordination skills. Pushing the light pen tip against the screen activates a switch, which allows the user to make menu selections, draw, and perform other input functions. Light pens are perfect for applications where desk space is limited, in harsh workplace environments, and in any situation where fast, accurate input is desired. A light pen is very useful for identifying a specific location on the screen; however, it does not provide any information when held over a blank part of the screen. It is economically priced and requires little or no maintenance.
A touch screen is a special kind of input device that allows the direct selection of a menu item or the desired icon with the touch of a finger. Essentially, it registers the input when a finger or other object touches the screen. It is normally used when information has to be accessed with minimum effort. However, it is not suitable for input of large amounts of data. Typically, it is used in information-providing systems such as hospitals, airline and railway reservation counters, amusement parks, and so on.
Figure 1.29 Touch Screen
A digital camera stores images digitally rather than recording them on film. Once a picture has been taken, it can be transferred to a computer system and then manipulated with image editing software and printed. The big advantage of digital cameras is that taking photos is both inexpensive and fast because there is no film processing.
Figure 1.30 Digital Camera
There are a number of situations when some information (picture or text) is available on paper and is needed on the computer for further manipulation. A scanner is an input device that converts a document into an electronic format that can be stored on the disk. The electronic image can be edited, manipulated, combined, and printed by using the image editing software. The scanners are also called optical scanners as they use a light beam to scan the input data.
Note that most scanners come with a utility program that allows them to communicate with the computer and save the scanned image as a graphic file on the computer. Moreover, they can store images in both grey-scale and colour mode. The two most common types of scanners are the hand-held and the flat-bed scanner.
A hand-held scanner consists of light-emitting diodes and is placed over the document to be scanned. It scans the document very slowly from top to bottom with its light on, converting the document and storing it as an image. To obtain the best results, the scanner must be dragged steadily and carefully over the document at a constant speed, without stopping or jerking. Hand-held scanners are small in size and are widely used where high accuracy is not of much importance. They come in various resolutions, up to about 800 dpi (dots per inch), and are available in either grey-scale or colour. Furthermore, they are used when the volume of documents to be scanned is low. These devices read the data on price tags, shipping labels, inventory part numbers, book ISBNs, and so on.
Figure 1.31 Hand-held Scanner
A flat-bed scanner looks similar to a photocopier. It consists of a box with a glass plate on its top and a lid that covers the glass plate. The document to be scanned is placed on this glass plate. The light beam, positioned below the glass plate, moves horizontally from left to right when activated. After scanning one line, the light beam moves on to scan the next line, and the procedure is repeated until all the lines are scanned. Scanning an A4-size document takes about 20 seconds. These scanners can scan black and white as well as colour images. Flat-bed scanners are larger and more expensive than hand-held scanners; however, they usually produce better quality images because they employ better scanning technology.
Figure 1.32 Flat-bed Scanner
As stated earlier, a scanner converts an input document into an electronic format that can be stored on the disk. If the document to be scanned contains an image, it can be manipulated using image editing software. However, if the document to be scanned contains text, you need optical character recognition (OCR) software. This is because when the scanner scans a document, the scanned document is stored as a bitmap in the computer's memory. The OCR software translates the bitmap image of text to the ASCII codes that the computer can interpret as letters, numbers, and special characters.
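The final OCR step described above can be illustrated with a short sketch (purely illustrative; the variable names are not from the text): once the software has recognised the character shapes, each character is simply stored as its ASCII code.

```python
# Illustrative sketch of the last OCR step: once a character shape has been
# recognised from the bitmap, it is stored as an ASCII code that the computer
# can interpret as a letter, number, or special character.
recognised_text = "OCR 123"

ascii_codes = [ord(ch) for ch in recognised_text]
print(ascii_codes)   # [79, 67, 82, 32, 49, 50, 51]

# The codes can be turned back into readable, editable text at any time.
print("".join(chr(code) for code in ascii_codes))  # OCR 123
```

The hard part of OCR, of course, is the shape recognition itself; the code-assignment step shown here is trivial by comparison.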
Figure 1.33 An OCR System
Because of OCR, data entry becomes easier, error-free, and less time consuming. However, it is very expensive, and if the document is not typed properly, it becomes difficult for the OCR software to recognise the characters. Furthermore, except for tab stops and paragraph marks, most document formatting is lost during text scanning. The output from a finished text scan is a single-column editable text file, which always requires spell checking and proofreading, as well as re-formatting to get the desired final layout.
Optical mark recognition (OMR) is the process of detecting the presence of intended marked responses. A mark registers significantly less light than the surrounding paper. Optical mark reading is done by a special device known as an optical mark reader. To be detected by the reader, a mark has to be positioned correctly on the paper and should be significantly darker than the surrounding paper. OMR technology enables high-speed reading of large quantities of data and transfer of this data to the computer without using a keyboard. Generally, this technology is used to read answer sheets (objective-type tests). In this method, special forms/documents are printed with boxes that can be marked with dark pencil or ink. These forms are then passed under a light source, and the presence of dark ink is transformed into electric pulses, which are transmitted to the computer.
OMR has a better recognition rate than OCR because machines make fewer mistakes reading marks than reading handwritten characters. Large volumes of data can be collected quickly and easily without the need for specially trained staff. Usually, an OMR reader can maintain a throughput of 1500 to 10,000 forms per hour. However, designing documents for optical mark recognition is complicated, and the reader needs to be reprogrammed for each new document design. Filling in the forms can be relatively slow because the person putting marks on the documents must follow the instructions precisely. Any folding or dirt on a form may prevent it from being read correctly. In addition, OMR requires accurate alignment of printing on forms and needs paper of good quality.
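The detection principle described above can be sketched in a few lines. This is a toy model, not a real OMR implementation; the threshold and reflectance readings are made-up illustration values.

```python
# A toy sketch of optical mark recognition: each answer box is represented by
# a reflectance reading (0 = completely dark, 255 = blank paper), and a box
# counts as "marked" when it registers significantly less light than the
# surrounding paper. The threshold below is an assumed illustration value.

THRESHOLD = 120        # readings below this are treated as pencil marks

def detect_marks(readings):
    """Return the indices of the answer boxes that are marked."""
    return [i for i, light in enumerate(readings) if light < THRESHOLD]

# Four answer boxes for one question; option C (index 2) is filled in.
print(detect_marks([225, 231, 40, 228]))  # [2]
```

A real reader must also cope with alignment, folds, and dirt, which is why the text stresses correct mark positioning and good paper quality.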
Figure 1.34 Questionnaire using OMR Marks
You must have seen special magnetic encoding using characters, printed on the bottom of a cheque. The characters are printed using special ink, which contains iron particles that can be magnetised. To recognise these magnetic ink characters, a magnetic ink character reader (MICR) is used. It reads the characters by examining their shapes in a matrix form and the information is then passed on to the computer.
The banking industry prefers MICR to OCR as MICR gives extra security against forgeries such as colour copies of payroll cheques or hand-altered characters on a cheque. If a document has been forged, say a counterfeit cheque produced using a colour photocopying machine, the magnetic-ink line will either not respond to magnetic fields or will produce an incorrect code when scanned using a device designed to recover the information in the magnetic characters. The reading speed of MICR is also higher, making this method very efficient and time saving for data processing.
Figure 1.35 Cheque Number Written in MICR Font
A bar code is a machine-readable code in the form of a pattern of parallel vertical lines of varying widths. It is commonly used for labelling goods in supermarkets and numbering books in libraries. This code is sensed and read by a bar code reader using reflected light. The information recorded by the bar code reader is then fed into the computer, which recognises the information from the thickness and spacing of the bars. Bar code readers are either hand-held or fixed-mount. Hand-held readers are used to read bar codes on stationary items. With fixed-mount readers, items carrying a bar code are passed by the scanner – by hand, as in retail scanning applications, or by conveyor belt, as in many industrial applications.
Bar code data collection systems provide enormous benefits for just about every business: with a bar code data-collection solution, capturing data is faster and more accurate. A bar code scanner can record data five to seven times faster than a skilled typist, and bar code data entry has an error rate of about 1 in 3 million. Bar coding also reduces costs in terms of labour and revenue losses resulting from data collection errors. Bar code readers are widely used in supermarkets, department stores, libraries, and other places. You must have seen bar codes on the back cover of certain books and greeting cards. Retail and grocery stores use a bar code reader to determine the item being sold and to retrieve its price from a computer system.
FACT FILE
Bar Code Data
The bar code data is just a reference number, which the computer uses to look up associated record file(s), which contain descriptive information. For example, the bar codes found on food items do not contain the price or other description; instead the bar code has a product number in it. When read by a bar code reader and transmitted to the computer, the computer finds the disk record file(s) associated with that item number. This file contains the price, vendor name, and other information.
Bar code scanners are electro-optical systems that include a means of illuminating the symbol and measuring reflected light. The light waveform data is converted from analog to digital in order to be processed by a decoder, and then transmitted to the computer software. The process begins when a device directs a light beam over a bar code. The device contains a small sensory reading element, called a sensor, which detects the light being reflected back from the bar code and converts light energy into electrical energy. The result is an electrical signal that can be converted into alphanumeric data. The pen in the bar code unit reads the information stored in the bar code and converts it into a series of ASCII characters, through which the operating system gets the information stored in the bar code.
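The decoding idea — recovering data from the thickness of the bars — can be sketched with a toy example. This is not a real symbology such as EAN-13 or Code 39; it simply assumes, for illustration, that a narrow bar encodes a 0 and a wide bar encodes a 1.

```python
# A toy sketch of bar code decoding, not a real symbology: the scanner measures
# the width of each dark bar, and the decoder maps narrow bars to 0 and wide
# bars to 1. Real bar codes such as EAN-13 use more elaborate encodings that
# also include check digits.

NARROW = 1   # assumed width unit of a narrow bar

def decode_widths(widths):
    """Translate a sequence of measured bar widths into a string of bits."""
    return "".join("0" if w == NARROW else "1" for w in widths)

# Widths as reported by the sensor: narrow, wide, wide, narrow, wide.
print(decode_widths([1, 3, 3, 1, 3]))  # 01101
```

As the FACT FILE below notes, the decoded value is just a reference number; the price and description are looked up in the computer's record files.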
Figure 1.36 Bar Code Reader
Output is data that has been processed into useful information. It can be displayed or viewed on a monitor, printed on a printer, or listened to through speakers or a headset. Generally, there are two basic categories of output: output that can be readily understood and used by humans, and output that is stored on secondary storage devices so that the data can be used as input for further processing. The output that can be easily understood and used by human beings takes the following two forms:
- Hard copy: output recorded on a permanent medium such as paper
- Soft copy: output displayed on a screen or played through an audio device, which is lost when the computer is switched off
Based on these two forms of output, output devices are classified into hard copy and soft copy output devices. Printers and plotters are the most commonly used hard copy output devices, while the most commonly used soft copy output device is the computer monitor.
Since the dawn of the computer age, producing printed output on paper has been one of the computer's principal functions. A printer prints information and data from the computer onto paper. Generally, a printer prints 80 or 132 columns of characters in each line, on either single sheets or a continuous roll of paper, depending upon the printer itself. The quality of a printer is determined by the clarity of the print it can produce, that is, its resolution. Resolution describes the sharpness and clarity of an image: the higher the resolution, the better the image. For printers, resolution is measured in dpi (dots per inch). The more dots per inch, the better the quality of the image; the dots are so small and close together that they project the image as a solid one. If a printer has a resolution of 600 dpi, it means that the printer is capable of printing 360,000 dots per square inch.
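The dpi arithmetic above is worth making explicit: a dpi rating applies in each direction, so the dots per square inch is the rating squared. A tiny sketch (the helper name is illustrative):

```python
# dpi (dots per inch) is a linear measure: a 600 dpi printer places 600 dots
# horizontally and 600 dots vertically in every inch of paper, so each square
# inch holds 600 x 600 dots.

def dots_per_square_inch(dpi):
    """Return the number of dots a printer places in one square inch."""
    return dpi * dpi

print(dots_per_square_inch(600))   # 360000
print(dots_per_square_inch(1200))  # 1440000
```

This squaring is why doubling the dpi rating of a printer quadruples, rather than doubles, the detail it can place on the page.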
Printers are divided into two basic categories: impact printers and non-impact printers. As their names specify, impact printers work by physically striking a head or needle against an ink ribbon to make a mark on the paper. This includes dot matrix printers, daisy wheel printers, and drum printers. In contrast, inkjet and laser printers are non-impact printers. They use techniques other than physically striking the page to transfer ink onto the page.
The dot matrix printer (also known as the wire matrix printer) uses the oldest printing technology and prints one character at a time. It prints characters and images as a pattern of dots. The speed of dot matrix printers is measured in characters per second (cps). Most dot matrix printers offer different speeds depending on the quality of print desired; the speed can vary from about 200 to over 500 cps. The print quality is determined by the number of pins (the mechanisms that print the dots), which can vary from 9 to 24. The more pins per inch, the higher the print resolution. The best dot matrix printers (24 pins) can produce near letter-quality images. Most dot matrix printers have a resolution ranging from 72 to 360 dpi.
Figure 1.37 Dot Matrix Printer
Dot matrix printers are inexpensive and have low operating costs. These printers are able to use different types of fonts, different line densities, and different types of paper. Many dot matrix printers are bi-directional, that is, they can print the characters from either direction, left or right. The major limitation of the dot matrix printer is that it prints only in black and white. In addition, compared to printers such as laser printers, they produce low to medium quality printing, and their image printing ability is very limited; they may not be able to print graphic objects adequately but can handle applications such as accounting, personnel, and payroll very well. Dot matrix printers are commonly used in low-cost, low-quality applications like cash registers, and in situations where carbon copies are needed and quality is not too important.
Another drawback of the dot matrix printer is that the pattern of dots that makes up each character is visible on the print it produces, making it look unprofessional. If you need a printer that can produce professional letter-quality documents, you need a daisy wheel printer. The daisy wheel printer is named so because its print head resembles a daisy flower, with printing arms that appear like the petals of the flower. These printers are commonly referred to as letter-quality printers, as the print quality is as good as that of a high-quality typewriter.
Figure 1.38 Daisy Wheel Printer
Daisy wheel printers produce high-resolution output and are more reliable than dot matrix printers, with speeds of up to 90 cps. They are also called smart printers because of their bidirectional printing and built-in microprocessor control features. However, daisy wheel printers give only alphanumeric output: they cannot print graphics, and they cannot change fonts unless the print wheel is physically replaced. Because of the time required to rotate the print wheel to each desired character, they are slower than dot matrix printers, and they are also more expensive. However, if the appearance of the correspondence is important and you do not need graphics, a daisy wheel printer is a better choice.
The dot matrix and daisy wheel printers are character (or serial) printers, that is, they print one character at a time. The drum printer, however, is a line printer: it can print a whole line in a single operation. Generally, a line printer is used for its speed; it uses special tractor-fed paper with pre-punched holes along each side, an arrangement that allows continuous high-speed printing. Its printing speed varies from 300 to 2000 lines per minute with 96 to 160 characters on a 15-inch line. Although such printers are much faster than character printers, they tend to be quite loud, have limited multi-font capability, and often produce lower print quality than more recent printing technologies. Line printers are designed for heavy printing applications; for example, in businesses where enormous amounts of material are printed, low-speed character printers are far too slow, so users need high-speed line printers. Although drum printers have a high printing speed, they are very expensive and their character fonts cannot be changed. Moreover, the strike of the hammer must be precise: a single mistimed strike may lead to wavy and slightly blurred printing.
Figure 1.39 Drum Printer
The most common type of printer found in homes today is the ink-jet printer. An ink-jet printer is a printer that places extremely small droplets of ink onto paper to create an image. Being a non-impact printer, it does not touch the paper while creating an image. Instead, it uses a series of nozzles to spray drops of ink directly onto the paper. Ink-jets were originally manufactured to print in monochrome (black and white) only. However, the print head has since been expanded and the number of nozzles increased to accommodate cyan (C), magenta (M), yellow (Y), and black (K). This combination of colours is called CMYK, and it allows printing images with nearly the same quality as a photo development lab when certain types of coated paper are used.
Figure 1.40 Ink-jet Printer
Ink-jet printers are costlier than dot matrix printers, but the quality is much better. Because they produce printed output as a pattern of tiny dots, these printers can print any shape of character that a user specifies. This allows the printer to print many special characters and different sizes of print, and enables it to print graphics such as charts and graphs. Ink-jet printers typically print with a resolution of 600 dpi or more; due to this high resolution, they produce high-quality graphics and text printouts. They are also affordable, which appeals to small businesses and home offices. These printers print documents at a medium pace but slow down when printing a multi-colour document. They can print about 6 pages a minute and can be programmed to print symbols such as Japanese or Chinese characters.
Figure 1.41 Laser Printer
A laser printer provides the highest quality text and images for personal computers today. It is a very fast printer that operates on the same principle as a photocopy machine. Most laser printers can print text and graphics at very high resolution. They are also known as page printers because they process and store the entire page before actually printing it. They produce sharp, crisp images of both text and graphics, with resolutions from 300 to 2400 dpi; today, the resolution of most printers is 600 dpi. They are quiet and fast, able to print 4-32 text-only pages per minute for individual microcomputers and up to 200 pages per minute for mainframes; laser printers can print in excess of 2000 lines per minute. Furthermore, they can print in different fonts, that is, type styles and sizes. Laser printers are often faster than ink-jet printers but are more expensive to buy and maintain. The cost of these printers depends on a combination of the costs of paper, toner replacement, and drum replacement. These printers are useful for volume printing because of their speed.
A plotter is a pen-based output device that is attached to a computer for making vector graphics, that is, images created by a series of many straight lines. It is used to draw high-resolution charts, graphs, blueprints, maps, circuit diagrams, and other line-based diagrams. It is similar to a printer, but it draws lines using a pen. As a result, it can produce continuous lines, whereas a printer can only simulate lines by printing a closely spaced series of dots. A multicolour plotter uses different-coloured pens to draw different colours. Colour plots can be made using four pens (cyan, magenta, yellow, and black) and need no human intervention to change them.
Being vector-based, a plotter tends to draw much crisper lines and graphics. The lines drawn by these devices are continuous and very accurate. However, the plotter is considered a very slow output device because it requires excessive mechanical movement to plot. Furthermore, it is unable to produce solid fills and shading. Plotters are relatively expensive compared to printers but can produce much larger printouts than standard printers. They are mainly used for Computer Aided Design (CAD) and Computer Aided Manufacturing (CAM) applications, such as printing out plans for houses or car parts. They are also used with programs like AutoCAD (computer-aided drafting) to produce graphic output. There are two different types of plotters: the drum plotter (where the paper moves) and the flat-bed plotter (where the paper is stationary).
Figure 1.42 Plotters
The monitor is the most frequently used output device for producing soft-copy output. A computer monitor is a TV-like display attached to the computer on which the output can be displayed and viewed. A monitor can be either a monochrome display or a colour display. A monochrome screen uses only one colour (usually white, green, amber, or black) to display text on a contrasting background. Colour screens commonly display 256 colours at one time from a selection of over 256,000 choices. Monitors are available in various sizes such as 14, 15, 17, 19, and 21 inches. The size of the display is described by two parameters: aspect ratio and screen size. Aspect ratio is the ratio of the width of the display screen to its height. Generally, computer displays have an aspect ratio of 4:3. Like televisions, screen sizes are normally measured diagonally (in inches), that is, the distance from one corner to the opposite corner.
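Because screen size is quoted diagonally, the actual width and height follow from the aspect ratio and the Pythagorean theorem. A small sketch (function name is our own):

```python
import math

def screen_dimensions(diagonal, aspect_w=4, aspect_h=3):
    """Return (width, height) of a screen given its diagonal size and
    aspect ratio. A w:h screen with diagonal d satisfies
    d^2 = width^2 + height^2, so we scale the aspect pair to match."""
    factor = diagonal / math.hypot(aspect_w, aspect_h)
    return aspect_w * factor, aspect_h * factor

w, h = screen_dimensions(15)             # a 15-inch 4:3 monitor
print(round(w, 1), round(h, 1))          # 12.0 9.0
```

This is why a "15-inch" 4:3 monitor is only 12 inches wide: the quoted figure is the corner-to-corner distance, not the width.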
Figure 1.43 Aspect Ratio and Screen Size
Sometimes, while watching television, you may notice that the picture looks a bit blurred. The reason behind this is that the displayed image is not solid but is created by the configurations of dots. These dots are known as picture elements, pels, or simply pixels. The golden rule of a sharp image is that the more the pixels, the sharper the picture. The screen clarity depends on three basic qualities:
Figure 1.44 Dot Pitch
Nowadays, most computer monitors are based on Cathode Ray Tube (CRT) technology. The basic operation of these tubes is similar to that in television sets. Figure 1.45 illustrates the basic components of a CRT.
A beam of electrons (cathode rays) emitted by an electron gun passes through focusing and deflection systems that direct the beam toward specified positions on the phosphor-coated screen. The phosphor then emits a small spot of light at each position contacted by the beam. When the electron beam strikes the phosphor, light is emitted for a short period of time; this property is known as persistence. Technically, persistence is defined as the time it takes the emitted light to decay to 1/10 of its original intensity. Graphics monitors are usually constructed with persistence in the range of 10 to 60 microseconds. Since the light emitted by the phosphor fades very rapidly, some method is needed for maintaining the screen picture. One way to keep the phosphor glowing is to redraw the picture repeatedly by quickly directing the electron beam back over the same points. This type of display is called a refresh CRT.
The primary components of an electron gun in a CRT are the heated metal cathode and a control grid. Heat is supplied to the cathode by directing a current through a coil of wire, called the filament, inside the cylindrical cathode structure. This causes electrons to be ‘boiled off’ the hot cathode surface. In the vacuum inside the CRT envelope, the free, negatively charged electrons are then accelerated toward the phosphor coating by a highly positive voltage. The accelerating voltage can be generated with a positively charged metal coating on the inside of the CRT envelope near the phosphor screen, or an accelerating anode can be used, as in Figure 1.45. Note that sometimes the electron gun is built to contain the accelerating anode and focusing system within the same unit.
Before reaching the phosphor-coated screen, the electrons pass through the monitor's focusing system, which concentrates the electron flow into a very thin beam and aims it in a specific direction. Focusing can be accomplished either by electric or by magnetic fields.
When the electrons in the beam collide with the phosphor coating, their kinetic energy is absorbed by the phosphor. Some of this energy is converted into heat, while the rest causes electrons in the phosphor atoms to move up to higher energy levels. When these electrons return to the ground state, they emit light at certain frequencies, proportional to the energy difference between the excited state and the ground state. The image we see on the screen is the combination of all these light emissions.
Figure 1.45 Cathode Ray Tube
In the previous section, we discussed CRT monitors, the most popular display devices. With the widespread use of smaller computers like PDAs and laptops, a new type of display, the Liquid Crystal Display (LCD), has made a big impact on the computer market. LCD screens have long been used on notebook computers and are now becoming popular as desktop monitors as well.
The term liquid crystal sounds like a contradiction. We generally think of a crystal as a solid material like quartz and of a liquid as a water-like fluid. However, some substances can exist in an odd state that is semi-liquid and semi-solid. In this state, their molecules tend to maintain their orientation like the molecules in a solid, but also move around to different positions like the molecules in a liquid. Thus, liquid crystals are neither solid nor liquid. Manufacturers exploit this remarkable property of liquid crystals to display images.
An LCD screen is a collection of multiple layers. A fluorescent light source, known as the backlight, makes up the rearmost layer. Light passes through the first of two polarising filters. The polarised light then passes through a layer that contains thousands of liquid crystal blobs aligned in tiny containers called cells. These cells are aligned in rows across the screen; one or more cells make up one pixel. Electric leads around the edge of the LCD create an electric field that twists the crystal molecules, aligning the light with the second polarising filter and allowing it to pass through.
The process illustrated in Figure 1.46 is for a simple monochrome LCD; a colour LCD is more complex. In a colour LCD panel, each pixel is made up of three liquid crystal cells, and in front of each of these cells there is a red, green, or blue filter. Light passing through the filtered cells creates the colours on the LCD. Nowadays, nearly every colour LCD uses a thin-film transistor (TFT), also known as an active matrix, to activate each cell. TFT-based LCDs create sharper, brighter images than previous LCD technologies. The oldest of the matrix technologies, passive matrix, offers sharp text but leaves “ghost images” on the screen when the display changes rapidly, making it less than optimal for moving video.
Figure 1.46 Coloured Liquid Crystal Screen
An LCD addresses each pixel individually, and as a result can create sharper text than a CRT. However, an LCD has only one ‘natural’ resolution, limited by the number of pixels physically built into the display. To show a different resolution, say an 800 by 600 image on a 1024 by 768 LCD, the display has to emulate it with software, which works only at certain resolutions.
We can classify memory into two broad categories: primary memory and secondary memory.
Primary Memory, also known as main memory, stores data and instructions for processing. Logically, it is an integral component of the CPU but physically, it is a separate part placed on the computer's motherboard (also known as main board). Primary memory can be further classified into random access memory (RAM) and read only memory (ROM).
Random access memory is like the computer's scratch pad. It allows the computer to store data for immediate manipulation and to keep track of what is currently being processed. It is the place in a computer where the operating system, application programs, and data in current use are kept so that they can be accessed quickly by the computer's processor. RAM is much faster to read from and write to than the other kinds of storage in a computer like the hard disk or floppy disk. However, the data in RAM stays there only as long as the computer is running. When the computer is turned off, RAM loses all its contents. When the computer is turned on again, the operating system and other files are once again loaded into RAM. When an application program is started, the computer loads it into RAM and does all the processing there. This allows the computer to run the application faster. Any new information that is created is kept in RAM and since RAM is volatile in nature, one needs to continuously save the new information to the hard disk.
Figure 1.47 Random Access Memory
Let us take a simple example of why RAM is used by the computer. Whenever a user enters a command from the keyboard, the CPU interprets the command and instructs the hard disk to ‘load’ the command or program into main memory. Once the data is loaded into memory, the CPU can access it much more quickly, because main memory is much faster than secondary memory. Putting the things the CPU needs in a single place from which it can get them quickly is similar to placing the various documents a user needs into a single file folder: all the required files are handy, and the user avoids searching several places every time they are needed.
Note: Random access memory is also called read/write memory because, unlike read only memory (ROM), which does not allow any write operation, random access memory allows the CPU both to read and to write data and instructions.
Just as a human being needs instructions from the brain to respond to an event, a computer needs special instructions every time it is started. This is required because during start-up the main memory of the computer is empty, due to its volatile nature, so there have to be some instructions (special boot programs) stored in a special chip that enable the computer system to perform start-up operations and then transfer control to the operating system. This special chip, where the start-up instructions are stored, is called ROM. It is non-volatile, that is, its contents are not lost when the power is switched off. The data and instructions stored in ROM can only be read and used, not altered, making ROM much safer and more secure than RAM. ROM chips are used not only in computers but also in other electronic items like washing machines and microwave ovens.
Figure 1.48 ROM BIOS Chip
Generally, designers program ROM chips at the time of manufacturing. Programming is done by burning appropriate electronic fuses to form patterns of binary information. These patterns are meant for specific configurations, which is why different categories of computers are suited to different tasks. For example, a microprogram called the system boot program contains a series of start-up instructions to check the hardware, that is, the I/O devices, memory, and the operating system in memory. Such programs deal with low-level machine functions and take the place of additional hardware. ROM performs the necessary BIOS (Basic Input Output System) functions to start the system and then transfers control to the operating system.
ROM can have data and instructions written into it only once. Once a ROM chip is programmed, it cannot be reprogrammed or rewritten. If it is erroneous, or the data needs to be reorganised, it has to be replaced with a new chip. Thus, the programming of ROM chips must be perfect, with all the required data present at the time of manufacture. Note that in some instances, ROM can be changed using certain tools; for example, flash ROM (a type of ROM) is non-volatile memory that can occasionally be changed, such as when a BIOS chip must be updated. ROM chips consume very little power, are extremely reliable, and in most small electronic devices contain all the programming necessary to control the device.
Secondary Memory, also known as auxiliary or external memory, is used for storing instructions (software programs) and data, since main memory is temporary and limited in size. Secondary memory is less expensive and has a much larger storage capacity than primary memory. Instructions and data stored on such storage devices are permanent in nature; they can be removed only if the user wants or if the device is destroyed.
A floppy disk is a round, flat piece of Mylar plastic coated with ferric oxide (a rust-like substance containing tiny particles capable of holding a magnetic field) and encased in a protective plastic cover (disk jacket). It is a removable disk, read and written by a floppy disk drive (FDD), a device that performs the basic operations on a disk, including rotating the disk and reading and writing data onto it. The disk drive's read/write head alters the magnetic orientation of the particles, where orientation in one direction represents ‘1’ and orientation in the other represents ‘0’.
Traditionally, floppy disks were used on personal computers to distribute software, transfer data between computers, and create small backups. Earlier, 5¼-inch floppy disks were used. Later, a new format of 3½-inch floppy disk came into existence, which has larger storage capacity and supports faster data transfer as compared to 5¼-inch floppy disks. Floppy diskettes are small, inexpensive, readily available, easy to store, and have a good shelf life if stored properly. They also possess the write-protect feature, which allows the users to protect a diskette from being written on. To write-protect a diskette, the user has to shift a slide lever towards the edge of the disk, uncovering a hole. The key advantage of floppy disk is that it is portable.
Figure 1.49 3½-inch Floppy Disk
To read and write data onto a floppy disk, a floppy disk drive is used. The drive is a box with a slot (having a drive gate) into which the user inserts the disk. When a user inserts a disk, the drive grabs it and spins it inside its plastic jacket. The drive also has multiple levers that engage the disk: one lever opens the metal plate, or shutter, to expose the data access area, while other levers and gears move two read/write heads until they almost touch the diskette on both sides. The drive's circuit board receives instructions for reading/writing data from/to the disk through the floppy drive controller. If data is to be written onto the disk, the circuit board first verifies that no light is visible through a small window in the floppy disk. If the photo sensor on the opposite side of the floppy disk detects a beam of light, the drive considers the disk write-protected and does not allow recording of data.
The circuit board translates the instructions into signals that control the movement of the disk and the read/write heads. A motor located beneath the disk spins a shaft that engages a notch on the hub of the disk, causing the disk to spin. When the heads are in the correct position, electrical impulses create a magnetic field in one of the heads to write data to either the top or bottom surface of the disk. Similarly, on reading the data, electrical signals are sent to the computer from the corresponding magnetic field generated by the metallic particle on the disk.
Figure 1.50 Floppy Disk Drive
Since the floppy disk head touches the diskette, both the media and the head wear out quickly. To reduce wear and tear, personal computers retract the heads and stop the rotation when a drive is not reading or writing. Consequently, when the next read or write command is given, there is a delay of about half a second while the motor reaches full speed.
The hard disk, also called the hard drive or fixed disk, is the main storage device of the computer. It consists of a stack of disk platters made of aluminium alloy or glass substrate coated with a magnetic material and protective layers. The platters are tightly sealed to keep out dust particles, which can cause head crashes. A hard disk can be external (removable) or internal (fixed) and can hold a large amount of data. The capacity, that is, the amount of information a hard disk can store, is measured in bytes; a typical computer today comes with 80-320 GB of hard disk. The storage capacity of hard disks has increased dramatically since they were introduced. Hard disk speed is measured in terms of access time (typically in milliseconds): the lower the access time, the faster the hard disk.
Figure 1.51 Hard Disk Drive
A hard disk uses round, flat disks (platters) made of glass or metal, coated on both sides with a special material designed to store information in the form of magnetic patterns. Each platter has its information recorded in tracks, which are further broken down into smaller sectors. The platters are mounted by making a hole in their centre and stacking them onto a spindle. The platters rotate at high speed, driven by a special motor connected to the spindle. Special electromagnetic read/write heads are mounted onto sliders and are used to record data onto the disk or read data from it. The sliders are mounted onto arms, all of which are mechanically connected into a single assembly and positioned over the surface of the disk by a device called an actuator. Each platter has two heads, one on the top and one on the bottom, so a hard disk with three platters has six surfaces and six heads.
Data is recorded onto the magnetic surface of the disk in exactly the same way as it is on floppies. However, a disk controller is attached to the hard disk drive that handles the read/write commands issued by the operating system. Each read/write command specifies a disk address that comprises the surface number, track number, and sector number. With this information, the read/write head moves to the desired sector and data can be read from or written to. Usually, the next set of data to be read is sequentially located on the disk.
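A disk address made of surface, track, and sector numbers can be flattened into a single linear sector number, which is essentially the classic CHS-to-LBA mapping used by disk controllers. A sketch, with a hypothetical geometry (the function name and the 4-head/63-sector figures are ours):

```python
def chs_to_linear(cylinder, head, sector, heads, sectors_per_track):
    """Map a (cylinder/track, head/surface, sector) disk address to a
    single linear sector number. In CHS addressing, sector numbering
    conventionally starts at 1, hence the '- 1'."""
    return (cylinder * heads + head) * sectors_per_track + (sector - 1)

# hypothetical geometry: 2 platters (4 heads), 63 sectors per track
print(chs_to_linear(cylinder=0, head=0, sector=1, heads=4, sectors_per_track=63))
print(chs_to_linear(cylinder=1, head=0, sector=1, heads=4, sectors_per_track=63))
```

The ordering (cylinder outermost, then head, then sector) reflects the mechanics: switching heads is electronic and fast, while moving to another cylinder requires the slow actuator seek.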
Note that unlike floppy drives, in which the read/write heads actually touch the surface of the material, the heads in most hard disks float slightly off the surface. Nevertheless, the distance between the head and the disk surface is far less than the thickness of a human hair. When a head accidentally touches the media, either because the drive is dropped or bumped hard or because of an electrical malfunction, the surface becomes scratched and any data stored where the head touched the disk is lost. This is called a head crash. To help reduce the possibility of a head crash, most disk controllers park the heads over an unused track of the disk when the drive is not being used by the CPU.
Figure 1.52 Distance between Head and Disk Surface
Magnetic tape appears similar to the tape used in music cassettes. It is a plastic tape with magnetic coating on it. The data is stored in the form of tiny segments of magnetised and demagnetised portions on the surface of the material. Magnetised portion of the surface refers to the bit value ‘1’ whereas the demagnetised portion refers to the bit value ‘0’. Magnetic tapes are available in different sizes, but the major difference between different magnetic tape units is the speed at which the tape is moved past the read/write head and the tape's recording density. The amount of data or the number of binary digits that can be stored on a linear inch of tape is the recording density of the tape.
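Recording density directly determines how much data a reel can hold: capacity is density times usable length. A sketch with hypothetical reel figures (1600 bpi and 2400 feet are illustrative values, not from the text):

```python
def tape_capacity_bytes(density_bpi, length_feet):
    """Raw capacity of a magnetic tape: recording density (bytes per
    linear inch) times usable length. Actual usable capacity is lower,
    since inter-block gaps consume part of the tape."""
    return density_bpi * length_feet * 12   # 12 inches per foot

# hypothetical reel: 1600 bytes/inch, 2400 feet of tape
print(tape_capacity_bytes(1600, 2400))      # about 46 million bytes
```

Doubling either the density or the length doubles the raw capacity, which is why density was the key figure of merit when comparing tape units.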
Magnetic tapes are very durable and can be erased and reused. They are a cheap and reliable storage medium for organising archives and taking backups. However, they are not suitable for data files that need to be revised or updated often, because data on them is stored sequentially: each time, the user must advance or rewind the tape to the position where the requested data starts. Tapes are also slow due to the nature of the medium, and if a tape stretches too much, it becomes unusable for data storage and may result in data loss. Tape now has a limited role because the disk has proved to be a superior storage medium. Today, the primary role of the tape drive is to back up or duplicate the data stored on the hard disk, to protect the system against loss of data during power failures or computer malfunctions.
The magnetic tape is divided into vertical columns (frames) and horizontal rows (channels or tracks). Data is stored as a string of frames, one character per frame, with each frame spanning multiple tracks (usually 7 or 9). Thus, a single bit is stored in each track, giving one byte per frame; the remaining track (the 7th or 9th) stores the parity bit. When a byte is written to the tape, the number of 1s in the byte is counted and the parity bit is set to make the total number of 1s even (even parity) or odd (odd parity). When the tape is read again, the parity bit is checked to see whether any bit has been lost: in odd parity there must be an odd number of 1s for each character, and an even number in even parity.
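The parity rule described above can be sketched in a few lines (the function name is our own):

```python
def parity_bit(byte, even=True):
    """Compute the parity bit for an 8-bit value.

    With even parity, the bit is chosen so that the total count of 1s
    (data bits plus parity bit) is even; odd parity makes the total odd.
    """
    ones = bin(byte).count("1")
    if even:
        return ones % 2            # 1 only when the data already has an odd count
    return 1 - ones % 2

data = 0b01000001                  # ASCII 'A': two 1-bits
print(parity_bit(data, even=True))   # even parity: count is already even
print(parity_bit(data, even=False))  # odd parity: a 1 is needed to make the total odd
```

On reading, the drive recounts the 1s including the parity track; a mismatch with the expected parity flags a lost or flipped bit, although a two-bit error would go undetected.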
A magnetic tape drive uses two reels, a supply reel and a take-up reel. Both reels are mounted on hubs, and the tape moves from the supply reel to the take-up reel. Figure 1.54 shows the basic tape drive mechanism. The magnetic-oxide-coated side of the tape passes directly over the read/write head assembly, making contact with the heads. As the tape passes under the read/write head, data can either be read and transferred to primary memory, or be transferred from primary memory and written onto the tape.
Figure 1.53 Representing Data in Magnetic Tape
A magnetic tape is physically marked to indicate the locations where reading and writing on the tape begin (BOT, beginning of tape) and end (EOT, end of tape). The length of tape between BOT and EOT is referred to as the usable recording surface. BOT/EOT markers are usually short strips of reflective silver tape, sensed by an arrangement of lamps and/or photodiode sensors. On a magnetic tape, data is recorded in blocks, where each block consists of a grouping of data (known as records) that is written or read in a continuous manner. Between blocks, the computer automatically reserves some blank space called the inter-block gap (IBG). One block may contain one or more records, which are separated by blank space (usually 0.5 inch) known as the inter-record gap (IRG). When reading data from a moving tape, the tape is stopped whenever an IRG is reached, and it remains stationary until the record is processed.
Figure 1.54 Basic Tape Drive Mechanism
In the last few decades, computer technology has revolutionised businesses and other aspects of human life all over the world. Practically every company, large or small, is now directly or indirectly dependent on computers for data processing. Computer systems also help in the efficient operation of railway and airline reservation, hospital records, accounts, electronic banking, and so on. Computers not only save time but also reduce paperwork. Some of the areas where computers are being used are listed below.
Figure 1.55 Information Format of Magnetic Tape
Figure 1.56 Application Areas of Computer