5.4 Parallel Architectures

If a problem can be solved in n time units on a computer with one processor (von Neumann machine), can it be solved in n/2 time units on a computer with two processors, or n/3 on a computer with three processors? This question has led to the rise of parallel computing architectures.

Parallel Computing

There are four general forms of parallel computing: bit level, instruction level, data level, and task level.

Bit-level parallelism is based on increasing the word size of a computer. In an 8-bit processor, an operation on a 16-bit data value would require two operations: one for the upper 8 bits and one for the lower 8 bits. A 16-bit processor could do the operation in one instruction. Thus, increasing the word size reduces the number of operations on data values larger than the word size. The current trend is to use 64-bit processors.
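
To make the word-size tradeoff concrete, here is a minimal sketch in Go (the function name add16With8BitALU and the sample values are our own illustration, not anything defined in this chapter) showing a 16-bit addition carried out as two 8-bit additions linked by a carry, next to the single addition a 16-bit processor would perform.

```go
package main

import "fmt"

// add16With8BitALU adds two 16-bit values the way an 8-bit processor would:
// one addition for the low bytes, then a second for the high bytes plus the
// carry out of the first.
func add16With8BitALU(a, b uint16) uint16 {
	loA, hiA := uint8(a), uint8(a>>8)
	loB, hiB := uint8(b), uint8(b>>8)

	loSum := uint16(loA) + uint16(loB) // first 8-bit operation
	carry := uint8(loSum >> 8)         // carry out of the low byte
	hiSum := hiA + hiB + carry         // second 8-bit operation

	return uint16(hiSum)<<8 | uint16(uint8(loSum))
}

func main() {
	a, b := uint16(0x12F0), uint16(0x0345)
	fmt.Printf("two 8-bit adds: %#x\n", add16With8BitALU(a, b)) // 0x1635
	fmt.Printf("one 16-bit add: %#x\n", a+b)                    // a 16-bit processor needs just this
}
```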

Instruction-level parallelism is based on the idea that some instructions in a program can be carried out independently in parallel. For example, if a program requires operations on unrelated data, these operations can be done at the same time. A superscalar is a processor that can recognize this situation and take advantage of it by sending instructions to different functional units of the processor. Note that a superscalar machine does not have multiple processors but does have multiple execution resources. For example, it might contain separate ALUs for working on integers and real numbers, enabling it to simultaneously compute the sum of two integers and the product of two real numbers. Such resources are called execution units.
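
The short sketch below (in Go, with made-up values) shows the kind of opportunity a superscalar processor looks for: the integer sum and the floating-point product do not depend on each other, so the hardware could dispatch them to separate execution units, whereas the dependent chain that follows must be carried out in order no matter how many units exist.

```go
package main

import "fmt"

func main() {
	a, b := 3, 4
	x, y := 2.5, 1.5

	// No data dependence between these two statements: a superscalar processor
	// could send the integer add to an integer ALU and the floating-point
	// multiply to a floating-point unit at the same time.
	sum := a + b
	product := x * y

	// A dependent chain: each statement needs the previous result, so these
	// cannot be issued in parallel.
	c := a + b
	d := c * 2
	e := d - 1

	fmt.Println(sum, product, c, d, e)
}
```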

Data-level parallelism is based on the idea that a single set of instructions can be run on different data sets at the same time. This type of parallelism is called SIMD (single instruction, multiple data) and relies on a control unit directing multiple ALUs to carry out the same operation, such as addition, on different sets of operands. This approach, which is also called synchronous processing, is effective when the same process needs to be applied to many data sets. For example, increasing the brightness of an image involves adding a value to every one of several million pixels. These additions can all be done in parallel. See FIGURE 5.8.

FIGURE 5.8 Processors in a synchronous computing environment
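
As a rough software analogy to the brightness example, the sketch below (in Go; the brighten function and its goroutine-per-chunk scheme are our own illustration, since real SIMD hardware uses vector instructions rather than threads) applies the same "add a value" operation to different slices of the pixel data at the same time.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// brighten adds delta to every pixel, splitting the image across worker
// goroutines so that each worker applies the same operation to a different
// slice of the data. Overflow saturation is ignored to keep the sketch short.
func brighten(pixels []uint8, delta uint8) {
	workers := runtime.NumCPU()
	chunk := (len(pixels) + workers - 1) / workers

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		start := w * chunk
		if start >= len(pixels) {
			break
		}
		end := start + chunk
		if end > len(pixels) {
			end = len(pixels)
		}
		wg.Add(1)
		go func(part []uint8) {
			defer wg.Done()
			for i := range part {
				part[i] += delta // same operation, different operands
			}
		}(pixels[start:end])
	}
	wg.Wait()
}

func main() {
	image := []uint8{10, 20, 30, 40, 50, 60, 70, 80}
	brighten(image, 5)
	fmt.Println(image) // [15 25 35 45 55 65 75 85]
}
```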

Task-level parallelism is based on the idea that different processors can execute different tasks on the same or different data sets. If the different processors are operating on the same data set, then it is analogous to pipelining in a von Neumann machine. When this organization is applied to data, the first processor does the first task. Then the second processor starts working on the output from the first processor, while the first processor applies its computation to the next data set. Eventually, each processor is working on one phase of the job, each getting material or data from the previous stage of processing, and each in turn handing over its work to the next stage. See FIGURE 5.9.

FIGURE 5.9 Processors in a pipeline
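
A pipeline can be sketched in software as a chain of stages connected by queues. In the Go sketch below (the average and curve stages are hypothetical, chosen to echo the grading example discussed next), the second stage works on one student's result while the first stage is already processing the next student's marks.

```go
package main

import "fmt"

// average reads raw marks and emits each student's mean score.
func average(in <-chan []float64) <-chan float64 {
	out := make(chan float64)
	go func() {
		defer close(out)
		for marks := range in {
			sum := 0.0
			for _, m := range marks {
				sum += m
			}
			out <- sum / float64(len(marks))
		}
	}()
	return out
}

// curve adds a flat 5-point curve to each score produced by the previous stage.
func curve(in <-chan float64) <-chan float64 {
	out := make(chan float64)
	go func() {
		defer close(out)
		for score := range in {
			out <- score + 5
		}
	}()
	return out
}

func main() {
	marks := make(chan []float64)
	go func() {
		defer close(marks)
		marks <- []float64{80, 90, 70}
		marks <- []float64{60, 65, 55}
	}()

	// Each stage runs concurrently, handing its output to the next stage.
	for final := range curve(average(marks)) {
		fmt.Printf("final score: %.1f\n", final)
	}
}
```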

In a data-level environment, each processor is doing the same thing to a different data set. For example, each processor might be computing the grades for a different class. In the pipelining task-level example, each processor contributes to the grade for the same class. Another approach to task-level parallelism is to have different processors doing different things with different data. This configuration allows processors to work independently much of the time, but it introduces problems of coordination among the processors. This leads to a configuration in which each processor has both a local memory and access to a shared memory. The processors use the shared memory for communication, so the configuration is called a shared-memory parallel processor. See FIGURE 5.10.

FIGURE 5.10 A shared-memory parallel processor
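
The following Go sketch is a simplified analogy of this configuration (the mutex and the partial-sum task are our own choices): each worker keeps a private local value, standing in for local memory, and uses a single shared variable only to communicate its result.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Shared memory: a total that every worker can read and update.
	// The mutex stands in for whatever keeps shared access consistent.
	var (
		mu     sync.Mutex
		shared int
	)

	data := [][]int{{1, 2, 3}, {4, 5}, {6, 7, 8, 9}}

	var wg sync.WaitGroup
	for _, part := range data {
		wg.Add(1)
		go func(part []int) {
			defer wg.Done()
			local := 0 // "local memory": private to this worker
			for _, v := range part {
				local += v
			}
			mu.Lock() // communicate the result through shared memory
			shared += local
			mu.Unlock()
		}(part)
	}
	wg.Wait()
	fmt.Println("combined result:", shared) // 45
}
```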

Classes of Parallel Hardware

The classes of parallel hardware reflect the various types of parallel computing. Multicore processors have multiple independent cores, usually CPUs. Whereas a superscalar processor can issue multiple instructions to its execution units, a multicore processor contains several independent cores, each of which can issue multiple instructions to its own execution units.

Symmetric multiprocessors (SMPs) have multiple identical cores. They share memory, and a bus connects them. The number of cores in an SMP is usually limited to about 32. A distributed computer is one in which multiple memory units are connected through a network. A cluster is a group of stand-alone machines connected through an off-the-shelf network. A massively parallel processor is a computer with many networked processors connected through a specialized network. This kind of device usually has more than 1000 processors.

The distinctions between the classes of parallel hardware are being blurred by modern systems. A typical processor chip today contains two to eight cores that operate as an SMP. These are then connected via a network to form a cluster. Thus, it is common to find a mix of shared and distributed memory in parallel processing. In addition, graphics processors that support general-purpose data-parallel processing may be connected to each of the multicore processors. Given that each of the cores is also applying instruction-level parallelism, you can see that modern parallel computers no longer fall into one or another specific classification. Instead, they typically embody all of the classes at once. They are distinguished by the particular balance that they strike among the different classes of parallel processing they support. A parallel computer that is used for science may emphasize data parallelism, whereas one that is running an Internet search engine may emphasize task-level parallelism.

SUMMARY

The components that make up a computer cover a wide range of devices. Each component has characteristics that dictate how fast, large, and efficient it is. Furthermore, each component plays an integral role in the overall processing of the machine.

The world of computing is filled with jargon and acronyms. The speed of a processor is specified in GHz (gigahertz); the amount of memory is specified in MB (megabytes), GB (gigabytes), and TB (terabytes); and a display screen is specified in pixels.

The von Neumann architecture is the underlying architecture of most of today’s computers. It has five main parts: memory, the arithmetic/logic unit (ALU), input devices, output devices, and the control unit. The fetch–execute cycle, under the direction of the control unit, is the heart of the processing. In this cycle, instructions are fetched from memory, decoded, and executed.
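
As an illustration only, the Go sketch below simulates the cycle on a made-up accumulator machine with four invented opcodes. Each pass through the loop fetches an instruction and its operand from memory, advances the program counter, decodes the opcode, and executes it; program and data share the same memory, as the stored-program concept requires.

```go
package main

import "fmt"

// Invented opcodes for a toy accumulator machine, used only to illustrate
// the fetch-decode-execute loop.
const (
	LOAD  = iota // acc = memory[operand]
	ADD          // acc += memory[operand]
	STORE        // memory[operand] = acc
	HALT         // stop the machine
)

func main() {
	// Program (addresses 0-7) and data (addresses 8-10) share one memory.
	memory := []int{
		LOAD, 8, // acc = 40
		ADD, 9, // acc = 42
		STORE, 10, // memory[10] = acc
		HALT, 0,
		40, 2, 0,
	}

	acc := 0 // accumulator
	pc := 0  // program counter

	for {
		opcode, operand := memory[pc], memory[pc+1] // fetch
		pc += 2                                     // point to the next instruction
		switch opcode {                             // decode, then execute
		case LOAD:
			acc = memory[operand]
		case ADD:
			acc += memory[operand]
		case STORE:
			memory[operand] = acc
		case HALT:
			fmt.Println("result:", memory[10]) // 42
			return
		}
	}
}
```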

RAM and ROM are acronyms for two types of computer memory. RAM stands for random-access memory; ROM stands for read-only memory. The values stored in RAM can be changed; those in ROM cannot.

Secondary storage devices are essential to a computer system. These devices retain data even when the computer is not running. Magnetic tape, magnetic disk, and flash drives are three common types of secondary storage.

Touch screens are peripheral devices that serve both input and output functions and are appropriate in specific situations such as restaurants and information kiosks. They respond to a human touching the screen with a finger or stylus, and they can determine the location on the screen where the touch occurred. Several touch screen technologies exist, including resistive, capacitive, infrared, and surface acoustic wave (SAW) touch screens. They have varying characteristics that make them appropriate in particular situations.

Although von Neumann machines are by far the most common, other computer architectures have emerged. For example, there are machines with more than one processor so that calculations can be done in parallel, thereby speeding up the processing.

KEY TERMS

EXERCISES

For Exercises 1–16, match the power of 10 to its name or use.

  A. 10⁻¹²

  B. 10⁻⁹

  C. 10⁻⁶

  D. 10⁻³

  E. 10³

  F. 10⁶

  G. 10⁹

  H. 10¹²

  I. 10¹⁵

  1. Nano-

  2. Pico-

  3. Micro-

  4. Milli-

  5. Tera-

  6. Giga-

  7. Kilo-

  8. Mega-

  9. Often used to describe processor speed

  10. Often used to describe size of memory

  11. Used in relation to Internet speeds

  12. Latin for “thousandth”

  13. Italian for “little”

  14. Peta-

  15. Roughly equivalent to 2¹⁰

  16. Roughly equivalent to 2⁵⁰

For Exercises 17–23, match the acronym with its most accurate definition.

  A. CD-ROM

  B. CD-DA

  C. CD-R

  D. DVD

  E. CD-RW

  F. DL DVD

  G. Blu-Ray

  17. Format using two layers

  18. Data is stored in the sectors reserved for timing information in another variant

  19. Can be read many times, but written after its manufacture only once

  20. Can be both read from and written to any number of times

  21. Format used in audio recordings

  22. A new technology storing up to 50 GB

  23. The most popular format for distributing movies

Exercises 24–66 are problems or short-answer exercises.

  24. Define the following terms:

    1. Core 2 processor

    2. Hertz

    3. Random access memory

  25. What does FSB stand for?

  26. What does it mean to say that a processor is 1.4 GHz?

  27. What does it mean to say that memory is 133 MHz?

  28. How many bytes of memory are there in the following machines?

    1. 512 MB machine

    2. 2 GB machine

  29. Define RPM and discuss what it means in terms of speed of access to a disk.

  30. What is the stored-program concept, and why is it important?

  31. What does “units that process information are separate from the units that store information” mean in terms of computer architecture?

  32. Name the components of a von Neumann machine.

  33. What is the addressability of an 8-bit machine?

  34. What is the function of the ALU?

  35. Which component in the von Neumann architecture would you say acts as the stage manager? Explain.

  36. Punched cards and paper tape were two early input/output media. Discuss their advantages and disadvantages.

  37. What is an instruction register, and what is its function?

  38. What is a program counter, and what is its function?

  39. List the steps in the fetch–execute cycle.

  40. Explain what is meant by “fetch an instruction.”

  41. Explain what is meant by “decode an instruction.”

  42. Explain what is meant by “execute an instruction.”

  43. Compare and contrast RAM and ROM.

  44. What is a secondary storage device, and why are such devices important?

  45. Discuss the pros and cons of using magnetic tape as a storage medium.

  46. What are the four measures of a disk drive’s efficiency?

  47. Define what is meant by a block of data.

  48. What is a cylinder?

  49. Define the steps that a hard disk drive goes through to transfer a block of data from the disk to memory.

  50. Distinguish between a compact disc and a magnetic disk.

  51. Describe a parallel architecture that uses synchronous processing.

  52. Describe a parallel architecture that uses pipeline processing.

  53. How does a shared-memory parallel configuration work?

  54. How many different memory locations can a 16-bit processor access?

  55. Why is a faster clock not always better?

  56. Why is a larger cache not necessarily better?

  57. In the ad, why is the 1080p specification for the screen not entirely true?

  58. Keep a diary for a week of how many times the terms hardware and software appear in commercials on TV shows you are watching.

  59. Take a current ad for a laptop computer and compare that ad with the one shown at the beginning of this chapter.

  60. What is the common name for the disk that is a secondary storage device?

  61. To what does the term pixels refer?

  62. What is a GPU?

  63. If a battery in a laptop is rated for 80 WHr, and the laptop draws 20 watts, how long will it run?

  64. What is the difference between 1K of memory and a 1K transfer rate?

  65. Compare and contrast a DVD-ROM and a flash drive.

  66. Giga- can mean both 10⁹ and 2³⁰. Explain to what each refers. Can this cause confusion when reading a computer advertisement?

THOUGHT QUESTIONS

  1. Has your personal information ever been stolen? Has any member of your family experienced this?

  2. How do you feel about giving up your privacy for the sake of convenience?

  3. All secrets are not equal. How does this statement relate to issues of privacy?

  4. People post all sorts of personal information on social media sites. Does this mean they no longer consider privacy important?
