Server Virtualization 27
responsible for managing the workflow as it passes through the system. The
primary goal is to achieve sequential consistency, in other words, to make
SMP systems appear to behave exactly like a single-processor, multiprogramming
platform. Engineers discovered that system performance could
be improved by roughly 10–20% by executing some instructions out of order.
However, programmers had to deal with the increased complexity and cope
with a situation where two or more programs might read and write the
same operands simultaneously. In practice, this difficulty affects only a
few programmers, because it arises only in rare circumstances. To this day,
the question of how SMP machines should behave when accessing shared
data remains unresolved.
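As a minimal sketch of the hazard (ordinary Python threads standing in for SMP processors, with illustrative names such as `increment_shared`), two threads perform read-modify-write cycles on the same operand. Without the lock, the read, add, and store steps of the two threads can interleave and updates are lost; the lock restores a single, sequentially consistent order of accesses:

```python
import threading

def increment_shared(iterations=100_000, use_lock=True):
    """Two threads read and write the same operand concurrently."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(iterations):
            if use_lock:
                with lock:
                    counter += 1  # protected read-modify-write
            else:
                counter += 1  # unsynchronized: updates may be lost

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(increment_shared())  # 200000 when the lock serializes access
```

With `use_lock=False`, the result can fall short of 200,000 on some runs, which is exactly the kind of nondeterminism the text describes.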
Data propagation time increases in proportion to the number of processors
added to an SMP system. Beyond a certain point (usually somewhere
around 40 to 50 processors), the performance gained by adding more
processors no longer justifies their additional cost. To solve the
problem of long data propagation times, message passing
systems were created. In these systems, programs that share data send mes-
sages to each other to announce that particular operands have been assigned
a new value. Instead of a global message announcing an operand’s new
value, the message is communicated only to those areas that need to know
the change. There is a network designed to support the transfer of messages
between applications. This allows a great number of processors (as many as
several thousand) to work in tandem in a system. These systems are highly
scalable and are called massively parallel processing (MPP) systems.
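The announcement scheme described above can be sketched with thread-safe queues standing in for the message network (the function and variable names here, such as `run_message_passing` and `inboxes`, are illustrative, not part of any real MPP system). A producer announces each new value of an operand by sending a message only to the consumers that subscribed to it, rather than updating a globally shared location:

```python
import queue
import threading

def run_message_passing(values, n_consumers=2):
    """Producer announces operand updates only to subscribed consumers."""
    inboxes = [queue.Queue() for _ in range(n_consumers)]  # one per interested processor
    received = [[] for _ in range(n_consumers)]

    def producer():
        for v in values:
            for box in inboxes:  # announce the operand's new value
                box.put(v)
        for box in inboxes:
            box.put(None)  # sentinel: no further updates

    def consumer(i):
        while True:
            msg = inboxes[i].get()
            if msg is None:
                break
            received[i].append(msg)  # local copy; no shared memory touched

    threads = [threading.Thread(target=producer)]
    threads += [threading.Thread(target=consumer, args=(i,)) for i in range(n_consumers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received

print(run_message_passing([1, 2, 3]))  # each subscriber sees [1, 2, 3]
```

Because each consumer owns its inbox, no two threads ever touch the same operand, which is what lets this style scale to thousands of processors.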
1.4.4 Massively Parallel Processing Systems
The term massively parallel processing is used in computer architecture circles to refer to
a computer system with many independent arithmetic units or entire
microprocessors that run in parallel.27 “Massive” connotes hundreds if
not thousands of such units. In this form of computing, all the processing
elements are interconnected to act as one very large computer. This
approach is in contrast to a distributed computing model, where massive
numbers of separate computers are used to solve a single problem (such as
in the SETI project, mentioned previously). Early examples of MPP systems
were the Distributed Array Processor, the Goodyear MPP, the Connection
Machine, and the Ultracomputer. In data mining, there is a need to per-
form multiple searches of a static database. The earliest massively parallel
27. http://en.wikipedia.org/wiki/Massive_parallel_processing, retrieved 10 Jan 2009.
Chap1.fm Page 27 Friday, May 22, 2009 11:24 AM