8.2 Why Networks and Multiprocessors?

Definitions

Programming a single CPU is hard enough. Why make life more difficult by adding more processors? A multiprocessor is, in general, any computer system with two or more processors coupled together. Multiprocessors used for scientific or business applications tend to have regular architectures: several identical processors that can access a uniform memory space. We use the term processing element (PE) to mean any unit responsible for computation, whether it is programmable or not. We use the term network (or interconnection network) to describe the interconnections between the processing elements.

Why so many?

Embedded system designers must take a more general view of the nature of multiprocessors. As we will see, embedded computing systems are built on top of the complete spectrum of multiprocessor architectures. Why is there no one multiprocessor architecture for all types of embedded computing applications? And why do we need embedded processors at all? The reasons for multiprocessors are the same reasons that drive all of embedded system design: real-time performance, power consumption, and cost.

Cost/performance

The first reason for using an embedded multiprocessor is that it offers significantly better cost/performance—that is, performance and functionality per dollar spent on the system—than could be obtained by spending the same amount of money on a uniprocessor system. The basic reason for this is that processing element purchase price is a nonlinear function of performance [Wol08]. The cost of a microprocessor increases greatly as the clock speed increases. We would expect this trend as a normal consequence of VLSI fabrication and market economics. Clock speeds follow a statistical distribution determined by variations in the VLSI fabrication process; because the fastest chips are rare, they naturally command a high price in the marketplace.

Because the fastest processors are very costly, splitting the application so that it can be performed on several smaller processors is much cheaper. Even with the added costs of assembling those components, the total system comes out to be less expensive. Of course, splitting the application across multiple processors does entail higher engineering costs and lead times, which must be factored into the project.
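The cost argument can be made concrete with a small sketch. The cost model below is purely illustrative—the cubic price function, the base price, and the assembly cost are assumptions invented for this example, not figures from the text—but it captures the nonlinear price-versus-performance trend described above.

```python
# Illustrative sketch only: the price model and all numbers here are
# assumptions, not data from the text. We model processor price as
# growing superlinearly with performance (cubically, in this sketch),
# consistent with the claim that price is a nonlinear function of
# performance.

def price(perf, base=10.0, exponent=3):
    """Hypothetical price of one processor delivering `perf` units."""
    return base * perf ** exponent

# One fast processor delivering 4 performance units:
single = price(4)                  # 10 * 4**3 = 640.0

# Four slower processors of 1 unit each, plus an assumed assembly cost:
assembly = 40.0
multi = 4 * price(1) + assembly    # 4 * 10 + 40 = 80.0

print(single, multi)               # the multiprocessor is far cheaper
```

Under any superlinear price curve, the same total performance bought as several slower parts costs less than one fast part, even after paying for integration—though, as noted above, the engineering cost of splitting the application is not captured by this sketch.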

Real-time performance

In addition to reducing costs, using multiple processors can also help with real-time performance. We can often meet deadlines and remain responsive to interaction much more easily when time-critical processes run on their own processors. Because scheduling multiple processes on a single CPU incurs overhead in most realistic scheduling models, as discussed in Chapter 6, putting time-critical processes on PEs that do little or no time-sharing reduces scheduling overhead. And because we pay for that overhead at the processor's nonlinear rate, as illustrated in Figure 8.1, the savings from segregating time-critical processes can be large—it may take an extremely large and powerful CPU to provide the same responsiveness that can be had from a distributed system.


Figure 8.1 Scheduling overhead is paid for at a nonlinear rate.
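The effect of scheduling overhead can be sketched with a simple utilization check. The task parameters and the per-job overhead below are invented for illustration; the point is only that overhead paid on every job of every task inflates the utilization of a shared CPU, while dedicating a PE to each time-critical task avoids that overhead.

```python
# Hedged sketch: the task set and the per-job scheduling overhead are
# hypothetical values chosen for illustration, not from the text.

tasks = [(1.0, 4.0), (2.0, 8.0)]   # (execution time, period) in ms
overhead = 0.6                      # assumed scheduling overhead per job, ms

def utilization(task_set, per_job_overhead):
    """CPU utilization with each job's execution inflated by overhead."""
    return sum((c + per_job_overhead) / t for c, t in task_set)

# Both tasks time-shared on one CPU, paying overhead on every job:
u_shared = utilization(tasks, overhead)           # 1.6/4 + 2.6/8 = 0.725

# Each task on its own PE with essentially no time-sharing overhead:
u_split = [utilization([t], 0.0) for t in tasks]  # [0.25, 0.25]

print(u_shared, u_split)
```

Here the shared CPU is already 72.5% loaded, while each dedicated PE runs at 25%—so each PE can be a much smaller, cheaper part, which is exactly the nonlinear-cost savings the figure illustrates.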

Cyber-physical considerations

We may also need to use multiple processors to put some of the processing power near the physical systems being controlled. Cars, for example, put control elements near the engine, brakes, and other major components. Analog and mechanical needs often dictate that critical control functions be performed very close to the sensors and actuators.

Power

Many of the technology trends that encourage us to use multiprocessors for performance also lead us to multiprocessing for low-power embedded computing. Several processors running at slower clock rates consume less power than a single large processor: performance scales roughly linearly with power supply voltage, but dynamic power scales with the square of the voltage (V²).
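A quick calculation shows why this works, using the standard dynamic power model P ∝ C·V²·f. The specific voltages and frequencies below are assumptions for illustration; the key point is that two cores at half the clock rate, running at a correspondingly lower voltage, deliver the same aggregate throughput for less total power.

```python
# Sketch under the standard dynamic power model P = C * V^2 * f.
# The voltage and frequency values are assumed for illustration, not
# taken from the text. Performance is assumed to scale roughly linearly
# with supply voltage, so halving the clock also lets us lower V.

def dyn_power(c, v, f):
    """Dynamic power under the C * V^2 * f model (arbitrary units)."""
    return c * v * v * f

C = 1.0
# One processor at full voltage and full clock rate:
p_single = dyn_power(C, v=1.2, f=1.0)    # 1.2^2 * 1.0 = 1.44 units

# Two processors, each at half the clock and a lower voltage,
# providing the same aggregate throughput:
p_dual = 2 * dyn_power(C, v=0.8, f=0.5)  # 2 * 0.8^2 * 0.5 = 0.64 units

print(p_single, p_dual)                  # two slower cores use less power
```

In this sketch the two-processor configuration uses less than half the dynamic power of the single fast processor for the same nominal throughput, which is the trade that makes multiprocessing attractive for energy-constrained systems.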

Austin et al. [Aus04] showed that general-purpose computing platforms aren't keeping up with the strict energy budgets of battery-powered embedded computing. Figure 8.2 compares the power requirements of desktop processors with the power available from batteries. Batteries can provide only about 75 mW of power, while desktop processors require close to 1,000 times that amount to run. That huge gap cannot be closed by tweaking processor architectures or software. Multiprocessors provide a way to break through this power barrier and build substantially more efficient embedded computing platforms.


Figure 8.2 Power consumption trends for desktop processors [Aus04] © 2004 IEEE Computer Society.
