1.9. System-Design Evolution

Semiconductor advances achieved through the relentless application of Moore’s law have significantly influenced the evolution of system design since microprocessors became ubiquitous in the 1980s. Figure 1.10 shows how minimum microprocessor feature size has tracked Moore’s law since the introduction of the Intel 4004, which used 10-micron (10,000-nm) lithography. The figure also incorporates ITRS 2005 (International Technology Roadmap for Semiconductors) projections to the year 2020, when the minimum feature size is expected to be an incredibly tiny 14 nm. Each reduction in feature size produces a corresponding increase in the number of transistors that will fit on a chip. Presently, Intel’s dual-core Itanium-2 microprocessor holds the record for the largest number of transistors on a microprocessor chip at 1.72 billion; most of the Itanium-2’s on-chip transistors are devoted to memory. In the 21st century, SOCs routinely contain tens of millions to several hundred million transistors.

Figure 1.10. The relentless decrease in feature size that has slavishly followed Moore’s law for decades fuels rapid complexity increases in SOC designs.
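Because transistor density scales roughly with the inverse square of the minimum feature size, each lithography shrink compounds quickly. The short program below is a rough back-of-the-envelope sketch (not taken from the text) that estimates the density gain from the 4004’s 10,000-nm process to the projected 14-nm node; the 2-year doubling period used for comparison is an illustrative assumption.

#include <math.h>
#include <stdio.h>

/* Rough back-of-the-envelope sketch: transistor density scales
 * approximately as 1 / (feature size)^2, so shrinking the minimum
 * feature from the 4004's 10,000 nm to a projected 14 nm implies an
 * enormous density gain. All figures here are illustrative only.    */
int main(void)
{
    double old_feature_nm = 10000.0;  /* Intel 4004 era (1971)       */
    double new_feature_nm = 14.0;     /* ITRS 2005 projection, 2020  */

    double density_gain = pow(old_feature_nm / new_feature_nm, 2.0);
    printf("Approximate density gain: %.0fx\n", density_gain);

    /* Moore's-law view of the same span: doubling transistor counts
     * roughly every two years (an assumed period) for 49 years.     */
    double doublings = (2020.0 - 1971.0) / 2.0;
    printf("Doublings at ~2-year intervals: %.1f (~%.0e x)\n",
           doublings, pow(2.0, doublings));
    return 0;
}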


A series of system-level snapshots at 5-year intervals illustrates how system design has changed, and how it has also clung to the past. Figure 1.11 shows a typical electronic system, circa 1985. At this point in the evolution of system design, microprocessors have been available for nearly 15 years and microprocessor-based system design is now the rule rather than the exception. Packaged microprocessor ICs are combined with standard RAM, ROM, and peripheral ICs, and this collection of off-the-shelf LSI chips is arrayed on a multilayer printed-circuit board. Glue logic provides the circuits needed to make all of the standard LSI chips work together as a system.

Figure 1.11. By 1985, microprocessor-based system design using standard LSI parts and printed-circuit boards was common.


In this design era, the glue-logic chips are probably bipolar 74LS-series TTL chips, but more advanced designers are using small, field-programmable chips (usually fast, bipolar PROMs or PALs made by Monolithic Memories) for glue. Note the single processor bus that links all of the chips together. Even though microprocessors have been available for nearly a decade and a half, the block diagram shown in Figure 1.11 could easily represent a system designed with Intel’s original 4004 microprocessor.

Five years later, in 1990, system designers were still largely working at the board level with standard packaged microprocessor ICs. However, much of the glue logic had migrated into one or more ASICs (for high-volume systems) or FPGAs (for lower-volume systems), as shown in Figure 1.12. These ASICs were usually too small (they had insufficient gate count) to incorporate microprocessor cores along with the glue logic.

Figure 1.12. By 1990, system design was still mostly done at the board level but system designers started to use ASICs and FPGAs to consolidate glue logic into one or two chips.


ASIC capacities had advanced enough to include a processor core by 1995, initiating the SOC design era. Despite the added flexibility and routability afforded by on-chip system design, system block diagrams continued to look much like their board-level predecessors, as illustrated in Figure 1.13. In general, system designers used the new silicon SOC technology in much the same way they had used printed-circuit boards.

Figure 1.13. Although some system designs had fully migrated onto one chip by 1995, system block diagrams continued to closely resemble earlier board-level designs.


By the year 2000, advances in IC lithography permitted the incorporation of processor cores with ever-higher clock rates, which created a mismatch between the processor’s bus speed and the slower peripheral devices. To decouple the fast processor-memory subsystem from the slower peripheral sections of the SOC, system designers started to use on-chip bus hierarchies with fast and slow buses separated by a bus bridge, as shown in Figure 1.14. This topology allows the high-speed bus to shrink in physical size, which reduces its capacitance and allows the processor’s memory bus to keep up with the processor’s high clock rates. Logically, however, the block diagram of a system designed in the year 2000 still closely resembles one designed in 1985.

Figure 1.14. By the year 2000, system designers were splitting the on-chip bus into a small, fast processor-memory bus and a slower peripheral bus. A hierarchical bus topology allowed chip designers to greatly reduce the capacitance of the high-speed bus by making it physically smaller. Even so, the SOC’s logical block diagram continued to strongly resemble single-processor, board-level systems designed 15 years earlier.
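To make the split concrete, the fragment below is a minimal, hypothetical sketch of how firmware might see such a hierarchy: the processor-memory bus and the peripheral bus occupy separate address regions, and accesses to the peripheral region simply cross the bridge. The addresses, register names, and the UART device are invented for illustration and are not taken from the text.

#include <stdint.h>

/* Hypothetical SOC address map illustrating a hierarchical bus:
 * the processor sees one flat address space, but accesses to the
 * peripheral region are forwarded through a bus bridge onto the
 * slower peripheral bus. Addresses and devices are invented.       */
#define FAST_SRAM_BASE    0x00000000u  /* on the fast processor-memory bus */
#define PERIPH_BUS_BASE   0x40000000u  /* behind the bus bridge            */

#define UART0_BASE        (PERIPH_BUS_BASE + 0x1000u)
#define UART0_TXDATA      (*(volatile uint32_t *)(UART0_BASE + 0x00u))
#define UART0_STATUS      (*(volatile uint32_t *)(UART0_BASE + 0x04u))
#define UART0_TX_READY    (1u << 0)

/* Writing to the UART looks like any other store; the bridge hides
 * the slower peripheral-bus timing from the processor core.         */
static void uart_putc(char c)
{
    while ((UART0_STATUS & UART0_TX_READY) == 0)
        ;                       /* wait for the slow peripheral      */
    UART0_TXDATA = (uint32_t)c;
}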


Present-day SOC design has started to break with the single-processor system model that has dominated since 1971. Figure 1.15 shows a 2-processor SOC design with a control-plane processor and a data-plane processor. Each processor has its own bus, and the two processors share a set of peripheral devices by communicating over bus bridges to a separate peripheral bus. This arrangement is an extension of the bus hierarchy discussed in connection with Figure 1.14.

Figure 1.15. Present-day SOC design has started to employ multiple processors instead of escalating clock rates to achieve processing goals.
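One common way for two such processors to cooperate is a shared-memory mailbox: the control-plane processor posts work descriptors that the data-plane processor consumes. The sketch below is a hypothetical, simplified illustration of that idea; the structure, field names, and shared-RAM address are assumptions rather than details from the text, and a real design would add interrupts, memory barriers, and cache management.

#include <stdint.h>

/* Hypothetical shared-memory mailbox between a control-plane and a
 * data-plane processor. The control-plane CPU fills in a descriptor
 * and sets 'valid'; the data-plane CPU polls, processes the buffer,
 * and clears the flag. This is only an illustrative sketch.         */
typedef struct {
    volatile uint32_t valid;     /* 1 = descriptor owned by data plane */
    uint32_t          buf_addr;  /* physical address of payload        */
    uint32_t          buf_len;   /* payload length in bytes            */
    uint32_t          opcode;    /* e.g., encode, decode, filter       */
} mailbox_t;

#define MAILBOX_ADDR 0x20000000u             /* invented shared-RAM address */
#define MAILBOX      ((mailbox_t *)MAILBOX_ADDR)

/* Control-plane side: hand a buffer to the data-plane processor. */
static void post_work(uint32_t addr, uint32_t len, uint32_t op)
{
    while (MAILBOX->valid)       /* wait until the previous job is taken */
        ;
    MAILBOX->buf_addr = addr;
    MAILBOX->buf_len  = len;
    MAILBOX->opcode   = op;
    MAILBOX->valid    = 1;       /* publish the descriptor               */
}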


The terms “control plane” and “data plane” came into use during the Internet boom of the late 1990s and the early part of the 21st century. At first, these terms referred largely to the design of multiple-board networking systems. High-speed network data passed through high-performance processors and hardware accelerators on a high-speed circuit board—called the data plane. Overall system control did not require such high performance, so the control task was given to a general-purpose processor on a separate circuit board—called the control plane. These terms have now become universal because they suitably describe many processing systems, such as video-encoding and video-decoding designs, that must handle high-speed data and perform complex control.

A 2-processor design approach has also become very common in the design of voice-only mobile-telephone handsets. A general-purpose processor (almost universally an ARM RISC processor due to legacy-software and type-approval considerations) handles the handset’s operating system, user interface, and protocol stack. A DSP (digital signal processor) handles the mobile phone’s baseband and voice processing: essentially DSP functions such as fast Fourier transforms (FFTs) and inverse FFTs, symbol coding and decoding, filtering, and so on. The two processors likely run at different clock rates to minimize power consumption. Processing bandwidth is finely tuned to be just enough for voice processing, which minimizes product cost (mobile-phone handset designs are sensitive to product-cost differentials measured in fractions of a penny because they sell in the hundreds of millions of units per year) and also minimizes power dissipation, which maximizes battery life, talk time, and standby time.
