© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
C. Richard, Understanding Semiconductors, https://doi.org/10.1007/978-1-4842-8847-4_5

5. Tying the System Together

Corey Richard1  
(1)
San Francisco, CA, USA
 

Millions of dollars and a year of our design team's hard work have been spent, but it was well worth it! The fab has finished and shipped your order – 100,000 freshly minted custom processors are on the way. What now? If you're reading this on a Kindle or a laptop, you may have noticed that a screen is in front of you, not a die-package assembly. The truth of the matter is that a well-designed IC is only as good as the system it's a part of. As Moore's Law slows down and companies lean more heavily on functional scaling to meet the demands of their customers, system integration has become increasingly important. From advanced interconnects and next-generation IC packaging to signal integrity and power distribution networks, in this chapter we'll explore the technologies that tie systems together.

What Is a System?

The word system can be used at different levels to describe a fully functional and discrete structure, like a laptop, or any number of subsystems or modules fully equipped to tackle a specific task within the larger design – think of the screen and LED drivers that make up the display module on the Kindle that we mentioned earlier. Many devices are made from a collection of different ICs and components that may have been designed by entirely separate companies using proprietary methods and unique microarchitectures – integrating all of them is a complicated and difficult task. In addition, connection points between each module are often a bottleneck for data flow, which can increase latency and slow down the overall system. To connect the various sub-systems and make sure power and data get to the right place, a well-designed network of interconnects, strong packaging technologies, and good signal and power integrity analysis are vital to developing a high-functioning system.

Input/Output (I/O)

An integral part of what ties the system together are interconnects, commonly termed input/output (I/O). Within a single chip, interconnects are the wiring that connects different components, like transistors, with one another to form logic gates and other functional building blocks. At a higher level, interconnects refer to the connections between the chip and the PCB, other components or chips on the PCB, and other parts of the system as a whole. Large chips can have as much as 30 miles of stacked interconnect wiring occupying the various layers of an IC (these layers are formed during the BEOL stage of the front-end manufacturing process covered in the previous chapter) (Zhao, 2017). In sum, interconnects serve as the gateway through which individual components and functional blocks interact with the rest of the system. The system designer has many options for connecting the I/O within the system, all of which starts with selecting the proper package for each IC.

IC Packaging

As we saw in Chapter 4, a finished chip is placed in an IC package during the final stage in the electronic manufacturing cycle. Or in the case of wafer-level packaging, the finished chip is already packaged before being cut and separated from the rest of the wafer. Electronic packaging protects the chip from the outside world and supports the electrical interconnects that connect the device to the rest of the system. While we briefly introduced electronic packaging in Chapter 4, here we’ll get to drill down and explore all the different package varieties at a deeper level. Feel free to skip ahead and check out Figure 5-2 on Packaging Structures and Architectures to help visualize the various components and configurations as we parse through our discussion of each packaging type.

The most widely used package types include

  • Wire Bond – When ICs were first beginning to gain traction, packaging interconnects were limited to wire bonded connections running from bond pads on the IC to landing pads inside the package, which connected to pins soldered onto the external board or substrate supporting the rest of the system (Gupta & Franzon, 2020). This arrangement limits the number of possible interconnects from an individual chip, since the pads can only be located along the border of the die and package.

  • Flip-Chip Packaging – Flip-chip packaging solves this area problem by enabling interconnects to be placed anywhere within the borders of the die itself, not just along its edges (Gupta & Franzon, 2020). It accomplishes this by adding steps to the back end of the manufacturing and assembly process.

  • The flip-chip attachment process has six main steps. We touched on this back in Chapter 4, but will do a more detailed breakdown of flip-chip technology here.

  1. Individual ICs are manufactured through wafer fabrication. Attachment pads on the die are treated to make them more receptive to soldering.

  2. Small bits of metal called solder balls are placed on each of the IC attachment pads through a process called wafer bumping (Tsai et al., 2004).

  3. After wafer bumping, individual ICs are cut apart through wafer dicing and flipped over.

  4. The solder balls on the flipped die are aligned with a pattern of corresponding bond pads (called a ball grid array or BGA) on a substrate or PCB using high-precision robots.

  5. The solder balls are remelted and soldered to the underlying substrate or PCB in a process called flip-chip bonding (Tsai et al., 2004).

  6. Finally, the spaces in between the solder ball interconnects are filled in with materials called underfills that mechanically and thermally support the newly mounted chip (Tsai et al., 2004).
Figure 5-1 helps us better understand each step of the flip-chip bonding process. Steps 1 through 4 are pictured in ascending order from left to right. Step 5 (flip-chip bonding) accounts for the bottom-left two figures, while Step 6 (underfill) accounts for the bottom-right two. Note that this drawing is not at all to scale – the solder balls are very small, typically just a hundred microns across.


Figure 5-1

Flip-Chip Bonding Process (Twisp, 2008)

  • Wafer-Level (Chip-Scale) Packaging – In traditional manufacturing processes, the chips on a silicon wafer are cut apart into individual die before they are placed in their respective packages. Wafer-level packaging, however, begins the packaging process before the wafer is even diced (Lee, 2017). Cutting the die apart later in the process creates a smaller die-package assembly that is approximately the size of the chip itself – this is why wafer-level packaging is often referred to as chip-scale packaging (CSP). You can picture this as the difference between putting something in a box vs. wrapping it with wrapping paper. The box takes extra room, while the paper sticks to the edges of whatever you're wrapping. The room saved by wafer-level packaging is especially valuable for applications with limited area requirements, like mobile devices. CSP technology only works for small die sizes, however, which limits its usage.

  • Multi-Chip-Modules (MCM) and System-in-Package (SiP) – These two approaches integrate multiple die into a single package and are also useful for limited-space applications. The two are similar except that while MCMs integrate multiple die on a 2D surface, SiPs integrate die both horizontally (2D) and vertically (2.5/3D) (Lau, 2017). The advantage of MCMs and SiPs is that they allow engineers to modularize parts of the design and more easily incorporate licensed IP (Gupta & Franzon, 2020). Instead of having to include a CPU, memory, and a GPU all on one SoC, for example, each could be designed separately and then packaged together in one larger module. Engineers can mix and match each block with other parts, but at the cost of performance and power disadvantages compared to more tightly integrated SoCs. While suffering some integration disadvantages, modular packaging architectures allow the use of a cheaper silicon process for the analog functions of the system, reserving the expensive lower-geometry processes for the critical high-speed processing and memory functions. SiPs also allow passive devices like capacitors and inductors to be integrated into a single package, which can improve performance by minimizing the distance between components. The tradeoff between SoC chip-level monolithic integration and package-level heterogeneous integration is a pressing question facing many system architects and engineering leaders today – we will explore this topic in greater detail in future chapters.

  • 2.5/3D Packaging – Advances in IC packaging have improved performance and made electronic systems more efficient. It used to be that all packages and their enclosed chips were attached to a PCB or other substrate through metal pins, which then connected that chip to another part of the system through a network of wires (Gupta & Franzon, 2020). Today, system architects have many more options. Breakthroughs in die stacking technology have allowed design teams to stack multiple die on top of one another. In this configuration, die are stacked using vertical interconnects called through silicon vias (TSVs) to form 2.5/3D packaging architectures (Lapedus, 2019). In 2.5D packaging, die are connected to a shared substrate, called an interposer, which in turn is connected to a PCB, while in 3D packaging, die are stacked directly on top of one another (Lapedus, 2019). The 2.5D configuration is not as tightly integrated as a 3D die stack, but it is less costly, and it is still more tightly integrated than a separately packaged wire bond configuration (Gupta & Franzon, 2020). New copper hybrid bonding technology uses copper-to-copper interconnects to connect stacked die with greater interconnect density and lower resistance, enabling quicker data transfer and faster processing speeds than traditional TSVs (Lapedus, 2020). These advanced packaging technologies have also enabled the design of integrated modules that include die from different silicon processes. For example, a 22nm processor chip and a 180nm high-power audio amplifier can be included together in a single plastic module.

  • Stacking technology was first applied to memory systems like the Hybrid Memory Cube (HMC) and High Bandwidth Memory (HBM), where a memory die might be stacked with its associated processing chip (memory-on-logic) or with additional memory die (memory-on-memory), but applications have broadened since then (Lapedus, 2019). Stacking chips vertically enables increased I/O density, which reduces the amount of processing time it takes to move a signal, or information, from one part of the electronic system to another while saving valuable silicon real estate (Gupta & Franzon, 2020). This can be especially useful for more compact electronic devices, like smartphones, where space is limited and valuable.


    Figure 5-2

    Packaging Types and Architectures

Figure 5-2 illuminates each of the various packaging types and architectures we've discussed in considerable detail. Comparing each sub-diagram to its horizontal counterpart(s) reveals key features that differentiate each packaging configuration. In the first row, wire bonded die use wires on the border of the die surface to connect to pads on the packaging substrate. These connections are used to tie the die to the rest of the system, whether directly through vias (connecting wires) on the PCB or through a ball grid array (BGA) soldered to the PCB. Because wire bonds require excess wiring, they are slower than their flip-chip counterparts, which use solder bumps (and, in stacked configurations, through silicon vias, or TSVs) to connect the die to the underlying packaging substrate more efficiently. In the second row, Multi-chip Modules (MCMs) and System in Packages (SiPs) look relatively similar. Their key difference is that MCMs integrate multiple die side by side on a 2D surface within the same overall package. SiPs, on the other hand, use both 2D and 2.5/3D stacked die configurations to integrate multiple die within the same overall package. In the bottom row, we scrutinize the nuances of 2D Monolithic Integration, 3D Integration, and 2.5D Integration. 2D Monolithic Integration attaches die to a substrate side by side within the same package. 3D Integration attaches die on top of one another within the same package. 2.5D Integration connects two or more die on an interposer that sits between the die and the base substrate, combining horizontal (2D) and vertical (3D) integration elements within the same package.

While comprising a relatively small portion of our total system cost, making up just $30 billion of the over $440 billion in total 2020 sales reported by the SIA, IC Packaging has an outsized impact on system performance and has seen a resurgence of interest from engineering leaders desperate to squeeze more performance out of existing process nodes (Semiconductor Packaging Market by Type, 2021).
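A rough back-of-the-envelope calculation shows why flip-chip and area-array packaging matter so much: wire bond pad count grows with the die perimeter, while flip-chip bump count grows with the die area. The sketch below uses assumed round numbers (a 10 mm die edge and a 100 micron pad pitch), not figures for any real part:

```python
# Back-of-envelope comparison of interconnect counts for a hypothetical die.
# Dimensions are in microns to keep the arithmetic exact.

def perimeter_pads(die_side_um, pitch_um):
    """Wire bond: pads sit in a single ring around the die edge."""
    return 4 * (die_side_um // pitch_um)

def area_array_pads(die_side_um, pitch_um):
    """Flip-chip: solder bumps fill the whole die face in a grid."""
    per_side = die_side_um // pitch_um
    return per_side * per_side

side_um = 10_000   # 10 mm die edge (assumed)
pitch_um = 100     # 100 micron pad/bump pitch (assumed)

print(perimeter_pads(side_um, pitch_um))   # 400 edge pads
print(area_array_pads(side_um, pitch_um))  # 10000 bumps
```

Even with identical pitch, the area array supports an interconnect count that scales with the square of the die size, which is why flip-chip dominates for I/O-hungry chips.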

Signal Integrity

As components are packed tighter and tighter in integrated systems, signal integrity has become increasingly important. The field is derived from the study of electromagnetics, an area of physics that explores how electric currents and the electric and magnetic fields they generate interact with one another.

Digital signals are electric pulses that travel along transmission lines, carrying information from place to place throughout an electrical system (MPS, n.d.). In electronic systems, these signals are typically represented by a voltage (MPS, n.d.). Let's think back to what we learned in our discussion of electricity and how voltage and current relate to one another. Remember that voltage acts like water pressure in a pipe, driving current along the wiring and circuitry that comprise an electronic system – the current in question here is the signal itself. Signals with voltages high enough to open transistor gates are ON (1) signals, while signals with lower voltages are OFF (0) signals. If we zoom in even further, we can see that a current is made up of charges moving in the same direction. When you hear the term bit used in computer engineering, the bit is represented by these charges. Strung together, patterns of bits form signals that different components of a computer can interpret as information.

At each junction in the transmission process, a transmitter sends a signal to a receiver along a transmission line that connects them (Altera, 2007). To be clear, the transmitter and receiver in this case are physically connected by a wired transmission line, unlike a transmitter and receiver in a wireless system. In complex electronic circuits, where wires can be literally a few nanometers from each other, neighboring signals can interfere with one another and the surrounding environment as they pass along transmission lines through the various components of the system. These transmission line effects on signal integrity can result in data loss, accuracy issues, and system failure.
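The voltage-to-bit convention described above can be sketched in a few lines of Python. The 0.6 V threshold and the sample voltages are illustrative assumptions, not values from any real logic family:

```python
# Toy model of how a receiver interprets voltage levels as bits:
# anything at or above the threshold reads as 1 (ON), anything below as 0 (OFF).
# The 0.6 V threshold and sample values are assumed for illustration.

def voltages_to_bits(samples, threshold=0.6):
    return [1 if v >= threshold else 0 for v in samples]

# A clean transmitted pattern: highs near 1.0 V, lows near 0.0 V
samples = [1.0, 0.95, 0.05, 0.02, 1.0, 0.1, 0.9, 0.0]
print(voltages_to_bits(samples))  # [1, 1, 0, 0, 1, 0, 1, 0]
```

As long as the highs and lows stay far from the threshold, the receiver recovers the pattern perfectly – the signal integrity problems discussed next arise when interference pushes voltages toward that boundary.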

To mitigate these effects, signal integrity engineers conduct electromagnetic simulations and analysis to identify and resolve potential issues before they arise. Common forms of interference include noise, crosstalk, distortion, and loss.
  1. Noise occurs when energy that is not part of the desired signal interferes with the signal being transmitted (Breed, 2010).

  2. Crosstalk occurs when energy from one signal inadvertently transfers to a neighboring transmission line (Breed, 2010).

  3. Distortion occurs when the signal pattern is damaged or warped. In extreme cases, distortion can be so severe that incorrect data is delivered to the receiver (Breed, 2010).

  4. Loss can occur for several reasons, including resistive loss from transmission line conductivity issues, dielectric loss from a loss in signal velocity, and radiation loss in unsealed systems (Breed, 2010).

At high bit rates and great distances, signals can degrade to a point where an electronic system fails completely.
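The effect of interference on a digital link can be made concrete with a small simulation: add random noise to ideal signal levels and count how many bits a threshold-based receiver misreads. The signal levels, noise amplitudes, and 0.5 V threshold below are assumptions chosen to make the effect visible, not a model of any real channel:

```python
import random

# Illustrative sketch: inject uniform random noise onto ideal 0 V / 1 V
# levels and count receiver bit errors at a 0.5 V decision threshold.

def transmit(bits, noise_amplitude, seed=42):
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    received = []
    for b in bits:
        v = 1.0 if b else 0.0                                # ideal level
        v += rng.uniform(-noise_amplitude, noise_amplitude)  # injected noise
        received.append(1 if v >= 0.5 else 0)                # receiver decision
    return received

bits = [1, 0] * 500
low_noise = transmit(bits, 0.2)   # noise stays inside the 0.5 V margin
high_noise = transmit(bits, 0.8)  # noise can cross the threshold

print(sum(a != b for a, b in zip(bits, low_noise)))   # 0 errors
print(sum(a != b for a, b in zip(bits, high_noise)))  # many errors
```

With 0.2 V of noise, every sample stays on the correct side of the threshold; at 0.8 V, a substantial fraction of bits flip – the simulated analog of the degradation signal integrity engineers work to prevent.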

Bus Interfaces

One of the key performance bottlenecks in electronic devices is the transmission of data between each of the system's constituent parts. Doubling a processor's power doesn't mean anything if it takes too long to get the information it needs to do its job from the rest of the system. To unlock the power of advanced circuits, bus interfaces – the physical wires through which data travels between the different components of a system or PCB – have become increasingly important. That 64-bit processor in your computer has to move 64 bits of data together on a single bus interface. Bus interfaces serve three primary functions (Thornton, 2016):
  1. To transmit data (data bus)

  2. To find specific data (address bus)

  3. To control the operations of different parts of the system (control bus)
Together, these three buses are called a system bus and collectively control the flow of information to and from a CPU or microprocessor (Thornton, 2016). The block diagram presented in Figure 5-3 illustrates the relationship between the system bus, its constituent data, address, and control buses, and key system components like the CPU, memory, and the I/O interconnects that tie them all together.


Figure 5-3

System Bus Block Diagram (Nowicki, 2019)
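A minimal sketch of the three buses cooperating during read and write transactions, with a toy dictionary standing in for memory. The operation names and addresses here are illustrative, not a real bus protocol:

```python
# Toy system bus: the CPU drives the address and control buses, and the
# data bus carries values to or from memory. A dict stands in for RAM.

class SystemBus:
    def __init__(self):
        self.memory = {}  # the RAM on the far side of the bus

    def transaction(self, control, address, data=None):
        if control == "WRITE":                  # control bus: store request
            self.memory[address] = data         # data bus carries value in
            return None
        if control == "READ":                   # control bus: fetch request
            return self.memory.get(address, 0)  # data bus carries value back

bus = SystemBus()
bus.transaction("WRITE", 0x1000, 42)    # CPU writes 42 to address 0x1000
print(bus.transaction("READ", 0x1000))  # CPU reads it back: 42
```

The point of the sketch is the division of labor: the address bus says *where*, the control bus says *what to do*, and the data bus carries the payload.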

Data buses for PCs can accept information flow to and from the central processor ranging from 8 to 64 bits at a time. Like a hose unable to pump more water than its diameter allows, a bus connected to a 32-bit processor cannot deliver or receive more than 32 bits at a time (per clock cycle). In this way, the bit number functions as a measurement of bus "diameter," or width (Thornton, 2016).
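The "diameter" metaphor reduces to simple arithmetic: peak throughput is bus width times clock rate. The 100 MHz clock below is an assumed figure for illustration:

```python
# Rough bandwidth arithmetic: a bus moves (width in bits) per clock cycle,
# so peak throughput = width x clock rate. The clock rate is assumed.

def peak_bandwidth_bytes_per_sec(bus_width_bits, clock_hz):
    return bus_width_bits // 8 * clock_hz

# A 32-bit bus vs a 64-bit bus at the same 100 MHz clock:
print(peak_bandwidth_bytes_per_sec(32, 100_000_000))  # 400000000 B/s
print(peak_bandwidth_bytes_per_sec(64, 100_000_000))  # 800000000 B/s
```

Doubling the width doubles the peak data rate at the same clock – which is exactly why a wide bus matters to a wide processor.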

Much of the development in bus interfaces has occurred in the personal computing space. In the early days, bundles of wires called a bus-bar would separately connect each component to one another, but this approach was slow and inefficient (Wilson & Johnson, 2005). To increase speed and improve performance, PC companies migrated to an integrated structure that reduced the number of interface junctures from a cluster of haphazardly connected components and modules down to two chips – the Northbridge and the Southbridge chipset architecture.

In this configuration, the Northbridge interfaces directly with the CPU via the front-side bus (FSB) and connects it with components that have the highest performance requirements, like memory and graphics modules (Wilson & Johnson, 2005). The Northbridge then interfaces with the Southbridge, which in turn connects to all the lower priority components and interfaces like Ethernet, USB, and other low speed buses (Wilson & Johnson, 2005). These non-Northbridge bus interfaces are collectively known as Peripheral Buses (PCMAG, n.d.). The Northbridge and Southbridge are connected to one another at a juncture known as the I/O Controller Hub (ICH) and together they are known as a chipset (Hameed & Airaad, 2019). We can see this chipset architecture clearly depicted in Figure 5-4.


Figure 5-4

Simple Chipset Diagram (Oyster, 2014)

Bus interfaces can be classified by the way they transmit bits between two digital devices (Newhaven Display International, n.d.). Parallel interfaces run multiple wires between two components and transmit bits across all of them at the same time (Newhaven Display International, n.d.). This works well over short distances, but signal integrity issues can arise as the distance between two components increases. Common parallel interface buses include DDR (Double Data Rate), which is used for memory, and PCI (Peripheral Component Interconnect) buses. Serial interfaces, on the other hand, transmit and receive data between two components across a single wire one bit at a time, though each wire can be clocked at much higher speeds (Newhaven Display International, n.d.). This reduces the chances of signal integrity issues since a bit cannot be received until the one before it has been processed. Serial data transmission is less likely to experience crosstalk issues since individual data lines are not bundled next to each other. However, serial data is susceptible to another type of interference called Intersymbol Interference (ISI), where a data bit can be affected by the bit transmitted just before it (Kay, 2003). Common serial interface buses include
  • PCIe (PCI Express Bus)

  • USB (Universal Serial Bus)

  • SATA (Serial Advanced Technology Attachment Bus)

  • Ethernet Bus

Beyond signal integrity advantages, serial interfaces are less costly due to lower wire count, but they move fewer bits per clock cycle (Kay, 2003). Parallel communication, by contrast, transfers more data per clock cycle, but is costlier and has limited efficacy over long distances and when operating at high frequencies (Kay, 2003). We can see examples of parallel and serial interfaces depicted in Figure 5-5.


Figure 5-5

Parallel vs. Serial Interfaces (Ashri, 2014)
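The difference between the two transmission styles can be sketched as follows: a parallel bus moves all eight bits of a byte in one step over eight wires, while a serial link moves the same bits one step at a time over a single wire (MSB-first ordering is an assumption here, not a property of any particular bus):

```python
# Sketch of parallel vs. serial transmission of one byte.

def to_bits(byte):
    return [(byte >> i) & 1 for i in range(7, -1, -1)]  # MSB first (assumed)

def parallel_send(byte):
    return [to_bits(byte)]                   # one step, eight wires

def serial_send(byte):
    return [[bit] for bit in to_bits(byte)]  # eight steps, one wire

data = 0b10110010
print(len(parallel_send(data)))  # 1 step
print(len(serial_send(data)))    # 8 steps

# The receiver reassembles the serial stream back into the byte:
received = 0
for step in serial_send(data):
    received = (received << 1) | step[0]
print(received == data)  # True
```

The serial link pays eight clock steps for the byte but needs only one data wire, which is why serial buses like PCIe and USB win when wire count, distance, or clock speed is the constraint.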

Unless you specialize in Signal Integrity or interface circuit design, it probably isn’t so important that you know all the bus interface flavors, but rather that you understand why they are so important to the overall system. The key takeaway to remember is that bus interfaces form the connective tissue responsible for delivering data, distributing instructions, and tying together the major components of the overall system.

Power Flow in Electronic Systems

To most of us, the power that fuels our electronics is like magic. We plug our phones and laptops in and Boom! they work. Harnessing the power of “power” is the subject of a vast field of engineering and numerous subdisciplines. Taking a page from our review of transistors, it’s helpful to think of system power flow like the flow of water through a water utility system. It might be useful to take a look at Figure 5-6 and follow along as we track each step.


Figure 5-6

Power Flow in Electronic Systems – Water Utility Analogy

Our world is becoming more and more mobile every day, so we'll focus on battery-powered systems for this discussion. In our analogy, the battery acts like a reservoir of charge, with its voltage playing the role of the water pressure in a municipal reservoir. A power converter converts AC power to DC power, much like a major water distribution center might decrease the water pressure as water gets closer to its destination. As it turns out, AC power is much better for transmitting power over long distances (think utility-grade transmission lines in your city's power grid), while DC power is much better for power transmission over short distances (think of the power home appliances might use). Batteries actually store and deliver power in DC form, but for the sake of illustration, we'll pretend this battery stores AC power that needs to be converted before use.
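The AC-to-DC conversion step can be illustrated numerically: a full-wave rectifier flips the negative half of an AC sine wave, and the average of the rectified wave (2·V_peak/π for a sine of peak V_peak) is the raw DC level that a converter's filter then smooths. The 10 V peak input below is an assumed value:

```python
import math

# Numeric sketch of AC-to-DC conversion: full-wave rectify one cycle of a
# sine wave and average it. The rectified average of a sine is 2*V_peak/pi.

def rectified_average(v_peak, samples=100_000):
    total = 0.0
    for n in range(samples):
        t = n / samples                                       # one full AC cycle
        total += abs(v_peak * math.sin(2 * math.pi * t))      # full-wave rectifier
    return total / samples

avg = rectified_average(10.0)  # 10 V peak AC input (assumed)
print(round(avg, 2))           # close to 2 * 10 / pi, about 6.37
```

Real converters add filtering and regulation on top of this raw level, but the sketch shows the core idea: rectification turns an alternating waveform into a one-directional one with a usable average.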

Next, the charges or electric voltage from the battery are transported through a network of metal planes, called a Power Distribution Network (PDN), to the different processing centers of the system, like a utility pipe network delivering water to the homes and buildings of end users like you and me.

To transport water to individual homes and buildings, utility providers use varying levels of water pressure to push water through the system. This is a delicate balancing act – if the water pressure gets too high, the pipes in the system may burst, but if it drops too low, then water won't get through the pipes to its end destination. In electronics, we have a similar problem with voltage. If the voltage gets too high, the circuit may become damaged, or may overheat and fail, but if the voltage drops too low, then the circuit will not have enough energy to function. To prevent this from happening, power engineers must build a network of voltage regulators and power converters to ensure that voltage is never too high or too low at any point in the system. Like Signal Integrity analysis of bit and signal flow through a system, the field of Power Integrity studies the flow of voltage throughout a system to ensure that voltage is getting to the right place, in the right amount, at the right time (Mittal, 2020).
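The voltage balancing act maps directly onto Ohm's law: each segment of the power distribution network has some resistance, each segment drops V = I × R, and the voltage arriving at the load must stay inside a tolerance band. The resistances, load current, and 5% tolerance below are illustrative assumptions:

```python
# Toy IR-drop check along a power distribution network: each segment drops
# V = I * R, and the voltage reaching the load must stay within tolerance.
# All resistances, the current draw, and the tolerance band are assumed.

def voltage_at_load(v_supply, current_a, segment_resistances_ohm):
    drop = sum(current_a * r for r in segment_resistances_ohm)  # Ohm's law per segment
    return v_supply - drop

v_load = voltage_at_load(1.0, 2.0, [0.005, 0.010, 0.005])  # 1.0 V rail, 2 A draw
print(round(v_load, 3))  # 0.96
print(v_load >= 0.95)    # True: inside a -5% tolerance band
```

Power integrity analysis is, in large part, this calculation repeated across every branch of the PDN under worst-case current draw.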

Voltage regulators are circuits designed to maintain a fixed voltage output, regardless of the input voltage they receive – they process the energy from a power source so that it can be handled properly by the rest of the circuit (MPS, n.d.). This is important in battery-powered systems because the battery voltage is not constant. If you're walking down the street listening to music and get a phone call and your phone screen lights up, that takes a lot of instantaneous power and can cause the battery voltage to drop significantly as the current drawn from the battery increases. Voltage regulators ensure that all components in the system receive a stable voltage, even if the battery voltage is moving around. There are a wide variety of voltage regulators and related power circuits, including DC/DC converters (such as Buck, Boost, and Flyback converters) and PMUs (power management units).
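A behavioral sketch of what a regulator does, assuming a hypothetical 3.3 V target and 0.3 V dropout (real regulators differ by topology and spec): the output holds steady while the battery voltage sags, until the input falls too close to the target to maintain regulation.

```python
# Behavioral sketch of a linear voltage regulator. The 3.3 V target and
# 0.3 V dropout are assumed values for illustration only.

def regulator_output(v_in, v_target=3.3, v_dropout=0.3):
    if v_in >= v_target + v_dropout:
        return v_target                # in regulation: fixed output
    return max(v_in - v_dropout, 0.0)  # input too low: output sags with it

for v_in in [4.2, 3.8, 3.6, 3.4, 3.0]:  # battery discharging over time
    print(v_in, "->", round(regulator_output(v_in), 2))
```

Everything the downstream circuit sees is the flat regulated level – the battery's sag is absorbed by the regulator until the input drops out of range.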

Summary

In this chapter, we first discussed I/O interconnects and their importance to connecting the various components of an overall system. With as many as 30 miles of wiring in a single chip, interconnects are a key factor in limiting or boosting device performance. From simple wire bond packaging to high I/O density flip-chip and wafer-level packaging, we dove deep into the various packaging architectures and the processes that make them possible. Next, we discussed the differences between multi-chip-modules and system-in-packages, introducing key concepts like heterogeneous and monolithic integration. We learned about advanced die stacking technology used in high bandwidth memory (HBM), HMC, and other 2.5/3D packaging architectures. From there, we explored the role of signal integrity in maintaining the pace and quality of information flow throughout an electronic system. Building on our understanding of interconnects and signal integrity, we then tackled the three buses – control, address, and data – that facilitate information flow between a CPU, memory, and external sources of data. Finally, we explored the flow of power through an electronic system like water through a utility system – tracking voltage through power converters and a power distribution network to its end destination.

A company may source and design the perfect assortment of ICs and components to create a market-leading product, but having the right parts is not enough on its own. Building high caliber devices requires tight system integration with plenty of interconnects, the right IC packaging architecture, and strong signal and power integrity performance.

Your Personal SAT (Semiconductor Awareness Test)

To be sure that your knowledge builds on itself throughout the book, here are five questions relating to this chapter.
  1. What are interconnects and what makes them so important?

  2. What is the difference between wire bonding and flip-chip bonding? How do these impact the number of interconnects and transmission speed of a system?

  3. Why do signal integrity engineers exist? What kinds of interfaces and transmission methods might they encounter?

  4. In a chipset, why are the CPU, the northbridge, and southbridge arranged the way that they are? What do each of these bridges handle and how are they different?

  5. Describe the four main stages of power flow in an electronic system. Which components or modules handle each stage? Can you relate each of them to similar components in a water utility system?