© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
C. Richard, Understanding Semiconductors, https://doi.org/10.1007/978-1-4842-8847-4_7

7. RF and Wireless Technologies

Corey Richard, San Francisco, CA, USA

We began Chapter 6 by discussing the differences between Analog and Digital signals and the various subcomponents designed to process them. In Chapter 7, we pay particular attention to the analog world as we break down the exciting work of RF and Wireless electronics. From car radios to cell phones, the advent of wireless technologies has enabled instantaneous access to information and entertainment across the globe and on the go, transforming the way we live our daily lives. Before we track this evolution and the hardware technologies that make it possible, we’ll need to survey the electromagnetic spectrum.

RF and Wireless

To better understand wireless systems, it is helpful to consider two forms of energy. The first form is electrical energy, which we discussed in Chapter 1. Electrical energy runs along a conductor and is the result of a voltage difference that causes current to run through the internal wiring and circuits in an electronic device. The second form is the wireless “airborne” energy that travels from place to place in waves like a ripple through water. As scholars with boundless creativity, we will refer to this as wave energy.

By manipulating these two types of energy – electrical energy and wave energy – humanity manages to create, store, and communicate nearly three quintillion bytes of information every day (Bartley, 2021). Electrical engineers perform this magic by controlling the intensity of electrical energy over time, creating electrical “patterns” or signals. Computers use these patterns to communicate with one another – sending, receiving, storing, and processing the information that we use in our everyday lives.

RF Signals and the Electromagnetic Spectrum

So what exactly are radio frequency signals? RF Signals are analog “wave” signals used to transmit information from one place to another without a physical cable or wired connection (NASA, 2018). The semiconductor revolution that started in the 1960s was limited by those physical connections, but wireless technology enables us to take our devices with us wherever we go. These analog wireless signals can exist across a broad range of intensities and frequencies, called the electromagnetic spectrum (NASA, 2018).

What sets RF signals apart from other analog “wave” signals like Sound and Video? Frequency. Frequency is what electronics use to distinguish between one signal and another (NASA, 2018). If you’ve ever wondered how you’re able to listen to the radio and talk on the phone at the same time without them interfering with one another, here is your answer: radio stations, television channels, and telephone and Internet service providers all use different ranges of frequencies, called frequency bands, to deliver different types of information (Commscope, 2018).

The Federal Communications Commission (FCC) strictly regulates who can use which frequency ranges (bands) for which types of communication in order to avoid people interfering with each other’s signals (FCC, 2018). Even so, sometimes our devices will interfere with each other – try using your Bluetooth earbuds too close to your microwave oven and it might ruin your podcast!

Each band is reserved for a particular communication protocol to ensure that all users are abiding by the same standards (AM, FM, CDMA, 802.11, etc). Service providers only have a finite amount of frequency range they can work with (bandwidth) to deliver an ever-expanding suite of services like Internet and television to their customers (FCC, 2018). Because of this, providers are incentivized to fit as much information for as many people as possible into a fixed amount of frequency range. They aim to squeeze as much revenue out of their allotted bandwidth as possible, investing billions in wireless technologies like CDMA (more on this in a bit). In simple terms, when phone companies and TV providers brag about “bandwidth” and network speed in their commercials, what they’re really saying is “we use our frequency range to deliver the quickest service to the most customers.”

As you can see from Figure 7-1, various parts of the electromagnetic spectrum are used for different applications. Radio Frequency (RF) describes a part of “spectrum real estate” that we use for radio, TV broadcasting, and other applications. Within the RF range is a subsection called microwaves, whose frequencies are used for Wi-Fi, radar, and cell phone coverage. All electromagnetic waves are radiation. Most frequencies are harmless, but at very high frequencies, radiation can be very harmful, as can be seen in the ionizing part of the spectrum on the right side of the diagram.

An illustration of the electromagnetic spectrum depicts the frequency and applications of radio waves, microwaves, infrared, ultraviolet, X-rays, and gamma waves. It indicates an atmosphere opaque to wavelengths, an optical window, and a radio window.

Figure 7-1

Electromagnetic Spectrum by Frequency and Application (NASA, 2010)

RFIC – Transmitters and Receivers

All RF systems have two main components – a Transmitter and a Receiver. The basic functions of a receiver and a transmitter are straightforward. Information is sent, or transmitted, by the transmitter, and accepted, or received (hold on to your hats for this one folks) by the receiver. It is possible for a single component to function as both a receiver and a transmitter - this is called a transceiver. More on that later.

Most transmitters and receivers require at least six base components to operate: a power source, an oscillator, a modulator, one or more amplifiers, one or more antennas, and filters (Weisman, 2003). Not all components within a transmitter or receiver require power to function. Those that do are called active components and those that don’t are called passive components. We review each of the six basic transmitter and receiver subcomponents in the following section.

1. Power Source – RF waves are a form of energy; this energy must come from somewhere like the battery in your phone.

2. Oscillator – This is the source of the “RF” waves. An oscillator produces RF analog “wave” signals that will act as a “carrier” for the information being transmitted (Lowe, n.d.). This oscillator sets the frequency of the transmission.

3. Modulator – This is where some real magic happens. For an RF “carrier” signal produced by an oscillator to be useful, it needs to be “imprinted” with the digital information that it will be carrying to the receiving system. The modulator does this by making small adjustments to the frequency or amplitude of the carrier signal that are then received and converted back into digital signals by a demodulator in a receiver at the other end. A rough depiction of what a modulator does is included in Figure 7-2. This is an amplitude modulation system, because the height (or amplitude) of the modulated output data depends on the digital input data. A demodulator would separate the RF carrier signal (wave) and the information input (digital “computer” language signal), so the receiving computer’s digital machinery can properly process the information. One key device used to convert between RF Analog and Digital signals is called a modem. A modem is a device with a modulator and a demodulator – yes, early communication engineers weren’t very clever when it came to naming things. A modem can execute modulation and demodulation algorithms at the same time, allowing it to convert between analog-to-digital and digital-to-analog signals quickly (Borth, 2018).

A modulator block diagram of the input of digital input data and R F carrier wave and the output of modulated output data.

Figure 7-2

Modulator Block Diagram
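To make Figure 7-2 a little more concrete, here is a minimal Python sketch of amplitude modulation: the height of the carrier produced by the oscillator is adjusted to follow the digital input data. The specific numbers (a 10 kHz carrier, a 1 kbps bit stream, and the two amplitude levels) are illustrative assumptions, not values from the text.

```python
import numpy as np

# Hypothetical parameters, chosen only for illustration
carrier_freq_hz = 10_000      # frequency of the carrier from the oscillator
bit_rate_bps = 1_000          # rate of the digital input data
sample_rate_hz = 100_000      # samples per second used in this simulation

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])         # digital input data
samples_per_bit = sample_rate_hz // bit_rate_bps

# Stretch each bit so it lines up with the carrier samples, then map
# 0/1 to two different amplitude levels (the simplest form of
# amplitude modulation).
baseband = np.repeat(bits, samples_per_bit).astype(float)
amplitude = 0.2 + 0.8 * baseband                  # bit 0 -> 0.2, bit 1 -> 1.0

t = np.arange(baseband.size) / sample_rate_hz
carrier = np.sin(2 * np.pi * carrier_freq_hz * t) # oscillator output

modulated = amplitude * carrier                   # modulated output data
print(modulated[:5])
```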

4. Amplifier – This increases, or amplifies, a signal. The more powerful the signal, the farther it will travel and the greater the distance over which it will maintain its accuracy. As RF energy spreads or travels farther, it suffers loss as it passes through the various media it encounters (where it is partially absorbed), is reflected by objects in its path, or experiences interference from other electromagnetic energy and signals. When a signal gets bigger after passing through a component, the signal is said to have experienced gain (the opposite of loss). By the time a signal has reached its intended destination, it is likely to have lost a considerable amount of its original strength. With such a faint signal, amplifiers are needed to boost the signal back to a usable strength that a computer can process. On the other end, as a signal is on its way out of the transmitter, amplifiers are used to boost the signal so that it can travel greater distances while maintaining its integrity and delivering an intact message to the receiving device. Amplifiers are used throughout RF systems to boost signal strength for all kinds of technical reasons, but thinking of them as boosters for faint incoming and fresh outgoing signals is a much less abstract way of visualizing what they do (Weisman, 2003).

5. Antenna – Antennas are the components that receive and transmit signals to and from other systems. Many antennas can do both, alternating between transmitting and receiving (Weisman, 2003).

6. Filters – There are catalogs of RF sub-components well beyond what you probably need to know, but before we move on there is one last device that deserves our attention – filters. Conceptually, filters are relatively simple. Their purpose is to let signals with intended frequencies get into the system and keep signals with unintended frequencies out, like a security guard at a private community who checks your license plate as you enter or leave. In the RF world, these unintended signals can come from interference or noise, which can arise from random environmental disturbances (this is called EMI, or electromagnetic interference) and from other “artificial” RF noise produced by broadcasts of RF signals operating at similar frequencies. A filter is why, when you tune your radio to 94.7 FM, you don’t also hear the song playing on 94.9 FM; the signals at all those other frequencies are “filtered out.” There are four main types of filters used in RF systems (Shireen, 2019), listed below, with a short code sketch after the list:
  1. Low Pass: Filter only lets signals in below a certain frequency.

  2. High Pass: Filter only lets signals in above a certain frequency.

  3. Band Pass: Filter only lets signals in between two frequencies.

  4. Band Reject: Filter only lets signals in outside of a range of frequencies.

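As promised, here is a minimal band-pass filter sketch using Python and SciPy. It keeps the FM broadcast band (roughly 88 to 108 MHz) and rejects everything else; the sample rate, filter order, and test tones are illustrative assumptions rather than anything prescribed in the text.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Illustrative assumption: pass only the FM broadcast band (88-108 MHz),
# like the band-pass filter described above.
sample_rate_hz = 250e6                      # simulation sample rate
low_hz, high_hz = 88e6, 108e6               # passband edges

# Design a 4th-order Butterworth band-pass filter.
sos = butter(4, [low_hz, high_hz], btype="bandpass",
             fs=sample_rate_hz, output="sos")

# A test signal with one in-band tone (94.7 MHz), one out-of-band
# tone (50 MHz), and a little noise.
t = np.arange(0, 2e-5, 1 / sample_rate_hz)
signal = (np.sin(2 * np.pi * 94.7e6 * t)
          + np.sin(2 * np.pi * 50e6 * t)
          + 0.1 * np.random.randn(t.size))

# After filtering, the 50 MHz tone is heavily attenuated while the
# 94.7 MHz tone passes through.
filtered = sosfilt(sos, signal)
```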
To illustrate how all these pieces fit together, Figure 7-3 provides a simple block diagram of a transmitter and a receiver. For this example, we’ve assumed that the data being transmitted is an audio or video signal, but it could be any packet of data, such as a simple instruction to load your favorite website on your phone. Not every RF system looks exactly like this, but for the sake of conceptual understanding, assume the receiver is processing the RF signal produced by the oscillator of the transceiver that sent the message it has just received. Filters can exist throughout the system, but it is useful to picture them (and they are often placed) between the amplifier and the antenna.

Two block diagrams. A block diagram of a transmitter of the flow from the power supply to the oscillator, audio or video information, modulator, filter, amplifier, and antenna. A receiver block diagram depicts the path from an antenna to a filter, amplifier, power supply, demodulator, and speaker.

Figure 7-3

Transmitter and Receiver Block Diagrams

If we trace the data path for the transmitter in Figure 7-3, we can see that it originates at the oscillator, which emits a carrier signal. This carrier signal is combined with digital information (audio or video data in this example) to create a wireless signal that is passed through filters and amplifiers that shape and boost it before it is transmitted toward the intended receiving device. If we trace the data path for the receiver, we can see that it begins at the antenna, which receives an incoming signal from another device transmitting in its direction. The signal is then passed through filters and amplifiers to a demodulator, which separates the wireless carrier signal from the digital audio or video information before passing the received information along for consumption or further processing within the system.
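To tie the transmitter and receiver chains together, here is a minimal, hypothetical end-to-end sketch in Python: a few bits are modulated onto a carrier, amplified, attenuated and corrupted by the “air” path, then re-amplified and demodulated at the receiver. All of the numbers (sample rate, carrier frequency, gains, and noise level) are illustrative assumptions, and the crude envelope detector stands in for a real demodulator.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, fc, bit_rate = 100_000, 10_000, 1_000     # illustrative sample, carrier, bit rates
bits = np.array([1, 0, 1, 1, 0, 1, 0, 0])
spb = fs // bit_rate                          # samples per bit

# --- Transmitter: oscillator + modulator + amplifier ---
t = np.arange(bits.size * spb) / fs
carrier = np.sin(2 * np.pi * fc * t)                         # oscillator
tx = np.repeat(bits, spb) * carrier                          # on-off keyed modulation
tx *= 10.0                                                   # power amplifier (gain)

# --- Channel: the signal loses strength and picks up noise ---
rx = 0.01 * tx + 0.02 * rng.standard_normal(tx.size)         # heavy path loss plus noise

# --- Receiver: amplifier + demodulator ---
rx *= 100.0                                                  # receive amplifier (gain)
envelope = np.abs(rx).reshape(bits.size, spb).mean(axis=1)   # crude envelope detector
recovered = (envelope > envelope.mean()).astype(int)         # threshold decision

print(bits, recovered)   # with these illustrative numbers, the bits should come back intact
```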

The OSI Reference Model

System designers have a challenging job to do. They must ensure that their system (1) fits together as a properly functioning unit and (2) seamlessly communicates with other devices. For design organizations working on advanced devices, challenge number one can involve coordinating the efforts of thousands of engineers working on dozens of sub-modules. Once this first challenge is overcome and a high performing device is fully functional, integrating with devices made by different companies with separate design methodologies, technologies, and processes would be incredibly complicated if not downright impossible without some sort of shared set of rules or guidelines.

To illustrate further, imagine you are a hardware engineer building a chip intended to help run applications on a mobile device. To be useful, the system must be able to efficiently run the code programmed by your customer’s software engineering teams while at the same time integrating with all kinds of other hardware devices like laptops, Bluetooth devices, and other cell phones. How can you design a system that works with everything else? In a vacuum this would be impossible – every hardware company would build different systems running separate programming languages and have difficulty communicating with one another. With the OSI model, however, you can just “follow the recipe” and build a system that both supports the needs of your software teams and seamlessly connects with other devices.

The Open Systems Interconnection (OSI) model describes the system layers that connect the underlying hardware to the user-facing interface that consumers interact with. The layers themselves are collections of standards and protocols that allow design engineers and system architects to communicate with one another (within layer), with engineers working on other layers (between layers), and other systems within a given network. As you can imagine, with the number and variety of electronic systems that exist today, designing systems and products that can work together and integrate with one another is a complicated and challenging thing to do. A universally accepted model like OSI enables the standardization necessary to produce high performing networks and integrated systems.

The model breaks down networking functions into an OSI system stack comprising seven layers, each of which can be designed independently of the others. Layers 1–3 are responsible for the physical transportation of information around the network, while Layers 4–7 deal with user applications. Layer 7, or the application layer, contains the user interface that a consumer interacts with, like a web page or an app home screen on your phone. As you descend through each subsequent layer, you get closer and closer to the circuitry powering the application. The lowest layer in the model is the physical layer (PHY Layer), where the data itself (a string of 1’s and 0’s) is transmitted to the underlying hardware. In practice, the PHY Layer comprises the electronic circuits that connect a device to a larger network. This circuitry is usually composed of mixed-signal and analog ICs, RF components like transceivers and receivers, and DSP modules that can interpret and modify incoming and outgoing signals. You can see a more detailed description of each layer in the appendix.
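As a quick reference, here is a minimal sketch of the seven OSI layers expressed as a simple Python data structure. The one-line role summaries are standard textbook descriptions, condensed here for illustration; see the appendix for the fuller treatment.

```python
# The seven OSI layers, from closest to the hardware (1) to user-facing (7).
OSI_LAYERS = {
    1: ("Physical",     "Bits on the wire or over the air (the PHY circuitry)"),
    2: ("Data Link",    "Framing and addressing between directly connected devices"),
    3: ("Network",      "Routing packets across the wider network"),
    4: ("Transport",    "End-to-end delivery, ordering, and error recovery"),
    5: ("Session",      "Opening, managing, and closing conversations"),
    6: ("Presentation", "Data formats, encryption, and compression"),
    7: ("Application",  "The user-facing interface, such as a web page or app"),
}

for number, (name, role) in sorted(OSI_LAYERS.items()):
    print(f"Layer {number}: {name} - {role}")
```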

To be clear, the OSI model was primarily designed for inter-system communication between different devices within a larger network. There are many other collections of frameworks and standards used to communicate between design teams working within a given system rather than between systems. Different operating systems, like iOS or Android for example, may have separate protocols and standards that enable embedded software engineers working at the Hardware Abstraction Layer or Platform Layer to communicate with software engineers working at the middleware layer at the same company. What’s important is that the external-facing endpoints of a system can integrate well with other devices. The variety and intricacies of different reference models are well beyond the scope of this book, but it’s useful to picture any computing system as a collection of clearly defined layers stacked on top of one another, from hardware to user. We can create a universal Macro-System Stack by adding a “Hardware Layer” beneath the Physical Layer of the OSI model (see Figure 7-4). The “Hardware Layer” in our stack is what this book is all about, but we can’t forget the layers above it that make the hardware useful!

The Macro-System Stack is a more practical way of thinking about how the different levels of a computing system fit together. The Application Layer is what you interact with – whether an app on your phone, a program on your computer, or the interface for a website you are visiting. The Middleware Layer comprises the backend application framework. This is where a traditional software developer would code the “inner workings” of a given application – tying together the various databases, functions, and security protocols that power a software program. Middleware is built on a Platform Layer, composed of an operating system supported by a kernel and device drivers, which manage the operations of the various hardware components like memory and CPU time (GeeksforGeeks, 2020). The Hardware Abstraction and Physical Layers bridge between the software and the hardware that powers it. Between and across the Hardware Platform, Hardware Abstraction, and Physical Layers lie embedded software and firmware. Embedded software and firmware are types of software that sit “closer” to the hardware than typical middleware-layer or application-layer code. Firmware, for example, might be used to carry out low-level tasks such as analog-to-digital or digital-to-analog conversion. The terms can be used interchangeably, though embedded software usually refers to code that impacts higher-level features or functions of a device and is often “farther” from the core circuitry than firmware. All of this is built on the Hardware Layer, which comprises the physical circuitry and core silicon that we learn about in this book.

An illustration of 6 layers. The layers from top to bottom are an application layer with a user interface, a middleware layer with security frameworks, a platform layer, a hardware abstraction layer, a physical layer, and a hardware layer with integrated circuitry and core silicon.

Figure 7-4

Macro-System Stack

RF and Wireless – The Big Picture

You should by now have a general understanding of the inner workings of RF devices, but how does this network of devices work together to bring you your favorite shows, carry your voice to a friend in another city, or connect you to the Internet?

To understand this “big picture” of wireless networking systems, we will track the path of a typical long-distance phone call (see Figure 7-7).

To start, a phone making a call will transmit a signal (from its transmitter’s antenna), which will likely be picked up by a cell phone tower or base station. A base station is a relay point that extends a service network to a specific area (Commscope, 2018). Because RF signals lose strength and accuracy the farther they travel (due to interference and noise), service providers have spent billions building robust networks of base stations across the globe to make sure you don’t lose your signal (Commscope, 2018).

You can think of a base station as a giant receiver and transmitter with a router that receives an incoming phone call or broadcast and directs and amplifies the signal to another base station or “information exchange center” on its way to its intended destination (Wright, 2021). Base stations come in all shapes and sizes – cell phone towers, small stations on top of buildings, and those tacky antennas you see poorly disguised as trees on the side of the freeway are all base stations (Commscope, 2018). Each base station has a range of coverage called a coverage cell, and the patchwork of cells makes up a service provider’s coverage area. As a rule of thumb, the smaller the cell, the greater the signal strength (less distance = less interference and noise), but the smaller the coverage area (Weisman, 2003). The reason you may have terrible service when you go hiking in a remote area is that you are too far away from a base station your phone can connect to. Service providers are constantly weighing the benefits of a stronger network against the costs (more base stations). We summarize the major cell types in Figure 7-5 and can see their coverage areas illustrated in Figure 7-6.

An illustration summarizes macro cell, microcell, pico cell, and in-building systems and their respective coverage areas. The range of macro cells stretches from a few miles to over 20. The range of microcells is just over a mile. The range of a pico cell is around 200 yards.

Figure 7-5

Different Types of Base Stations

An illustration of the coverage area of different cells. It denotes Wi-Fi and Bluetooth coverage areas in the room, pico cells in the building, microcells in urban areas, macro-cells in suburban areas, and satellites in global areas.

Figure 7-6

Different Types of Coverage Cells

Once it has received the signal from your phone, the base station sends your call to a central exchange, where it can be routed to any number of places. If a call is going across the country, for example, it will likely be routed to a satellite, which will then route the call to another exchange center close to the location to which the call has been directed. That exchange point will then route the call to another base station or a landline, which can finally connect the call to the intended recipient.

An illustration of the signal path between the cell phone and base station, information exchange center and base station, worldwide transfer location and information exchange center, antenna relay and information exchange center, antenna relay and mobile devices, and information exchange center and landline.

Figure 7-7

Signal Path of a Long-Distance Phone Call

Of course, directly connecting via satellite or landline is a possibility (hence satellite phones and cable TV), and oftentimes hard connections are used to get from one hub to another. Hardwired information transfer is great for processing time and information integrity, but it can be very expensive, so it is usually used over shorter distances, for significant data transmission between exchanges, and for applications where speed is extra important (Weisman, 2003). High-frequency trading firms, for example, have spent billions laying fiber-optic cables from Chicago to New York to shave milliseconds off the time it takes financial data to reach their trading centers. Even if you’re making a call using a landline, if the call recipient isn’t within a short distance of where you live, chances are your call is being routed wirelessly via satellite or over long-distance fiber-optic cabling to another routing center closer to its end destination.

Broadcasting and Frequency Regulation

With so many devices and users sharing the limited bandwidth available, a dizzying number of RF signals are flying around at any given moment.

If a base station receives more than one call at a time, how does it know which signal is which and where each signal is supposed to go? To mitigate this problem, strict regulation of spectrum frequencies by the FCC ensures that we don’t flood the important parts of the electromagnetic spectrum with too much “signal pollution” (Weisman, 2003). The FCC dictates which frequency bands can be used for what so that your call doesn’t interfere with your neighbors’ addiction to “Keeping Up with the Kardashians.” These frequency bands are a limited and valuable resource, and it is in service providers’ best interest to do everything they can to maximize the information they can send using the fixed bandwidth allotted to them.

A lot of complex technology goes into solving this problem, which we’ll cover in the following sections.

Digital Signal Processing

Digital Signal Processors (DSPs) are used to process signals in real time and convert between analog and digital signaling. In RF and wireless communications, DSP technology uses sophisticated mathematics and computational methods to fit more information into a given digital signal. By packing more information into the same signal, we can send a lot more from place to place using the same, limited amount of frequency bandwidth. DSPs accomplish this using sophisticated algorithms to encode and shrink the amount of “frequency space” that a given piece of information requires, a process called signal compression. Signal compression can be lossless, which uses special algorithms to encode the exact information with less storage, or lossy, which uses complex models of what humans can see and hear to store only the information that we can perceive.
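To make the lossless idea concrete, here is a toy Python sketch using run-length encoding. Real DSPs rely on far more sophisticated compression algorithms; this simple stand-in just shows how repeated information can be stored more compactly and then recovered exactly.

```python
def run_length_encode(bits):
    """Losslessly compress a bit string by storing each value and how many
    times it repeats, instead of every individual bit."""
    encoded = []
    count = 1
    for previous, current in zip(bits, bits[1:]):
        if current == previous:
            count += 1
        else:
            encoded.append((previous, count))
            count = 1
    encoded.append((bits[-1], count))
    return encoded

def run_length_decode(encoded):
    """Reverse the encoding exactly; no information is lost."""
    return "".join(value * count for value, count in encoded)

original = "0000000011111111110000111"
packed = run_length_encode(original)
assert run_length_decode(packed) == original   # lossless: we get back exactly what we put in
print(packed)
```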

The second set of technologies important to understand is multiple access standards. In essence, multiple access technology allows service providers to route multiple calls through the same base station, or across a given amount of bandwidth. The two most common multiple access technologies used today are TDMA and CDMA.

TDMA and CDMA

TDMA (Time Division Multiple Access) separates, or multiplexes (in engineering speak), calls in the time domain within the same frequency channel, first by converting a caller’s voice into digital bits and then breaking these digital bits into defined “chunks” of time. These “chunks” of call time from different conversations are then sent, one after the other, using the same small amount of bandwidth (this is what we mean by channel). This all works because voice is actually a very low bandwidth signal. Voice occupies only about 4kHz of bandwidth, so if the TDMA channel has 4MHz of bandwidth, we can cram roughly 1,000 voice signals into that one channel.
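Here is a rough Python sketch of both the slot-count arithmetic above and the interleaving idea itself, with made-up call data: chunks from three conversations take turns on one shared channel and are sorted back out on the receiving end.

```python
# Slot-count arithmetic from the paragraph above (illustrative figures).
channel_bandwidth_hz = 4_000_000
voice_bandwidth_hz = 4_000
print(channel_bandwidth_hz // voice_bandwidth_hz, "voice calls per channel")  # 1000

# TDMA in miniature: interleave fixed-size "chunks" of several calls,
# one after another, onto a single shared channel.
calls = {
    "call_A": ["A0", "A1", "A2"],
    "call_B": ["B0", "B1", "B2"],
    "call_C": ["C0", "C1", "C2"],
}

channel = []                                  # what actually goes over the air
for time_slot in range(3):
    for name, chunks in calls.items():
        channel.append((name, chunks[time_slot]))

# The receiving end sorts the chunks back into their original conversations.
reassembled = {name: [] for name in calls}
for name, chunk in channel:
    reassembled[name].append(chunk)

assert reassembled == calls                   # every conversation is put back together
```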

You might expect that breaking a conversation into so many smaller parts and sending them separately would lead to an unintelligible mess. The magic of TDMA is that the system can put the “conversation chunks” back together so quickly that there is no detectable lapse in the conversation, even though there is a break between when each chunk is received (ITU, 2011). And remember, all these components are running at very, very fast clock speeds. A 1GHz chip (1 billion operations per second) can perform a million operations on your phone call data in just one millisecond. Trust me, you’ll never notice.

You can think of TDMA like two jugglers tossing different colored balls to one another (each color being a different conversation) along the same path, then dropping the balls into separate “call buckets.” The jugglers in this analogy can toss the balls so quickly, though, that it looks like they are just playing a simple game of catch, with no break in the conversation for the listener on the other end of the call.

CDMA (Code Division Multiple Access), on the other hand, uses algorithms to code digitized voice bits or other data and transmit them across a wider channel (greater frequency range); the data is then “de-coded” on the receiving end (ITU, 2011). Instead of alternating between which bits are sent from which device along a given channel, CDMA can send data from numerous senders to numerous receivers at the same time and use algorithms and digital signal processing (DSP) technology to ensure the data gets to the right place intact.
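Here is a minimal Python sketch of the CDMA idea using two orthogonal spreading codes (the codes and data bits are made up for illustration). Both users transmit over the same channel at the same time, and each receiver recovers its own data by correlating the combined signal against its own code.

```python
import numpy as np

# Two orthogonal spreading codes (chips are +1/-1); their dot product is zero.
code_a = np.array([+1, +1, -1, -1])
code_b = np.array([+1, -1, +1, -1])

# Each user's data bits, mapped to +1/-1.
data_a = np.array([+1, -1, +1])
data_b = np.array([-1, -1, +1])

# "Spread" each bit across the chips of that user's code, then add the two
# transmissions together; they share the channel at the same time.
tx_a = np.concatenate([bit * code_a for bit in data_a])
tx_b = np.concatenate([bit * code_b for bit in data_b])
channel = tx_a + tx_b

# Each receiver correlates the combined signal with its own code to
# "de-code" its data; the other user's signal averages out to zero.
def despread(signal, code):
    chunks = signal.reshape(-1, code.size)
    return np.sign(chunks @ code)

print(despread(channel, code_a))   # recovers data_a
print(despread(channel, code_b))   # recovers data_b
```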

Figure 7-8 helps bring our juggler analogy to life. While TDMA multiplexes data into chunks that can be sent using the same frequency channel one after the other, CDMA uses digital signal processing to send data at the same time across a wider frequency channel. Lightning-fast digital signal processing techniques help both methods deliver strong, uninterrupted service.

An illustration of two jugglers playing a simple game of catch and tossing different colored balls to one another along the same path, then dropping the balls into separate "call buckets." Two graphs of T D M A and C D M A compare frequency versus time.

Figure 7-8

Juggling the Differences Between TDMA and CDMA

Where are these technologies used in practice? TDMA is the technology that underlies GSM (Global System for Mobile Communications), the primary standard for communication networks globally. In the US, carriers like AT&T use GSM, while Verizon Wireless uses CDMA.

1G to 5G(eneration) – An Evolution

The world of telecommunications and wireless technologies is always evolving – here’s a wide-angle view of how far we’ve come (Vora, 2015). The original 1G technology could transmit only 4kbps, whereas today’s 5G technology can transmit up to 1,000,000kbps (1Gbps), nearly a quarter million times more!

1G – First Generation: This is when cell phone technology first came into existence, starting in the late 1970s. The first cell phones were big and clunky, with terrible battery life. They used analog technology to send RF analog “wave” signals from point A to point B wirelessly.

2G – Second Generation: Here cell phones, using modulation, could now transmit digital data wirelessly. CDMA and GSM technologies were developed that allowed service providers to connect more devices more affordably, though services were limited to voice and text messaging (SMS).

3G – Third Generation: Building on voice and text services, 3G technology expanded wireless capability to deliver email, video streaming, web browsing, and other technologies that made the “Smartphone” possible.

4G – Fourth Generation: High-speed connectivity built on advancements in hardware technologies has allowed for faster data transmission, mobile gaming, video conferencing, high-definition content delivery, and cloud computing capabilities.

LTE (Long-Term Evolution): You’ll often see service providers use this term in conjunction with 4G (“4G LTE”) to market their service as better and faster than their competitors’. In actuality, LTE is just an industry standard used to make sure the various devices, access points, base stations, satellites, and other components that make up our telecommunications network are able to work with each other to create one big, fully functioning system. Standards like LTE help make sure different technology companies can develop products that are able to work with other parts of the system, which is pretty important if you don’t want your service to cut out every time your cell phone is routed to a cell tower operated by a different network provider.

5G – Fifth Generation: Though you probably see 5G advertised all over the place, this generation of networking technology is still being developed and is designed to be much faster and more efficient than 4G. To deliver higher “data throughput” connection speeds, 5G technology will require a robust network of thousands of cell towers and tens of thousands of small cell antennas deployed across coverage areas.

We can see the evolution from 1G to 5G played out in Figure 7-9. Since the advent of 1G consumer analog systems in the 1970s, telecommunications infrastructure and wireless technology have rapidly evolved to deliver increased connectivity and performance to businesses and consumers across the board. 6G networks are already in development to enable new, data-hungry applications and continued growth in the space.

A timeline of the evolution of 5 G from 1 G. The evolution of 1 G began in 1979, 2 G began in 1991, 3 G began in 1998, 4 G began in 2008, and 5 G began in 2019.

Figure 7-9

The Evolution of 5G

Wireless Communication and Cloud Computing

While the evolution from 1G to 5G has delivered exponential improvements in data transmission and brand-new technologies like mobile gaming, video conferencing, and high-definition video streaming, these faster rates have also driven an insatiable demand for cloud computing. The “cloud” is an elusive mystery to many of us but is a much simpler concept than one might expect. What we call the cloud is really just countless servers housed in giant rooms called data centers. By storing, or hosting, applications on these high-performance computers, companies and consumers can store information, run applications, and boost capacity without having to invest in and manage all their own infrastructure. This is only possible because communication networks are so fast today. You wouldn’t think to store 1MB worth of photos in the cloud if you had to use your 56k modem to view them each time. But when data exchange rates are lightning fast, why buy an external hard drive or a top-of-the-line 512GB iPhone when you can get to all your photos, music, and movies in seconds? By the same token, why set up a private company server when AWS (Amazon Web Services) or Google Cloud can handle it with a lot less trouble at a fraction of the cost? Prior to the boom in wireless innovation over the last couple of decades, the bottleneck to centralized computing operations like data centers was moving the data to and from end users. With this bottleneck alleviated, cloud computing is here to stay. The problem has now shifted from limited bandwidth to building and powering data center infrastructure that can support ballooning demand.

We can see examples of two data centers in Figure 7-10, which pictures a data center built in a shipping container used by Microsoft to process and redistribute Bing Map data (left) and the bird’s eye view of Google’s Data Center in Council Bluffs, Iowa (Right) (Scoble, 2020) (Davis, 2019). With 200,000 square feet of space, it pales in comparison to the China Telecom Data Center in Hong Kong, which, at over 10 million square feet, is the largest data center in the world (Kumar, 2022)!

Two photographs display a data center built in a shipping container and a bird's-eye view of Google’s data center in Council Bluffs, Iowa.

Figure 7-10

Data Center Used by Microsoft’s Bing Map Team (Left) and Google’s Data Center in Council Bluffs, Iowa (Right) (Scoble, 2020) (Davis, 2019)

Chapter Seven Summary

In this chapter, we did a deep dive into the RF portion of the Electromagnetic Spectrum and learned how different frequency bands are carefully regulated by the FCC. Next, we broke down the two main parts of all RF systems – transmitters and receivers – into their constituent parts. By interweaving the six primary subcomponents – oscillators, modulators, amplifiers, antennas, filters, and power sources – with thousands of unique wireless building blocks, engineers are able to build intricate systems capable of long-distance, high-frequency remote processing and mobile communication. Following our analysis of RF subcomponents, we took a step back for a wider view of the OSI Reference Model and the broader Macro-System Stack. From there, we compared the different types of base stations and followed the path of a signal from an international caller across the globe to a receiver on the other side of the world. Having traced a call from one end of the world to the other, we pondered the question – how do so many signals travel using the same bandwidth and airspace? We found the answers in TDMA, CDMA, and digital signal processing technologies that help maximize throughput with different signal dicing and timing schemes. We followed these bandwidth-optimizing technologies through the telecommunications evolution, from the 1st Generation analog devices of the late 1970s and 1980s through the high-frequency 5G networks in development today. Finally, we touched on the downstream impacts of wireless technology advancements and the rise of cloud computing.

All RF systems depend on RF Transmitters and Receivers to send and receive information. Leveraging technologies like TDMA, CDMA, and DSP, service providers and device manufacturers are able to squeeze the most they can out of a limited amount of bandwidth. As technology has progressed, each successive generation of wireless devices and telecommunications infrastructure boosts what we can transmit and receive over the airwaves. The foundation on which nearly all RF systems are built, semiconductors have revolutionized the way we communicate with one another, entertain ourselves, and process information.

Your Personal SAT (Semiconductor Awareness Test)

To be sure that your knowledge builds on itself throughout the book, here are five questions relating to the chapter you just read.
  1. What do we mean when we say “RF”? Which key characteristics set one RF signal apart from another?

  2. How does the FCC manage shared frequency bandwidths? What technologies are used to fit more information into a given amount of bandwidth? How does each technology accomplish this task?

  3. Name the five key base components to any transmitter or receiver. Which component is number six and what makes it special?

  4. Why is the physical layer so important in the OSI model? What are the other six layers and how do they function?

  5. What makes each generation of telecommunications technology unique? How have these advancements enabled the growth and success of cloud computing?