Having spent eight chapters dissecting everything from transistor structures to system architecture, we should be well acquainted with semiconductor technology. The natural question now is: how does all this technology fit into the bigger picture? Since its beginnings in the 1960s, the semiconductor industry has faced two ongoing major challenges – rising design costs and increasing manufacturing costs. These two challenges have driven the field from the fully integrated companies of its early years to the multi-faceted fabless design models we see today. In the following sections, we outline the drivers of design costs and manufacturing overhead, followed by a discussion of the transformation of the semiconductor ecosystem over the last fifty years.
Design Costs
Designing advanced ICs is an increasingly complex and costly endeavor. Each chip must undergo architecture and IP qualification, design verification, physical design, software licensing (EDA), prototyping, and validation before it can be sent to a fab for manufacturing (McKinsey & Company, 2020).
In the nascent stages of the semiconductor industry, design costs were largely driven by a lack of universally accepted standards and tools. Most companies developed their software tools in-house, increasing talent training costs and making it difficult to integrate with components designed by other companies. To tackle these challenges, Electronic Design Automation (EDA) companies began developing tools to automate the design process and drove the adoption of universal hardware description languages (HDLs) like VHDL and Verilog. These languages helped “modularize” the design flow so it could be better targeted for additional investment, and enabled design firms to build larger systems at aggregated, more manageable levels of abstraction (much like C++ instead of assembly language) (Nenni & McLellan, 2014). The separation between design houses and EDA tool companies boosted efficiency, reduced design costs, and re-organized companies to better align with their core competencies in software and hardware. An advanced analog semiconductor company, for example, could focus all its R&D spending on developing analog design expertise using licensed, off-the-shelf EDA tools rather than developing custom tools in-house. Today’s EDA industry is dominated by large firms like Cadence, Synopsys, and Mentor Graphics, with expansive tool portfolios that cover the design flow end-to-end. EDA and design software account for nearly 50% of design expenditures and continue to be significant drivers of burgeoning design costs (Venture Outsource, n.d.). To put the growing influence of EDA into perspective, one need only look at the size of EDA companies themselves. EDA giant Synopsys, for example, doesn’t make any actual silicon, yet has a market cap of over $50B – roughly the size of Ford Motor Company (Yahoo! Finance, 2021).
Through the 1980s, most integrated circuits were medium in size and complexity. To create a fully functioning product, systems companies would purchase different chip types and integrate them into their device by soldering them to a PCB or other connecting device. As semiconductor technology grew more advanced and systems companies sought higher levels of integration, demand for Systems on Chips (SoCs) skyrocketed. SoCs enabled companies to fit all the traditionally separate functional components (memory, processor, etc.) onto a single chip. As the need for closer integration increased, so too did the complexity of each system component, making design even more difficult and expensive. To ease the complexity and tamp down on costs, Semiconductor IP Companies began to flourish. IP companies did two things to reduce design complexity – (1) they offered base designs of common modules that could be used as a foundation for an application-specific circuit, and (2) they offered cell libraries that circuit designers could use to generate even more complex designs. In return for upfront licensing fees, per-product royalties, or library access subscriptions, design companies could quickly acquire the undifferentiated parts of a new design and focus their efforts on the parts that made their chip unique. Successful semiconductor IP companies today fall into one of three categories – microprocessors like those offered by ARM Holdings, communication architectures that enable different parts of an SoC to talk to one another like those offered by Arteris, and analog IP, which has become harder and harder to design at each successive process node. IP companies in these three areas continue to play an important role in keeping design costs under control.
If you’re an analog company, these IP providers can supply all the digital functions you need (like microprocessor and memory), and if you’re a digital company, they can provide the analog functions (like oscillators and power reference circuits).
As lithography and manufacturing equipment suppliers enable increasingly smaller transistors, semiconductor R&D and design have become progressively more difficult – complicated by phenomena like quantum tunneling and current leakage (McKinsey & Company, 2020). Designing the most advanced 3nm SoCs already costs between $500 million and $1.5 billion, according to research and consulting firm IBS (International Business Strategies) (Lapedus, 2019). Remember, this is just the cost to design an advanced SoC. It doesn’t include the cost to manufacture the design at the 3nm node. Designing ICs using less advanced process nodes can reduce costs, but even dated technologies cost a significant amount of money – estimated design expenditures for 7nm and 10nm chips are $300 million and $175 million, respectively (McKinsey & Company, 2020). Silicon design is expected to become increasingly complex in the coming decade, posing unique challenges to chip providers’ forward progress (McKinsey & Company, 2020).
Manufacturing Costs
Evolution of the Semiconductor Industry
1960s–1980s: Fully Integrated Semiconductor Companies
From the commercial introduction of the IC by Fairchild Semiconductor in the 1960s through the 1980s, the semiconductor industry was largely composed of Fully Integrated Semiconductor Companies. These early firms would forecast demand for a product, then design, manufacture, and package it before marketing it to potential customers. Each company invested significant resources into its own fab(s) to manufacture the chips it sold. High fixed costs left firms vulnerable to volatile demand swings – if sales for their products dipped even slightly, fab utilization could drop significantly, spreading revenue across a greater cost basis and cutting into profit margins (Nenni & McLellan, 2014).
1980s–2000: IDM + Fabless Design + Pure-Play Foundry
The fragmentation of the value chain began to take hold in the 1980s and 1990s. Building and owning a fab grew increasingly expensive, requiring greater amounts of up-front capital investment for each successive technology node. Once a fab was built, companies were driven to maximize their output and contribution margin, even if they weren’t covering their fixed costs. As a result, even Integrated Device Manufacturers (IDMs) – companies like Intel that design, manufacture, and sell their own integrated circuits – have started to lease out parts of their fab capacity (Nenni & McLellan, 2014). Some former IDMs, like AMD, have since gone entirely fabless, while others, like Texas Instruments, have shifted their most advanced production to foundries. Processor companies would also sell their older fab facilities “down-market” to analog companies that could utilize those older technologies for many more years.
In the mid-1980s, companies like Xilinx (founded 1984) and Qualcomm (1985) began with a business model designed to take advantage of this excess capacity (Nenni & McLellan, 2014). By only designing the chips and signing contracts with an IDM to utilize additional manufacturing capacity, they were able to forego high upfront capital investments into their own fab and essentially convert these fixed costs into variable expenditures built into the price charged for each wafer. Not only did this fabless model reduce the required capital for new entrants, it also better matched variable supply economics to variable demand.
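The economics behind this fixed-to-variable cost conversion can be sketched with a simple back-of-the-envelope model. All dollar figures below are illustrative assumptions, not industry data: the point is only that an IDM’s fab-heavy cost structure wins at high, steady volume, while the fabless model wins when volumes are lower or uncertain.

```python
# Hypothetical comparison of IDM vs. fabless cost structures.
# All figures are illustrative assumptions, not industry data.

def idm_cost(wafers_sold, fab_fixed_cost=2_000_000_000, variable_per_wafer=1_000):
    """An IDM pays the fab's fixed cost regardless of how many wafers it sells."""
    return fab_fixed_cost + variable_per_wafer * wafers_sold

def fabless_cost(wafers_sold, foundry_price_per_wafer=4_000):
    """A fabless firm pays the foundry only for the wafers it orders."""
    return foundry_price_per_wafer * wafers_sold

for volume in (100_000, 500_000, 1_000_000):
    idm_unit = idm_cost(volume) / volume
    fabless_unit = fabless_cost(volume) / volume
    print(f"{volume:>9,} wafers: IDM ${idm_unit:,.0f}/wafer, fabless ${fabless_unit:,.0f}/wafer")
```

Under these assumed numbers, the IDM’s per-wafer cost is far higher at low volume (the fixed cost dominates) but drops below the foundry price at high volume – which is why the fabless model appealed most to new entrants without guaranteed demand.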
In 1987, shortly after Xilinx and Qualcomm were founded, the first pure-play foundry, TSMC, opened its doors (Nenni & McLellan, 2014). TSMC had an entirely new business model – it focused completely on manufacturing other companies’ designs. By specializing in fabrication technologies, foundries are able to focus on a smaller set of core competencies, take advantage of more stable demand and consistent volumes (since they draw orders from multiple customers instead of just their own designs), and strategically position themselves in markets with lower labor costs like Taiwan and other parts of East and Southeast Asia. Geographically, this balance has continued, with US dominance in fabless design and Asian dominance in manufacturing- and assembly-related activities. Xilinx and other fabless design companies quickly shifted away from IDMs to lower-cost domestic and overseas foundries, a model that has dominated the market ever since. Even the few remaining IDMs that have their own fabs, like Samsung and Intel, still depend on pure-play foundries for more advanced technology nodes, typically using their own manufacturing capacity to produce devices requiring less demanding, older process nodes.
At the same time the fabless design model took hold in the late 1980s and 1990s, an important shift was happening further up the value chain on the design side as well. As EDA tools made front-end design easier, systems companies that had historically depended on off-the-shelf products or custom silicon from fabless design companies and IDMs began building their own chips, better suited to their specific needs. These tools gave them front-end capability that in theory could enable them to bypass chip design companies and work directly with foundries, but they still lacked the complex back-end expertise necessary to carry out physical design and downstream value chain activities. New players like VLSI Technology and LSI Logic, as well as established players like Qualcomm, met this demand with innovative, ASIC-centric business models (Nenni & McLellan, 2014). Pure ASIC companies like LSI handled front-end designs developed by systems companies, then coordinated with manufacturing and assembly suppliers to bring those designs to fruition. Design services companies like Xilinx and Qualcomm drew much of their revenue from a similar model, though they also began to focus on specific markets, developing unique expertise and growing product portfolios of their own (Nenni & McLellan, 2014). Xilinx (now owned by AMD), for example, controls over 50% of the market for FPGAs – programmable chips that can be repurposed for different uses after manufacturing – while Qualcomm is a world leader in custom silicon for wireless connectivity and infrastructure products.
2000–Today: Fabless Design Companies + Foundries + IDM Stragglers + System Company In-House Design
Systems companies’ push to develop their own silicon may be driven primarily by cost, but there’s also a motivation to protect their intellectual property. If Apple is working with a silicon provider on a chip for its newest, most cutting-edge iPhone, for example, it may have to reveal too many details of its system in order for the silicon company to provide the ideal solution. Pulling that work in-house keeps those key discussions from leaving the corporate campus.
The pace of change in the semiconductor industry has been so fast that an engineer entering her retirement years in the 2020s has lived through all four of these eras. She may have started her career at a Fully Integrated Semiconductor Company like Fairchild, designing the chip, literally walking it across the building to the fab, then seeing it tested and shipped out the loading dock in the back of that same building. In fact, she may have done some of the testing herself. Now, she may be working at Apple, designing a single block in a massive chip that is manufactured, tested, and shipped offshore. She may never actually see the final silicon. Quite a change for someone who has never changed industries.
Fabs vs. Fabless Design – The Case Against IDMs
There exist today three primary types of semiconductor business models – Fabless Design Companies, Pure-Play Foundries, and Integrated Device Manufacturers. As manufacturing and fab construction costs continue to grow, IDMs are having to rethink the way they do business. Owning a fab can provide numerous advantages, including greater process control, faster time-to-market, and tighter design integration. These benefits, however, have been increasingly outweighed by the costs.
One of pure-play foundries’ biggest advantages over IDMs is their ability to pool demand across multiple customers and fully utilize capacity. This smooths out revenue over time and prevents underutilization of fixed assets. As the cost of manufacturing equipment and foundry construction continues to rise, underutilization becomes increasingly expensive – scaling directly with fabrication costs and creating unit-economics disadvantages.
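A quick worked example makes the underutilization penalty concrete. The figures here are illustrative assumptions, not actual fab economics: a fixed annual cost spread over fewer chips directly inflates the fixed cost carried by each unit sold.

```python
# Illustrative: how fab underutilization inflates per-chip fixed cost.
# Both constants below are hypothetical round numbers, not real fab data.
FIXED_COST = 1_000_000_000   # assumed annual fixed cost of owning the fab
CAPACITY = 50_000_000        # assumed chips per year at full utilization

def fixed_cost_per_chip(utilization):
    """Fixed cost allocated to each chip actually produced."""
    return FIXED_COST / (CAPACITY * utilization)

for u in (1.0, 0.8, 0.5):
    print(f"{u:.0%} utilization -> ${fixed_cost_per_chip(u):.2f} fixed cost per chip")
```

Under these assumptions, dropping from full utilization to 50% doubles the fixed-cost burden per chip – the same fab, the same equipment, but half the output to pay for it. A foundry pooling orders from many customers keeps that denominator high; a single-product IDM cannot.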
Intel may argue that owning a fab gives the company greater control over the manufacturing process and allows it time-to-market advantages over fabless companies, whose products may have to wait in line behind other foundry customers. While this may be true for smaller companies with low-volume or low-margin orders that can be deprioritized in foundry production schedules, any technical delay at an IDM’s own fab can be a significant obstacle to meeting delivery targets. Intel’s next-generation 7nm chips were recently delayed until 2022, and the company now may have to contract with foundries to produce some of its products (Fox, 2020). Operating an IDM in the 2020s is truly a feast-or-famine endeavor. If you hit all your schedule targets and release a manufacturing process that’s superior to your competitor’s, you have a huge advantage. If you miss those targets, you’ve sunk billions into failed process development, and may be forced to manufacture your products in the same foundries that your competitor is using. And your internal development may never catch up once you fall behind.
Proprietary manufacturing IP and data capture is another common reason IDMs may use to justify the costs of fab ownership. Though this may have proved an advantage historically, foundries have unique technological advantages that are hard to match as an IDM. Because they manufacture a wider variety of components and ICs, foundries can iterate process technology at a quicker rate, building important competencies they can then democratize and provide to fabless design houses and other key customers. Remember what we were saying a while ago about comparative advantage?
Tighter integration between design and manufacturing is a substantial advantage that can result in significant cost reductions and performance improvements. Though fab-owning IDMs may have a material edge here, software tools, design suites, and remote-work networking technologies have made integration between foundries and their customers much more efficient. Fabless Design Companies like Qualcomm and Broadcom can now seamlessly design new chips with fabrication costs and production optimization squarely in mind. Even small startups can access leading integration and manufacturing technologies by utilizing what’s called a shuttle run, where a foundry combines the designs of multiple customers onto a single mask set and manufactures just a few wafers. You may only get a few hundred copies of your chip, but that’s plenty to build demo boards, go on the road, and showcase your technology. And instead of a $5M expenditure on a dedicated mask set and wafer run, you may only pay $500,000, since that cost is shared among many customers. Yes, that’s still a lot of money, but it’s a number that many Venture Capital Funds would be willing to risk on a new technology. Then your company has the proof of concept in actual silicon so you can go and raise a larger round of financing.
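The shuttle-run arithmetic above is simple cost sharing. The sketch below uses the chapter’s own illustrative figures – a ~$5M dedicated run versus a $500,000 share – which implies roughly ten customers splitting the bill evenly (the even split is our simplifying assumption).

```python
# Sketch of shuttle-run (multi-project wafer) cost sharing, using the
# chapter's illustrative figures. Even cost splitting is an assumption;
# real foundries price shuttle slots by die area and other factors.

DEDICATED_RUN_COST = 5_000_000  # illustrative cost of a dedicated mask set + wafers

def shuttle_cost_per_customer(total_cost, n_customers):
    """Each customer pays an equal share of the combined mask set and wafer run."""
    return total_cost / n_customers

print(shuttle_cost_per_customer(DEDICATED_RUN_COST, 10))  # -> 500000.0
```

The trade-off is volume: sharing a mask set means each customer gets only a slice of each wafer, which is why shuttle runs suit prototypes and demos rather than production.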
In light of these disadvantages, the few remaining IDMs – namely Intel and Samsung – have shifted or are shifting to a Fab-Lite model, whereby they produce some of their own chips while outsourcing significant portions of their demand to foundries (Patil, 2021). An X-factor to keep in mind here is the recent global semiconductor shortage and competition between the United States and China. The few remaining US IDMs like Intel and GlobalFoundries are poised to benefit substantially from the $50 billion in government incentives recently passed by Congress to strengthen the industry and shore up domestic supply chains. It remains to be seen, however, whether this renewed support will counteract the fundamental market disadvantages IDMs suffer from today.
Many decades ago, AMD’s founder and then CEO Jerry Sanders famously quipped, “Real men have fabs.” Some years later, in 2009, AMD spun off its fab into what is now GlobalFoundries (Pimentel, 2009). The lesson – ditch the fab and give your foundry a call.
Industry Outlook
Shaped by a multitude of overlapping variables and market forces, the semiconductor industry is constantly changing. For the next several pages, we will highlight five key trends that shape the industry’s history, present health, and future growth prospects.
1. Cyclical Revenues and High Volatility
To manage Silicon Cycles and year-to-year volatility, semiconductor companies must be able to control costs while not sacrificing key investments in R&D. Manufacturing represents the greatest expenditure for US-based semiconductor companies, comprising on average nearly a third of total costs, followed by R&D, Depreciation and Amortization, and SG&A (SIA Databook, 2021). Costs can differ significantly between Fabless Design Companies and IDMs, the latter of which have higher capital equipment expenses due to fab ownership. It can be a real challenge to weather an industry downturn if you have massive fixed costs from a manufacturing facility on your balance sheet. You’re stuck paying for all that equipment, even if the fab is only running at 50% capacity. Due to growing manufacturing and capital equipment costs, production expenditures have grown significantly as a percentage of total costs over the last two decades.
2. High R&D and Capital Investment
3. High Compensation and Positive Productivity Growth
In addition, revenue per employee, a strong indicator of productivity, has doubled over the past twenty years, reaching close to $571,000 in 2020 (SIA Databook, 2020). Unlike most industries, where average selling prices increase at or above the rate of inflation, per-unit costs have decreased fast enough for companies to maintain profitability without excessive price increases. Absent price drops in vital input materials or manufacturing costs, such consistent profitability is only possible through increasing production efficiency.
4. Long-Term Profitability
1. Increased demand for consumer electronics due to rising household disposable incomes, increasing urbanization, and rapid population growth (Fortune Business Insights, 2021)
2. Fast-expanding emerging economies with growing demand for ICs (Fortune Business Insights, 2021)
3. Technological drivers including the Internet of Things (IoT), smartphones and 5G communications, and Artificial Intelligence and Machine Learning (AI/ML) (Columbus, 2020)
For proof that sunny days for the semiconductor industry lie ahead, look no further than the COVID-19 pandemic. At the beginning of lockdowns, analysts forecast a sales contraction of between 5 and 15%, but sales actually increased from $413 billion in 2019 to approximately $440 billion in 2020, a growth rate of roughly 6.5% (Bauer et al., 2020) (SIA Factbook, 2021). As consumers were stuck at home and not spending money on gas, vacations, or new office wardrobes, they had money to spend on new work-from-home computers and gaming systems. Reduced demand in automotive, industrial, and parts of the consumer market was offset by areas like server demand, PCs, and long-term growth areas like AI and 5G, helping the industry beat expectations and maintain a healthy growth trajectory (eeNews, 2021). NVIDIA, one of the leaders in processors for graphics, AI, and crypto, has set new records for quarterly revenue in every quarter of 2021, peaking at quarterly revenue of $6.5B reported in August of 2021 (Tyson, 2021).
Though new technologies and the explosion of personal computers, servers, and cell phones over the last couple of decades have been a boon for the industry as a whole, high consolidation has adversely affected many companies.
5. High Consolidation
As SoC designs become increasingly complex and transistors are pushed to their physical limits, design and manufacturing costs have never been higher. The industry’s profitability has depended on consistent cost-cutting R&D breakthroughs. On the whole, this has been successful, with costs shrinking from $0.98 per chip in 2001 to about $0.63 in 2019 (SIA Databook, 2020). In addition to pressure to tamp down on per-die unit costs, there has been enormous pressure on companies to get more out of each device – a boon for consumers, who gained access to greater computing power at lower costs. These dual pressures for lower cost and greater performance have driven ferocious competition between rivals and resulted in the consolidated behemoths we see today.
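The per-chip cost figures above can be turned into an implied annual rate of decline. This is our own arithmetic on the quoted SIA numbers, shown for illustration only:

```python
# Implied compound annual rate of change in per-chip cost, from the
# SIA figures quoted above: $0.98 (2001) down to about $0.63 (2019).
start, end, years = 0.98, 0.63, 2019 - 2001

cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.2%} per year")  # roughly -2.4% per year
```

In other words, the industry shaved a couple of percent off per-chip costs every year for nearly two decades while simultaneously increasing performance – a pace few other manufacturing sectors can match.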
This pattern is not unique to semiconductors – capital-intensive industries often favor size, since ballooning fixed costs can be spread or amortized across higher annual revenues. It is no surprise that cost of goods sold (COGS) as a percentage of revenue is significantly higher for smaller firms, which lack the economies of scale to compete with bigger players. This dynamic largely tracks with the push toward greater consolidation, with average annual deal volumes reaching $68.8 billion since 2015 (IC Insights, 2021). In 2020 alone, three mega-acquisitions were among the industry’s top five semiconductor acquisitions – NVIDIA’s $40 billion acquisition of ARM Holdings, AMD’s takeover of Xilinx for $35 billion, and Analog Devices’ swallowing of Maxim for $21 billion (IC Insights, 2021).
The consolidation trend has gained speed in recent years – well over half of the 51 most valuable semiconductor acquisitions have occurred since 2015 (Design and Reuse, 2021). Continued consolidation appears likely, though it is unclear for how long, considering how few major US semiconductor companies remain independent.
There are a few things to note to contextualize this diagram: (1) These are deal announcements – not all deals necessarily closed. Deals can fall through for a variety of reasons, including rejection by shareholders, management resistance, and a lack of regulatory approval, as was the case with Qualcomm’s failed bid to purchase NXP for nearly $40B in 2016. (2) The base cumulative valuation data for the years 2011–2020 were drawn from IC Insights’ 2021 McClean Report (IC Insights, 2021). The base cumulative valuation for 2021 was estimated at $22B but includes only the first eight months of reported M&A announcements (Design & Reuse, 2021). (3) We borrowed the baseline assumptions from the IC Insights 2021 McClean Report, assuming most notably that coverage includes “purchase agreements for semiconductor companies, business units, product lines, chip intellectual property (IP), and wafer fabs, but excludes acquisitions of software and system-level businesses by IC companies… transactions between semiconductor capital equipment suppliers, material producers, chip packaging and testing companies, and design-automation software firms.”
In full disclosure, while we tried to mimic IC Insights’ restrictions for all spotlighted deals, some deals may have been included that should not have been, or their net value overestimated, where a portion of a given deal (or the deal itself) fell under one of the restricted categories. In 2016, for example, the combined value of the spotlighted acquisitions alone exceeds the $103B in total M&A activity estimated by IC Insights – it is likely that a portion of each deal, or one of the deals itself, was not included in IC Insights’ net estimates. There was also a difference in the estimated value of the NXP acquisition that was eventually rejected by regulators – IC Insights valued the deal at $38.5B, while we quoted the $47B valuation at the time the acquisition was announced. These discrepancies do not affect the overall message the diagram is meant to convey, but they are worth noting for accuracy: the proportion of any individual deal’s value relative to net M&A activity may be disproportionately high for a given year. For a comprehensive list of sources, please see the fully annotated version of Figure 9-12 in Appendix B.
US vs. International Semiconductor Market
Our silicon supply chain journey starts when Samsung, an IDM based out of South Korea, places an order with Qualcomm, a fabless design firm based out of San Diego, for 5 million modem chips to help power its latest lineup of smartphones. Qualcomm uses an ISA licensed from ARM Holdings, a Semiconductor IP company based in the United Kingdom, and EDA tools from Cadence Design Systems, an EDA tools company headquartered in Silicon Valley, to design the new modem chip. Once Qualcomm finishes the front-end design process, it sends the GDS file to TSMC, a Pure-Play Foundry based in Taiwan, for manufacturing. TSMC cannot finish the manufacturing process on its own, however, without key inputs from various other countries. Its foundries use EUV lithography machines from ASML, a Semiconductor Equipment company based in the Netherlands, to print circuit patterns for its most advanced technology nodes. In addition to the equipment on its manufacturing line, TSMC uses wafers made from silicon dioxide sourced from the United States, processed into ingots in Japan, and sliced into wafers in South Korea. Once the wafer fabrication process at TSMC is complete, individual die are cut apart and placed in IC packaging at an OSAT (Outsourced Semiconductor Assembly and Test) company in Malaysia. Finally, the finished die are sent to a System-Level Integration company in China, where they are placed into smartphones and delivered to customers around the world!
Our simple example required inputs from seven separate regions to complete the journey from customer order to product delivery. Real semiconductor supply chains are much more dense, with complex electronic devices requiring thousands of components, tools, equipment, and effort provided by hundreds of suppliers and sub-suppliers.
The United States controls about 60% of the logic and analog markets, though it trails in the memory and discrete component markets with about 20–25% market share in each. Contrary to popular belief, the United States still manufactures a considerable portion of the world’s semiconductors; semiconductors were the third largest US export in 2020, trailing only aircraft and oil (SIA Factbook, 2021). Though still a manufacturing leader, the United States will likely see its share of modern manufacturing capacity shrink, as its capacity has grown far more slowly than that of overseas competitors, although recent supply chain concerns may slow this decline.
Though China has been the largest consumer of ICs since 2005, chips that are designed and produced within the country account for only 15% of total purchases (Nenni & McLellan, 2014). To avoid any possible consequences of future trade tensions or over-reliance on the United States, the Chinese government has devoted considerable attention and resources to grow the country’s domestic semiconductor industry (Nenni & McLellan, 2014). In 2014, Beijing pumped $20 billion into government-backed private equity funds, like Tsinghua Unigroup, focused on semiconductor technology development (Nenni & McLellan, 2014). More recently, in 2019, the Chinese government created a similar $29 billion fund with the aim of reducing China’s dependence on foreign suppliers and developing IC design and manufacturing technology (Kubota, 2019).
COVID-19 and the Global Semiconductor Supply Chain
The COVID-19 pandemic set in motion a global chip shortage that exposed many of the vulnerabilities of the semiconductor supply chain. The shortage has had significant economic consequences, including production cutbacks in the automotive, consumer electronics, medical device, and networking equipment industries, with lead times for many semiconductors stretching as high as one year (Vakil & Linton, 2021). A number of factors are responsible for the current situation. At the beginning of the pandemic, automakers mistakenly forecast a severe long-term drop in vehicle sales and cut back on orders of key chips. Foundries were only too happy to fill that capacity with chips for large at-home monitors, student Chromebooks, and Peloton bikes. By the time automotive demand rebounded, manufacturing lines had already been retooled to produce other consumer products, leaving car companies high and dry with no chips to power their vehicles (Vakil & Linton, 2021). To compound the issue, fires at two Japanese factories that produce advanced sensing ICs and fiberglass used to make PCBs further cut into supply (Vakil & Linton, 2021). Though each of these factors – a global pandemic, inaccurate forecasting and capacity allocation, and fires at two key Japanese factories – may seem like one-time, unavoidable hiccups, the reason they were able to cause so much disruption is structural in nature.
The core weaknesses of the current semiconductor supply chain are (1) regional stratification of key activities and (2) the resulting mutual interdependence. The reasons for regional concentration of important value chain nodes are straightforward – high design complexity and expensive manufacturing require a combination of scale and technical expertise that constrains what each player in the global supply chain can domestically produce. These dynamics incentivize each country to specialize according to its unique competitive advantages and have resulted in the segmented market structure we see today.
US leadership in R&D and design, for example, can largely be attributed to its existing talent pool and access to a steady stream of new engineers from US technical universities. About two-thirds of electrical engineering and computer science graduates at leading US schools are international, but with 80% of students remaining in the US post-graduation, the country is likely to retain a significant edge in engineering talent for the foreseeable future (Varas et al., 2021). The United States also benefits from a vast pool of venture funding, which has the capacity and will to make ambitious bets in the semiconductor industry.
On the manufacturing side, East Asia holds competitive advantages, including skilled and affordable manufacturing talent, robust infrastructure, and higher levels of government incentives. Though talent and infrastructure are both important to manufacturing, the importance of government incentives cannot be overstated. According to a report published by the SIA and BCG, incentives may account “for up to 30-40% of the 10-year total cost of ownership of a new state-of-the-art fab, which is estimated to amount to $10-15 billion for an advanced analog fab and $30-40 billion for advanced logic or memory” (Varas et al., 2021). The same report estimated that the total cost of ownership in the United States is between 20 and 50% greater than in Asia, and that between 40 and 70% of that difference is due to the lower incentives offered by the US government as compared with Asian competitors (Varas et al., 2021).
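To make those ranges concrete, here is a rough reading of the SIA/BCG figures quoted above. Using the midpoints of the reported ranges is our own simplifying assumption, not the report’s methodology:

```python
# Back-of-the-envelope reading of the SIA/BCG figures quoted above.
# Midpoints of the reported ranges are our assumption, not the report's math.
asia_tco = 30e9          # 10-yr TCO of an advanced logic fab in Asia ($30-40B range; low end)
us_premium = 0.35        # US TCO reported 20-50% higher; midpoint ~35%
incentive_share = 0.55   # 40-70% of the gap attributed to incentives; midpoint ~55%

us_tco = asia_tco * (1 + us_premium)
gap = us_tco - asia_tco
incentive_driven = gap * incentive_share
print(f"US fab ~${us_tco/1e9:.1f}B; ~${incentive_driven/1e9:.1f}B of the gap traces to incentives")
```

Even under these conservative inputs, the incentive-driven portion of the cost gap runs into the billions of dollars per fab over a decade, which is why government subsidy programs loom so large in fab-siting decisions.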
While free trade and specialization have enabled the global semiconductor ecosystem to thrive, delivering powerful chips at lower costs to consumers worldwide for the last five decades, they have come at the price of a fragile, unstable supply chain. Today, there are more than 50 points across the semiconductor supply chain where a single region controls more than 65% of global market share (SIA Whitepaper, 2021). This concentration exacerbates three key risk factors – natural variability, geographically clustered manufacturing capacity, and geopolitical conflict.
The first of these – natural variability – cannot be avoided. Any number of accidents can happen, like the two fires at Japanese factories in 2020. While such occurrences cannot be predicted or completely avoided, a more distributed supply chain would reduce the risk that a single incident could cause a major disruption.
The second issue, geographic clustering, is a special case of natural variability risk as it relates to physical placement of manufacturing facilities. Advanced semiconductor manufacturing capacity below 10nm, for example, is currently divided between only two countries – Taiwan (92%) and South Korea (8%) (SIA Whitepaper, 2021). Such concentration of activities poses a unique danger – in areas with significant seismic activity, such as Japan and Taiwan, vast swaths of the global fab capacity could be knocked out by a natural disaster like an earthquake, choking the entire system and causing a global chip shortage. While a fire may damage a single factory, an earthquake could damage many.
The third risk factor, geopolitical conflict, primarily relates to the political tensions within Asia and between the United States and China. In such a highly interdependent market, tensions between major players can cut off access to suppliers, customers, and investors – as when the United States sanctioned Chinese telecommunications giants Huawei, ZTE, and three other companies in 2019, citing national security concerns.
Experts do not believe that every country needs to become completely self-sufficient in order to strengthen the supply chain and make it more resilient. Full regional self-sufficiency would require an enormous amount of upfront investment, as much as $1.2 trillion by some estimates, and inflate chip prices by 35–65% (Varas et al., 2021). However, targeted investment in US semiconductor manufacturing capacity and a greater balance between efficiency and redundancy would go a long way toward protecting against future chip shortages and economic downturns, while reducing over-reliance on other countries for components vital to national security (SIA Whitepaper, 2021).
Chinese Competition
While the United States uses its advantage in education and engineering talent to lead in activities like chip design and manufacturing equipment, East Asia and China control about 75% of semiconductor manufacturing capacity (Varas et al., 2021). Such a wide disparity in manufacturing capacity puts the US economy and national security at risk. Beyond the essential role ICs play in supporting the modern economy, semiconductors power everything from critical infrastructure like telecommunications networks to advanced cybersecurity and artificial intelligence applications. Shoring up the semiconductor supply chain has become a recent political priority in the US – of the quarter-trillion-dollar Science and Technology bill recently passed by Congress, $52 billion was earmarked for semiconductor manufacturing (Whalen, 2021). The bill drew bipartisan support as a way to counter China’s growing economic and military power.
China has made development of its domestic semiconductor industry a key piece of its five-year (2020–2025) plan and has invested heavily in the space – pledging over $150 billion in investment from 2014–2030, according to the SIA (SIA Whitepaper, 2021). Capital infusions can only go so far, however. Advanced semiconductor manufacturing requires a significant pool of relevant engineering talent, a seasoned base of companies with technical know-how, and access to advanced manufacturing toolsets. Despite nearly $50 billion in government incentives delivered over the last 20 years, Chinese companies account for only about 7.6% of global semiconductor sales today, with little presence in advanced logic, cutting-edge memory, or higher-end analog chips (SIA Whitepaper, 2021). To be fair, Beijing’s determination to reach semiconductor independence and its substantial capital investments have driven annual growth rates of 15–20% and positioned China as a leader in the labor-intensive OSAT market, but the country is still likely a decade or more away from advanced technology nodes like those in Taiwan (Allen, 2021). Though Chinese competition in the semiconductor space has made headlines in recent years, don’t push the panic button just yet – China is still far behind the United States in terms of market share and technical know-how.
Ultimately, a more diversified and robust manufacturing base in both countries could reduce over-reliance by either country on the other and boost global manufacturing capacity, which would lower costs for consumers across the globe.
From smartphones to infotainment systems, the world has an insatiable appetite for chips. As old markets mature, countries compete for dominance over the constant stream of new technologies that arrive to take their place. Internet of Things (IoT), smartphone and 5G communications, and artificial intelligence (AI) and machine learning (ML) technologies will continue to drive global demand for high-performance ICs built on smaller, advanced process nodes. Though rising design and manufacturing costs constrain growth and squeeze margins, the semiconductor industry faces a far more existential threat – the fundamental limits of transistor sizes and the slowing of Moore’s Law.
Chapter Nine Summary
In this chapter, we first dug into the two major challenges facing the industry today – rising design and manufacturing costs. We used this context to shape our trip down memory lane, reflecting on the shift from the fully integrated semiconductor companies of the 1960s to the fabless-IDM-foundry dynamic we have today. We next focused on a few key trends. With R&D and capital investment intensity second only to biotech, the semiconductor industry has weathered volatile sales cycles and seen consistent productivity growth while delivering high profits and significant pay increases to its workers. Though not without its ups and downs, the industry has been a growing and profitable sector for the last six decades. Finally, we learned that the United States is still a leader in chip design and manufacturing equipment, though China has grown as a competitive threat in recent years.
As capital requirements balloon, companies have been forced to get big or go bust, consolidating into a handful of top firms. This trend clearly does not favor IDMs, which are unable to pool demand and must split resources and attention across both manufacturing and design. Though leading IDMs Intel and Samsung are nominal leaders in revenue, they have trailed in profitability metrics like return on assets and price-to-earnings (P/E) ratios over the past several years. Despite holding some advantages from tighter hardware-software integration, they face perpetual disadvantages relative to pure-play foundries, which must worry about only one core competency (wafer fabrication). The slow decay of IDM titans does not reflect the industry’s prospects at large, however, with strong growth forecasts and a bright future ahead.
Your Personal SAT (Semiconductor Awareness Test)
1. How have burgeoning design and manufacturing costs shaped the evolution of the semiconductor industry?
2. Make the case for IDMs. What are some competitive advantages they may have that fabless companies do not? Do you think these are sustainable?
3. List three key current industry trends. Which do you think is most important?
4. What are core drivers of consolidation? Can you think of any drawbacks to being too big (think IDMs)?
5. Describe the distribution of global value chain activities and consumption across the United States and Asia. Which surprising factor is responsible for much of Asia’s manufacturing cost advantages?