HOW DOES AN ANALYST or policy specialist wade into the complex world of actual military science–that is, the realm of physics and engineering on which so many practical decisions about military matters turn? Given the sophistication of the technologies involved, this would seem an impossible task for the generalist, or even for many scientists lacking specialized knowledge of certain military technical matters. But in another sense, it is a necessary task. Only by striving to answer questions such as whether missile defenses can work, whether space weapons can provide capabilities unavailable from systems based on Earth, and whether future warfare can be radically transformed by changes in underlying weapons systems and tactics may we reach sound decisions about defense resource allocation. Only in these ways can we, even more importantly, avoid major surprises in future wars (or, to put it differently, profit from any surprises before adversaries can do so). Only in these ways can we understand the potential, and the limits, of arms control.
This section of the book provides a primer on some of these key technical subjects to aid the general reader. In so doing, it suggests an analytical approach to addressing such subjects that may be of broader use even for matters not discussed here.
The utility of such a primer is limited in part by the technical proficiencies of the author, but even more fundamentally by the fact that technologies change with time, and that basic knowledge can only go so far in answering questions that often require detailed, precise information about the very latest technological trends and opportunities. To pursue state-of-the-art scientific work, scientists are clearly needed, and policy generalists cannot be of great use. It took Szilard and Einstein to warn President Roosevelt that nuclear weapons were possible, for example, and it also took experts to figure out when capabilities like radar, aerial flight, space flight, and laser sensors as well as weapons were within reach. To anticipate future breakthroughs, and help decide which technologies are worth pursuing, the Department of Defense has numerous expert scientific advisory groups and consultants today–ranging from the famed JASON group and the Defense Science Board, to the main weapons laboratories sponsored by DoD (like Lincoln Laboratory at MIT) and the military services as well as the Department of Energy, to many individual scientists or groups of scientists working either for the defense industry or for universities.
That said, some basic understanding of scientific and technical issues in defense policy is essential for the policymaker. Scientists cannot be asked to make all decisions concerning technology, since many decisions involve other matters, too–the country’s national security objectives, its resource constraints, its competing priorities, its arms control interests, and so on. Since many core matters in defense policy revolve heavily around physics and engineering, a basic familiarity with these fields is necessary. Even limited knowledge of key concepts and terminology allows the generalist to follow conversations and studies led by more technically expert individuals. If generalists are at least able to follow technical discussions, they can often discern the key assumptions behind science-based arguments. In other words, basic scientific literacy among generalists helps create a vetting process that can often weed out sloppy, mistaken, or ideologically motivated arguments. It can also make generalists better able to appreciate the work of whistleblowers and dissidents from within the scientific community, and pay them heed when institutional and political forces might otherwise overwhelm them.1
Some basic matters of physics are both simple enough to be accessible to the generalist, and important and enduring enough that they can be expected to remain relevant for policymakers well into the future. When the immutable laws of physics can be invoked to help understand a situation, the resulting explanation is more likely to be durable. It is not always possible to find basic physical arguments or principles that help resolve a technical issue or debate, but it often is. Making some investment in understanding core physics concepts can then have benefits far down the road.
Some examples of how a sound understanding of basic military principles and technologies can inform policy debate may illustrate these points. Take the missile defense debate of the 1980s, shortly after Ronald Reagan’s “Star Wars” speech of 1983 in which he announced his Strategic Defense Initiative (SDI). Whatever the broader strategic benefits of SDI may have ultimately been, many of the technical goals advanced by its partisans could be debunked–or shown to be very expensive and rather improbable–by basic physical reasoning. For example, putting lasers in space to shoot down warheads could be shown to require dozens of lasers, because satellites in low orbits sweep rapidly around the planet rather than standing still over a given launch area. With each of those lasers requiring a mirror effectively equivalent to that of the Hubble telescope just to steer the beam, costs could be placed in the many tens of billions of dollars for just the initial deployment of the system (even assuming its technical feasibility). On a related SDI subject, a good deal of analysis of possible countermeasures that an attacker could use to fool a defense suggested that any country sophisticated enough to build a substantial nuclear-tipped ballistic missile inventory could defeat most basic defenses. These arguments did not shut the door on all possible uses for missile defense, by any means, but they were sobering for those who wanted to believe that defense could trump offense in the nuclear realm.
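The constellation-sizing point can be checked with a back-of-the-envelope calculation. The short Python sketch below assumes, purely for illustration, an effective engagement radius of 3,000 kilometers for each orbiting laser and a factor of two for imperfect constellation packing; neither figure comes from the SDI debate itself. The point is simply that orbital geometry alone pushes the required constellation into the dozens.

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius

def lasers_needed(engagement_radius_km: float, overlap_factor: float = 2.0) -> float:
    """Rough count of orbiting lasers needed for continuous coverage.

    A low-orbit satellite sweeps around the planet rather than hovering,
    so at any moment one laser covers only the fraction of Earth's surface
    within its engagement footprint. The constellation must be big enough
    that some laser is always in range of the launch area; overlap_factor
    crudely accounts for imperfect packing of the footprints.
    """
    footprint = math.pi * engagement_radius_km ** 2        # flat-disk approximation
    earth_surface = 4.0 * math.pi * R_EARTH_KM ** 2
    return overlap_factor * earth_surface / footprint

print(round(lasers_needed(3000.0)))  # ~36 lasers under these assumptions
```

A shorter engagement radius, or targets spread across several launch areas, drives the number higher still, which is part of why cost estimates escalated so quickly.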
In 1999, many predicted that NATO airpower could easily intimidate Serbian militias into stopping their depredations against the Kosovar Albanian population. But others recognized that, if NATO planes stayed above 15,000 feet altitude to reduce their vulnerability, their ability to identify and target small Serb formations would be extremely limited–and the ability of their precision bombs to strike accurately through the cloud cover prevalent in the Balkans in early spring would be limited as well. Again, basic science, coupled with Clausewitzian cautions about fog and friction in war, provided policy-relevant insights.
Finally, the popular hypothesis of the 1990s and early years of this century that a revolution in military affairs (RMA) was underway led to a number of unsound predictions. Some knowledge of technology tended to make it easier to see why skepticism was warranted. Chief among the assertions of the RMA proponents was that most, if not all, forms of warfare would be radically changed, leading to a much different (and reduced) role for traditional ground power in combat. The technological basis for this prognostication was always weak, and it did not take a Ph.D. in physics to know why. Yet the RMA movement is part of what influenced Donald Rumsfeld, first to try to cut back severely on U.S. ground forces during his first year as Secretary of Defense for President George W. Bush, and then to insist on deploying only a small invasion force to Iraq in 2003.
This chapter begins with the issue of the so-called revolution in military affairs or RMA. It is the broadest technical subject addressed here. (In addition to helping frame subsequent discussions, it also complements the discussion of military readiness in the book’s budgetary chapter.) The chapter then turns to three more specific subjects. It addresses space weaponry, missile defense, and nuclear weapons design and testing in turn. They are distinct, yet are also somewhat interrelated: space assets are quite important in missile defense, and missile defenses are designed largely to defend against enemy nuclear weapons.
This chapter is hardly a comprehensive treatment of topics in military science. Important matters such as understanding trends in miniaturization, robotics, and nanotechnology, controlling the development and spread of advanced biological pathogens, and curbing nuclear proliferation through tougher export controls are not considered. But several of the key subjects of the modern era are addressed.2 The general approach in each of the following sections is to provide a basic overview of the relevant physics and technology issues, and then try to draw whatever policy lessons might follow.
In the 1990s, after the drama of Operation Desert Storm brought war to living rooms in near–real time, and displayed the remarkable effectiveness of precision weapons in modern warfare, it became popular to argue that a revolution in military affairs (RMA) was brewing. Akin to previous radical transformations in warfare, such as those brought on by the inventions of gunpowder, railroads, machine guns, tanks, and airplanes–as well as the doctrines turning those individual technologies into potent fighting forces–many hypothesized that the computer age would turn the world of warfare upside down again.3 In the American debate, this hypothesis was advanced with breathless enthusiasm by some, who assumed that the United States would continue to lead the world in innovation and hence in new forms of warfare, but with some trepidation by others, who observed that established powers are often challenged by rising powers when new eras in warfare become possible. Various phrases have been coined to describe variants on the overall modern RMA theory, including network-centric warfare (with its associated concept of “effects-based operations” designed to attack enemy nodes of information gathering and decision-making) and fourth-generation warfare (following the previous generations of Napoleonic war, early industrial war culminating in World War I, and blitzkrieg/carrier/maneuver war exemplified in World War II).4
The experience of the current decade has put these RMA debates into some perspective. The 9/11 attacks themselves, as well as the difficulties faced by American forces in Iraq, have disabused most observers of the idea that we could be entering a “post-heroic” or “virtual” era of warfare.5 By the predictions of some, such an era would be characterized by conflicts in which risking casualties was no longer quite as necessary (or politically possible), at least for an advanced superpower fighting a less advanced foe, and in which stand-off weaponry (or, increasingly, robotic weapons) linked to reconnaissance systems via lightning-fast communications networks would dominate the fighting.
Since technology by itself does not a revolution make, however (with the possible exception of nuclear weaponry), an RMA could only be exploited properly if it were catalyzed by the decisions of defense policymakers. In other words, any such revolution would need to be made, not passively received. Defense innovation is a complex process requiring not just different decisions on resource allocation but different warfighting doctrines and tactics, and new ways for different elements of military forces to cooperate.6 To be sure that the United States and its allies benefited from the RMA, rather than being hurt by it relative to their enemies, money and the energies of key leaders would need to be shifted away from areas where trends in warfare seemed largely static and redirected to new and exciting horizons.7 For proponents of the RMA, instructive analogies might be the eras in which wooden sailing ships, horse cavalry, or bayonet-wielding infantrymen became obsolete. Surely those who recognized these trendlines earliest were best served in future wars, since they no longer wasted scarce funds–or even more importantly in some cases, scarce hours for strategizing and training and preparing battle plans–on moribund notions of how to wage war.8 This logic led some RMA proponents to suggest, for example, that the United States might need to preserve resources for military innovation by avoiding peacekeeping missions, reducing forward-deployed forces for deterrence in key regions of the world, scaling back its two-war combat capability, and/or canceling major weapons systems of the traditional or “legacy” variety.9
But how to size up this modern debate meaningfully? Asserting an RMA, and pointing to nifty new gadgetry that seems to portend or constitute major progress, does not suffice to conclude the argument. This is a classic example of a technology-centric hypothesis in which the lay observer can feel frustrated, if not shut out, by the conversation–yet one in which the nation’s interests can only be advanced by combining technical analysis with broader strategic judgment. Decisions about when to wage war, whom to fight and whom not to fight, which interests to defend and which interests to recognize as indefensible (or too hard to protect at a reasonable cost) must be informed by the technical and doctrinal realities of warfare. Yet they obviously must be made only after broad consideration of many factors. As such, it is important that most non-specialists not be precluded from joining the conversation. Similarly, decisions about military resource allocation should be influenced by an understanding of which areas of modern military capability are potentially advancing so quickly that they merit extra attention and additional money. But those budgeting decisions also require a sense of the nation’s broader priorities and interests–of which wars we are likely to need to fight, not just which wars we might most prefer, or which might best play to our current and future strengths.10 It is important that the RMA debate not be so obtuse, arcane, or inaccessible that most policymakers and citizens feel unable to participate. It is also important that the hypotheses of RMA proponents be expressed as specifically and carefully as possible, so they can be evaluated analytically and individually.11
In the following pages, I suggest three ways in which this assessment can be attempted. The first is to look at history and place current technical/doctrinal trends in some perspective. Are we truly at the cusp, or in the middle, of a transition so dramatic that it qualifies as revolutionary–meaning that change is not only impressive, but disproportionately more rapid than in past recent eras?
A second approach is to look more carefully at major areas of technology. Computers are undergoing a contemporary revolution, to be sure, but is this true for other key areas of military technology as well? And to the extent it is not, can fundamental progress in computers, along with perhaps a couple of other key areas of technology, nonetheless drive a revolution at a time when many key material underpinnings of modern militaries may not be progressing as fast?
A third approach is to try to understand outcomes in recent military battles as clearly as possible. This should then help us understand the degree to which modern technology is the driver–and the degree to which expected future progress in technology may so radically change the way humans battle each other that we must discard many old notions of conflict to prepare for a whole new future way of war.
Beyond a doubt, a computer-driven revolution is occurring in modern times, with implications that go far beyond how fast we can call up data on the Internet. Microprocessors combined with modern communications technologies such as fiber optic cable and satellite constellations guide the performance of many mechanical systems, permit creation of real-time data networks accessible not only at desks but by phones and BlackBerrys and other devices, and point the way to a pending age of robotics. These changes are remarkable. Certainly they would seem, at first blush, as dramatic as some of the technology-driven revolutions of past times, such as the invention of the crossbow centuries ago, or of gunpowder, or iron-hulled ships, or the machine gun, or blitzkrieg and aircraft carrier war. But how do we know if they portend, or ensure, a revolution in warfare?
To answer this question, we must be specific. Many historians will surely look back on this era and, with rhetorical flourish, remark on the changes that occurred. But in policy terms, the most important matter is whether changes in warfare are now so rapid, and so exponential, that they necessitate a fundamental change in how we do business.12
The modern American military has institutionalized change from within over the last hundred years or so. At least since the 1920s, military planners have consistently expected the next decade of war to differ greatly from the one before. And even once one moves beyond the 1930s and 1940s, which brought the world blitzkrieg, carrier war, amphibious assault, modern radar, great strides in submarine as well as antisubmarine technologies such as sonar, and nuclear weapons, changes were rapid. The 1950s saw the coming of age of helicopters and jets; the 1960s witnessed the widespread adoption of satellites and ballistic missiles; the 1970s brought huge leaps in cruise missile, stealth, infrared, and night-vision technologies; the 1980s and 1990s saw the real arrival of the modern age of precision strikes and rapid battlefield communications, facilitated in large part by modern computing, and the initial arrival of robotic technologies like unmanned aerial vehicles–followed in the current decade by the weaponization of such vehicles.
Even this cursory review reminds us that the age of computing is not without modern precedent, in terms of the remarkable new capabilities it offers. Certainly the notions of airplanes, then of airplanes flying without propellers, then of aircraft with rotary wings able to go straight up and down, are quite remarkable. Yet the latter two of these changes are rarely viewed as revolutionary. Being able to monitor the Earth from space, or through bad weather, or at night creates a transparency to the battlefield radically different from what had been the case before–yet these breakthroughs in sensing technologies are not as ballyhooed as the computer is today. Is that because the computer is so obviously more important? Perhaps, and perhaps not.
In addition, to the extent the modern era is characterized as a computer-driven period of civilian and military innovation, is it possible that the fastest rate of change may already be behind us? That is, the basic notion that we are living in an era during which the military “reconnaissance strike complex” of advanced sensors, munitions, and information networks has begun to flourish is now widely appreciated by defense planners. To be sure, that reconnaissance strike capability is constantly improving, but the core concept is now well understood and established. So even to the extent that there is revolution in the air, it is possible we are beyond the point of maximum change.13
These arguments are hardly conclusive; they are designed simply to be reminders of how much change has gone before us, as a way of maintaining a certain humility about the context of contemporary accomplishments. For the broader purposes of this chapter, they also provide an example of how historical perspective may be used to gain some analytical perspective on a given hypothesis or recent development. History can itself be a tool of scientific inquiry—even if it is generally not conclusive in its lessons for today.
To take the next step in evaluating the significance of modern technology innovation, and its potential for dramatically changing warfare, we need to learn more about the nature of technological progress in the world today. Is it truly most dramatic in the computer area? Are other sectors of technology changing as fast, or almost as fast, or significantly more slowly? With some provisional answers to these questions in hand, we can then return to the broader question of trying to ascertain how everything adds up—how the overall state of modern technological innovation affects what is possible for military planners today and tomorrow.
One report from the 1990s that was enthusiastic about the prospects of a modern revolution in military affairs argued that computers are not unique. It claimed, by contrast, that the rate of change they are experiencing typifies the modern era. In other words, by this logic, computers are not the outlier, they are the new norm, and most types of systems are changing comparably fast—meaning a new generation of capability emerges every few years.14
A more sober perspective might note that most modern airplanes, ships, and ground vehicles travel at roughly the same speed as their predecessors of twenty, thirty, or even forty years ago; that modern satellites and other sensors, while better to be sure than their predecessors, perform their jobs in roughly comparable ways; that the internal combustion engine remains the dominant power source for land warfare, ensuring huge logistical trains to support deployed armies; that small arms and explosives remain very difficult to detect from any distance, especially in complex or urban terrain.
Rather than keep the debate at such a broad level of point and counterpoint, it is better to get more concrete. Several broad categories of technology can be defined, then several subcategories within each can be further specified, and then each can be individually assessed. Such was the methodology I adopted in a book written in 2000. After a survey of the technical literature, and an appeal to basic concepts of physics that established the realm of the possible for many of them, I hazarded initial estimates about the rate of change in each area. With these approximations in hand, formulated crudely given that my technical training went only through the master’s degree level and was not dramatically improved by real-world work experience, I then traveled to numerous weapons laboratories and research centers around the country to engage in dialogue with true experts and gain their feedback. This approach is an example of how someone with less than world-class technical credentials can nonetheless wade into the scientific debate about military matters.
The spectrum of key defense technologies can be broken down in many ways. One convenient method is to focus on the following broad categories: sensors; communications systems; vehicles, propulsion, and platforms (including robotics); munitions and other weaponry; and armor and other protective systems.
Note with this list that computers are a key element of each of the categories rather than a category unto themselves. The preceding categories directly relate to key battlefield requirements for any army—gaining information, sharing it, maneuvering, destroying the enemy, and protecting oneself.
The laws of physics limit progress that is being made, or that will be possible, in many areas of sensing—that is, trying to locate and identify objects of military interest through visual, infrared, radar, sonar, or other detectors. Progress in miniaturization and computing is allowing some progress in technologies like radar, while robotics is permitting reconnaissance systems to go more easily where previously they could go only with great difficulty and risk. For the most part, however, rates of progress are modest in sensor technology.15
Sonar has largely plateaued, and predictions from the 1990s that the oceans would become transparent to one form of sensor or another remain extremely far from realization.16 (The laws of physics suggest it will take a very long time indeed to change this situation, given that most forms of radiation do not penetrate more than a few dozen meters of seawater.) Optically-based systems remain inherently limited by their difficulty seeing through walls, soil, water, and foliage. Most of these constraints apply to radar too, even if some kinds of foliage-penetrating radar show some promise. Particle beams can see a great deal, but typically only at close range, and only when substantial power sources are available. Aspirations remain to make them capable of seeing through walls or dense jungle.17 But systems capable of such accomplishments are likely to be very expensive and probably quite large for many years.
Biological detectors are advancing, but it remains very hard to identify pathogens at any distance even when they are in aerosol form.18 Magnetic detectors cannot easily find small arms or IEDs or other such materials except at very close range, given the number of non-military objects emitting signals themselves (and the possibility of weapons being made without metal). Chemical detectors are improving, but again need to be relatively near their quarry in order to have access to enough molecules to allow reliable detection. Systems for detecting sniper fire are being researched, but in the near future they will not prevent well-trained snipers from getting in the first shot and then fleeing.19
Of course, the state of the art is hardly static. For example, even if the underlying technologies on sensors are themselves advancing only modestly in most cases, the ability to proliferate sensors across more platforms, including unmanned ones, is growing. So is the ability of communications networks to share whatever information is obtained more quickly. Clever applications of such capabilities will arise, like a fledgling capability known as Ancile that identifies incoming mortar rounds, predicts their impact points, and informs individuals near those impact points of how to move to avoid harm most efficiently.20
But when one considers some of the bolder rhetoric of RMA proponents and compares it with the technical and physical realities, there is a case for sobriety and modesty in expectations.21 For example, according to an Air Force document of a decade ago, sensors would soon allow us to find and identify virtually anything of military significance on the face of the Earth. This vision is not plausible, however. Similar caution should be taken when considering, for example, the prospects of an idea put forward by the Defense Science Board in 1996—that “there is a good chance that we can achieve dramatic increases in the effectiveness of rapidly deployable forces if redesigning the ground forces around the enhanced combat cell [light, agile units with ten to twenty personnel each] proves to be robust in many environments.”22 Unmanned platforms may be proliferating, but they are still expensive to operate and hardly omnipresent on the battlefield. Communications systems may be radically better than before, but they cannot themselves generate good data and continue to rely on sensors for that data. Most of all, in complex environments such as cities, the majority of military targets remain small and well camouflaged amidst very complex backgrounds, and often shielded from most stand-off sensors by buildings or other objects. As a result, even high-tech U.S. units in such environments will often “find” their enemies by being shot at.
In communications, there is no doubt that progress has been remarkable. In the last decade or so, computers have been placed on tactical fighting vehicles, data rates involving satellite communications have increased by somewhere between a factor of 10 and 100 depending on the measure used, and through frequent practice the U.S. military has greatly improved procedures to get the right real-time data to the right people on the battlefield. Underlying progress in computer technology has made all this possible by radically improving the rates at which data can be processed before being distributed, making possible what the military likes to call “Network-Centric Warfare.” In Operation Desert Storm, a delay of many hours or even days arose between when one sensor on one part of the battlefield found a target and when an aircraft elsewhere could be directed to attack that target. The typical delay had shrunk to less than an hour by the end of the 1990s and to less than half an hour in recent years. The benefits of these trends are seen not only in the air and naval domains, where modest numbers of high-value platforms are the order of the day, but even in ground combat, where communications systems like FBCB2 (“Force 21 Battle Command Brigade and Below”) increasingly integrate every major ground vehicle into a common battlefield picture, benefiting from GPS and other enabling technologies. Even when enemy force positions are not known, these kinds of systems help a great deal with “blue force tracking,” allowing American and coalition forces to know each other’s whereabouts, as evidenced in the invasion of Iraq and other recent missions.23
Of course, many trends in communications help potential enemies, too. Al-Qaeda and its global affiliates have learned how to communicate via the Internet very effectively, frequently changing web site electronic addresses while also varying the physical locations from which they set up and update those sites. They have learned how to avoid reliance on satellite phones and to minimize their use even of cell phones, especially in areas with strong U.S. or allied local presence. This reduces their inherent ability to coordinate quickly in some ways, but they have adapted by dispersing the technologies and the authority needed to initiate operations locally so that central headquarters are not as critical—at least not for the routine use of car and truck bombs and other such relatively simple devices. Turning to the other end of the conflict spectrum and the actions of potential nation-state rivals, countries like China are increasingly emulating the United States by constructing “reconnaissance strike complexes” of their own.
Within the communications realm at least, it is probably fair to say that trends are indeed revolutionary, even if they may help adversaries nearly as much as America and its allies. However, one fundamental vulnerability of these new communications capabilities remains: the information networks being constructed today are fragile, remarkably so in some ways. Some of the most striking vulnerabilities are in the growing use of commercial communications satellites, for example, which are not resilient to direct attack or jamming. If major powers fight each other, the resilience of their information grids will be quickly and severely tested.
Without belittling the efforts of modern engineers, if there is a single striking area of technology in which progress is not revolutionary, it is in the basics of how vehicles are powered—and fueled—on the battlefield. Visionaries about future war have talked about fast jets bouncing along the troposphere and covering intercontinental distances in two hours, or weapons in space being de-orbited to strike rapidly and precisely at targets on Earth, or ground armies quintupling their speeds while reducing fivefold or tenfold the number of forces they need to accomplish a given mission. But a careful examination shows fairly definitively that such visions are not within reach in the foreseeable future. As such, the following language used in the well-regarded 1997 National Defense Panel report is too sweeping to be accurate: “The rapid rate of new and improved technologies—a new cycle about every eighteen months—is a defining characteristic of this era of change . . .”24 Indeed, for many areas of engine and propulsion technology, it can be debated whether there is a new cycle of technology even every eighteen years, if one focuses on the fundamentals of fuel consumption and speed.
Of course, modern jet fighters are faster than before, their engines burn at hotter temperatures, they can go further and faster on supercruise. Catamaran-hull ships can attain speeds of 50 knots or more. Solar-powered robots are being developed. Ramjets can power certain air-to-air missiles at remarkable supersonic speeds, and will get even faster as scramjet (or supersonic combustion ramjet) technology becomes available in coming years.25 Per pound of vehicle mass, modern internal combustion engines are more efficient than their predecessors. And next-generation combat vehicles are intended to require much less fuel than Abrams tanks.
But there are striking limitations implicit in all the preceding observations and predictions. Most radically new capabilities such as hypersonic vehicles and electromagnetic rail guns relate to special-purpose vehicles that are too immature and expensive to be widely practicable in the near future.26 Transport planes and ships as well as main warships and aircraft carriers and submarines, ballistic missiles and space launch vehicles, and battlefield trucks continue to plow along at roughly the same speeds, without radically reduced fuel requirements, relative to their predecessors of two or three or even four decades ago. Progress is measured in improvements of 10 to 25 percent from one generation of vehicle to another, not a doubling of capability and speed every eighteen to twenty-four months as with computers.27
If next-generation main combat vehicles require less fuel than today’s big tanks and fighting vehicles, it will be largely because they will be smaller and lighter—and, inevitably, more vulnerable to direct fire, given the modest incremental rates of progress in armor.28 (This is especially true when measured on a relative scale against the rates of progress in antitank weaponry.) The internal combustion engine of today is better than those of the 1960s, 1970s, and 1980s, but it operates not far beyond the basic parameters of such earlier vintages. Predictions like that offered by a knowledgeable observer in 1997 that the speeds of battlefield maneuver for major ground forces might increase from 40 kilometers per hour in Operation Desert Storm to 200 kilometers per hour by 2010 can now be seen to be incorrect (with today’s speeds much closer to 40 than to 200).29
This is not to say we should abandon all modernization efforts for traditional weapons platforms. To take one example, the U.S. Air Force strongly favors the F-22 Raptor and F-35 Lightning II out of a conviction that previous generations of fighters are now less capable than a number of foreign-made combat aircraft, and that they would fare poorly against comparably trained pilots in air-to-air combat (as well as against enemy surface-to-air missile attacks). For example, the Air Force rates the Russian-made Su-30 variant as superior to the F-15C air superiority fighter in its radar, its weapons, its electronic attack capabilities, its range, its multitarget tracking ability, and its maneuverability (in other words, all the categories it appears to consider most important, based on a recent briefing). Somewhat dubiously, the U.S. Air Force also gives the edge to the Chinese-made F-11B, though by a closer margin.30 Rather than hinge everything on superior pilot training and superior networking (through systems such as AWACS control planes), therefore, the Air Force wants its future aircraft to be more capable.
To be sure, as history advances, some capabilities may become more important than others, with stealth as well as advanced sensor and fast communications systems at the top of the list. Such an argument is not necessarily a case for a categorical improvement of all combat capabilities; what matters is reducing key vulnerabilities and improving systems where advances in technology offer major benefits. For example, moving from a traditional combat aircraft such as an F-14, -15, -16, or -18 to a stealthy plane can reduce an aircraft’s radar cross section (or effective reflective area for radar waves) from, say, 10 square meters down to 0.01 or even 0.001 square meters, based on the best available unclassified estimates—implying up to a tenfold reduction in the distance at which the aircraft can be tracked by current-generation radar.31 These improvements can dramatically reduce aircraft attrition per sortie, by a factor even greater than the reduction in the range of radars trying to find and track the aircraft; they can also improve the element of surprise in an attack. These types of specific arguments, based on concrete, attainable improvements in existing systems, should always be considered seriously by force planners. It is worth noting, however, that they are often at some odds with the RMA visionaries, who tend to belittle improvements to weapons systems as seemingly old-fashioned as the manned combat aircraft.
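The tenfold figure follows from the radar equation: for a fixed radar, received power falls off as the fourth power of distance, so detection range scales as the fourth root of the target’s radar cross section. A minimal sketch of that scaling in Python, using the unclassified cross-section estimates quoted above:

```python
def detection_range_factor(rcs_new_m2: float, rcs_old_m2: float) -> float:
    """Ratio of new to old detection range against a fixed radar.

    The radar equation makes received power proportional to RCS / range**4,
    so the range at which a given target becomes detectable scales as
    RCS ** (1/4).
    """
    return (rcs_new_m2 / rcs_old_m2) ** 0.25

print(detection_range_factor(0.01, 10.0))   # ~0.18, roughly a sixfold reduction
print(detection_range_factor(0.001, 10.0))  # 0.10, a tenfold reduction
```

The fourth-root relationship also explains why stealth pays such large dividends: each additional order-of-magnitude cut in cross section buys a further meaningful reduction in detection range.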
Robotics do promise important new capabilities. In fact, they are already delivering them, starting with recent dramatic increases in unmanned systems in combat (including the first uses of weaponized robotics, the unmanned combat aerial vehicle).32 Several thousand UAVs and several thousand more ground robots have been employed in the Iraq and Afghanistan wars, focused on what Jim Carafano and Andrew Gudgel of the Heritage Foundation describe as the “Three D’s,” jobs that are dull, dirty, and/or dangerous.33 Explosive ordnance disposal is a good example, and robotics allow humans to stay in the loop from close proximity, easing the challenge of making the robots sophisticated enough to get the job done.34 Improvements in computing and in certain mechanical technologies are making this possible. But the state of battery technology still limits what small systems can do, and big systems wind up not being dramatically cheaper or more expendable than manned systems. What robotics can do in the coming years is reduce the risk of U.S. and allied casualties—for submarines patrolling shallow waters and looking for enemy submarines or mines, for ground forces disabling ordnance or searching houses that could be booby-trapped, for aircraft hovering over enemy territory for long periods or trying to penetrate dense enemy air defenses.35 This is a very real benefit, to be sure, but it should not be confused with radical improvements in capability, or the replacement of normal troops with automated armies.
Modern military ordnance is remarkably capable. This is not so much a statement about the explosive materials such munitions contain, which have evolved only modestly over the years (in terms of explosive power per unit of mass and similar measures), but more about their accuracy and autonomy. In recent times, every few years have brought another round of innovation—laser-guided bombs and “tank plinking” on a large scale in Desert Storm in 1991, the use of GPS-guided joint direct attack munitions (JDAMs) in the 1999 Kosovo War, the use of semi-autonomous submunitions such as sensor-fused weapons each carrying multiple SKEET warheads in Operation Iraqi Freedom in 2003. The next war could well witness the use of fully autonomous, loitering submunitions that hover above a battlefield until a suitable target appears—such as the so-called LOCAAS (low-cost autonomous attack system), which has already been successfully tested.36 These more accurate munitions also reduce logistics requirements for deployed forces at least modestly, by reducing the weight of typical ammunition expenditures over any given period of time.
Yet it is possible that the dramatic progress in precision-guided ordnance of the 1970s, 1980s, and 1990s has actually been slowing in pace. The first of these decades brought remarkably accurate ballistic missiles, and the early use of laser-guided and infrared-guided bombs as well as the beginnings of cruise missile technology. The 1980s saw the blossoming of these capabilities, which culminated in Operation Desert Storm in 1991. The 1990s saw the extension of these trends in precision to all-weather, day-night weaponry such as the JDAM and other munitions guided by GPS satellites.
In the current decade, however, change may be somewhat less dramatic. The GPS constellations are being modernized, but less to create huge new accuracies than to ensure dependability in the face of possible jamming. So-called ramjet technology is being applied to develop hypersonic missiles traveling at many times the speed of sound, but again the purpose is largely to protect gains already realized (by striking air defense radars and other movable assets more quickly, so they cannot elude attack) rather than to create capabilities never before seen. Large-ordnance weaponry is being built, including the so-called Massive Ordnance Penetrator with 5,300 pounds of explosive (and a total weight of 30,000 pounds) and an ability to penetrate 200 feet into the soil. It could be useful against Iranian or North Korean deep underground targets, such as leadership bunkers or nuclear weapons facilities. But the number of aim points for which it is needed is a very small single-digit percentage of the overall target set in most conflicts, and the likelihood of having adequately reliable targeting data to use such ordnance most effectively is modest.37
It is important here to note that as the pace of innovation may be slowing for the United States, American competitors may be catching up. For example, in coming years China could gain the ability to use large numbers of precision submunitions launched from maneuverable ballistic missile reentry vehicles. These could, in theory, make it quite impractical to use airfields lacking hardened shelters; and even those with shelters could have their runways threatened.38
So which technology trends are most important? And what do they say about the prospects for a military revolution? More importantly, what do they say about the need to reallocate resources and priorities within the Department of Defense to make sure any such revolution helps the United States and its allies rather than having said revolution catch them by surprise?
We can learn much about the answers to these questions from the real-world laboratory of the battlefield. For all the progress in international security since World War II, and the relative infrequency of wars between the major powers, there are still enough wars among regional powers, between the major powers and smaller powers, and within states, that we can see vividly what modern technologies and other new aspects of the contemporary era are doing to change the nature of combat. For students of warfare, many questions about technology issues such as the RMA hypothesis can be settled—or at least informed—by careful study of recent trends in actual modern combat and their apparent implications.
Of course, the situation is not either/or. We need not conclude definitively that we are living in revolutionary times, or not. Nuance is acceptable, and it is reasonable to conclude that there are elements of revolution and major transformation demanding our attention as well as major changes in policy, while other aspects of warfare change less quickly. The fact that technical progress has sometimes been exaggerated hardly means it is unimportant, and the fact that it seems gradual when we are living through it does not mean it is slow in historical perspective. In the Afghanistan war, for example, it often became possible to take information from a sensor and get it to a “shooter” literally within minutes. This was perhaps not the result of a clear single technical breakthrough so much as a gradual improvement in procedures over the years that complemented ongoing progress in sensor systems and communications capabilities, as well as the proliferation of sensor technologies (often on unmanned aerial vehicles) that had not previously been numerous enough to create a continuous surveillance capability.43 But a measured view about the so-called RMA may help avoid overly faddish beliefs that new types of combat have suddenly become predominant, with potentially serious consequences for how the United States allocates defense resources and plans for war.
Space is a region from which the United States now does far more than monitor nuclear weapons and missiles. In addition to traditional reconnaissance and early-warning missions, space is now the place from which the United States coordinates its conventional wars in real time. Information on battlefield targets is sometimes acquired there; information about these targets, as well as most other data, flow through space to allow rapid, high-volume, and dependable transmissions.
Several broad questions about space policy require some understanding of the basic science and physics of using space for military purposes.
Near-Earth space is home to a wide range of military and civilian satellites, not to mention vast amounts of debris that can interfere with satellite operations. Assets in space also require assets on the ground, and links with the ground, to provide services to military users of satellites.
Most satellites move around Earth at altitudes ranging from 200 kilometers to about 36,000 kilometers. This region is divided into three main bands. Low Earth Orbit (LEO) extends out to about 5,000 kilometers. Geosynchronous orbit (GEO) is the outer band for most satellites, about 35,786 kilometers (some 22,236 miles) above Earth’s equator. At that altitude, a satellite’s revolution around the Earth takes one sidereal day (just under twenty-four hours), matching the planet’s rotation, so the satellite remains over the same spot on Earth’s equator continuously. Medium Earth Orbit (MEO) is essentially everything in between LEO and GEO; MEO satellites are concentrated between 10,000 and 20,000 kilometers above the surface of Earth.44
The range of LEO orbits begins just above Earth’s atmosphere, which is generally considered to end at an altitude of about 100 kilometers. The altitude of LEO orbits is less than the radius of Earth (which is about 6,400 kilometers, or almost 4,000 miles). In other words, if one viewed low-altitude satellites from some distance, they would appear quite close to Earth, relative to the size of the Earth itself. The dimensions of geosynchronous orbits are large relative to the size of Earth (though they are still small relative to the distance between Earth and the moon, about 380,000 kilometers). Earth’s gravitational field, together with the velocity (speed and direction of movement) of a satellite, establishes the parameters for that satellite’s orbit. Once these physical parameters are specified, the orbit is determined and trajectories are predictable, unless and until a maneuvering rocket is subsequently fired.
Close-in satellite orbits take as little as ninety minutes to complete a tour around the planet. As noted, geosynchronous orbits take one sidereal day, just under twenty-four hours. Satellites in close-in circular orbits move at nearly 8 kilometers per second; those in geosynchronous orbit move at about 3 kilometers per second. Those following intermediate orbits have intermediate speeds and periods of revolution about Earth.
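These speeds and periods follow directly from Newtonian gravity: for a circular orbit of radius r (measured from Earth’s center), the orbital speed is the square root of GM/r, and the period is the orbit’s circumference divided by that speed. A back-of-envelope check in Python, not mission-planning code:

```python
import math

MU_EARTH = 3.986e14  # Earth's gravitational parameter GM, in m^3/s^2
R_EARTH = 6.371e6    # mean Earth radius, in meters

def circular_orbit(altitude_m: float) -> tuple[float, float]:
    """Return (speed in km/s, period in hours) for a circular orbit."""
    r = R_EARTH + altitude_m        # orbital radius from Earth's center
    v = math.sqrt(MU_EARTH / r)     # circular orbital speed
    period = 2.0 * math.pi * r / v  # time for one full revolution
    return v / 1000.0, period / 3600.0

for label, alt_km in [("LEO, 400 km", 400),
                      ("MEO, 20,200 km (GPS)", 20200),
                      ("GEO, 35,786 km", 35786)]:
    v_kms, t_hours = circular_orbit(alt_km * 1000.0)
    print(f"{label}: {v_kms:.1f} km/s, period {t_hours:.1f} hours")
```

This reproduces the figures in the text: a 400-kilometer orbit gives about 7.7 kilometers per second and a roughly ninety-minute period; GPS satellites circle in about twelve hours; and the geosynchronous altitude gives 3.1 kilometers per second and a period just under twenty-four hours.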
Satellite orbits are generally circular, though a number are elliptical, and some are highly elliptical—passing far closer to Earth in one part of their orbit than in another. Satellites may move in polar orbits, passing directly over the North and South Poles once in every revolution around Earth. Alternatively, they may orbit continuously over the equator, as do GEO satellites, or may move along an inclined path falling somewhere between polar and equatorial orientations.
Getting satellites into orbit is, of course, a very challenging enterprise. They must be accelerated to very high speeds and properly oriented in the desired orbital trajectories. Modifying a satellite’s motion is very difficult once the rocket that puts it into space has stopped burning; generally, the satellite’s own small rockets are only capable of fine-tuning a trajectory, not changing it fundamentally. Even though satellites in GEO end up moving much more slowly than satellites in LEO, they must be accelerated to greater initial speeds (typically about 10.5 kilometers per second). This is because they lose a great deal of speed fighting Earth’s gravity as they move from close-in altitudes to roughly 36,000 kilometers above the planet’s surface. In fact, a three-stage rocket that could carry a payload of fifteen tons into LEO, for example, could only transport three tons into GEO. For that reason, it typically costs two to three times as much per pound of payload to put a satellite into GEO as into LEO.
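The payload penalty can be estimated from two standard relations: the two-burn Hohmann transfer, which gives the extra velocity needed to climb from LEO to GEO, and the Tsiolkovsky rocket equation, which converts that velocity into propellant mass. The sketch below assumes a 200-kilometer parking orbit and an exhaust velocity of about 3 kilometers per second, typical of conventional chemical upper stages; both numbers are illustrative assumptions.

```python
import math

MU = 3.986e14      # Earth's gravitational parameter GM, m^3/s^2
R_LEO = 6.578e6    # 200-km parking orbit radius, m
R_GEO = 4.2164e7   # geosynchronous orbit radius, m

def hohmann_delta_v(r1: float, r2: float) -> float:
    """Total delta-v (m/s) for a two-burn Hohmann transfer from r1 to r2."""
    a = (r1 + r2) / 2.0                              # transfer-ellipse semi-major axis
    v1, v2 = math.sqrt(MU / r1), math.sqrt(MU / r2)  # circular-orbit speeds
    vp = math.sqrt(MU * (2.0 / r1 - 1.0 / a))        # speed at transfer perigee
    va = math.sqrt(MU * (2.0 / r2 - 1.0 / a))        # speed at transfer apogee
    return (vp - v1) + (v2 - va)

dv = hohmann_delta_v(R_LEO, R_GEO)  # ~3,900 m/s of additional velocity
mass_ratio = math.exp(dv / 3000.0)  # Tsiolkovsky equation, exhaust velocity ~3 km/s
print(f"extra delta-v: {dv:.0f} m/s, ideal mass ratio: {mass_ratio:.1f}")
```

The transfer-perigee speed of about 10.2 kilometers per second matches the injection speed cited above, and the ideal mass ratio of nearly four means fifteen tons in LEO shrinks to roughly four tons delivered to GEO even before counting the upper stage’s own structure and tankage—which is why the practical penalty approaches five to one.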
Even getting to LEO is difficult. For example, putting a payload into Low Earth Orbit typically requires a rocket weighing 50 to 100 times as much as the payload. Consequently, even Low Earth Orbit launch is stubbornly expensive, despite longstanding efforts to reduce launch costs; putting a satellite into LEO typically costs from $3,000 to $6,000 per pound (though some Ukrainian and Chinese launch services charge less than $2,000).45 There are some hopes that the next generation of launch vehicles will be less expensive—but probably not radically so.46
Most satellites weigh from 2,000 pounds to 10,000 pounds, roughly speaking, implying launch costs of about $10 million for smaller satellites in LEO to $100 million for larger satellites in GEO. Exceptions exist, however, including the large imaging satellites known as Lacrosse and KH-11, each of which is believed to weigh about 30,000 pounds. In addition, most satellites have dimensions ranging from 20 feet to 200 feet and power sources capable of generating 1,000 to 5,000 watts—though again, imaging satellites would be expected to exceed these bounds.
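The cost arithmetic in the preceding two paragraphs is simple enough to verify directly. A trivial sketch, assuming a factor of 2.5 for GEO delivery (an illustrative value within the two-to-three range cited above):

```python
def launch_cost_millions(weight_lb: float, cost_per_lb: float, geo: bool = False) -> float:
    """Rough launch cost in millions of dollars."""
    geo_multiplier = 2.5 if geo else 1.0  # GEO delivery costs roughly 2-3x LEO per pound
    return weight_lb * cost_per_lb * geo_multiplier / 1e6

print(launch_cost_millions(2000, 5000))             # small LEO satellite: ~$10 million
print(launch_cost_millions(10000, 4000, geo=True))  # large GEO satellite: ~$100 million
```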
Currently, about 17,000 items of space debris are large and visible enough to be tracked by U.S. monitoring equipment. Given the state of technology at present, that implies a diameter of at least ten centimeters (about four inches). Fewer than 1,000 of these objects are working satellites; the rest are old satellites or large pieces of debris from rockets.47
The vast majority of most countries’ current satellites are in LEO or GEO. In fact, excluding Russian satellites (with their particular history and circumstances, servicing a large northern country), each of those zones accounts for about 45 percent of the satellites in active use today. Another 5 percent are in MEO; most of the remainder are located in highly elliptical orbits.48
In many cases, the dividing line between military and civilian satellites is blurred. The United States uses GPS satellites for military and civilian purposes. It buys time on commercial satellites for military communications. The U.S. military and intelligence services often purchase imagery from private firms, especially when relatively modest-resolution images (with correspondingly larger fields of view) are adequate. And some satellites provide weather data to the military as well as to other government agencies.
In addition to satellites, a tremendous amount of manmade junk resides in space. Probably 100,000 pieces of debris larger than a marble are in orbit—those at altitudes above 1,000 kilometers will remain in orbit for centuries; those above 1,500 kilometers for millennia. Perhaps 300,000 small objects, such as chips of metal or even specks of paint, are too small to be tracked—nevertheless, if measuring at least four millimeters in size, they are large enough to do potential harm to any object they might strike, given the enormous speeds of collision implied by orbiting objects. In 1983, for example, a paint speck only 0.2 millimeters in diameter made a 4-millimeter dent in the Challenger space shuttle’s windshield. Only two other collisions between debris and operational satellites were known to have occurred through 2001, but with debris in low orbital zones growing at the rate of about 5 percent annually, more can certainly be expected. Indeed, a small satellite at an altitude of 800 kilometers now has about a 1 percent chance annually of failure due to collision with debris. In the range below 2,000 kilometers, there is now a total of 3 million kilograms of debris (in contrast to about 200 kilograms of meteoroid mass).49
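Two of these numbers are worth working through. A 1 percent annual collision risk compounds over a satellite’s service life, and even a tiny fragment carries enormous kinetic energy at orbital closing speeds. A quick illustrative check in Python; the ten-year lifetime and the 10-kilometer-per-second closing speed are assumptions for the sketch:

```python
# Cumulative chance of at least one debris-caused failure over a lifetime,
# treating each year as an independent 1 percent risk.
annual_risk = 0.01
years = 10
cumulative = 1.0 - (1.0 - annual_risk) ** years
print(f"{cumulative:.1%}")  # ~9.6% over ten years

# Kinetic energy of a 1-gram fragment striking at 10 km/s.
energy_joules = 0.5 * 0.001 * 10_000.0 ** 2
print(f"{energy_joules / 1000:.0f} kJ")  # ~50 kJ, the energy of roughly a dozen grams of TNT
```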
To illustrate some of these general considerations more vividly, consider the current American satellite fleet. Most individual types of satellites are in LEO or GEO. However, MEO is also important due to about thirty global positioning satellites now in that region.50 These provide navigation aid to military and civilian users. Since 2000, they have provided both types of users with their positions to within about five meters.
The U.S. military operates LEO satellites for ocean reconnaissance, weather forecasting, and ground imaging. The White Cloud ocean reconnaissance satellites that listen for emissions from ships probably number about eight, at altitudes of roughly 1,000 kilometers. The United States has at least two weather satellites, known as Defense Meteorological Satellite Program systems, in polar LEOs (they also carry gravity-measurement, or geodetic, sensors).51
The United States also deploys probably half a dozen high-resolution imaging satellites in that LEO zone. They come in two principal types: radar imaging satellites, known as Lacrosse or Onyx systems, and optical imaging satellites, known as Keyhole systems, with the latest types designated KH-11 and KH-11 follow-on or advanced satellites. The Lacrosse radar satellites operate at roughly 600 to 700 kilometers above Earth, are capable of effective operations in all types of weather, and produce images with sufficient clarity to distinguish objects one to three meters apart. The KH satellites are capable of nighttime as well as daytime observations, by virtue of their ability to monitor infrared as well as visual frequencies. They acquire information digitally and transmit it nearly instantaneously to ground stations. Their mirrors are nearly three meters in diameter, and they move in slightly elliptical orbits ranging from about 250 kilometers at perigee (point of closest approach to Earth) to 400 kilometers or more at apogee. Ground resolutions are as good as roughly fifteen centimeters (six inches) or even less under daylight conditions. They can take images about 100 miles to either side of their orbital trajectories, allowing a fairly wide field of view.52 They do not work well through clouds, however.53
In GEO or near-GEOs, the United States deploys communications satellites, early-warning satellites for detecting ballistic missile launch, and signals-intelligence satellites for listening to other countries’ communications or the emissions of their electronics systems, such as surface-to-air radars. For example, in the communications domain the United States has numerous Air Force packages on various hosts (including GPS satellites in MEO) for tactical communications; eight UHF Follow-On satellites (or UFOs!) that replace the Navy FLTSATCOM satellites for naval communications; two global broadcast system satellites for transmission of video and other high-data multimedia; nine defense satellite communications system (DSCS) satellites; and five MILSTAR (Military Strategic and Tactical Relay) satellites hardened against nuclear effects and jamming for critical communications. It also has at least three defense support program (DSP) satellites for early warning of ballistic missile launches (as with most of its sensitive military satellites, exact numbers are classified; often what is publicly available is the record of launches rather than of how many satellites remain operational).54
The United States fields a handful of signals-intelligence satellites in GEO, though like the Lacrosse, Keyhole, White Cloud, and DSP systems, their exact number is classified. The signals-intelligence satellites have in recent times included the Magnum, with an antenna reportedly 200 meters wide for eavesdropping on communications. Jumpseat satellites, flying elongated orbits, were developed to listen in on communications from northern parts of the Soviet Union.
The United States puts most military payloads into orbit from launch facilities at Cape Canaveral in Florida and Vandenberg Air Force Base in California. It also operates a half dozen smaller sites for some payloads.55
These satellites have permitted a radical increase in data flow rates in recent conflicts—from 200 million bits per second, already an impressive tally, in Operation Desert Storm in 1991 to more than ten times as much (2.4 gigabits per second) in Operation Iraqi Freedom in 2003.56 With the introduction of laser communications satellites over the coming decade, this progression is expected to continue, with another tenfold or more increase in capacity likely (assuming, that is, that reliable high-speed ways to link the satellites with ground stations are found—atmospheric turbulence and weather create challenges to using many types of lasers).57
As for tracking objects in space, today most countries conduct space surveillance using telescopes and radar systems on the ground. Only the United States has a system providing some semblance of global coverage (though its southern hemisphere capabilities are quite limited). Its monitoring assets are located in Hawaii, Florida, Massachusetts, England, Diego Garcia, and Japan.
Consider the capabilities of a couple of other countries as well. Although it has clearly fallen from its superpower status, Russia remains the world’s second space power by most meaningful measures. It continues to put satellites into space at an impressive pace, averaging more than twenty-five launches a year in recent times, in contrast to a U.S. level of around forty.58 It does so using at least eight different families of launch vehicles of varying sizes and payloads, including Molniya, Soyuz, Cosmos, Shtil, and Start variants. It operates five of the world’s twenty-seven major launch sites. Russia’s manned space program also continues; in recent years, it has maintained a typical schedule of two launches a year, carrying three to six cosmonauts in all.
Russia has more than forty working military satellites by recent estimates, close in quantity to the United States. They run the gamut from communications and navigation assets to early-warning satellites to electronic intelligence devices. Its satellite capabilities have been deteriorating since the dissolution of the Soviet Union, though some efforts of late have been made to restore these capabilities, for example with satellite navigation systems akin to GPS.59
China has more than thirty satellites in orbit and has been increasingly active, with five to ten launches per year in recent times.60 It operates three launch sites and is an increasingly popular low-cost provider of orbiting services. It also is working on a manned space program, run by the People’s Liberation Army (PLA), and put its first astronaut into space in 2003. It also hopes to put an unmanned vehicle on the moon within a few years. China uses a half-dozen space launch vehicles in the Long March series. Most are three-stage rockets whose payloads range from 2,000 to 10,000 pounds per launch. An improved family of liquid-fueled rockets is also being developed. One variant is expected to have, among other features, the capacity to lift 24,000 pounds to LEO. China is improving its satellite and space capabilities with vigor, and is interested in developing imaging satellites based on electro-optical capabilities, synthetic aperture radar, and other technologies. Its Ziyuan imaging satellites, planned in conjunction with Brazil, would have real-time communications systems to get data to the ground quickly, as would be needed for tracking mobile military targets including ships. It is also cooperating with Russia on a number of space programs, possibly including satellite reconnaissance technology, and is making progress on electronic intelligence satellites, as well as on a rudimentary GPS-like system called Beidou.61
If space-related technologies could be frozen in place in their current state, the United States would be in a fortunate position. It dominates the use of outer space for military purposes today, while Russia’s capabilities have declined considerably. China’s assets are improving, but it probably needs better real-time information grids and perhaps an electronic signals intelligence satellite to have significant capabilities against U.S. Navy ships near Taiwan, for example.62 The capabilities of America’s other potential rivals are generally rudimentary. The United States is able to use satellites for a wide range of missions, including not only traditional reconnaissance and early-warning purposes but also prompt real-time targeting and data distribution in warfare. Although some hope to develop space-based missile defense assets someday, the present need for such capabilities is generally rather limited, and ground-based systems increasingly provide some protection, in any event (see the next section of this chapter for more). Of course, it is not possible to freeze progress in technology, nor stop the dissemination of technologies already available.
Clearly many technologies related to space are advancing rapidly. The following discussion focuses on several that seem likeliest to offer major breakthroughs in military capability in coming years.
Other technologies not discussed here in detail will certainly improve, too, but in many cases the new capabilities they provide, while important, may not be radical. For example, space-based radar constellations may be larger and much more capable in the future. Even then, satellites will remain very expensive (a billion dollars apiece or more, for large systems), placing limits on how fast such capabilities can be deployed. Moreover, they will not provide capabilities that are otherwise totally absent, since aircraft (like JSTARS as well as various UAVs) can provide similar types of coverage in theaters where the United States can establish air supremacy.63
Chemically fueled lasers that could destroy their targets by heating them with continuous waves of infrared radiation are being developed today by the United States as missile defense systems, and perhaps by other countries as well (see the following discussion on missile defense). They could, however, be used against LEO satellites as well, at least in theory. This makes them relevant to a broader discussion on the military uses of space.
Such lasers are usually able to convert about 20 to 30 percent of the energy released by their chemical reactions into laser power.64 To damage a soft target like paper or human skin, a total dose of about 1 joule per square centimeter is required (a joule is a watt of power applied for one second). The type of target usually envisioned for high-energy laser weapons today, such as the metal skin of a SCUD missile, might be damaged after receiving 1,000 joules per square centimeter. By contrast, many satellites could apparently be damaged after receiving as little as ten joules per square centimeter, assuming a pulse lasting several seconds, according to a 1995 Air Force scientific advisory study. (Their trajectories are also easier to predict, further easing the challenge.) The main point is that satellites can be much easier to damage or destroy than missiles like SCUDs, meaning they could be targeted from much greater distances.
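A simple calculation illustrates the last point. For a beam that has spread by diffraction, the energy delivered per unit area falls off with the square of range, so a target 100 times softer can in principle be engaged at roughly ten times the distance. A minimal sketch, using the threshold figures above and ignoring atmospheric losses:

```python
import math

# How much farther can a laser engage a "soft" satellite than a SCUD body?
# Simplified: delivered fluence ~ 1/R^2 for a diffraction-spread beam,
# ignoring the atmosphere. Threshold figures are those cited in the text.
scud_threshold = 1000.0      # joules per square centimeter
satellite_threshold = 10.0   # joules per square centimeter

# If fluence ~ 1/R^2, equal-lethality range scales as sqrt(threshold ratio).
range_multiplier = math.sqrt(scud_threshold / satellite_threshold)
print(f"Range multiplier against satellites: ~{range_multiplier:.0f}x")  # ~10x
```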
The current airborne laser program (simply known as ABL) enjoys two major advantages over most previous laser systems (such as the so-called MIRACL laser built in New Mexico). First, it is airborne, meaning it can fly and operate above the atmosphere’s densest region and above almost all clouds. Since Earth’s atmosphere interferes with most kinds of visible and near-visible light, scattering or absorbing much of it, this is a great benefit. In addition, the infrared wavelength used by the airborne laser is less affected by whatever atmosphere it does encounter (a wavelength range of 0.5 to 1.5 microns is considered ideal; the ABL operates at 1.315 microns). Each ABL is actually designed to be a system of lasers. The main beam is a high-power system for destroying an enemy missile. Other lasers of lesser power on the aircraft are designed for targeting and tracking and to measure atmospheric conditions. The ABL is designed first and foremost to work against liquid-fueled short-range missiles, such as SCUDs, in their burning or “boost” phase, though it could certainly be used against any liquid-fueled rocket with comparable effectiveness. Whether the ABL would work against solid-fuel ICBMs or not is unclear.
The ABL uses hydrogen peroxide, potassium hydroxide, chlorine gas, and water as raw ingredients. A number of modules (six on the first test aircraft, fourteen eventually) will together produce a beam of about 1 million to 2 million watts, focused to a spot roughly the size of a basketball at a range of hundreds of kilometers. It is to operate on a modified 747 aircraft, and its maximum range against a short-range ballistic missile is estimated at up to several hundred kilometers. With a single payload of chemical fuel, it could fire about twenty shots, each lasting several seconds.65
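Those figures can be roughly cross-checked against diffraction physics. The sketch below assumes a 1.5-meter output aperture, a commonly cited but not officially confirmed figure, and ignores beam jitter and atmospheric turbulence:

```python
# Diffraction-limited spot size for the ABL beam -- a rough consistency check
# of the "roughly basketball-sized beam at hundreds of kilometers" figure.
# The 1.5-meter aperture is an assumption.
wavelength = 1.315e-6    # meters (chemical oxygen-iodine laser)
aperture = 1.5           # meters

for range_km in (150, 300):
    # Spot radius from the Rayleigh criterion.
    radius = 1.22 * wavelength * (range_km * 1e3) / aperture
    print(f"{range_km} km: spot diameter ~{2*radius*100:.0f} cm")
# ~32 cm at 150 km and ~64 cm at 300 km: within a factor of two or so of a
# basketball, before adaptive optics and pointing errors are considered.
```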
Due to its basic technology, the ABL inherently constitutes a latent antisatellite capability.66 The main issue with converting the ABL into an antisatellite (ASAT) weapon probably concerns target acquisition and tracking. At present, the ABL relies on hot rocket plumes for acquisition of the target; overhead satellites would not provide such a signature. Thus, the ABL could not track and destroy a satellite unless its tracking sensors were first cued to the satellite’s location by the U.S. space surveillance system. Providing the necessary data links would require software changes and perhaps even more, but it would not require changes to the basic laser system of the ABL.
What could other countries do to exploit high-energy laser technology for space weapons applications? Consider the case of China. The Pentagon believes the Chinese may have acquired (perhaps from Russia) high-energy laser technology that could be used in antisatellite operations. Some reports indicate China has investigated atmospheric “thermal blooming,” an effect in which a high-powered laser beam heats the air along its path, distorting and weakening the beam unless properly corrected for.
In the end, however, it is doubtful that China, or for that matter any other country, could develop an airborne laser capability in the next ten to fifteen years. Integrating the various technologies, and marshaling the resources required for such a program, are probably beyond its means; it is not even a given that the United States will itself succeed with this technology. China may soon have the inherent ability to produce a ground-based high-energy laser like the MIRACL, should it devote the very substantial resources and time needed to make such a program work. It is not clear it could build the adaptive optics and other sophisticated features that would help concentrate its power, however. Without the latter, a ground-based system would have limited capabilities for ballistic missile defense, given atmospheric effects and the fact that Earth’s curvature would prevent the laser from striking most missiles during much of their trajectory. But ASAT operations are easier to contemplate, since one can wait for a clear day and for the target to fly overhead.
What about space-based lasers? They are much further from fruition than ground-based systems or the ABL. The Pentagon acknowledges that they are probably ideas for 2020 and beyond. The U.S. space-based laser program as conceived to date would employ a different type of chemical laser that makes use of hydrogen and fluorine to create hydrogen fluoride, resulting in infrared radiation at a wavelength of 2.7 microns. That is about twice the wavelength of the airborne laser and is less suitable for use within the atmosphere. Given how strongly radiation at that wavelength is absorbed by water vapor, it would probably only penetrate down to 30,000 to 40,000 feet if directed into the atmosphere from space. But against targets in space that disadvantage clearly would not matter. The fuels are light and relatively stable, which is good for long-term storage in space. In the space-based laser (SBL), a large mirror with a diameter of at least four meters and perhaps as much as eight meters would be used to create a fine beam. The mirror would have to be extremely light. It would probably need to be furled up while being deployed, and then unfolded once in space. The laser would be about twenty meters long and weigh nearly twenty tons, according to current plans. The program’s goal has been to move toward a lethal demonstration of the system in orbit by 2012, but a constellation of a dozen or more satellites providing global coverage is probably at least a decade away.67
Each SBL would essentially be a combination of three extremely complex technologies: the laser itself, the power source for the laser, and the equivalent of a space telescope to direct the beam. Integrating these elements may be no harder than in the airborne laser. Indeed, a space-based laser would not have to deal with any atmospheric distortion of its beam, as noted. But in other ways, the challenge associated with the SBL is much greater. It is already proving difficult to put lasers with weights of 100,000 pounds or more on aircraft; it is far harder to put them into space with rockets each capable of lifting payloads less than half that weight. Even if high-powered lasers, space telescopes, and large fuel payloads could be individually orbited, assembling them in space and making them work in that environment for the purposes of missile defense or antisatellite operations is a far more challenging proposition. These challenges may or may not prove surmountable within two decades. But absent major breakthroughs in materials or rocketry, or both, the costs of building and orbiting a constellation of space-based lasers may prove excessive, even if the concept turns out to be workable. Should certain new laser concepts, such as the free-electron laser or all-gas-phase iodine laser, be developed in the megawatt range by then, the construction and launch costs of the basic optics alone could still prove staggering. Costs for a constellation of two dozen laser weapons were recently estimated at $50 billion or more by the Congressional Budget Office.68
As noted earlier, space is an expensive place to operate, not only because it is remote, but because launch costs are very high. Will this remain true in the future? Many concepts of future space warfare assume much cheaper future pathways to space that may or may not be realistic.
In fact, fundamental improvements in the efficiency and cost of space launch systems have been elusive for many years now. Progress in propellants and structural materials for rockets, whether launch vehicles, ICBMs and SLBMs, or interceptors, has been limited. Indeed, the theoretical maximum performance of current chemical fuels is being approached. New materials used in the structures of rockets can improve performance at the margin, but major improvements are unlikely with current technology. The evolved expendable launch vehicle (EELV) program, the major recent U.S. effort to achieve greater efficiencies and lower costs in space launch operations, will do very well to reduce costs by half; indeed, it will do well to reduce costs at all.
Even more futuristic weapons are being contemplated by defense planners. For example, space-to-Earth kinetic energy attack weapons could also be of interest. The basic science of these types of vehicles is not particularly challenging. However, a dedicated program to create the appropriate types of aerodynamic vehicles would be needed, as would testing. It would be necessary either to develop objects that would fall predictably through the atmosphere without deviating from planned trajectories or burning up, or to develop an aerial vehicle that could fly to its destination once it had been decelerated. But orbiting weapons and later deorbiting them does not offer advantages in speed or cost or technological feasibility, compared, for example, with ballistic missiles. Putting the objects in space is roughly as energy-intensive as shooting them halfway around the world on a ballistic trajectory; using booster rockets to cause them to descend takes comparable lengths of time to what ballistic flight requires. Because of the huge costs of putting objects in space, it is extremely rare that weapons in space can be even remotely cost-competitive with Earth-based weapons.69 (There are notions of building a “space elevator” to reduce such costs—but of course the elevator, dozens of miles long at a minimum, first needs to be invented, proven practical and affordable, and then built!)
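A rough comparison of the speeds involved substantiates this point. Reaching a low orbit requires a final speed somewhat above the roughly 7 kilometers per second of an intercontinental ballistic trajectory, plus losses to gravity and drag during ascent. A back-of-the-envelope sketch with standard textbook values:

```python
import math

# Rough comparison: speed to reach LEO vs. an intercontinental ballistic shot.
mu = 3.986e14            # Earth's gravitational parameter, m^3/s^2
r_earth = 6.371e6        # meters
r_leo = r_earth + 400e3  # a 400-km circular orbit

v_orbit = math.sqrt(mu / r_leo)   # ~7.7 km/s circular orbital speed
v_icbm = 7.0e3                    # ~7 km/s ICBM burnout speed (from the text)
losses = 1.5e3                    # typical gravity/drag losses during ascent

print(f"Orbit: ~{(v_orbit + losses)/1e3:.1f} km/s; ICBM: ~{v_icbm/1e3:.1f} km/s")
```

The two figures are comparable, which is why basing weapons in orbit offers no inherent energy savings over simply launching them ballistically when needed.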
Progress in electronics and computers, as well as improvements in miniaturized boosters, have made possible smaller and smaller satellites in recent years. These types of devices augur a whole new era in satellite technology. Beyond benign applications in communications, scientific research, and the like, one type of application could be small stealthy space mines able to position themselves near other countries’ satellites, possibly even without being noticed, awaiting commands to detonate and destroy the latter. They could also use microwaves, small lasers, or even paint to disable or destroy certain satellites. Moreover, they could be orbited only as needed, permitting countries to develop ASAT capabilities without having to place weapons in space until they wished to use them.
Most devices known as microsatellites weigh ten to one hundred kilograms; nanosatellites are smaller, weighing one to ten kilograms. In recent years, experimental picosatellites—devices weighing less than one kilogram—have been orbited. Two have been put up by the United States, and there may be others in space as well, as yet undetected. But it is microsatellites that are becoming prevalent. For example, Germany, China, and the United States have all orbited satellites weighing about seventy kilograms, Brazil has put up a satellite of about 100 kilograms, and Thailand and Surrey Satellite Technology in the United Kingdom have jointly orbited a device weighing less than fifty kilograms. Advanced microsatellite programs, designed largely for research purposes but also for activities such as communications, are under way in the United States, the United Kingdom, France, Russia, Israel, Canada, and Sweden. Other countries collaborating with private firms based in these locations include China and Thailand, as well as South Korea, Portugal, Pakistan, Chile, South Africa, Singapore, Turkey, and Malaysia.
Using microsatellites as ASATs may already be theoretically within near-term reach for a number of countries. The maneuvering capability needed to approach a larger satellite through a co-orbital technique is not sophisticated, especially if there is no time pressure to attack quickly and the microsat can approach the larger satellite gradually. In June 2000, for example, the University of Surrey launched a five-kilogram nanosatellite built for less than $1 million on a Russian booster that also carried a Russian navigation satellite and a Chinese microsatellite. The nanosatellite then detached from the other systems and used an onboard propulsion capability to maneuver and photograph the other satellites with which it had been orbited. In early 2003, a thirty-kilogram U.S. microsat maneuvered to rendezvous with the rocket that had earlier boosted it into orbit. These microsats were already near the satellites they approached, by virtue of sharing a ride on the same booster, making their job somewhat easier. But the principle of independent propulsion and maneuvering is being established. Larger maneuvering space mines are quite likely already within the technical reach of a number of countries; smaller versions may soon be, too.
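The modest propulsion requirements of a gradual co-orbital approach can be illustrated with a standard orbital-mechanics calculation. The sketch below, with illustrative altitudes, computes the velocity change needed to raise a circular orbit by twenty kilometers via a Hohmann transfer; it amounts to only about ten meters per second, well within the reach of a microsatellite thruster:

```python
import math

# Delta-v for a small co-orbital maneuver: raising a microsat's circular
# orbit by 20 km via a two-burn Hohmann transfer. Illustrative numbers.
mu = 3.986e14            # Earth's gravitational parameter, m^3/s^2
r1 = 6.371e6 + 700e3     # initial 700-km orbit, meters
r2 = r1 + 20e3           # target orbit 20 km higher

dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # first burn
dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize
print(f"Total delta-v: ~{dv1 + dv2:.1f} m/s")   # on the order of 10 m/s
```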
In summary, then, a few enduring realities about the physics and technology of space systems can be distilled, and a few general answers to the questions posed at the beginning of this section can be deduced.
Missile defense was among the most polarizing and contentious issues in American defense policy for at least two decades until the Bush administration withdrew from the ABM Treaty and proceeded to deploy ballistic missile defense systems—most notably, one for intercepting long-range warheads in the midcourse of their flight that had been developed largely by the Clinton administration. As such, while the Bush administration was eager to withdraw from a treaty that the Clinton administration had had very mixed views about, the decision to deploy had a certain bipartisan quality at some level (even as most Democrats objected to the way in which the Bush administration withdrew unilaterally and rather abruptly from the ABM Treaty). The wars of recent years then tended to keep the focus off missile defense, relegating it to somewhat secondary status among prominent defense issues.
That said, the issues with missile defense remain very important. The overall program is very expensive, averaging about $12 billion a year all told (including defenses against shorter-range missiles) during the Bush years. The request for 2009 was for $13 billion and the longer-term plan forecast spending of $62.5 billion over the following five years.73 Among other things, this sum of money is to purchase twenty more midcourse interceptors for the Alaska/California system, 211 Standard Missile interceptors for the Aegis Navy system, ninety-six land-based THAAD interceptors, about 400 additional land-based and shorter-range Patriot missiles, and ten interceptors for the midcourse system to be based in Europe.74 Missile defense remains a source of substantial contention with Russia, most acutely in regard to a possible European site for a defense base but also more generally as a symbol of unchecked American power (dating back to the withdrawal from the ABM Treaty and, in fact, the entire legacy of the Strategic Defense Initiative of Ronald Reagan). It also causes concerns in Beijing, a major power with a much smaller nuclear arsenal than Russia that could in theory be countered to some extent by American missile defenses—and also a power that could conceivably wind up in a serious crisis with America over the matter of Taiwan (even if that seems less likely at the moment). One need not oppose missile defense categorically, wish for a restoration of the ABM Treaty, or sympathize with any and all criticisms of missile defense by foreign governments to recognize the sensitivities of the issue.
Several programs are at the core of the current U.S. missile defense effort. They include the Patriot missile (for ground-based defense against missiles in the final or “terminal” stage of flight), THAAD (ground-based defense against midcourse threats of modest range), the Alaska/California system (ground-based defense, with help from a sea-based radar, against long-range missile threats), the Aegis Navy system (against missile threats over or near the sea), and the airborne laser as well as the kinetic energy interceptor (both designed to work against missiles in their “boost phase,” just after launch and while they are still burning). In addition, many of these specific systems are being linked together, and fed information, by various command and control systems, radar programs (upgrades to existing radars and deployment of new ones), and the planned launch of a major satellite constellation to track warheads (and try to identify them if disguised within clouds of decoys or other countermeasures). Each of these various types of capabilities is being upgraded sequentially.
As of the end of 2008, the Missile Defense Agency had upgraded radars on land in Japan, the United Kingdom, Alaska, and California, and built a sea-based mobile radar homeported in Alaska. It had increased its tally of midcourse interceptors based in California and Alaska to thirty. It now has eighteen Aegis-class ships with the capability to intercept medium-range missiles, and a total of thirty-four SM-3 interceptors on them. And it has conducted thirty-five successful “hit to kill” intercepts in forty-three attempts, with various degrees of realism in those tests, but a clear track record of improved capability.75
To evaluate these plans and consider various options for missile defense, it is important to have a clear mental picture of how ballistic missiles and the technologies designed to counter them actually function. Missile defense is very hard, and given that it must work with extremely high overall reliability against nuclear-tipped missiles to offer acceptable levels of protection, the advantage clearly goes to the attacker over the defender. But if the defender has a major technological advantage, and enough resources, it may be increasingly possible to neutralize some of the plausible threats a small extremist state may pose. That mixed message is where the following basic technological discussion would seem to lead.
Ballistic missiles are rockets designed to accelerate to speeds high enough that they can fly relatively long distances before falling back to earth. They are first accelerated by the combustion of some type of fuel, after which they simply follow an unpowered—or ballistic—trajectory. They consist, most basically, of rocket engines, fuel chambers, guidance systems, and warheads, though the specifics vary a great deal depending on the range and sophistication of the missile.
For shorter-range missiles, the entire weapons system is generally simple. The missile usually consists of a single-stage rocket, which fires until its fuel is exhausted or shut off by a flight-control computer and then ceases functioning for the duration of the flight. The missile body and warhead often never separate from each other, flying a full trajectory as a large single object.
For longer-range missiles or rockets, the system consists of two or three stages, or separate booster rockets, each with its own fuel and rocket engines. The rationale for this staging is to improve efficiency and thereby maximize the speed of the reentry vehicle or vehicles. Putting all the fuel for a long-range rocket in one stage would make for a very heavy fuel chamber and mean that the rocket would have to carry along a great deal of structural weight throughout the entire phase of boosted flight. That would lower the ultimate speed of the warhead or warheads, reducing their range. With staging, by contrast, much of the structural weight is discarded as fuel is consumed. That makes it possible to accelerate the payload to speeds sufficient to put it on an intercontinental trajectory. Long-range warheads must reach speeds of about 4.5 miles a second (roughly 7 kilometers a second), or almost two-thirds of the speed any object would need to escape the earth’s gravitational field entirely (roughly 7 miles, or 11 kilometers, a second). To reach such speeds with existing rocket fuels, efficiency in design—including rocket staging—is essential.
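The logic of staging can be made concrete with the ideal rocket equation, delta-v = ve ln(m0/mf): achievable speed depends on the ratio of initial to final mass, so dropping empty structure partway through the burn raises the total. A simplified sketch with assumed, illustrative mass fractions (real missiles differ):

```python
import math

# Why staging helps: the ideal rocket equation, delta_v = ve * ln(m0/mf).
# Illustrative mass fractions; real vehicles are less favorable per stage.
ve = 3000.0   # exhaust velocity, m/s (roughly typical of solid propellant)

# Single stage: 85% propellant, 15% structure plus payload at burnout.
single = ve * math.log(1 / 0.15)

# Two stages, idealized so each achieves the same mass ratio as the single
# stage, because the first stage's dead weight is discarded after its burn.
two_stage = 2 * ve * math.log(1 / 0.15)

print(f"Single stage: {single/1e3:.1f} km/s; two stages: up to {two_stage/1e3:.1f} km/s")
# ~5.7 km/s vs. ~11.4 km/s -- only the staged design can reach the
# roughly 7 km/s needed for intercontinental range.
```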
On long-range rockets, warheads are designed so they can be released from the missile body during flight. Generally, warheads and any decoys are released after boosting but while the rocket is still going up—that is, in the ascent phase of flight.76 Releasing warheads from the missile is clearly necessary if multiple warheads with multiple aim points are to be used. It is also desirable since large missile bodies are subject to extreme forces on atmospheric reentry that could throw them, and any warheads still attached to them, badly off course.
In fact, warheads do not fly free and exposed. They are instead encased within reentry vehicles. These objects provide heat shields and aerodynamic stability for the eventual return into earth’s atmosphere. They protect the warheads from melting or otherwise being damaged by air upon reentry and also maximize the accuracy with which they approach their targets.
Missiles may be powered by solid fuel or liquid fuel. If liquid fuels are used, it is usually considered desirable that they be storable and not require cooling or other special treatment that would involve extensive preparation before launch. Advanced intercontinental ballistic missiles (ICBMs) can use either type of fuel; Russian SS-18s use liquid fuel, for example, whereas modern U.S. missiles employ solid fuel.77
Missile guidance must be exquisitely accurate. Warhead trajectories are determined by the boost phase, meaning their course is set hundreds or thousands of miles before they reach their targets. To land within a few hundred feet of a target—or even a couple of miles—requires considerable care in how long the rocket motors are fired and in what direction the rocket is steered. Generally, rockets use inertial guidance systems to measure the acceleration provided by the boosters at each and every stage of their burning. Computers then integrate those measurements to plot out a trajectory for the warheads; a feedback loop then corrects any inaccuracies in how the rockets have been firing, so that when they are shut off, the warheads’ ballistic flight will take them halfway around the world and land them perhaps within a few football fields of their designated aim point.
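In essence, the guidance computer performs a double integration, from measured acceleration to velocity to position. A toy one-dimensional sketch of that bookkeeping follows; real systems integrate in three dimensions, use gyroscopes to track orientation, and correct for many sources of error.

```python
# Toy illustration of inertial guidance: integrating accelerometer readings
# twice to track velocity and position. One axis, noiseless data, for clarity.
dt = 0.1                         # seconds between accelerometer samples
accel_samples = [30.0] * 1000    # 100 s of constant 30 m/s^2 boost (toy data)

velocity = 0.0
position = 0.0
for a in accel_samples:
    velocity += a * dt           # first integration: acceleration -> velocity
    position += velocity * dt    # second integration: velocity -> position

print(f"Burnout speed: {velocity/1e3:.1f} km/s, distance: {position/1e3:.0f} km")
```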
The standard simple missile carries a single warhead. It is generally large as warheads go, but not enormous—typically weighing about as much as bombs dropped from aircraft (several hundred pounds up to perhaps a ton in weight). Rockets can also carry large numbers of bomblets instead of warheads if the weapon is not designed to cause a nuclear detonation. These can carry conventional, chemical, or biological agents in smaller packages, or submunitions, distributing their aggregate effects over a larger area than a single warhead could. They could also carry radiological payloads—basically radioactive waste, designed not to explode but to contaminate, injure, and kill indirectly.
Both warheads and bomblets can be designed to explode on impact, or when reaching a certain altitude, or after a certain amount of flight time. Bombs designed to explode at a particular altitude or after so much flight time may—or may not—detonate if they accidentally strike the ground. Much depends on the details of their design; as a rule, modern U.S. warheads would not explode under such circumstances, but simpler weapons could. This fact is relevant to certain types of missile defenses that could destroy a missile but not the warheads it carried.
Long-range missiles can also have multiple independently targetable reentry vehicles, or MIRVs. Britain, France, Russia, and the United States have developed and deployed this technology. It works in the following manner. All warheads are initially within a “bus,” or vehicle-sized object that separates from the rocket’s third stage at the end of powered flight. The bus has mini-booster rockets of its own, which it can use to modify its own position and speed before releasing a reentry vehicle (RV) containing a warhead (and any decoys or chaff to accompany it). It can then reposition itself before releasing another RV. Based on their minor differences in position and velocity, the warheads can then travel slightly different trajectories. Magnified by the effects of fifteen to twenty minutes of high-speed long-distance flight, these minor changes in trajectory can translate into impact points distributed throughout a “footprint” perhaps 100 by 300 miles in size.78
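The scale of such a footprint follows from simple arithmetic: the displacement at impact is roughly the bus’s velocity change multiplied by the remaining flight time. A sketch with assumed, illustrative numbers:

```python
# How small bus maneuvers spread MIRV impact points: displacement is roughly
# delta-v times remaining flight time. Flat-space approximation; reentry
# effects and gravity are ignored, and the delta-v figure is assumed.
delta_v = 100.0          # m/s of cross-range velocity imparted by the bus
flight_time = 18 * 60    # ~18 minutes of remaining flight, in seconds

displacement_km = delta_v * flight_time / 1e3
print(f"Impact displacement: ~{displacement_km:.0f} km (~{displacement_km*0.62:.0f} miles)")
# ~108 km, or roughly 67 miles -- consistent with a footprint on the
# order of 100 miles across in the narrower dimension.
```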
As noted, a missile bus may also carry decoys. These are objects designed to resemble warheads, thereby confusing the defense’s sensors and preventing them from identifying the true warhead or slowing the defense’s response time. In the vacuum of space, even extremely light decoys move at the same speed as heavy warheads if given the same initial speed; air resistance is clearly not a factor, and gravity acts equally on objects of all weights. That makes it straightforward to fool simple sensors during exoatmospheric flight. More advanced sensors that can gauge the size, shape, rotational motion, temperature, or radar reflectivity of an object may be able to distinguish warheads from decoys—unless the decoys become more sophisticated or unless the warheads are camouflaged to make them resemble decoys.
Ballistic flight is unpowered flight within the earth’s gravitational field. In other words, it corresponds to what is essentially the freefall of a fast-moving object. Once a rocket stops burning, the only forces acting on it, or on any warheads or decoys released from it, are gravity and, upon atmospheric reentry, air resistance. That makes flight trajectories predictable: essentially parabolic with respect to the earth’s surface (strictly speaking, long-range trajectories are segments of ellipses about the earth’s center, but the parabola is a good approximation). But the other details of the trajectories vary greatly and depend on the speed of the rocket when its boosters stop firing, as well as the angle at which the rocket is pointed.
The first, or boost, phase of a ballistic-missile trajectory is powered flight, typically lasting one to five minutes, or generally about a fifth of a missile’s total flight time.
For shorter-range missiles, the boost phase occurs entirely within the earth’s atmosphere; for long-range missiles, it generally extends beyond the atmosphere into space. Either way, during boost phase, the missile gains an upward as well as an outward or horizontal component to its velocity. For a long-range ICBM, the missile will usually be about 200 to 500 miles downrange of its launch point and have reached an altitude of about 125 to 400 miles at the end of its boost phase.79
Once boost phase is complete, the remainder of the upward flight is often termed the ascent phase. Upward flight ends at the trajectory’s apogee, or highest point above the earth. The missile then begins to accelerate back to earth in its descent phase.
For existing ICBMs, the ascent phase begins outside the atmosphere. It would be possible for a sophisticated country to build a fast-burn missile that would complete its boost phase within the atmosphere, but that has not yet been accomplished.80 (The atmosphere is generally considered to end at roughly sixty miles or one hundred kilometers above the Earth’s surface—even though there is no true cutoff but instead an exponential decline, and some air molecules are found even above one hundred miles.)
During exoatmospheric flight, the horizontal element of the velocity of the missile and any warheads or decoys remains constant. The vertical component of velocity is reduced by gravity, eventually slowing to zero and then reversing as the missile and any objects it has released return to earth. The result is, as noted, essentially a parabolic trajectory, as the missile continues in a generally upward motion until gravity turns its trajectory first flat and then downward.
Finally, the missile and any objects it releases, including warheads, bomblets, and decoys, reenter the atmosphere—assuming they reached a high enough altitude to have left it in the first place. Typically, missiles with ranges of 300 miles (about 500 kilometers) or more leave the atmosphere; those with shorter ranges do not.
Missile bodies, warheads, and decoys slow down during reentry because of air resistance, and do so in a manner that depends on their weight, size, and shape. As a result of this air resistance, descending objects heat up. They are also subject to strong forces that may damage them structurally if they are not well built.81
Missiles may be flown on several different types of trajectories to cover a given distance. A missile that flies a minimum-energy trajectory will travel the maximum distance given the speed at which its rocket burns out. But missiles may also fly on what are known as lofted or depressed trajectories for certain purposes. These names are fairly self-explanatory. Lofted trajectories are those on which the rocket’s flight attains a higher altitude than a minimum-energy trajectory for the same horizontal range. Depressed trajectories, by contrast, stay closer to the earth’s surface than is normal for long-range flight. Both require greater speed, and hence more fuel, to cover the same distance relative to the Earth’s surface.
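The trade-off can be seen in the elementary vacuum range formula R = v² sin(2θ)/g, which, while only an approximation for long-range flight over a curved earth, captures why a forty-five-degree launch maximizes range and why lofted or depressed flight demands more speed. A brief sketch:

```python
import math

# Lofted vs. minimum-energy vs. depressed trajectories, using the simple
# flat-earth vacuum formula R = v^2 * sin(2*theta) / g. A rough sketch:
# real long-range trajectories require spherical-earth corrections.
g = 9.81
v = 3000.0   # burnout speed, m/s (a medium-range missile, for illustration)

for label, theta in [("depressed", 20), ("minimum-energy", 45), ("lofted", 70)]:
    r = v**2 * math.sin(2 * math.radians(theta)) / g
    print(f"{label:>15}: {r/1e3:.0f} km")
# 45 degrees maximizes range; flying flatter or higher covers less ground
# for the same burnout speed, which is why off-angle flight needs more fuel.
```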
One can categorize defenses by considering the range of the defensive weapons as well as the type of mechanism used to destroy a warhead. One can also distinguish defenses according to where they are based—on land or sea, in the air or in space.82
To date, many defense systems (such as early versions of the Patriot) have employed traditional explosives to destroy incoming warheads. But modern systems such as the Alaska/California national missile defense system of the United States increasingly use “hit to kill” technology, in which high-speed collisions between interceptor and warhead destroy the latter. (Given typical closing speeds of well over 10 kilometers per second as these objects approach each other, any contact virtually guarantees the annihilation of both.)
Most missile defenses to date work in a fairly straightforward and similar fashion—and in a manner not so different from the way a radar-guided surface-to-air missile works against an airplane. First, a defense battery is “told” of a missile launch, usually by communication from an early-warning satellite that senses the heat or infrared signal from the offensive missile’s booster rockets. The defense battery’s radar then begins to scan the sky looking for the incoming threat. Once it locates and begins to track the threat, and the incoming object is at the proper distance, an interceptor missile is launched. Its trajectory is chosen to put it in the right place to meet the incoming threat; a computer linked to the radar makes the necessary computation.
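That computation is, at its core, a geometry problem: find the future point where an interceptor of known speed can meet a target on a known course. A toy two-dimensional sketch follows; real fire-control systems handle accelerating targets in three dimensions with noisy radar data.

```python
import math

# Toy intercept computation: where should an interceptor of known speed be
# aimed to meet a constant-velocity target? Illustrative geometry only.
target_pos = (80.0, 40.0)      # km, from the radar track
target_vel = (-2.0, -0.5)      # km/s
interceptor_speed = 3.0        # km/s, launched from the origin

# Solve |target_pos + target_vel * t| = interceptor_speed * t for time t.
px, py = target_pos
vx, vy = target_vel
a = vx**2 + vy**2 - interceptor_speed**2
b = 2 * (px*vx + py*vy)
c = px**2 + py**2
t = (-b - math.sqrt(b**2 - 4*a*c)) / (2*a)   # root giving positive time

aim = (px + vx*t, py + vy*t)
print(f"Intercept in {t:.1f} s at ({aim[0]:.1f}, {aim[1]:.1f}) km")
```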
For older systems employing explosive kill methods, the approach is then typically as follows. After interceptor launch, the defense battery radar does double duty, tracking the incoming threat and the outgoing defensive interceptor missile. The interceptor missile may have a radar receiver that allows it to pick up radar echoes from the target. (Placing a radar receiver on the interceptor missile allows for more precise tracking; it is referred to as semiactive homing.) At the proper moment, a ground control station sends a radio signal to the interceptor, causing it to detonate a conventional-explosive warhead. The explosion then creates shrapnel that, if sufficiently close to the incoming warhead, should destroy that warhead. This is the basic way the Patriot missile defense system known as the Patriot PAC-2 functions.83
With hit-to-kill interceptors, such as the most advanced version of the U.S. Patriot system (PAC-3), the Army’s theater high-altitude area defense (THAAD), and the Navy’s theater-wide (NTW) programs, the final approach is different. Equipped with many miniature boosters, they are intended to maneuver so well that they can collide directly with incoming threats, obviating the need for (and weight of) explosives. They generally also will use either their own radar (as with the Patriot PAC-3) or advanced infrared sensors (THAAD and NTW, as well as the Alaska/California long-range system) for the final homing, having first been steered to the general vicinity of a target by radar.
The Alaska/California system mentioned earlier is an example of what is sometimes called a midcourse missile defense against long-range ICBM or SLBM warheads. Such systems generally have fifteen to twenty minutes to work against ICBMs, which is one of their appeals. During that time, interceptor missiles could travel thousands of miles, meaning that, in theory, it is practical to defend an entire land mass such as the United States with a single base or two of missiles.
The interceptors could be fired as soon as an enemy launch was noticed by an infrared-detection satellite. More likely, they would be launched after radar picked up the missile following a few minutes of flight. The United States presently has radars for such purposes on its own continental coasts, in Alaska, in England, and in Greenland. These types of radars have long wavelengths that are optimal for long-range detection. A different type of radar, generally using shorter wavelengths and thus having less range but more accuracy, would then track the threatening objects. It would guide interceptors toward targets until the interceptors were close enough to pick up the threats with their own sensors. In the final approach, such sensors would provide much more accurate readings of the location of the threats than distant radars could.84
Several interceptors might be launched more or less simultaneously at a single threat, to account for the possibility of random failures. Alternatively, if time were sufficient, a first interceptor could be launched, and then a second or third would be launched if previous efforts had failed. This latter technique is called a “shoot-look-shoot” defense.
In fact, it could take four or five interceptors to reliably shoot down a single warhead, not only for midcourse national missile defense (NMD) but for most types of missile defense using interceptor rockets. That is why the Clinton administration advertised its proposed one-hundred-interceptor system as capable of destroying only a couple dozen warheads. Several problems could cause a given interceptor to miss. Rocket boosters can fail; during the cold war, for example, superpower ICBMs were generally considered to have no more than 80 to 85 percent reliability.85 Or the so-called kill vehicle could miss its target, because of random error, a manufacturing defect, or some other cause. And even if overall interceptor reliability were as high as 80 percent, very high confidence of a kill is needed against a nuclear weapon. To obtain 99 percent confidence of a successful intercept in this example, three interceptors would be needed per warhead. Even more might be required if several interceptors could fail for the same reason (that is, if their probabilities of failure were not simply random, and independent from each other, but linked and systemic). Since there is not a great deal of time in which to intercept warheads, moreover, it might be impractical to attempt one intercept before firing a second and third and perhaps a fourth and fifth interceptor just in case they were needed. In other words, “shoot-look-shoot” defensive tactics may not be possible, necessitating a launch of several interceptors at once against a given warhead.
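The underlying arithmetic is simple, assuming (optimistically) that interceptor failures are independent. A short sketch reproducing the three-interceptor figure:

```python
# How many interceptors per warhead for a given confidence level?
# Assumes independent, identical interceptors -- the text's caveat about
# correlated (systemic) failures is exactly what this ignores.
def salvo_size(kill_prob_each, confidence):
    n, miss_prob = 0, 1.0
    while miss_prob > 1 - confidence:
        miss_prob *= (1 - kill_prob_each)   # all n interceptors must miss
        n += 1
    return n

print(salvo_size(0.80, 0.99))    # 3 interceptors, as in the text
print(salvo_size(0.80, 0.999))   # 5 for still-higher confidence
```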
Boost-phase defenses have an appeal that midcourse systems do not: they can in theory destroy a rocket before it releases multiple warheads, as well as any decoys designed to fool a defense. Examples of boost-phase systems include the airborne laser (ABL) now in development. A major difficulty with boost-phase defenses, however, is that they must be based near the enemy missile launch point. That could be on land, at sea, or in the air—but it would need to be near the enemy missile launch points in any case. Since the boost phase lasts only three to five minutes (or less for shorter-range missiles), an interceptor does not have much time and cannot cover much distance. As a result, it must begin its flight near its target. This problem is not serious if the potential missile threat comes only from small countries that border U.S. allies or international waterways. But it makes a boost-phase defense generally impractical against missiles launched from countries with large land masses, like Russia or China.
What if a boost-phase defense were based in a low orbit in space? Even then, a space-based interceptor would need to be in the right place at the time a missile was launched, since it would not have much time to complete the intercept before the offensive booster stopped burning. So the defender would need to put interceptors in many different orbits, spacing them appropriately (the interceptors would be in constant motion relative to the Earth’s surface). A simple calculation shows that only one out of several dozen interceptors might be, by chance, in the right place at the right time to intercept a given ICBM. So even to have the capacity to intercept five to ten enemy missiles, several hundred interceptors could be needed.86
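A crude version of that calculation divides the area a single interceptor can defend during a brief boost phase by the area over which a constellation must be spread. The reach radius below is an assumption for illustration; concentrating the constellation over likely launch latitudes rather than the whole globe brings the ratio down toward the several-dozen figure cited above.

```python
import math

# Crude "absentee ratio" for space-based boost-phase interceptors.
# Assumes each interceptor can reach targets within some ground radius
# during the few minutes of boost; the figure is illustrative.
reach_radius_km = 1500.0                   # assumed reach during boost phase
earth_surface = 4 * math.pi * 6371.0**2    # km^2

coverage = math.pi * reach_radius_km**2
absentee_ratio = earth_surface / coverage
print(f"~1 interceptor in {absentee_ratio:.0f} is in position at any moment")
```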
Looking at the geometry and geography of various boost-phase options, as well as existing technologies and the likely growth in cost over time of key missile defense components, the Congressional Budget Office estimated costs for a number of modestly sized boost-phase missile defenses. Its 2004 study considered five options. The first three involved land-based or sea-based interceptors of varying speeds, assumed that sixty would be needed at a total of ten sites (for any one of the options), and estimated investment costs at up to roughly $15 billion to $30 billion. Operating costs would add another $11 billion or so over twenty years. The two space-based options would involve roughly 150 and 350 interceptors, respectively (since the Earth’s rotation beneath orbiting interceptors would guarantee that most of them would be out of position at any given moment). Investment costs could reach $22 billion to $35 billion, with estimated twenty-year operating costs adding from $22 billion to $50 billion more.87
Even lasers, which produce beams traveling at the speed of light, would need to be located near missile launch points. Otherwise, their beams would be too weakened by the atmosphere, or by the inevitable spreading of a light beam that occurs over distance (known as diffraction) even in the vacuum of space. The beams could also simply be blocked by the earth’s curvature.
Missile defense systems would generally be alerted about the launch of an enemy missile by infrared-detection satellites high above the earth. The satellites would see the strong heat signature of the rocket. Although such signals have occasionally been confused with forest fires and other hot emissions from our planet over the years, the combination of experience, more sensitive satellites, and better computers makes such confusion less likely all the time. As noted earlier, U.S. early-warning satellites are “parked” in geosynchronous orbit about 22,000 miles (or roughly 36,000 kilometers) above the Earth’s surface. At that height, an object orbiting the Earth completes a full revolution once every twenty-four hours—the same speed at which the Earth’s surface rotates. As a result, the satellite remains above the same region of the planet continuously.
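The geosynchronous altitude itself follows directly from Kepler’s third law, which relates orbital period to orbital radius. A quick derivation in code:

```python
import math

# Deriving geosynchronous altitude from Kepler's third law: find the
# radius at which the orbital period equals one sidereal day.
mu = 3.986e14     # Earth's gravitational parameter, m^3/s^2
T = 86164.0       # sidereal day, seconds

r = (mu * (T / (2 * math.pi))**2) ** (1/3)
altitude_km = (r - 6.371e6) / 1e3
print(f"GEO altitude: ~{altitude_km:.0f} km (~{altitude_km*0.62:.0f} miles)")
# ~35,800 km above the surface -- the "roughly 36,000 kilometers,"
# or about 22,000 miles, cited in the text.
```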
Missile defense technology is surely improving. But adversaries can adjust. Even relatively unsophisticated enemies would surely do everything in their power to make a defense’s job as hard as possible—and they would probably have some fairly simple ways to do so.
One approach would be to fire more missiles than the defense has interceptors, simply saturating the defense and ensuring that some offensive weapons could not be intercepted. If the attacker had MIRV technology, saturating a midcourse or terminal defense would be even easier and require even fewer missiles.
Against defenses that can only work outside the atmosphere, in the vacuum of space, an attacker could choose to fly its shorter-range missiles on trajectories that would never leave the atmosphere. Some defenses only work in outer space (or in the very high parts of the atmosphere) because they depend on sensitive infrared detectors to home in on a target—and such detectors can be blinded by the heat generated by air resistance, particularly if an interceptor missile is traveling at high speed. Keeping trajectories within the atmosphere would require an attacker to shorten the range of many of its missiles. But for many scenarios that would not be a steep price for an attacker to pay. Instead of flying its missiles on depressed trajectories, an attacker might simply move them as close as possible to their targets (for example, Chinese missiles aimed at Taiwan could be placed near the Taiwan Strait before launch, as indeed they have been by Beijing). In that case, their natural trajectories would be lower and their durations of flight would be reduced—preventing some defenses from having enough time to intercept them.
Against any defense that must work in the vacuum of outer space, the attacker has its greatest range of options.88 In this exoatmospheric or mid-course region, a warhead would generally have separated from its missile—or could be designed to do so almost immediately after boosting was complete. (As noted, an advanced country could design even its long-range missiles to complete their boosting while within the atmosphere, though a less sophisticated country might not be able to.)89
Outside the atmosphere, air resistance will not separate out the generally lighter decoys from the heavier warheads (as it would for the Patriot and other theater missile defense, or TMD, systems that operate within the atmosphere).90 In outer space, even extremely light decoys would fly the same trajectory as true warheads, so speed could not be used to distinguish the real from the fake. To mimic the infrared heat signature of a warhead, thereby fooling sensors that measure temperature, decoys could be equipped with small heat generators, perhaps weighing only a pound. To fool radars or imaging infrared sensors, warheads and decoys alike could be placed inside radar-reflective balloons that would make it impossible to see their interiors.91 Decoys could also be spun by small motors so the balloons surrounding them rotated at the same speed as real warheads, in case the defense’s radar was sensitive enough to pick up such motion.
There is some chance that lighter decoys could be distinguished from heavier warheads based on how they moved away from the bus. If pushed away by something like springs, lighter decoys would tend to move faster than heavy warheads, assuming springs of similar force. But detecting such differences in motion would require extremely precise sensors. The attacker might also compensate by releasing chaff just prior to releasing decoys and warheads, to prevent radars from seeing what happens during the release. It is for these reasons that the decoy problem is acute, and possibly not solvable for the foreseeable future, in the case of midcourse defenses.
Decoys like those mentioned here are not trivial to make, however—and might work only if repeatedly flight tested. (A test of a missile defense system by the United States in late 2008 involved decoys that failed to function properly.)92 Balloons need to be inflated in outer space. Some type of mechanism must physically separate each decoy from its host vehicle as well—something that is easy to do for Russia (or the United States, Britain, or France) and others that have mastered MIRV technology, but a bit harder for countries that have not. (Most states that are of concern to the United States are highly unlikely to have MIRV technology anytime soon.) The associated technology is fairly simple, but making it work in the laboratory is not the same as making it work at high speed in outer space, especially after a high-acceleration trajectory through the Earth’s atmosphere. (It bears re-emphasis that in late 2008 the United States conducted a test in which decoys failed to deploy properly.)93
Making decoys work within the atmosphere is even harder. It can be done, but it requires decoys that can overcome the effects of air resistance so as not to slow down more quickly than real warheads would. Decoys that could mimic warheads within the atmosphere therefore might need small booster rockets. Alternatively, they could be made small and dense, so they would fly the same trajectories as heavier but larger warheads (since the rate at which air resistance slows an object depends on its size relative to its weight, a small, dense object can decelerate at the same rate as a larger, heavier one), though in that case their radar signatures might give them away.
Against boost-phase defenses, countermeasures are also possible, though they are relatively difficult to make. As noted, boost phases could theoretically be shortened to minimize the time a defense would have to home in on the hot rocket booster. Against interceptors that would track a rocket’s plume, contaminants could be put in the rocket fuel to make its plume asymmetric and potentially lead astray any interceptors that might home in on the midpoint of the plume (unless the interceptors also had an additional sensor). Against lasers, a rocket could be rotated, or given a shiny external surface that would reflect most incoming light. Finally, rockets could also be launched from remote locations on cloudy days when infrared detection satellites might not detect their heat signatures immediately—reducing the time when boost-phase defenses could work.
In short, the missile defense job involves not only very advanced technologies but a complex interaction between offense and defense. Moreover, the tools available to each side are different, and in many cases advantageous to an attacker, meaning that even a less sophisticated attacker may be able to compete successfully with a technologically advanced defender. The broad message here is that one must ask about the likely offensive countermeasures that could be deployed against each and every different type of defense. Missile defense is not pure science; it is an interactive, competitive, action-reaction process.
Missile defenses are improving, but the task is inherently very challenging. In addition, “the enemy gets a vote,” and can employ various countermeasures to challenge a defense that may have performed well against a single easily distinguishable warhead in a simulation or test. On balance, the offense is in a stronger inherent position than the defense, especially when nuclear weapons are involved (since they require any meaningful defense to have a very high probability of successful intercept). However, it is worth bearing in mind as well that countermeasures are not trivial to perfect, especially for countries with small warhead and missile inventories and limited military resources or diplomatic “space” within which to test. The following points provide some additional detail to substantiate these broad conclusions.
A technical issue of great importance is nuclear testing. It is relevant to maintaining deterrence for the United States but, perhaps even more importantly in the modern era, to addressing the nuclear nonproliferation agenda. To put it directly, if testing can be impeded or stopped by international accord and resulting international pressure on any would-be violators, can nuclear proliferation be slowed? To get at such questions, this section begins with a primer on how nuclear weapons work.
Fission bombs, the simplest type, use as their core material either enriched uranium (U-235) or plutonium (with Pu-239 the key isotope) created in a nuclear reactor and then reprocessed. Either one is capable of undergoing a chain reaction: once some atoms within a given mass of material begin to split, or fission, they can cause an exponentially increasing number of other atoms to fission in turn. The neutrons produced when one atom splits are sufficient in number, and typically carry the right amount of energy, to cause on average more than one further atom to split, provided a sufficiently large amount of material is present in a condensed space. Hence a chain reaction occurs, each fission triggering more than one other, and the process escalates exponentially. The rate at which this occurs is fast enough that, if the materials are appropriately sized and shaped, they can create enormous numbers of fissions—and enormous energy—before the weapon blows itself apart.
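The speed of this exponential escalation is worth quantifying. With textbook-scale illustrative values (each fission triggering roughly two more, with about ten nanoseconds between generations), the roughly 10^24 fissions of a kiloton-range explosion accumulate in about a microsecond:

```python
# Exponential growth of a fission chain reaction: if each fission triggers
# k new fissions, after n generations there have been ~k^n fissions.
# The multiplication factor and generation time are illustrative values.
k = 2.0                  # effective multiplication per generation (assumed)
generation_time = 1e-8   # ~10 nanoseconds per generation (order of magnitude)

fissions, generations = 1.0, 0
while fissions < 1e24:   # ~1e24 fissions corresponds to kilotons of yield
    fissions *= k
    generations += 1

print(f"{generations} generations, ~{generations*generation_time*1e6:.1f} microseconds")
# ~80 doublings in under a microsecond -- why the whole process must
# finish before the weapon blows itself apart.
```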
A number of things must go right for this process to work. The correct amount of material must be present, and the weapon must be constructed in such a way that it does not destroy itself before a large yield is created.
One way to build a bomb is with enriched uranium; U-235 is the scarcer of the two isotopes found in nature (the other being U-238, which makes up 99.3 percent of natural uranium). Since U-238 is not very prone to fissioning, even in the presence of many free neutrons, bombs using a uranium chain reaction as their source of explosive energy cannot be built unless the concentration of U-235 is greatly increased, through a process like centrifuge rotation or gaseous diffusion. Both methods exploit the slightly lighter weight of molecules containing U-235: in gaseous diffusion, the lighter molecules move slightly faster on average and pass through a porous barrier a bit more readily; in a centrifuge, the heavier molecules concentrate toward the outer wall. Repeated over many stages, either mechanism gradually increases the concentration of U-235.
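The smallness of the underlying physical difference explains why enrichment is so laborious. Uranium is processed as uranium hexafluoride (UF6) gas, and average molecular speeds scale inversely with the square root of molecular mass. A quick calculation:

```python
import math

# Why enrichment takes so many stages: the average thermal speeds of
# U-235 F6 and U-238 F6 molecules differ by a tiny fraction, since
# speed scales as 1/sqrt(molecular mass) at a given temperature.
m_235 = 235.04 + 6 * 18.998   # UF6 with U-235, atomic mass units
m_238 = 238.05 + 6 * 18.998   # UF6 with U-238

speed_ratio = math.sqrt(m_238 / m_235)
print(f"Speed advantage of U-235 F6: {100*(speed_ratio - 1):.2f}%")  # ~0.43%
# Each diffusion stage enriches only slightly, which is why thousands
# of stages (or many centrifuge passes) are needed.
```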
Once adequate amounts of U-235 are available, typically twenty kilograms or more, the uranium can be put into two main chunks, neither one large enough to generate a chain reaction. (This is because, if the mass is small enough, those natural fissions that do occur in the uranium generally produce neutrons that escape from the mass into space, rather than encountering and being absorbed by new uranium atoms. So the chain reaction process never gets going.) But when the two chunks are joined, as with a “gun-assembly” weapon like the Hiroshima bomb, they produce a large enough mass to “go critical.” If the uranium is partially surrounded with materials that tend to reflect neutrons—so any neutrons headed for the open tend to return to the uranium mass and have a chance at causing new fissions—and if it is also surrounded with a tamper that slows down the process of the explosion, allowing more time for new chain reactions to occur—the yield can be further enhanced.
Built in this way, the Hiroshima bomb had a yield of ten kilotons (the equivalent of 10,000 tons of TNT) and destroyed a region with a radius of about one kilometer; such a weapon detonated in New York could easily kill 100,000 people.96 To underscore the power of nuclear explosives, that much energy is released by the complete fissioning of just half a kilogram of uranium or plutonium. (In other words, despite all the efforts to reflect neutrons and slow the blast process, nuclear explosives are not very efficient, and do not typically consume more than a small fraction of their nuclear “fuel” in the course of an explosion.)97
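That figure can be checked with textbook constants, as the short calculation below shows; it assumes the standard value of roughly 200 million electron volts of energy released per fission.

```python
# Checking the claim that fissioning about half a kilogram of material
# yields roughly ten kilotons. Standard physical constants throughout;
# ~200 MeV per fission is the textbook figure.
ENERGY_PER_FISSION_MEV = 200.0
MEV_TO_JOULES = 1.602e-13
AVOGADRO = 6.022e23
KILOTON_TNT_JOULES = 4.184e12     # definition of one kiloton of TNT

def yield_kilotons(mass_fissioned_kg: float, atomic_mass: float) -> float:
    atoms = mass_fissioned_kg * 1000.0 / atomic_mass * AVOGADRO
    joules = atoms * ENERGY_PER_FISSION_MEV * MEV_TO_JOULES
    return joules / KILOTON_TNT_JOULES

print(f"{yield_kilotons(0.5, 235):.1f} kilotons")  # ~9.8 kt for 0.5 kg of U-235
# So a ten-kiloton explosion consumes only about half a kilogram of
# uranium; a bomb carrying tens of kilograms fissions only a small
# percentage of its fuel.
```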
For today’s nuclear powers, the most common way to build a fission weapon is not the gun-assembly uranium design but an implosion design, in which a shell of plutonium is surrounded by conventional explosive. The explosive must have multiple detonators so that it detonates simultaneously all around the shell, and it must be shaped so that the explosive force is applied evenly across the plutonium’s surface. When the weapon is triggered, the shell is compressed into a denser sphere that attains critical mass. Less than eight kilograms of plutonium (under twenty pounds) suffices for such a weapon.98
As with uranium bombs, the yield of these weapons can be enhanced considerably, for a given amount of fissile material, if a neutron-reflecting material such as uranium-238, beryllium, or tungsten is used to enclose the plutonium. Weapons can also be made more efficient and deadly with neutron initiators, devices designed to inject neutrons at the optimal instant so that the chain reaction need not wait on random spontaneous fissions; the exponentially accelerating process then gets further along, allowing a greater yield, before the weapon destroys itself. Polonium-210, an alpha emitter that produces neutrons when combined with beryllium, served this role in early designs.99
All of these materials, it is worth noting, are either hard to acquire or hard to work with, underscoring the degree to which building a nuclear bomb is challenging, even for some nation-states and certainly for terrorist groups. The materials are dangerous to handle, and they must be precisely machined to function correctly within a weapon. On balance, though, it must still be concluded that for groups able to get their hands on fissile material, the odds of turning it into at least a crude, heavy nuclear device are fairly high.100
For those groups or states able to build bombs, the yield of a weapon can be increased if it is “boosted” by a mixture of deuterium and tritium gas injected inside the plutonium shell as the weapon is detonated. Heated and compressed by the nascent fission explosion, the deuterium and tritium undergo fusion (atoms joining to form new, heavier atoms, the opposite of the fission process, in which a large atom splits into smaller ones). The fusion itself generates energy. Even more important for boosting, however, are the neutrons that fusion generates, which have a high likelihood of inducing additional fissions in the plutonium. Again, the goal is to maximize the number of plutonium atoms that fission quickly, before the bomb blows itself apart and terminates the chain reaction.
An even more advanced bomb is the thermonuclear, or hydrogen, bomb. Such a bomb includes a device like that described above as its “primary,” which sets the explosion in motion. In addition, it has a “secondary” stage, powered by x-rays from the first stage, that is designed to produce a large fraction of its energy from fusion. The yields of thermonuclear weapons can be very large, hundreds of kilotons or even megatons; by contrast, even the most sophisticated and efficient pure-fission bombs typically have yields of several dozen kilotons.
Some assume that in a thermonuclear or fusion bomb, the primary is a fission device while the secondary is the fusion part of the weapon. But as noted, many primaries also employ fusion to a degree, through boosting. To add further to the confusion, much of the energy from the second stage of a thermonuclear bomb is typically produced by fission: the fusion fuel is generally enclosed by a uranium-238 shell, which is good at absorbing the very energetic neutrons produced by fusion (even though, as noted previously, U-238 does not readily fission under the slower neutrons released by fissioning uranium atoms, given the different average speeds of those neutrons). So there is actually some fusion within the “fission” part of the bomb (the first stage), and some fission within the “fusion” part.101
This type of primer, while obviously not adequate for answering questions such as whether North Korea or Iran can build a nuclear weapon to fit on a missile in the coming years, nonetheless helps inform some of the questions relevant to the nuclear testing debate, specifically whether the comprehensive nuclear test ban treaty (CTBT) would enhance American security. That issue is considered next.102
Large nuclear detonations are easy to detect. If they occur in the atmosphere (in violation of the atmospheric test ban treaty), they are visible to satellites, and their characteristic radiation signature makes them easy to identify. It is for such reasons that no country trying to keep its nuclear capabilities secret has tested in the atmosphere in the modern era (South Africa is the last country that may have done so). Underground detonations, which are more common, are still easy to identify through seismic monitoring, provided they reach a certain size. Any explosion of a kiloton or more (the Hiroshima and Nagasaki bombs were in the ten-to-twenty-kiloton range) can be “heard” in this way; in other words, any weapon with significant military potential tested at its full strength is very likely to be noticed. American seismic arrays ring much of Eurasia’s periphery, for example, and even tests elsewhere could generally be picked up. Indeed, the October 2006 North Korean test, with a yield of about one kiloton, well below those of the Hiroshima and Nagasaki bombs (whether because it “fizzled” or because it was designed to be small in the first place), was detected and clearly identified as a nuclear burst.103
The chances of detection can be reduced in only two viable ways. First, test a device well below its intended military yield, through some modification of the weapon’s physics. (Doing so may make the device quite different from the class of weapon it is designed to represent, so sophisticated extrapolation would be needed to deduce how the actual weapon would behave from the results of the modified one.) Second, dig out a very large underground cavity into which a weapon can be placed, “decoupling” the blast from direct contact with the ground and allowing it to weaken before it reaches the surrounding soil or rock and shakes the Earth. This latter approach is arduous, and it does not make a test undetectable; it simply raises the yield threshold below which a test can escape the notice of American, Russian, and international seismic sensors.104
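The interplay of yield, decoupling, and detectability can be sketched numerically. The magnitude formula below is one published rule of thumb for well-coupled explosions in hard rock, and the factor-of-seventy decoupling discount is a commonly cited approximation; both should be read as assumptions for illustration rather than precise monitoring parameters.

```python
# Relating explosive yield to seismic body-wave magnitude (mb).
# mb ~ 4.26 + 0.97 * log10(yield in kilotons) is one published rule of
# thumb for well-coupled explosions in hard rock; coefficients vary with
# geology. The factor-of-70 decoupling discount for a shot in a large
# cavity is likewise an oft-cited approximation. Both are assumptions.
import math

DECOUPLING_FACTOR = 70.0   # assumed reduction in apparent (seismic) yield

def body_wave_magnitude(yield_kt: float, decoupled: bool = False) -> float:
    apparent = yield_kt / DECOUPLING_FACTOR if decoupled else yield_kt
    return 4.26 + 0.97 * math.log10(apparent)

for y in (0.1, 1.0, 10.0, 150.0):
    print(f"{y:>6.1f} kt: mb {body_wave_magnitude(y):.1f} tamped, "
          f"mb {body_wave_magnitude(y, decoupled=True):.1f} decoupled")
# A well-coupled one-kiloton test registers near mb 4.3, consistent with
# the detection of the October 2006 North Korean test and comfortably
# above the mb 3.0-3.5 range often cited as the global monitoring
# threshold. Decoupling buys only about 1.8 magnitude units, which is why
# it shifts the detection threshold rather than making a test invisible.
```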
In summary, a country highly sophisticated in nuclear technology might be able to conduct a test that escaped international detection, by modifying a device so that its normal yield was reduced. For example, less plutonium or highly enriched uranium might be used, or, in an advanced weapon, less tritium. But such engineering feats would probably be beyond the means of a fledgling nuclear power, since they are difficult even for advanced ones. (This is much of the reason why the threshold test ban treaty, which caps the power of any nuclear test, allows explosions of up to 150 kilotons: it is hard to use very small, sub-kiloton explosions to verify the proper functioning of a sophisticated and powerful weapon.) Scientists can learn some things from artificially small explosions of modified devices, but probably not enough to give them high confidence that the weapon they have developed is reliable at its intended yield.
U.S. nuclear verification capabilities picked up the Indian, Pakistani, and North Korean nuclear tests of the last decade, even the small and relatively unsuccessful ones, and would detect future tests by those or other countries with high confidence. Verification capabilities are not airtight, but on balance their limitations are not solid grounds for opposing a test ban treaty.
Most agree that the United States will need a nuclear deterrent well into the foreseeable future. Common sense would seem to suggest that, at some point, testing will again be needed to ensure the arsenal’s reliability. How can one go 10 or 20 or 50 or 100 years without a single test and still be confident that the country’s nuclear weapons will work? Equally important, how can one be sure that other countries will be deterred by an American stockpile that at some point will be vouched for only by the experiments and tests of a generation of physicists long since retired or dead?
From the nuclear arms control point of view, some of this perceived decline in reliability might even be welcomed: declining reliability might translate into a declining likelihood of the weapons ever being used, and declining legitimacy for retaining a nuclear arsenal. But as a practical strategic and political matter, any test ban must still allow the United States to maintain extremely high confidence in its nuclear deterrent into the indefinite future. Some uncertainty about the functioning of a given percentage of the arsenal is tolerable; serious doubt about whether the arsenal would function at all is not, for it would disrupt the core logic of deterrence.
Thankfully, reasonable confidence in the long-term viability of the American nuclear arsenal should be possible without testing. To be sure, the reliability of a given warhead class may decline with time as its components age. In a worst-case scenario, one category of warheads might become flawed without our knowing it; indeed, this has happened in the past. But through a combination of monitoring warheads, testing and remanufacturing their individual components, conducting sophisticated experiments (short of actual nuclear detonations) on integrated devices, and perhaps introducing into the inventory a new warhead type or two of extremely conservative design, the overall dependability of the American nuclear deterrent can remain strong. There might be a slight reduction in the overall technical capacities of the arsenal, but no real question about its ability to exact a devastating response against anyone attacking the United States or its allies with weapons of mass destruction.
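Simple probability arithmetic illustrates the logic. The warhead numbers and reliability figures in the sketch below are hypothetical, chosen only to show why modest uncertainty differs fundamentally from a hidden common-mode flaw.

```python
# Arsenal reliability arithmetic. All figures here are hypothetical,
# chosen only to illustrate the logic of the argument.
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent warheads function."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Suppose aging quietly cut a warhead type's reliability from 0.95 to
# 0.80, and 100 such warheads were assigned to a retaliatory mission:
for p in (0.95, 0.80):
    print(f"p = {p:.2f}: expect ~{100 * p:.0f} detonations; "
          f"P(at least 50 work) = {prob_at_least(50, 100, p):.4f}")
# Even at p = 0.80, fifty or more warheads function with near certainty,
# still a devastating response. The real strategic danger is a
# common-mode design flaw that silently drives an entire warhead class
# toward p = 0 -- which is precisely what surveillance, experiments, and
# remanufacture are meant to rule out.
```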
Consider first the matter of monitoring a given warhead type and periodically replacing components as needed. This is the principal way the United States maintains its arsenal at present (its last nuclear test was in 1992). As noted earlier, a typical warhead has a shell of plutonium compressed by the synchronized detonation of conventional explosives surrounding it. Making sure the detonation is synchronized across all parts of the explosive, so that the compression of the plutonium is symmetrical, is critical if the warhead is to work. Over time, wires age, detonators age, and so forth. But these types of components can be replaced fairly easily, and their proper functioning verified through simulations that use no nuclear material (and are thus allowable under a CTBT).
Things get more complicated once the compression of the plutonium is considered. The interaction of the conventional explosive with the plutonium is a complex physical phenomenon, highly dependent not just on the basic nature of the materials involved but on their shapes, their surfaces, and the chemical interactions that occur where they meet. Plutonium is not a static material; it is radioactive, and it ages in various ways. Conventional explosives age too, meaning that warhead performance can change with time. In theory, one could avoid the whole issue of monitoring the aging process by rebuilding the conventional explosives and the plutonium shells to original specifications every twenty or thirty or fifty years. In fact, Richard Garwin, one of the fathers of the hydrogen bomb, once recommended doing exactly that.105
But others retort that previous processes used to cast plutonium and manufacture chemical explosives have become outdated. For example, previous generations of plutonium shells (often called “pits” in the nuclear trade) were machined to their final dimensions, which produced a great deal of waste. The goal for the future has been instead to cast pits directly into their final shape (heating the plutonium to molten form, shaping it, and letting it cool). Doing so, however, would give the pit a different type of surface, which might interact slightly differently with the conventional explosive than the previous design did. Even a slight difference might be enough to throw off the proper functioning of a very sensitive, high-performance, low-error-tolerance warhead. Similarly, the way high explosives are manufactured typically changes with time. Replacing one type with another has in the past greatly affected warhead performance, even when that could not have been easily predicted from the explosive force alone; again, the detailed chemical interactions with the plutonium pit, among other complex phenomena, are of critical importance.
So what to do? Some would argue that, for relatively small and shrinking nuclear arsenals, it is worth the modest economic cost and environmental risk (quite small by the standards of Cold War nuclear activities) to keep making plutonium pits and conventional explosives as before, even if the methods are outdated. That would ensure reliability by keeping future warheads virtually identical to those of the past, and mimicking past manufacturing processes should not be beyond the capacities of today’s scientists. But this argument is not presently carrying the day, in part because of the view that at least small differences in how warheads are built will inevitably creep in from one era to another, even if attempts are made to avoid them.106
The Department of Energy has instead devoted huge sums to its science-based stockpile stewardship program, designed to understand as well as possible what happens inside aging warheads and to predict the performance of warheads modified with slightly different materials in the future. It is a strong program, even if elements of it naturally remain debatable,107 and it is more scientifically interesting (and thus more likely to attract good scientists into the weapons business in future years) than a maintenance program that would do no more than rebuild weapons every few decades. Still, science-based stockpile stewardship gives some people unease. A key part of the effort, for example, uses elaborate three-dimensional computational-physics models to predict what will happen inside a warhead modified to use a new type (or amount) of chemical explosive. Such behavior is very challenging to model accurately; the method is good but perhaps not perfect.108
A final way to ensure confidence in the arsenal is to design a new type of warhead, or perhaps to revive an old design that is not represented in the active U.S. arsenal but has been tested before. This approach would seek out “conservative designs” that tolerate slight errors in warhead performance and still produce a robust nuclear yield. The conservative warhead could then take its place alongside other types of warheads in the arsenal, providing an added element of confidence. Taking this approach might lead to a somewhat heavier warhead (meaning the number that could be carried on a given missile or bomber would have to be reduced), or a lower-yield warhead (meaning that a hardened Russian missile silo might not be so easily destroyed, for example). But for the purposes of post–Cold War deterrence, this approach is generally sound, and weapons designers tend to agree that very reliable warheads can be produced if performance criteria are relaxed. It could also lead to less use of toxic materials such as beryllium, and to safer types of conventional explosives (less prone to accidental detonation) than some warheads in the current arsenal employ.109
It is for such reasons that the Bush administration and Congress have shown interest in recent years in a “reliable replacement warhead” concept. To date it is only a research concept, and a controversial one at that, with Congress not always willing to provide even research funding.110 But it has a certain logic and, as one element of a future American arsenal, makes sense on balance. It might even obviate the need to consider the periodic-remanufacturing idea, since the United States could quite clearly deploy such a warhead with extremely high confidence in its reliability.111 Simple warhead designs are quite robust; recall, for example, that the Hiroshima bomb (a gun-assembly uranium device) was not even tested before being used.
Some have suggested that the United States may need future nuclear testing to develop new types of warheads for new missions. In the 1980s, for example, some missile defense proponents were interested in a space-based nuclear-pumped x-ray laser; that was never particularly practical. But the idea of a nuclear weapon that could burrow underground before detonating has gained appeal, not least because countries such as North Korea and Iran are responding to America’s increasingly precise conventional weaponry by hiding key weapons programs well below the planet’s surface.
One possible rationale for such a warhead is to increase the depth at which targets can be destroyed. In theory, the United States could modify the largest nuclear weapons in its stockpile to penetrate the Earth before detonating. According to physicist Michael Levi, this would roughly double the destructive reach of the most powerful weapons in the current arsenal.112 But an enemy that can dig beyond the reach of current weapons could dig deeper still to escape the more powerful ones, and given the quality of modern drilling equipment, that is not an overly onerous task.
Could Earth-penetrating weapons at least reduce the nuclear fallout from an explosion? They could not eliminate it: given limits on the hardness of materials and other basic physics, no useful nuclear weapon could burrow deep enough to keep the radioactive effects of its blast entirely below ground. But such weapons could reduce fallout. Relative to a normal bomb, the yield of an Earth-penetrating weapon could be cut tenfold while maintaining the same destructive capability against underground targets,113 cutting fallout by roughly a factor of ten as well.
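Both claims, the rough doubling of destructive reach and the tenfold yield reduction, follow from the cube-root scaling of ground shock, as the sketch below shows; the energy-coupling gain it assumes is an illustrative figure in the range open-source studies cite, not a precise parameter.

```python
# Cube-root scaling behind both earth-penetrator claims. Ground-shock
# "reach" scales roughly as the cube root of the energy coupled into the
# ground, and burying a detonation even a few meters couples energy far
# more efficiently than a surface burst. The coupling gain below is an
# assumed illustrative value.
COUPLING_GAIN = 20.0   # assumed effective-yield multiplier for shallow burial

def destructive_reach(yield_kt: float, buried: bool) -> float:
    """Relative depth of destruction (arbitrary units; only ratios matter)."""
    effective_kt = yield_kt * (COUPLING_GAIN if buried else 1.0)
    return effective_kt ** (1.0 / 3.0)

surface = destructive_reach(1000.0, buried=False)
print(f"Same yield, buried vs. surface burst: "
      f"{destructive_reach(1000.0, True) / surface:.2f}x deeper")
print(f"One-tenth yield, buried, vs. full yield at the surface: "
      f"{destructive_reach(100.0, True) / surface:.2f}x")
# With a coupling gain of 8 the first ratio is exactly 2.0 (the "roughly
# double" estimate); with a gain of 10 the second is exactly 1.0, i.e., a
# penetrator matches the surface burst with one-tenth the yield -- and
# hence roughly one-tenth the fallout.
```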
That would be a meaningful change. But is it really enough to change the basic usability of a nuclear device? Such a weapon would still produce a huge amount of fallout, its use would still break the nuclear taboo, and it would still only be capable of destroying underground targets if their locations were precisely known—in which case there is a chance that conventional weapons or special forces could neutralize the site. This is the policy question that the preceding technical discussion is designed to inform, and answering it clearly requires a combination of technical and broader strategic assessments. But the technical aspects of the problem should be a part of any such calculation.
Nuclear weapons are complex devices, expensive and difficult to produce. That is a fortunate fact of science: had it been otherwise, more than sixty years into the nuclear age many more countries, and perhaps terrorist groups, would have their own fission or fusion weapons. But the prevalence of nuclear material worldwide, the fact that countries including Pakistan and North Korea already possess nuclear weapons, and the limits of international controls on the movement of nuclear technologies and materials nonetheless make the present situation fraught with danger.
The focus here has been on the issues involved in building and testing a nuclear weapon, which bear among other things on the nuclear testing debate. Specifically, how much would an international accord banning nuclear tests (and punishing violators of the regime) complicate the task of would-be proliferators? And assuming nuclear weapons are still viewed as necessary for the foreseeable future, how much might a CTBT impinge on the reliability and credibility of the American nuclear deterrent? Several observations flow from the discussion above. Like those of other sections of this chapter, they do not put policy debates to rest or resolve them definitively; rather, they establish some boundaries to the debate and help inform the choices at hand.
Beyond the specifics of any issue, what are the broad lessons that emerge from the preceding discussions? Clearly, knowing more about scientific and technical matters in military analysis is better than knowing less; few would contest this assertion. But for the generalist, untrained in advanced science or engineering and unequipped to understand the complexities of nuclear science, space technology, military robotics, sonar and radar, lasers and radio-frequency weapons, and so on (not to mention topics not addressed here, such as advanced biological pathogens), how should one try to tackle even the rudiments of these complex subjects?
It is worth underscoring that any generalist benefits from some familiarity with the basics of military science. If nothing else, such familiarity can provoke probing questions that require scientists to offer detailed answers and explanations for their views, which can then be examined for internal consistency and scrutinized by other scientists.
Such a process was exemplified in the 1980s debate over “Star Wars,” the project to render nuclear weapons impotent and obsolete through defenses, which was soon widely recognized, not only by scientists but also by the general policy community, to be an excessively ambitious goal for any technical system. (This is not to deny that there were other arguments, at the time and thereafter, in support of Reagan’s Strategic Defense Initiative.)
By contrast, such a process of vetting, and of understanding the fundamentals of technology, arguably did not occur adequately before the Iraq War of 2003. At that time, Secretary of Defense Rumsfeld’s hope that a revolution in military affairs was truly under way (along with some flawed assumptions about Iraqi politics) apparently persuaded him and other members of the Bush administration to make only minimal plans for the post-Saddam period. A greater respect for the age-old truths of military history, some healthy skepticism about the promise of technology, and a clear-headed examination of the evidence about whether an RMA was really going to transform land warfare quickly could have led to much more thorough preparation. And accomplishing this did not require an advanced understanding of the exact capabilities of modern munitions, sensors, stealth aircraft, or vehicle and body armor.
By grappling with basic science, a generalist can also learn important lessons about the limits of what rudimentary knowledge can settle in key policy debates. Far from producing sophistry, a diligent effort to stay abreast of technological debates reminds one which aspects of a subject remain beyond one’s reach. Learning the basics makes one curious about other matters and prompts working hypotheses about them, which can then be tested against the evidence and against the arguments of more informed scholars. It is a trial-and-error process that should breed a healthy dose of humility, not overconfidence.
For example, one can learn that any major space-based laser, being essentially the combination of a Hubble telescope and a high-powered directed-energy device, would push the limits of technology, weigh a great deal, and cost in the billion-dollar range. But a similar understanding of the basics cannot tell one whether such a weapon could actually be created within five or ten years.
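The weight and cost judgment rests on simple optics, which can be sketched as follows; the wavelength, mirror size, and ranges below are assumptions chosen for illustration, not the specifications of any proposed system.

```python
# Why a space-based laser is "a Hubble telescope plus a directed-energy
# device": the diffraction limit ties beam spot size to mirror diameter
# and range. All parameter values below are illustrative assumptions.
def spot_diameter_m(wavelength_m: float, range_m: float, mirror_m: float) -> float:
    """Approximate diffraction-limited spot diameter: ~2.44 * lambda * R / D."""
    return 2.44 * wavelength_m * range_m / mirror_m

WAVELENGTH = 1.3e-6    # meters; assumed near-infrared, order of magnitude
MIRROR = 2.4           # meters; a Hubble-class primary mirror
for range_km in (500, 1000, 3000):
    d = spot_diameter_m(WAVELENGTH, range_km * 1000.0, MIRROR)
    print(f"range {range_km:>5} km: spot ~{d:.1f} m across")
# Even with a Hubble-class mirror, the spot spreads to a meter or more at
# thousand-kilometer ranges, diluting the beam's intensity -- hence the
# push toward very large optics, enormous power supplies, and
# billion-dollar costs.
```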
Or to take another example, some understanding of nuclear physics—and of the history of the bomb—can help a generalist appreciate that producing fissile materials is typically the hardest part of making a simple nuclear weapon. Once in possession of adequate amounts of such material, most countries are likely to be able to build a workable device, perhaps with roughly the destructive power of the Hiroshima or Nagasaki bomb. And it can help one understand that, absent nuclear testing, it is much harder for any country to have confidence in a more sophisticated device, such as a thermonuclear warhead or a weapon designed efficiently and elegantly enough to fit atop a missile.
In the end, military science is far too big a part of defense analysis simply to be ignored; even nonscientists must try to wade into the subject to the best of their abilities. Thankfully, there is enough good science writing in the modern world that a diligent generalist can usually make substantial headway. The only alternative is to pretend that the scientific aspects of defense policy matters can be separated from other aspects of key decisions, and outsourced to the experts for them to resolve—which, given the interconnectedness of so many aspects of defense policy, is really no alternative at all.
QUESTION 13: What are the likely capabilities of North Korea’s suspected nuclear arsenal?
ANSWER: The U.S. experience with its testing program (discussed earlier in relation to the CTBT debate) and other related considerations are of help here. No simple mathematical formula can answer the question. However, an appreciation of where the United States has itself had problems in building reliable warheads can shed some light on which technical challenges may be greatest for the DPRK.
North Korea is believed to have enough plutonium, roughly forty to forty-five kilograms, for perhaps six to eight nuclear bombs.114 As best can be told from remote observation (partially corroborated by eyewitness accounts), that plutonium has been successfully reprocessed, separated from radioactive waste and made usable in bombs. Since fission bombs are considered relatively straightforward to make for a country able to acquire the fissile material, it has long been assumed that North Korea had simple nuclear explosives. That hypothesis was confirmed in October 2006, when North Korea tested a nuclear device.
The yield of that device was quite low, however, perhaps less than one kiloton. This suggests that the DPRK may not have mastered the art of compressing the plutonium shell efficiently, symmetrically, and quickly enough to generate a large yield. (An alternative, less likely interpretation is that it intentionally produced only a small yield, but that would imply far more sophistication than a first-time testing nation would be likely to possess.)
Chances are that North Korea’s weapon is simple, crude, and heavy. It may or may not fit on a missile. If launched on one, it may or may not survive the g forces and heating of atmospheric reentry, given the DPRK’s limited likely capacities to accurately model such environments (and the fact that even the United States has experienced challenges in these areas with its own warheads in the past).
On balance, North Korea almost surely could detonate a nuclear weapon of simple design again. But it could require a large airplane or ground vehicle to deliver any such warhead, given its likely weight and possible vulnerability to stresses and strains in a more advanced delivery process. Only further testing (and perhaps some warhead modifications) would likely give the DPRK real confidence that it could mount an advanced warhead on a missile.
QUESTION 14: With recent successful tests, has missile defense now gained the upper hand against ballistic missile threats?
ANSWER: No; that conclusion would go too far. To be sure, hit-to-kill technology is doing much better, given successful tests this decade of the Navy’s Aegis-based Standard missile, the California/Alaska midcourse system, and shorter-range ground-based systems. It is only fair to acknowledge that these tests rebut some past criticisms of missile defense.
But all of these tests were against simple, single, isolated targets flying predictable trajectories. None involved swarms of decoys; none involved maneuvering reentry vehicles; and none involved salvos of multiple warheads at once. At present, based on what is known about American defense systems, they likely could not handle more complex or sophisticated attacks, at least not reliably.
That said, developing offensive countermeasures such as decoys to fool defenses is perhaps harder than some assert. While not particularly complex conceptually, it takes real work and testing to learn to dispense multiple objects from a single bus or other mechanism in space. A clear lesson from the history of missile defense and space launch programs is that operations of any type in space are challenging, and unexpected mistakes happen often.
Countries such as North Korea have limited resources, and limited international political room, to conduct such tests. So the DPRK’s ability to develop and maintain proficiency in space launch operations may be limited. It has considerable experience in building and successfully deploying single-stage rockets, but multistage rockets and missile bus operations are another matter.
As such, while they have significant limitations, the U.S. ballistic missile defense systems developed to date are not without meaningful capabilities. The practical question is how much more to spend on them, given that many countries could in theory develop countermeasures and that it is not clear how well more advanced U.S. systems would fare against them.
QUESTION 15: Is a revolution in military affairs underway?
ANSWER: Not necessarily. To some extent, this is a matter of definition, since there is no doubt that impressive things are happening in the realm of military technology. But if the contention is that warfare is changing so dramatically as to permit a radical reconceptualization of how it will be fought in the future—and thus of how defense resources should be allocated today—the question is open.
One popular formulation of the RMA hypothesis holds that we occupy a period in which new generations of defense technology arrive every eighteen to twenty-four months. This has been true of computing capability; is it true, even partly, more generally? Putting the thesis this way turns it into a testable proposition and allows greater precision in evaluating the RMA hypothesis.
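The proposition can be quantified with a short calculation; the doubling periods below simply restate the eighteen-to-twenty-four-month claim.

```python
# Making the "new generation every 18 to 24 months" claim testable: if a
# capability doubles each generation, how much improvement should a
# decade or two show? Illustrative arithmetic only.
def improvement_factor(years: float, months_per_doubling: float) -> float:
    return 2.0 ** (years * 12.0 / months_per_doubling)

for months in (18, 24):
    print(f"doubling every {months} months: "
          f"{improvement_factor(10, months):,.0f}x in 10 years, "
          f"{improvement_factor(20, months):,.0f}x in 20 years")
# Computing has in fact compounded at roughly these rates (about 30x to
# 100x per decade). Engine thrust, armor protection, and sonar or radar
# range plainly have not -- a quick test that the broad form of the
# hypothesis fails.
```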
In fact, the radical changes in technology that yield new capabilities every two years or so stem from computer advances and little else; Moore’s “law” has described this pace of computing innovation for decades. Moreover, many impressive changes of the modern era, such as the invention of helicopters, night-vision devices, and satellite technology, were never described as revolutions of the type now predicted for the computer age. And while some areas of modern technology show impressive progress, notably microbiology and some robotic systems, change is occurring at only a modest pace in many others: the propulsion systems, aerodynamics, and hydrodynamics of most vehicles (on the ground, in the air, on the water, and in space), as well as most types of sensor technology (sonar, radar, and optical).
The most enthusiastic theorizing about a modern RMA tended to precede the Iraq War, an experience that underscored the limits of technological progress, especially in the kinds of wars the United States finds most difficult. American high technology, hugely useful in the war to be sure, did not produce anything close to victory in the war’s early years. Only when U.S. forces and their Iraqi allies reverted to time-tested (and generally low-tech) counterinsurgency methods in 2007 did the tide of battle begin to turn.
Specific technologies were frustratingly slow to deliver dramatic results in Iraq (and Afghanistan). Against improvised explosive devices, for example, U.S. deaths remained very high despite years of concerted investment and effort. Deaths declined when the surge-based strategy began to roll up insurgents and their IED caches before the weapons could be implanted, and to a lesser extent when the United States deployed large numbers of mine-resistant vehicles (a heavier class of vehicle that flew in the face of the military’s goal of making most ground forces lighter and more maneuverable). Similarly, the only effective method for stopping most mortar and rocket attacks in Iraq was to prevent their launch in the first place, rather than to defeat or defend against them in some more technically innovative way.
1. For a very thoughtful discussion of this issue, see Frank von Hippel, Citizen Scientist (New York: Touchstone, 1991), pp. xi–xv.
2. My selection of topics largely reflects my own relative strengths and limitations as an analyst; I emphasize those subjects I have studied and written on in greatest detail.
3. See, for example, Stuart E. Johnson and Martin C. Libicki, eds., Dominant Battlespace Knowledge (Washington, D.C.: National Defense University, 1996); Norman C. Davis, “An Information-Based Revolution in Military Affairs,” in John Arquilla and David Ronfeldt, eds., In Athena’s Camp: Preparing for Conflict in the Information Age (Santa Monica, Calif.: RAND, 1997); Joseph S. Nye, Jr., and Admiral William A. Owens, “America’s Information Edge,” Foreign Affairs, vol. 75 (March/April 1996); David A. Ochmanek and others, To Find, and Not to Yield: How Advances in Information and Firepower Can Transform Theater Warfare (Santa Monica, Calif.: RAND, 1998); Admiral Arthur K. Cebrowski and John J. Garstka, “Network-Centric Warfare: Its Origin and Future,” Proceedings (U.S. Naval Institute, January 1998), pp. 29–35.
4. Albert A. Nofi, Recent Trends in Thinking About Warfare (Alexandria, Va.: Center for Naval Analyses, 2006), pp. 8–17, available at www.cna.org/documents/DOO14875.A1.pdf [accessed April 10, 2008].
5. See, for example, Edward N. Luttwak, “A Post-Heroic Military Policy,” Foreign Affairs, vol. 75 (July/August 1996), pp. 33–44; and Michael Ignatieff, Virtual War: Kosovo and Beyond (New York: Henry Holt and Co., 2000).
6. See, for example, Jonathan Shimshoni, “Technology, Military Advantage, and World War I: A Case for Military Entrepreneurship,” International Security, vol. 15 (Winter 1990), pp. 213–15; and Robert P. Haffa, “Planning U.S. Forces to Fight Two Wars: Right Number, Wrong Forces,” Strategic Review (Winter 1999), pp. 15–21.
7. For a good discussion of the dangers of modern trends in military technology, capability, and operations for the United States and its allies, arising from capabilities such as precision missiles and information warfare, see Michael G. Vickers and Robert C. Martinage, The Revolution in War (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2004).
8. For very good histories of past revolutions in military affairs, see for example Andrew Krepinevich, Jr., “Cavalry to Computer: The Pattern of Military Revolutions,” National Interest, no. 37 (Fall 1994), pp. 31–36; Williamson Murray, “Thinking About Revolutions in Military Affairs,” Joint Forces Quarterly (Summer 1997), pp. 69–76; Martin Van Creveld, Technology and War: From 2000 B.C. to the Present (New York: Free Press, 1991); Jared Diamond, Guns, Germs, and Steel: The Fates of Human Societies (W.W. Norton, 1997); and Max Boot, War Made New: Technology, Warfare, and the Course of History, 1500 to Today (New York: Gotham Books, 2006).
9. National Defense Panel, Transforming Defense: National Security in the 21st Century (Washington, D.C.: National Defense Panel, 1997), pp. 2, 32.
10. See Lawrence Freedman, The Revolution in Strategic Affairs, Adelphi Paper 318, International Institute for Strategic Studies (Oxford, England: Oxford University Press, 1998).
11. Of course, as a practical matter, many players including many civilians are typically involved in big decisions about how a nation-state prepares for, and engages in, warfare; it is critical that national leaders, civilians providing oversight to military services, civilian scientists, and members of Congress and/or parliament be as informed as possible, given their inherent roles in military debates in most countries. For historical analyses, see Barry R. Posen, The Sources of Military Doctrine: France, Britain, and Germany between the World Wars (Ithaca, N.Y.: Cornell University Press, 1984), pp. 41–80; Stephen Peter Rosen, Winning the Next War (Ithaca, N.Y.: Cornell University Press, 1991), pp. 13–18, 76–100; and Montgomery C. Meigs, Slide Rules and Submarines (Honolulu, Hawaii: University Press of the Pacific, 2002), pp. 211–20.
12. See Max Boot, War Made New: Technology, Warfare, and the Course of History, 1500 to Today (New York: Gotham Books, 2006), pp. 466–68.
13. Frederick W. Kagan, Finding the Target: The Transformation of American Military Power (New York: Encounter Books, 2006), pp. 393–401.
14. National Defense Panel, Transforming Defense, pp. 2, 32.
15. A recent Defense Science Board review study exhorts scientists to make major progress in sensors, talks about some promising trends, and notes the potential of miniaturized systems, but offers little detail about where breakthroughs seem imminent, and acknowledges that most sensor technologies will face severe limits in their range and reliability against many classes of targets. See Defense Science Board, Defense Science Board 2006 Summer Study on 21st Century Strategic Technology Vectors, Volume II: Critical Capabilities and Enabling Technologies (Washington, D.C.: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, February 2007), pp. 57–65.
16. Zalmay Khalilzad and David Shlapak, with Ann Flanagan, “Overview of the Future Security Environment,” in Zalmay Khalilzad and Ian O. Lesser, eds., Sources of Conflict in the 21st Century (Santa Monica, Calif.: RAND, 1998), pp. 35–36.
17. Richard Chait, Albert Sciarretta, John Lyons, Charles Barry, Dennis Shorts, and Duncan Long, A Further Look at Technologies and Capabilities for Stabilization and Reconstruction Operations (Washington, D.C.: National Defense University Center for Technology and National Security Policy, 2007), p. 52.
18. Ibid., pp. 51–52.
19. Stew Magnuson, “Technologists Take Aim at Enemy Snipers,” National Defense (October 2007), available at www.nationaldefensemagazine.org/issues/2007/October/Technologists.htm [accessed April 11, 2008].
20. Chait, Sciarretta, Lyons, Barry, Shorts, and Long, A Further Look at Technologies and Capabilities for Stabilization and Reconstruction Operations, p. 50.
21. Statement of General Ronald R. Fogleman, Chief of Staff, U.S. Air Force, before the House National Security Committee, 105 Cong. 1 sess., May 22, 1997.
22. Defense Science Board 1996 Summer Study Task Force, Tactics and Technology for 21st Century Military Superiority, vol. 1 (Department of Defense, 1996), p. S-4.
23. Office of Force Transformation, The Implementation of Network-Centric Warfare (Washington, D.C.: Department of Defense, 2005), pp. 44–45, available at www.oft.osd.mil/library/library_files/document_387_NCW_Book_LowRes.pdf [accessed April 10, 2008].
24. National Defense Panel, Transforming Defense: National Security in the 21st Century (Washington, D.C.: National Defense Panel, 1997), pp. 7–8.
25. Dean Andreadis, “Scramjets Integrate Air and Space,” The Industrial Physicist (August/September 2004), pp. 24–27, available at www.aip.org/tip/INPHFA/vol-10/iss-4/p24.pdf [accessed April 11, 2008].
26. Defense Science Board, Defense Science Board 2006 Summer Study on 21st Century Strategic Technology Vectors, Volume II: Critical Capabilities and Enabling Technologies, p. 84; and Grace Jean, “Electric Guns on Navy Ships: Not Yet on the Horizon,” National Defense (November 2007), available at www.nationaldefensemagazine.org/issues/2007/November/ElectricGuns.htm [accessed April 10, 2008].
27. Ronald O’Rourke, Navy Ship Propulsion Technologies: Options for Reducing Oil Use (Washington, D.C.: Congressional Research Service, 2006), available at fas.org/sgp/crs/weapons/RL33360.pdf [accessed April 11, 2008], pp. 1–10.
28. For a concurring view, see John Lyons, Richard Chait, and Jordan Willcox, An Assessment of the Science and Technology Predictions in the Army’s STAR21 Report (Washington, D.C.: Center for Technology and National Security Policy, National Defense University, July 2008), p. 23.
29. Robert H. Scales, Jr., “Cycles of War: Speed of Maneuver Will Be the Essential Ingredient of an Information-Age Army,” Armed Forces Journal International, vol. 134 (July 1997), p. 38.
30. Briefing slides presented to the Air Force Reserve and National Guard Conference by Lt. Gen. David A. Deptula, Deputy Chief of Staff for Intelligence, Surveillance and Reconnaissance, U.S. Air Force, “Emerging Threats to Legacy Fighters,” Andrews Air Force Base, Washington, D.C., December 5, 2007.
31. Lane Pierrot, A Look at Tomorrow’s Tactical Air Forces (Washington, D.C.: Congressional Budget Office, January 1997), p. 77. The detection range of a radar varies with the fourth root of the radar cross section of an object, so reducing the radar cross section by a factor of one thousand reduces detection range by a factor of roughly 5.6. See J. C. Toomay, Radar Principles for the Non-Specialist (Belmont, Calif.: Lifetime Learning Publications, 1982); and Merrill Skolnik, Introduction to Radar Systems, 2nd edition (New York: McGraw-Hill Book Company, 1980).
32. Richard Chait, Albert Sciarretta, John Lyons, Charles Barry, Dennis Shorts, and Duncan Long, “A Further Look at Technologies and Capabilities for Stabilization and Reconstruction Operations,” Center for Technology and National Security Policy, National Defense University, Washington, D.C., September 2007, pp. 56–59, available at www.ndu.edu/ctnsp/publications.html [accessed July 20, 2008].
33. James Jay Carafano and Andrew Gudgel, “The Pentagon’s Robots: Arming the Future,” Backgrounder No. 2093 (Washington, D.C.: Heritage Foundation, 2007), available at www.heritage.org/Research/NationalSecurity/upload/bg_2093.pdf [accessed April 10, 2008].
34. Chait, Sciarretta, Lyons, Barry, Shorts, and Long, A Further Look at Technologies and Capabilities for Stabilization and Reconstruction Operations, pp. 56–59.
35. For a good summary of current capabilities and trends, see Michael G. Vickers and Robert C. Martinage, The Revolution in War (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2004), pp. 30–45.
36. Vickers and Martinage, The Revolution in War, pp. 14–24.
37. See, for example, “33rd GPS Satellite Launched,” Aviation Week and Space Technology, December 24/31, 2007, p. 12; Robert Wall and Douglas Barrie, “Stealthy Strikes,” Aviation Week and Space Technology, December 24/31, 2007, pp. 18–19; and David Bond, “Mopping Up,” Aviation Week and Space Technology, January 7, 2008, p. 19.
38. John Stillion and David T. Orletsky, Airbase Vulnerability to Conventional Cruise-Missile and Ballistic-Missile Attacks: Technology, Scenarios, and U.S. Air Force Responses (Santa Monica, Calif.: RAND, 1999); and Andrew Krepinevich, Barry Watts, and Robert Work, Meeting the Anti-Access and Area-Denial Challenge (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2003), pp. 15–19.
39. Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton, N.J.: Princeton University Press, 2004), pp. 28–51, 132–49, 190–208.
40. Ivo H. Daalder and Michael E. O’Hanlon, Winning Ugly: NATO’s War to Save Kosovo (Washington, D.C.: Brookings, 2000).
41. Biddle, Military Power, pp. 55–60, 199–201.
42. Frederick W. Kagan, Finding the Target: The Transformation of American Military Policy (New York: Encounter Books, 2006), pp. 350–59.
43. Benjamin S. Lambeth, Air Power Against Terror: America’s Conduct of Operation Enduring Freedom (Santa Monica, Calif.: RAND, 2005), p. 342.
44. See Ashton B. Carter, “Satellites and Anti-Satellites: The Limits of the Possible,” International Security, vol. 10, no. 4 (Spring 1986), pp. 50–52; and David Wright, Laura Grego, and Lisbeth Gronlund, The Physics of Space Security: A Reference Manual (Cambridge, Mass.: American Academy of Arts and Sciences, 2005), pp. 40–46.
45. Barry D. Watts, The Military Uses of Space: A Diagnostic Assessment (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2001), p. 123.
46. See, for example, Ed Kyle, “Space Launch Report, New Launchers: Space X Falcon,” Space Launch Report, October 2006, available at www.geocities.com/launchreport/blog017.html [accessed January 10, 2008].
47. Tamar A. Mehuron, “2007 Space Almanac: The U.S. Military Space Operation in Facts and Figures,” Air Force Magazine (August 2007), p. 82; Alan Collinson, “Briefing: Space Surveillance, Cutting the Clutter,” Jane’s Defence Weekly, January 16, 2008, p. 29; and Patterson Clark, “Current Missions,” The Washington Post, September 25, 2008, p. G4. Much of the material in this section is taken from Michael E. O’Hanlon, Neither Star Wars Nor Sanctuary: Constraining the Military Uses of Space (Washington, D.C.: Brookings, 2004), p. 35.
48. Watts, The Military Uses of Space, p. 50.
49. Peter L. Hays, United States Military Space: Into the Twenty-First Century (Montgomery, Ala.: Air University Press, 2002), p. 133; and Joel R. Primack, “Debris and Future Space Activities,” in James Clay Moltz, ed., Future Security in Space: Commercial, Military, and Arms Control Trade-Offs, Occasional Paper 10 (Monterey, Calif.: Monterey Institute of International Studies, 2002), pp. 18–20.
50. “Outlook/Specifications: Spacecraft,” Aviation Week and Space Technology, January 28, 2008, p. 171.
51. “Outlook/Specifications: Spacecraft,” Aviation Week and Space Technology, January 15, 2007, pp. 176–78, and January 28, 2008, pp. 170–72; Watts, The Military Uses of Space, pp. 42–43, 78; Craig Covault, “Secret NRO Recons Eye Iraqi Threats,” Aviation Week and Space Technology, September 16, 2002, p. 23; Jeffrey T. Richelson, America’s Secret Eyes in Space: The U.S. Keyhole Spy Satellite Program (Harper and Row, 1990), pp. 130–32, 186–87, 206–8, 227, 236–38; O’Hanlon, Neither Star Wars Nor Sanctuary, pp. 42–53.
52. The ability of a satellite to image places on Earth not directly below it is limited by three factors: first, the ability of its camera or lens to swivel; second, the user’s need for a certain minimum degree of resolution in the image (which often makes images taken at longer range less useful); and third, the curvature of the Earth, which blocks distant regions from view. This last constraint is generally the most binding. For low-altitude satellites, the maximum range is given by: radar horizon = square root of (diameter of Earth × altitude of satellite). This follows directly from the Pythagorean theorem, applied to a right triangle with one side the radius of the Earth, a second side the distance from the satellite to the farthest point on Earth’s surface within its view, and the hypotenuse running from the center of the Earth to the satellite.
Using symbols, we can write more compactly RH = √(DA). Since the diameter of the Earth is about 8,000 miles, a satellite at 200 miles’ altitude can therefore “see” out about 1,250 miles (and an aircraft at just under eight miles’ altitude can see about 250 miles).
53. “Outlook/Specifications: Spacecraft,” Aviation Week and Space Technology, January 15, 2007, pp. 176–78, and January 28, 2008, pp. 170–72; Watts, The Military Uses of Space, pp. 42–43, 78; Craig Covault, “Secret NRO Recons Eye Iraqi Threats,” Aviation Week and Space Technology, September 16, 2002, p. 23; Jeffrey T. Richelson, America’s Secret Eyes in Space: The U.S. Keyhole Spy Satellite Program (Harper and Row, 1990), pp. 130–32, 186–87, 206–8, 227, 236–38; O’Hanlon, Neither Star Wars Nor Sanctuary, pp. 42–53.
54. “Outlook/Specifications: Spacecraft,” Aviation Week and Space Technology, January 15, 2007, pp. 176–78, and January 28, 2008, pp. 170–72; and Mehuron, “2007 Space Almanac,” pp. 87–89.
55. Mehuron, “2007 Space Almanac,” p. 84.
56. Thomas A. Keaney and Eliot A. Cohen, Gulf War Air Power Survey Summary Report (Washington, D.C.: Government Printing Office, 1993), p. 193; Department of Defense, Kosovo/Operation Allied Force After-Action Report (Washington, D.C.: Department of Defense, 2000), p. 46; William B. Scott, “Milspace Comes of Age in Fighting Terror,” Aviation Week and Space Technology, April 8, 2002, pp. 77–78; and Patrick Rayerman, “Exploiting Commercial SATCOM: A Better Way,” Parameters (Winter 2003–2004), p. 55.
57. Jeremy Singer, “Laser Links in Space,” Air Force Magazine (January 2008), p. 57.
58. Walt Faulconer, “Civilian Space Portfolio Assessment,” briefing, Applied Physics Laboratory, Johns Hopkins University, Columbia, MD, April 22, 2008, p. 8.
59. Andrew E. Kramer, “Russia Challenges the U.S. Monopoly on Satellite Navigation,” The New York Times, April 4, 2007, available at www.nytimes.com/2007/04/04/business/worldbusiness [accessed January 10, 2008].
60. Faulconer, “Civilian Space Portfolio Assessment,” p. 8.
61. Kevin Pollpeter, Building for the Future: China’s Progress in Space Technology During the Tenth 5-Year Plan and the U.S. Response (Carlisle, Pa.: Strategic Studies Institute, Army War College, March 2008), pp. 19–27; O’Hanlon, Neither Star Wars Nor Sanctuary, pp. 54–56; Steven A. Smith, “Chinese Space Superiority?: China’s Military Space Capabilities and the Impact of Their Use in a Taiwan Conflict,” Air War College, February 17, 2006, p. iii, available at www.au.af.mil/au/awc/awcgate/awc/smith.pdf [accessed January 10, 2008]; and Jeff Kueter, “China’s Space Ambitions—And Ours,” The New Atlantis (Spring 2007), pp. 7–8.
62. Smith, “Chinese Space Superiority?”; and Geoffrey Forden, “China’s ASAT: No Space-age Pearl Harbor,” Wired (January 11, 2008), available at http://blog.wired.com/defense/ [accessed January 16, 2008].
63. Joseph Post and Michael Bennett, Alternatives for Military Space Radar (Washington, D.C.: Congressional Budget Office, 2007), pp. ix–xxi.
64. Defense Science Board, High Energy Laser Weapon Systems Applications (Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, June 2001), pp. 49–54; and Elihu Zimet, “High-Energy Lasers: Technical, Operational, and Policy Issues,” Defense Horizons 18 (Washington, D.C.: National Defense University, Center for Technology and National Security Policy, October 2002), pp. 6–7, available at www.ndu.edu/inss/DefHor/DH18/DH_18.htm.
65. General Accounting Office, “Missile Defense: Knowledge-Based Process Would Benefit Airborne Laser Decision-Making,” GAO-02-949T (July 16, 2002).
66. Missile Defense Agency Fact Sheet, “The Airborne Laser,” Department of Defense, Washington, D.C., September 2007, available at www.mda.mil/mdalink/pdf/laser.pdf [accessed January 10, 2008].
67. Sandra I. Erwin, “Killing Missiles from Space: Can the U.S. Air Force Do It with Lasers?” National Defense Magazine (June 2001), pp. 3–5, available at www.nationaldefensemagazine.org/article.cfm?Id=513.
68. Celeste Johnson and Raymond Hall, Estimated Costs and Technical Characteristics of Selected National Missile Defense Systems (Washington, D.C.: Congressional Budget Office, 2002), pp. 20–27.
69. Steven M. Kosiak, Arming the Heavens: A Preliminary Assessment of the Potential Cost and Cost-Effectiveness of Space-Based Weapons (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2007), pp. 38–40; and Bob Preston, Dana J. Johnson, Sean J. A. Edwards, Michael Miller, and Calvin Shipbaugh, Space Weapons, Earth Wars (Santa Monica, Calif.: RAND, 2002), pp. 40–49.
70. Jon Rosamond, “USN Admiral Says Satellite Kill Was ‘One-Time Event,’ ” Jane’s Defence Weekly, March 26, 2008, p. 8.
71. Bruce G. Blair, Strategic Command and Control: Redefining the Nuclear Threat (Washington, D.C.: Brookings, 1985), pp. 201–7; Ian Steer and Melanie Bright, “Blind, Deaf, and Dumb,” Jane’s Defence Weekly, October 23, 2002, pp. 21–23; Donald Rumsfeld, “Report of the Commission to Assess United States National Security Space Management and Organization” (Washington, D.C.: January 11, 2001), pp. 21–22; Carter, “Satellites and Anti-Satellites”; and Watts, The Military Uses of Space, p. 99.
72. Dennis Papadopoulos, “Satellite Threat Due to High Altitude Nuclear Detonations,” briefing slides presented at Brookings on December 17, 2002, cited by permission from the author.
73. Philip E. Coyle, “Oversight of Ballistic Missile Defense (Part 3): Questions for the Missile Defense Agency,” Testimony before the House Committee on Oversight and Government Reform, Subcommittee on National Security and Foreign Affairs, U.S. Congress, Washington, D.C., April 30, 2008, p. 12, available at www.cdi.org/pdfs/CoyleTestimonyApr08.pdf [accessed July 29, 2008].
74. Philip E. Coyle, “What Are the Prospects, What Are the Costs?: Oversight of Ballistic Missile Defense (Part 2),” Testimony before the House Committee on Oversight and Government Reform, Subcommittee on National Security and Foreign Affairs, U.S. Congress, Washington, D.C., April 16, 2008, p. 21, available at www.cdi.org/pdfs/CoyleHouseOversightGovtReform4_16_08.pdf [accessed July 29, 2008].
75. DoD News Briefing with Lt. Gen. Trey Obering, July 15, 2008, pp. 2–3, available at www.defenselink.mil/transcripts/transcript.aspx?transcriptid=4263 [accessed August 1, 2008]; and Ronald O’Rourke, “Sea-Based Ballistic Missile Defense—Background and Issues for Congress,” CRS Report for Congress (Washington, D.C.: Congressional Research Service, May 23, 2008), pp. 13, 40, available at www.fas.org/sgp/crs/weapons/RL33745.pdf [accessed August 1, 2008].
76. David R. Tanks, National Missile Defense: Policy Issues and Technological Capabilities (Cambridge, Mass.: Institute for Foreign Policy Analysis, 2000), p. 3.3.
77. Curtis D. Cochran, Dennis M. Gorman, and Joseph D. Dumoulin, eds., Space Handbook (Maxwell Air Force Base, Alabama: Air University Press, 1985), pp. 3.27–30.
78. Thomas B. Cochran, William M. Arkin, and Milton M. Hoenig, Nuclear Weapons Databook, Volume I: U.S. Nuclear Forces and Capabilities (Ballinger Publishing, 1984), p. 107.
79. Tanks, National Missile Defense, p. 3.3.
80. See John Tirman, ed., The Fallacy of Star Wars (Vintage Books, 1984), pp. 52–65.
81. For more, see Stephen Weiner, “Systems and Technology,” in Ashton B. Carter and David N. Schwartz, eds., Ballistic Missile Defense (Brookings, 1984), pp. 49–97; and Robert G. Nagler, Ballistic Missile Proliferation: An Emerging Threat (Arlington, Va.: System Planning Corporation, 1992), pp. 52–65.
82. For more, see David B. H. Denoon, Ballistic Missile Defense in the Post–Cold War Era (Westview Press, 1995), chaps. 3–5; and Department of Defense, “The Strategic Defense Initiative: Defense Technologies Study,” reprinted in Steven E. Miller and Stephen Van Evera, eds., The Star Wars Controversy (Princeton University Press, 1986), pp. 291–322.
83. For more information, see the Federation of American Scientists’ web site at www.fas.org/spp/starwars/program [accessed November 2000].
84. See J. C. Toomay, Radar Principles for the Non-Specialist (Mendham, N.J.: SciTech Publishing, 1998), pp. 1–64.
85. David Mosher and Michael O’Hanlon, The START Treaty and Beyond (Congressional Budget Office, 1991), p. 148.
86. Mosher and O’Hanlon, The START Treaty and Beyond, pp. 167–71.
87. David Arthur and Robie Samanta Roy, Alternatives for Boost-Phase Missile Defense (Washington, D.C.: Congressional Budget Office, 2004), pp. 40–42; and RAND, The Defense System Cost Performance Database: Cost Growth Analysis Using Selected Acquisition Reports (Santa Monica, Calif.: RAND, 1996).
88. George N. Lewis and Theodore A. Postol, “Future Challenges to Ballistic Missile Defense,” IEEE Spectrum, vol. 34 (September 1997), pp. 60–68.
89. The then–Martin-Marietta Corporation proposed fast-burn boosters back in the early 1980s; see Tirman, ed., The Fallacy of Star Wars, pp. 60–62.
90. See Andrew M. Sessler and others, Countermeasures: A Technical Evaluation of the Operational Effectiveness of the Planned U.S. National Missile Defense System (Cambridge, Mass.: Union of Concerned Scientists, April 2000), p. 42; and Gen. Larry Welch (ret.), chairman, and others, Report of the Panel on Reducing Risk in Ballistic Missile Defense Flight Test Programs (Department of Defense, February 27, 1998), p. 56, available at www.fas.org/spp/starwars/program/welch/index.htm [accessed November 2000].
91. See the testimony of Richard L. Garwin and David C. Wright, “Ballistic Missiles: Threat and Response,” Hearings before the Senate Committee on Foreign Relations, 106 Cong., 1 sess. (Government Printing Office, 2000), pp. 74–90.
92. Ann Scott Tyson, “U.S. Shoots Down Missile in Simulation of Long-Range Attack,” The Washington Post, December 6, 2008, p. A2.
93. Ibid.
94. Bradley Graham, Hit to Kill: The New Battle Over Shielding America from Missile Attack (New York: Public Affairs, 2001), pp. 196–207.
95. David Wright, Laura Grego, and Lisbeth Gronlund, The Physics of Space Security: A Reference Manual (Cambridge, Mass.: American Academy of Arts and Sciences, 2005), pp. 98–100.
96. Michael Levi, On Nuclear Terrorism (Cambridge, Mass.: Harvard University Press, 2007), pp. 35–38, 52.
97. The same amount of deuterium undergoing fusion would produce about 25 kilotons of explosive force. See Samuel Glasstone, ed., The Effects of Nuclear Weapons (Washington, D.C.: U.S. Government Printing Office, 1962), pp. 5–6.
98. Edwin Lyman and Frank N. von Hippel, “Reprocessing Revisited: The International Dimensions of the Global Nuclear Energy Partnership,” Arms Control Today (April 2008), p. 9.
99. Richard L. Garwin and Georges Charpak, Megawatts and Megatons: A Turning Point in the Nuclear Age? (New York: Alfred A. Knopf, 2001), pp. 58–61.
100. Levi, On Nuclear Terrorism, pp. 40–50.
101. Garwin and Charpak, Megawatts and Megatons, pp. 58–65.
102. Some of my arguments here first appeared in Michael O’Hanlon, “Resurrecting the Test-Ban Treaty,” Survival, vol. 50, no. 1 (February–March 2008), pp. 119–32.
103. Hui Zhang, “Revisiting North Korea’s Nuclear Test,” China Security, vol. 3, no. 3 (Summer 2007), pp. 119–30.
104. Steve Fetter, Toward a Comprehensive Test Ban (Cambridge, Mass.: Ballinger, 1988), pp. 107–58.
105. America’s Defense Monitor Interview with Richard Garwin, April 3, 1999, available at www.cdi.org/adm/1235/Garwin.html.
106. Jonathan Medalia, “The Reliable Replacement Warhead Program: Background and Current Developments,” CRS Report for Congress RL32929 (Washington, D.C.: Congressional Research Service, July 26, 2007), pp. 4–9, available at www.fas.org/sgp/crs/nuke/RL32929.pdf.
107. A. Fitzpatrick and I. Oelrich, “The Stockpile Stewardship Program: Fifteen Years On,” Federation of American Scientists (April 2007), available at www.fas.org/2007/nuke/Stockpile_Stewardship_Paper.pdf [accessed January 9, 2008].
108. “At the Workbench: Interview with Bruce Goodwin of Lawrence Livermore Laboratories,” Bulletin of the Atomic Scientists (July/August 2007), pp. 46–47.
109. National Nuclear Security Administration, “Reliable Replacement Warhead Program,” March 2007, available at www.nnsa.doe.gov/docs/factsheets/2007/NA-07-FS-02.pdf.
110. Walter Pincus, “New Nuclear Warhead’s Funding Eliminated,” The Washington Post, May 24, 2007, p. A6.
111. John R. Harvey, “Nonproliferation’s New Soldier,” Bulletin of the Atomic Scientists (July/August 2007), pp. 32–33; and Medalia, “The Reliable Replacement Warhead Program.”
112. Michael A. Levi and Michael E. O’Hanlon, The Future of Arms Control (Washington, D.C.: Brookings, 2005), p. 28.
113. Michael A. Levi, “Dreaming of Clean Nukes,” Nature, vol. 428 (April 29, 2004), p. 892.
114. Charles L. Pritchard, Failed Diplomacy: The Tragic Story of How North Korea Got the Bomb (Washington, D.C.: Brookings, 2007), p. 203.
Selected Bibliography
Arthur, David, and Robie Samanta Roy, Alternatives for Boost-Phase Missile Defense (Washington, D.C.: Congressional Budget Office, 2004).
Biddle, Stephen, Military Power: Explaining Victory and Defeat in Modern Battle (Princeton, N.J.: Princeton University Press, 2004).
Boot, Max, War Made New: Technology, Warfare, and the Course of History, 1500 to Today (New York: Gotham Books, 2006).
Carter, Ashton B., “Satellites and Anti-Satellites: The Limits of the Possible,” International Security, vol. 10, no. 4 (Spring 1986).
Diamond, Jared, Guns, Germs, and Steel: The Fates of Human Societies (New York: W.W. Norton, 1997).
Fetter, Steve, Toward a Comprehensive Test Ban (Cambridge, Mass.: Ballinger, 1988).
Freedman, Lawrence, The Revolution in Strategic Affairs, Adelphi Paper 318, International Institute for Strategic Studies (Oxford, England: Oxford University Press, 1998).
Garwin, Richard L., and Georges Charpak, Megawatts and Megatons: A Turning Point in the Nuclear Age? (New York: Alfred A. Knopf, 2001).
Glasstone, Samuel, ed., The Effects of Nuclear Weapons (Washington, D.C.: U.S. Government Printing Office, 1962).
Graham, Bradley, Hit to Kill: The New Battle Over Shielding America from Missile Attack (New York: Public Affairs, 2001).
Johnson, Celeste, and Raymond Hall, Estimated Costs and Technical Characteristics of Selected National Missile Defense Systems (Washington, D.C.: Congressional Budget Office, 2002).
Johnson, Stuart E., and Martin C. Libicki, eds., Dominant Battlespace Knowledge (Washington, D.C.: National Defense University, 1996).
Kagan, Frederick W., Finding the Target: The Transformation of American Military Power (New York: Encounter Books, 2006).
Kosiak, Steven M., Arming the Heavens: A Preliminary Assessment of the Potential Cost and Cost-Effectiveness of Space-Based Weapons (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2007).
Krepinevich, Andrew, Jr., “Cavalry to Computer: The Pattern of Military Revolutions,” National Interest, no. 37 (Fall 1994), pp. 31–36.
Krepinevich, Andrew, Jr., Barry Watts, and Robert Work, Meeting the Anti-Access and Area-Denial Challenge (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2003).
Levi, Michael, On Nuclear Terrorism (Cambridge, Mass.: Harvard University Press, 2007).
Lyman, Edwin, and Frank N. von Hippel, “Reprocessing Revisited: The International Dimensions of the Global Nuclear Energy Partnership,” Arms Control Today (April 2008).
Murray, Williamson, “Thinking About Revolutions in Military Affairs,” Joint Forces Quarterly (Summer 1997), pp. 69–76.
National Defense Panel, Transforming Defense: National Security in the 21st Century (Washington, D.C.: National Defense Panel, 1997).
Preston, Bob, Dana J. Johnson, Sean J. A. Edwards, Michael Miller, and Calvin Shipbaugh, Space Weapons, Earth Wars (Santa Monica, Calif.: RAND, 2002).
Sessler, Andrew M., and others, Countermeasures: A Technical Evaluation of the Operational Effectiveness of the Planned U.S. National Missile Defense System (Cambridge, Mass.: Union of Concerned Scientists, April 2000).
Stillion, John, and David T. Orletsky, Airbase Vulnerability to Conventional Cruise-Missile and Ballistic-Missile Attacks: Technology, Scenarios, and U.S. Air Force Responses (Santa Monica, Calif.: RAND, 1999).
Van Creveld, Martin, Technology and War: From 2000 B.C. to the Present (New York: Free Press, 1991).
Vickers, Michael G., and Robert C. Martinage, The Revolution in War (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2004).
von Hippel, Frank, Citizen Scientist (New York: Touchstone, 1991).
Watts, Barry D., The Military Uses of Space: A Diagnostic Assessment (Washington, D.C.: Center for Strategic and Budgetary Assessments, 2001).
Welch, Gen. Larry (ret.), and others, Report of the Panel on Reducing Risk in Ballistic Missile Defense Flight Test Programs (Department of Defense, February 27, 1998), available at www.fas.org/spp/starwars/program/welch/index.htm [accessed November 2000].
Wright, David, Laura Grego, and Lisbeth Gronlund, The Physics of Space Security: A Reference Manual (Cambridge, Mass.: American Academy of Arts and Sciences, 2005).