Institute for National Strategic Studies

2015:
POWER and PROGRESS

July 1996
Edited by Patrick M. Cronin

IV. TECHNOLOGY AND WARFARE

Martin C. Libicki

1. INTRODUCTION

Forecasts about tomorrow's battlefield depend on one's answer to a fundamental question: are the world's armed forces seeing a revolution in conventional military power driven by information technology? An answer first requires a clear definition of information-based warfare, a vision of the role technology plays in promoting military revolutions and an understanding of the changing rates at which improvements occur in the developmental cycle of a particular technology.

DEFINING INFORMATION-BASED WARFARE

Information-based warfare is that which utilizes information, especially computer-processed information, to impose one's will on an enemy. While all aspects of war are evolving rapidly, those driven by the computer-communications revolution are evolving the fastest. Thus, the description that follows of what warfare might be like in twenty years will concentrate on the changes in warfare wrought by the information revolution. This chapter considers the potential that technology--particularly information technology--holds to influence 21st-century warfare.

Information-based warfare's roots go back several decades. Already, the combined application of precision-guided munitions, long-range airborne and space-based sensors, and the development of tandem Global Positioning System and Inertial Navigation Systems (GPS/INS) guidance packages means that any target that can be located and identified can be engaged and disabled. These weapon systems have accentuated the importance of hiding and seeking. Over time, the offense will expend increasing effort seeking targets. After detection, their destruction will become ever more likely because of precision strike missiles. The defense will spend more resources on hiding: on camouflage, on trying to mimic background or civilian objects, and on masking its signatures. The traditional warfare principles of firepower and maneuver may become far less relevant.

But other factors are also increasing the importance of information in warfare. These include command and control, mission planning, simulation, intelligence, and psychological operations. Each aspect of war is being transformed by the ever-greater speed and ever-lessening cost of collecting, processing, and transmitting information.

WILL THE MTR BECOME AN RMA?

A Military-Technical Revolution (MTR) encompasses radical innovations in weapons or military equipment, and such technologies are or soon will be available on the world market despite the present American lead in applying the information MTR. American forces may be able to use such technology to gain a decisive advantage over their potential foes, but this will depend on whether they can employ the current MTR to initiate a Revolution in Military Affairs (RMA). An RMA results when a nation seizes an opportunity to transform its military doctrine, training, organization, equipment, tactics, operations, and strategy in a coherent pattern in order to wage war in a novel and more effective manner. Examples of RMAs are the levée en masse of the revolutionary French Republic, the development of the blitzkrieg by the German Army and Air Force, and innovations in carrier warfare developed by the U.S. Navy. The underlying technologies utilized in all three RMAs were available to many countries at the time, but in each case only one country combined and employed them in a unique and successful manner to gain a decisive advantage in war.

The difficulty with predicting whether an RMA will emerge from the current MTR (and whether it will be ours or someone else's) originates in the search for a strategic rationale for an RMA. If the three examples above are indicative, an RMA emerges from a well-defined strategic problem. The levée en masse, for instance, was the response of Revolutionary France to invasion by the forces of the ancien régime. Military innovations carried out or proposed under the defunct Royal Army were wedded to the immense national energies unfettered by the revolution. The result was huge French armies of unprecedented offensive power that smashed their way across Europe for 20 years. The blitzkrieg permitted Germany to seize and hold the initiative and avoid the morass of slow-motion trench warfare. The U.S. Navy needed a method to conduct operations across the Earth's largest body of water.

Currently, the United States possesses the technology for a Revolution in Military Affairs, but it lacks a pressing strategic challenge to create one. Might the United States be motivated by the need to deter or defer a potential peer competitor? Such a problem does not necessarily have a military solution, nor is it generally viewed as an urgent priority. Would the United States look for innovative ways to wage war with minimal American casualties? Is the United States focused on figuring out how to project power without overseas bases, and, if so, is the RMA irrelevant to conflicts near existing bases? The more possible reasons one raises for an American RMA over the next 20 years, the less clear the focus becomes on a specific motivating challenge. Consequently, the likelihood of generating an American-led RMA from the information MTR remains highly uncertain. The only basis left for believing it will nevertheless take place is the generalized fear that complacency about its technological prowess will, more than any single factor, put United States armed forces at risk.

Will another country generate an RMA by 2015? Others may have clearer strategic objectives by the very nature of their geostrategic circumstances (e.g., the presence of so many potential great powers in the Asian Arc). However, if their most pressing challenge is to defeat a power much weaker than the United States, their response may be irrelevant to American concerns. Barring a direct military confrontation between the United States and a peer or near-peer competitor, the United States might confront an adversary facing two simultaneous strategic problems. Such an adversary might seek to defeat a local enemy at acceptable cost and also prevent or confound the intervention of the United States or another distant great power. Military units (e.g., massed tanks) that can do the former may be easy prey for our style of warfare. Military units (e.g., irregular units) that can do the latter may be unable to overcome conventional defenses at low cost. An even more difficult challenge to the United States would be provided by a state that used conventional force against local foes and threatened to use weapons of mass destruction to deter great power intervention. Consider the problem for the United States if Iraq had possessed nuclear capabilities in 1990 or if a nuclear-armed North Korea initiated a second war on the peninsula. Any U.S. response likely would combine counter-proliferation approaches with preemption, and even strategic deterrence, but this would hardly constitute an RMA.

CAVEATS OF MIDTERM FORECASTING

This report maintains that the emergence of a peer military competitor is not likely during the next 10 or 20 years, but it is considerably more likely after the year 2015. Under foreseeable circumstances, U.S. Armed Forces are likely to retain their present general superiority over those of any other state. However, they may not be invincible. It is possible that foreign forces may develop innovative ways of warfare that, at least temporarily, could grant them tactical or operational advantage. Even general American military superiority would not necessarily mean quick or easy victory in every case. We might suffer from other disadvantages, as well--we may be more sensitive to casualties, and we will almost certainly be farther from battlefields of 2015 than our potential foes. Furthermore, forecasting the degree of our superiority in a particular case requires knowledge of the military sophistication of a particular opponent. There is no way to know this in advance.

We can make some reasonable projections about the highest levels of military technological development in 20 years. To field a weapon system by 2015 would require the initiation of its development cycle by 2000 or so. Five years from now is barely time enough for a radically new invention to be proven and win the confidence necessary to justify a major development program. The capital replacement cycle for major weapons systems will also affect future inventories. For example, even if a new American tank program were initiated before 2000, it would not reach the field until roughly 2010. Barring accelerated procurement, such a hypothetical tank would account for only a low percentage of the Army's main battle tanks in 2015. To be sure, this example is not universally applicable. The upgrades that reify information warfare have faster development cycles. The Ballistic Missile Defense Organization (BMDO) has demonstrated that completely new equipment can be built in a 2-year cycle. A number of complex weapons systems, such as the airborne Joint Surveillance and Target Attack Radar System (JSTARS) have proven usable as early as 5 years prior to initial full operational capability. Nonetheless, most American and foreign equipment that would appear on the 2015 battlefield already exists or would be a recognizable development of existing systems.

Project 2015 starts with existing inventories, programs, and technologies and works forward 20 years to estimate how they might be used in the field and what opportunities they offer for doctrinal innovation. Yet, such forecasts require assessments of the likelihood and timing of technological advances. For instance, will the United States have, by 2015, a system capable of shooting down a tactical ballistic or a cruise missile (or a flock of them)? Assessing how well existing defensive systems can combat existing tactical ballistic missiles is hard enough, but predicting the relative course of two competing trend lines (better tracking/fuzing versus decreasing missile observability) involves even more uncertainties. The answers are critical because they affect the viability of a number of weapons platforms, such as surface ships. If the latter remain effective, certain operations, such as precision fire, may be best moved offshore. If no surface ships remain viable, the Navy will need to fundamentally rethink the way it contributes to warfare.

Such uncertainties about the future of warfare are increased by the growing importance of proliferation in determining the capabilities of whatever enemies we might face. Because the Soviet Union fundamentally armed itself from its own technological base (albeit with some purloined information), its likely progress could be debated within well-formed parameters. Proliferation regimes, both extant and proposed, now cover a large variety of weaponry. Political decisions about how to extend current regimes, and under what circumstances they are observed, are difficult to foresee over a 20-year period. One could even assume that all currently fielded nonnuclear foreign weapons will be available on world markets sometime between now and 2015.

Predicting the range of results of information-based warfare is complicated further by the type of conflict involving such systems. When the order of battle was derived from learning the number of enemy divisions or battleships, the fundamental units for calculating opposing strength were visible and countable. As "hiding and seeking" becomes more important, however, enemy capabilities to see or be seen increasingly are being kept secret and are harder for us to divine. (Consider the secrecy with which we envelop the capabilities of our own spy satellites.) No foe will reveal beforehand how he intends to spoof our sensors. Indeed, we are unlikely to know the degree of his success at such measures, short of their employment.

RATES OF TECHNOLOGICAL DEVELOPMENT

The fundamental force driving the information revolution has been the rapid and consistent rate at which silicon-based devices have continued to improve. From 1981, when IBM's original personal computer was introduced, to 1996, with today's 200-MHz Pentium Pro machines, processor speeds for personal computers have risen several hundred-fold, doubling every 2 years. The personal computer has had comparable increases in standard memory configurations (from 64,000 to 8,000,000 bytes), hard storage systems (from 10 megabytes in 1984 to the more common gigabytes today) and modem speeds (from 300 bits/second to 28,000 bits/second). In communications, phone-line trunk capacities have increased from 1.5 million bits per second to 155 million bits per second (using synchronous optical networks).
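
The arithmetic behind these rates is easy to check. The brief sketch below (Python, added purely for illustration; the figures are those quoted above) computes the improvement factor implied by a fixed doubling period and the ratios for the memory and modem examples.

```python
# Improvement factor implied by a fixed doubling period (figures from the text).

def growth_factor(years, doubling_period=2.0):
    """Multiplicative improvement after 'years' at one doubling per period."""
    return 2 ** (years / doubling_period)

print(round(growth_factor(15)))   # 1981-1996: roughly 181-fold, i.e., "several hundred-fold"
print(round(growth_factor(20)))   # a further 20 years at the same pace: about 1,024-fold
print(8000000 // 64000)           # memory growth cited above: 125-fold
print(round(28000 / 300))         # modem speeds cited above: roughly 93-fold
```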

Such improvements in capacity will not cease--but will such rapid rates of change continue into the future, and how important will such improvements be? The answers lie in an understanding of the history of rates of development of particular technologies. Often the pattern has been an S-curve of slow-fast-slow improvement. For example, although the first commercial synthetic polymer, Bakelite, was created in 1909, new polymer products initially entered the market slowly. Most of the major commodity polymers (plastics precursors) were commercialized in the 1950s and 1960s. After painfully slow development, the Germans and Italians flew the first successful jet-powered aircraft in 1939-1940. Led by British and American designs, jet engine performance improved radically between the mid-1940s and the mid-1960s. However, as these two examples illustrate, at some point, progress in the development of a technology slows. Although the range of available polymers and the capabilities of modern jet engines are both far greater than they were 25 years ago, these more recent advances have been of degree rather than of kind.
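
The slow-fast-slow pattern described here is the familiar logistic curve. The short sketch below (Python, illustrative only, with arbitrary parameters) shows how such a curve climbs slowly, accelerates through a midpoint, and then flattens as it approaches a ceiling.

```python
import math

# A minimal logistic S-curve: capability(t) = ceiling / (1 + exp(-rate * (t - midpoint))).
# Parameters are arbitrary; only the shape matters here.

def s_curve(t, ceiling=100.0, midpoint=30.0, rate=0.2):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

for year in (0, 10, 20, 30, 40, 50, 60):
    print(year, round(s_curve(year), 1))
# Growth is slow at first, fastest near year 30, and nearly flat near the ceiling.
```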

At the time of rapid development of a particular technology, hopes for such continued improvements extending indefinitely may seem reasonable. In retrospect, one can see that expectations had erred toward extreme optimism. As a result of past experience, most contemporary observers anticipate the same fate will befall information devices. Disagreement concentrates only on the question of when this slowdown will occur.

Another S-curve pattern illustrates the correlation of capabilities with underlying power. Early personal computers, for example, were poorly suited to word processing. They had 40-character screens and were so slow that typists were forced to pause while keystrokes were laboriously processed into text. Improvements followed rapidly, and increased speeds, hard disks, and spell-checking programs made word processing progressively easier to accomplish on computers. However, further improvements in computers are unlikely to alter the utility of computers for word processing. This provides another illustration of the fact that, after a certain point, even rapid changes in technology permit only modest increases in functionality.

Thus, as important to modern warfare as information capability has become, it now takes several orders of magnitude improvement to make a significant difference in capability. For example, a system capable of generating imagery accurate to the inch is not necessarily 144 times better than one capable of generating accuracy to the foot, even though the former reveals 144 times as much information, in the technical sense. This is not true for some other aspects of warfare. For example, tank guns that can be aimed accurately from 4,500 meters give overwhelming advantage to an armored force engaging an opponent with tanks armed with guns accurate to only 1,500 meters. In this case, the 3-to-1 superiority alone provides a decisive difference. The performance of Soviet tanks with their Iraqi crews in combat with their American counterparts in the Gulf War made that clear.
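
The two comparisons above can be made explicit with a line or two of arithmetic (Python, illustrative only).

```python
# Imagery: resolving to the inch rather than the foot packs 12 x 12 = 144 times as
# many resolution cells into the same area -- 144 times the information in the
# technical sense, but not necessarily 144 times the military value.
print(12 ** 2)        # 144

# Tank guns: a 4,500-meter engagement range against a 1,500-meter range is only a
# 3-to-1 ratio, yet it lets one side fire from beyond the other's reach -- a
# decisive difference despite the smaller number.
print(4500 / 1500)    # 3.0
```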

What do these two S-curve rates of technological development portend for American security? That depends on the stage of development. Let us suppose that the United States enjoys a 10-year lead in a certain area. If the technology were in its laboratory stage of development, it would provide the United States with little or no military advantage. After the technology was applied to military equipment, the 10-year advantage would provide the United States with a decisive advantage. However, after the rate of technological development slowed down for the United States, it would still be increasing at a rapid rate for the nearest American competitor. Our advantage would be considerably narrowed and eventually rendered relatively insignificant.

KEY QUESTIONS

In estimating the technology components of the national security environment in 2015, this chapter focuses on two questions. First, what will be the important differences between what American forces will be able to do then, compared to what they can do now? Second, what will be our advantages over our best potential military rival? While the capabilities of military systems adapted from commercial systems can be anticipated with a fair degree of certainty from expected developments in the latter field, the capabilities of purely military systems--a category which includes many sensors--are far harder to guess.

The primary source of uncertainty about future foreign military systems arises from difficulties determining the strategy for which they will be developed. Important new military capabilities are likely to be created by a relatively small number of foreign countries: more likely, the West Europeans, Russia and China; less likely, Japan and India; perhaps some of the East Europeans, Korea, Taiwan, Israel, Brazil, and South Africa. On what strategic basis would they do so? Some may take advantage of American protection and seek to develop complementary capabilities. Others may orient their programs toward export sales. Yet others may concentrate on building capabilities against threats from the developing world. The answers to other questions are even harder to guess. Will such countries feel impelled to adapt to the information MTR or will they compete directly with capabilities that the United States will be demonstrating over the next twenty years? If the latter, will they be content to keep their innovations to themselves or, once they have mastered them, will they seek customers for such new technologies? What types of capabilities will the world's other advanced military powers pursue? A state that seeks to dominate its neighbors may develop conventional heavy weapons, but one that wishes to deter American involvement in a region may prefer lighter information-based systems. Without knowing the details of great-power rivalry in 2015, the easiest answers to these questions would arise from following commercial developments adapted to military use. Developments initiated by foreign militaries are far more difficult to foresee.

Finally, a distinction needs to be drawn between information-based warfare, the primary concentration of this chapter, and information warfare, a somewhat different topic. Although information warfare includes information-based warfare, it also includes other subtopics: the influence of information on national decisionmaking, and the conduct of conflict through non-violent attacks on computer systems, such as through viruses, worms, and "Trojan Horses." In addition, although information technologies can affect national security through other means--such as law enforcement against state-sponsored crime, struggle in the cultural-economic realm, and the formation and influence of transnational communities--this chapter does not discuss such subjects in order to avoid distraction from its focus on how progress in information technologies likely will improve the ability to prosecute warfare.

2. SYSTEMS TECHNOLOGIES

One way of looking at how technology might influence national security is to begin with the three great technical revolutions of our time: information, biology, and materials. All three result from the ability to microfabricate, that is, to manufacture on a scale as small as the atomic level. Microfabrication makes possible the creation of chips of greater speed and complexity, hence greater information-processing capacity. The ability to design and replicate complex biological molecules may permit increasingly sophisticated drugs and other medical treatments. The ability to build composites atom by atom may result in affordable and lightweight materials of extremely high strength and resilience.

Of these three revolutions, the information revolution is furthest advanced and thus most likely to influence the conduct of warfare 20 years hence. In time, biological and materials technology also may play a large role in warfare. For instance, the biological revolution may permit warriors to be psychologically or physiologically manipulated from afar. It may also permit the invention of terrible diseases to wreak havoc on mankind or nature. The materials revolution may enable the fabrication of armor far stronger and lighter than anything yet imagined. With biology and materials, however, such developments are speculative; capabilities imagined for more powerful compounds may be incompatible with physical laws. The information revolution, though, has actually occurred. True, it faces some scientific limits, such as a limited radio spectrum and chemical limits on battery power, but the world's militaries will face enormous challenges, as well as enjoy huge potential, over the next 20 years from innovations that already have been demonstrated and disseminated.

CAN THE VISIBLE BE KILLED?

The two fundamental issues of information-based warfare are:

Can the visible be destroyed and, if so, under what conditions?

What can be made visible and what can be kept hidden?

The development and refinement of Precision Guided Munitions (PGMs) suggests that anything that can be detected, classified, and assigned to a weapon can be destroyed. Many types of PGMs will soon be able to be guided to coordinates using GPS receivers, perhaps supplemented by INS guidance. The United States is fitting cruise missiles with such capabilities and also testing the Joint Direct Attack Munitions (JDAM), a completely new generation of PGMs with enhanced capabilities. GPS/INS guidance kits for Mk-82 bombs are already being installed for a few thousand dollars each; units under development in American laboratories ultimately may be available for below $1,000. Defense intelligence estimates have warned that countries such as Syria, Iran, India, or China will have GPS/INS-guided, low-observable missiles by 2000 or shortly thereafter.

Positional GPS/INS guidance retains two advantages over the self-contained sensor packages that currently guide PGMs.(Note 1) First, it works against known targets with weak signatures or against targets with intermittent signatures, such as a sputtering radar. Second, positional guidance packages use far less artificial intelligence; thus, at some point in their development, they promise to be much less expensive than at present. Missiles so equipped could be made cheaply enough to use in overwhelming numbers in saturation attacks against high-value targets such as ships, command and control sites, and logistics facilities--but GPS updates to positional guidance systems would depend on maintaining communications with a central system. This creates a key vulnerability.
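
The idea behind positional guidance can be illustrated with a deliberately simplified sketch: steer toward surveyed target coordinates using only the weapon's own position estimate, with no signature-recognizing seeker. The code below (Python) uses a flat-earth approximation valid over short ranges; the coordinates and function names are invented for illustration.

```python
import math

def bearing_and_distance(own_lat, own_lon, tgt_lat, tgt_lon):
    """Approximate bearing (degrees from north) and distance (meters) to fixed
    target coordinates, using a local flat-earth approximation."""
    m_per_deg = 111320.0                                   # meters per degree of latitude
    dy = (tgt_lat - own_lat) * m_per_deg
    dx = (tgt_lon - own_lon) * m_per_deg * math.cos(math.radians(own_lat))
    distance = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dx, dy)) % 360.0
    return bearing, distance

# A weapon whose GPS/INS filter places it at 33.300N 44.400E steering toward a
# surveyed aim point; no target signature is needed at any stage.
print(bearing_and_distance(33.300, 44.400, 33.320, 44.430))
```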

Three types of defense for visible objects are available: range/speed, armor/burial, and counterattack. Range helps to protect some of the more valuable American systems, such as the Airborne Warning and Control System (AWACS) and JSTARS, because they can operate while flying beyond the strike limits of current anti-air missiles. But these systems generate obvious signals and cannot maneuver very well to avoid missiles. Twenty years from now, they are likely to be more vulnerable to PGMs with the ranges of cruise missiles, although only a few countries are likely to have such PGMs. Even in 20 years, speed will protect reconnaissance aircraft and submarines against PGMs with limited speed and range.

Armor is likely to improve by 2015. However, barring a breakthrough in materials technology, armor will provide no panacea. The newest thinking on how to protect a tank is for its skin to react to information on an incoming round, shaping itself to blunt the missile's impact. Even if this approach succeeds in practice, however, it will be available for very few tanks by 2015. Furthermore, such armor may still be vulnerable to heavy weapons, fast penetrators, or saturation attacks. Burial and bunkering offer primitive but effective protection for command posts and stores, but such methods cannot protect moving targets. The United States has produced some bunker-busting bombs, but a truly effective nonnuclear burrowing bomb is unlikely to appear by 2015.

The Defense Department is generously funding the development of counterattack technologies to defend high-value targets against long-range missiles, but initial results have been mixed. In retrospect, the Patriot missile was not as successful in the Gulf War as initially believed. Upgraded and new defensive missiles both can appear impressive in one-on-one test engagements, but they will not guarantee protection against a determined saturation attack on a valuable target. Offensive missiles are also growing stealthier, making their intercept increasingly difficult. European missile manufacturers are reportedly applying radar-reducing finishes to their tactical missiles, such as the Penguin and FOG-M.

Target areas could also be defended by local electromagnetic pulse (EMP) or microwave burst to cripple the electronics on incoming missiles. But such defenses could be overcome by the use of older technologies such as mechanical fuzes or terminal trajectories. Counterelectronics weaponry could nevertheless be devised during the next decade that would confuse the systems on offensive missiles. Eventually, this could force the expensive replacement of many missile fuzes.

Over the next 20 years, certain characteristics of missiles are likely to evolve. Future missiles will be somewhat lighter and thus longer ranged than present models.(Note 2) More missiles will be guided by fire-and-forget systems and armed with more sensitive target discrimination mechanisms. For special needs, such as the disabling of an installation with a high density of electronic systems, missile warheads could be armed with microwave-kill capabilities. Extremely long-range artillery could replace self-powered missiles for a number of tasks when effective electromagnetic guns are developed.

In sum, by 2015, visibility is even more likely to equal death on the battlefield. Some American platforms will enjoy a high level of protection because of their stand-off range, armor, or self-defense systems. This would force our opponents to expend considerable ingenuity and resources if they wished to overcome our defenses. On the other hand, American forces will have high rates of success in destroying enemy targets after we detect them. Thus, the game of hide-and-seek will continue to grow in importance in clashes between conventional forces.

SEEKING AND HIDING

Seeking and hiding will determine the parameters of the battlespace of 2015. American abilities in these regards will exceed those of opponents restricted to commercial or widely available military technologies. American detection capabilities will be determined by the performance of our sensors and integrators.

The purpose of sensing is to evaluate the battlespace environment for strategic, operational, and tactical purposes; the three tasks are interconnected. The integration of pieces of tactical information into a composite picture is a long-established intelligence method to assess a potential or actual opponent's strength and possible intentions. Awareness of the general situation, in turn, permits small-scale surveillance to focus on what is deemed particularly relevant. Modern sensor technology divides this latter task into a three-step process. One set of sensors indicates that an object justifies identification and tracking. Next, filters help focus on certain readings, to the exclusion of others. Finally, targets are pinpointed for prosecution. Often the last task is performed through yet other sensors. For the foreseeable future, synoptic pinpointing--finding targets by surveying everything at sufficiently high resolution--is unlikely to be possible except under very specialized conditions. Nonetheless, by 2015, some combination of sufficiently powerful computers and discriminating filters may automate a large share of what is currently accomplished only by human intelligence. The United States is likely to lead in such applications, but even our systems will tend to find only what they are looking for where they are programmed to look.

Sensors can be divided into two categories: stand-off systems that operate from space or the sea, and intrusive systems that operate from the air and the ground. The farther away a sensor operates from enemy-controlled area, the more survivable it is and the easier to deploy in circumstances short of war, or where the United States is not directly involved. Sensor employment is also influenced by the concept of plausible deniability. A high-flying reconnaissance aircraft can be used in peacetime without diplomatic risk only if it can avoid coming down in the territory it is observing. Similarly, a sensor that cannot be traced has greater political potential for use than one with an obvious national origin. By 2015, American forces may also have access to the information that states are routinely collecting by sensors monitoring their own activities, such as airport or seaport operations. Such access would be expected to fall off as international tensions mounted.

SPACE-BASED SENSORS

There are two types of current satellite surveillance systems: low-earth orbit and geosynchronous. Low-earth orbit satellites can take detailed pictures in the visible, infrared, and microwave bands, using Synthetic Aperture Radar (SAR). Such satellites can stay over one point for only a few minutes and can return only every few days. The tactical employment of such satellites is limited further because their flight schedules provide knowledge of surveillance times to those wishing to hide or cease activities during those periods. Geosynchronous satellites, used to scan most of a hemisphere, can provide continuous observation of specific spots, but they provide poor image resolution from their 35,000-kilometer-high orbits. Geosynchronous satellites generally are used for electronic intelligence and to search for infrared signatures of ballistic missiles. As surveillance satellites are adapted for tactical and operational, rather than strategic, use, placing them in medium-earth orbits may permit the combination of adequate resolution and continuous observation capabilities. If four Hubble-class sensors were placed in a 7,000-kilometer-high equatorial orbit, they could observe most points between 60 degrees north and south with a 2-meter resolution.(Note 3) Molniya orbits (north-south elliptical) also have some advantages if the area of interest is limited, such as the Northern Hemisphere.
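
The 60-degree figure follows directly from geometry: from an equatorial orbit of altitude h, the highest latitude visible at the limb is arccos(R/(R + h)), where R is the Earth's radius. The check below (Python, illustrative) uses the 7,000-kilometer altitude cited above.

```python
import math

R = 6371.0    # Earth radius, km
h = 7000.0    # orbital altitude cited in the text, km

max_visible_latitude = math.degrees(math.acos(R / (R + h)))
print(round(max_visible_latitude, 1))
# About 61.5 degrees -- consistent with coverage of "most points between
# 60 degrees north and south" from an equatorial medium-earth orbit.
```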

Alternatively, the success of the Ballistic Missile Defense Organization's multiple-sensor technology integration (MSTI) and Clementine spacecraft suggests that inexpensive light satellites can provide adequate resolution in low-earth orbits. Each cost roughly $30 to $50 million, had multiple sensors, weighed on the order of a few hundred kilograms, and was capable of image resolution down to 20 meters.(Note 4) A sufficiently large fleet of such satellites with software-linked sensors from many different sources could keep any one location under near-constant surveillance. Using many such satellites to look at a single point would render useless attempts at hiding from observation.

While the United States is currently more advanced than other countries in building surveillance satellites, how secure is that lead? Russia and France are not far behind in sophistication, and China and Japan are catching up. By 2015, India (whose IRS satellites have fairly advanced sensors), Israel, Korea, and Canada also may have made significant advances. Several satellites designed for environmental monitoring already work at the 20-meter resolution level, and the proliferation of a 1-meter resolution capability is not unimaginable. After all, at present, three American companies are vying to sell 1-meter systems on the world market. By 2005, France, Russia, and perhaps Japan may possess such satellite capabilities.

By 2015, commercially available satellites will have capabilities close to those of contemporary American military satellites. Obviously, military usefulness would be greatly enhanced by the delivery of images faster than even American satellites can now provide; current image-processing times from many American systems limit military responses to transient data. American forces have recently arranged to receive real-time data from France's Spot satellite. Indeed, military use of third-party satellites, even those configured for environmental purposes, will rise substantially over the next 20 years.

A significant reduction in the cost of lifting a kilogram into low-earth orbit could allow far more orbital and suborbital launches by 2015. Currently the lowest cost--for payloads of 10 metric tons or more--is roughly $10,000 per kilogram ($20,000 per kilogram for small payloads). Several aerospace contractors tout reusable launch vehicles that could cut this cost to perhaps $1,000 a kilogram,(Note 5) but how well grounded are these estimates? The Space Shuttle originally was predicated on low costs and frequent launches but resulted in few economies relative to older technologies. Still, some aerospace engineers claim that reusable launch vehicles (of which single-stage-to-orbit vehicles are one type) could become available in 10 years if the U.S. Government chose one and invested as little as $4 billion in its development. More conservatively, NASA has estimated it would require 25 years and $20 billion. Meanwhile, Orbital Sciences Corporation, the world leader in light launchers, is striving to reuse 75 percent of its rocket components. If successful, its per-kilogram launch costs could be as little as $5,000 for small, individually launched payloads.

Might American manufacturers alone be able to cut launch costs by such large margins? It is unlikely that the other major space-faring countries--by 2015, Russia, France, China, Japan and perhaps India--would permit the United States to maintain such an advantage. As followers, their research and development path to cheap space transport will be substantially shortened by learning from American successes and failures. One can assume that they would need about 10 years from the appearance of the first U.S. reusable launch system to possess the same capability themselves.

What would substantially cheaper access to space imply for American national security? Lower launch costs might increase the number of satellites in orbit, but how much it would do so would depend on whether the freedom to make satellites heavier would equate to making them cheaper. Other factors would keep the population of satellites under control, such as limited spectrum or geosynchronous "parking spaces". Conversely, very cheap space launch costs might enable the creation of buckshot-style antisatellite systems.

The largest single military result of lower launch costs might be cheaper ordnance delivery. At a launch cost of $10,000 per kilogram to low-earth orbit, a 200-kilogram bomb (e.g., a Mk-82) could, in theory, be dropped on any location on earth for $2 million. Still, this price is only barely competitive with a cruise-missile delivery. Military effectiveness of low-orbit ordnance delivery also would require surmounting other obstacles: the lead times for launch preparation, the cost of keeping a rocket on the launch pad long enough for sufficient target opportunities to justify a cost-effective ordnance payload, the potential for confusing a conventional launch with a ballistic missile attack, vulnerabilities to space-based lasers, and plasma plumes that could limit midcourse corrections on missiles as they reenter the atmosphere.

If the $2 million launch cost were $200,000, however, the number of targets that could be allocated cost effectively to such rockets would be very large. By 2015, if reusable spacecraft are cheap, easy and quick to launch (particularly if they are untethered from fixed-launch facilities), range could greatly diminish as a factor in heavy conventional warfare. Of course one has to consider current treaty restrictions against space-based weaponry and fractional orbital bombardment systems, as well as limits on the number of strategic rockets and on the range of tactical rockets.
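
The cost arithmetic in the two preceding paragraphs is straightforward; the sketch below (Python, illustrative only, using the per-kilogram figures quoted earlier) makes it explicit.

```python
bomb_mass_kg = 200            # roughly a Mk-82
cost_per_kg_now = 10000       # dollars per kilogram to low-earth orbit (text's figure)
cost_per_kg_cheap = 1000      # the reusable-launcher target cited above

print(bomb_mass_kg * cost_per_kg_now)      # $2,000,000 per bomb delivered from orbit
print(bomb_mass_kg * cost_per_kg_cheap)    # $200,000 per bomb if launch costs fall tenfold
# At $2 million the method barely competes with a cruise missile; at a tenth of that,
# the set of targets worth attacking this way becomes very large.
```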

NAVAL SYSTEMS

Unless a rivalry develops between two great naval powers, the next 20 years are unlikely to produce dramatic improvements in deep-ocean nuclear submarine technology. Instead, underwater technology developments will focus on shallow-water capabilities and nonnuclear boats, particularly those driven by air-independent fuel-cell systems. Nonnuclear boat designs are producing quieter submarines and ones capable of longer sustained operations. The proliferation of small attack submarines of European design is likely to make littoral operations increasingly risky. Even so, the current generation of submarines in developing-world navies is below the technology level at which the U.S. Navy can honestly profess great concern. But the prospective improvement in the next generation of European designs--and the likelihood of equally sophisticated Asian-designed submarines appearing--suggests that the U.S. Navy will have some cause for worry from both the quality and numbers of nonnuclear foreign submarines by 2015.

By 2015, mine warfare is likely to pose graver risks to large warships operating near shore and in shallow waters. Although not a widely reported fact, antiship mines caused more damage to allied warships in the Gulf War than any of the other more highly publicized systems. Future shallow-water mines (e.g., plastic mines) are likely to be increasingly difficult to detect and defeat, particularly if they are cued by independent sensor systems. Those who design such mines will strive to give them the acoustic signature of rocks. Their true character would be revealed only when fired. At that point, they would take on the characteristics of torpedoes, capable of sinking even the largest ships when used in concentration. The only impediment to the increasing sophistication of mines over the next 20 years is that most Western countries--with the exception of Italy--are not pressing the development of such technology. Advances are more likely to come from developing-world maritime states.

Naval systems used as sensors have certain advantages. They can be legally deployed prior to engagement. Standing offshore, they can pick up electronic intelligence and, through acoustic sensors, can monitor port operations. They can observe flight operations over coastal cities, peer into mountainous terrain and, from some locations, acquire radar signatures that hug the earth. These functions do have limits. For example, line-of-sight requirements mean a naval sensor must be 30 meters high to observe the closest shoreline or small vessels in harbor at a distance of some 20 kilometers offshore.
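
The 30-meter/20-kilometer figure is the standard line-of-sight horizon calculation, d = sqrt(2Rh) for a sensor of height h above a smooth Earth of radius R, neglecting refraction. A quick check (Python, illustrative):

```python
import math

R = 6371000.0    # Earth radius, meters

def horizon_distance_km(sensor_height_m):
    """Distance to the geometric horizon, ignoring atmospheric refraction."""
    return math.sqrt(2 * R * sensor_height_m) / 1000.0

print(round(horizon_distance_km(30), 1))    # about 19.6 km for a 30-meter mast
print(round(horizon_distance_km(10), 1))    # about 11.3 km for a much lower buoy
```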

Nevertheless, by 2015, bringing a large ship to within 20 kilometers of another nation's coastline to monitor activities may be both risky and inefficient. Risk would come from being a visible target within range of many land-based systems. Inefficiency would result from the limited range of any single platform. Still, naval air launched from a single platform would extend the observation coverage the platform could provide by an order of magnitude. But inefficiency also would stem from the difficulty of sustaining operations for any length of time without rotating ships. Generally, three ships are required to keep one on station indefinitely, but for some areas remote from North America, such as the Indian Ocean, this figure may be closer to five-to-one for sustained American naval operations. Thus, a series of buoys, possibly complemented by unmanned aerial vehicles (UAVs), might prove far more efficient at collecting signatures. Buoys do not stand as tall as ships but, in sufficient combination, might offer a radar "dish" of sufficient strength to simulate effectively today's land-based, over-the-horizon backscatter radars. Distributed buoys would have to pass large quantities of data back and forth to form a coherent picture. Still, by 2015, such a capability could be possible with enough improvements in computing power.

A future variant on the acoustic naval sensor may be the seismic sensor that detects otherwise inaccessible vibrations caused by surface movement of large vehicles. It would take considerable practice before such signals could be translated into comprehensible patterns. As with offshore buoys, large numbers of sensors coordinated with high-bandwidth links and plenty of exercises would be required before usable information routinely could be expected to be collected from such systems.

AIRBORNE SENSORS

Unmanned aerial vehicles are the subject of a number of Defense Department programs aimed at the creation of many different types for a wide array of missions. Although American UAVs set the standards for endurance and capabilities, 30 other countries also make UAVs of varying degrees of sophistication (e.g., in terms of sensor package, system integration and platform stealthiness). As a result, the United States does not have the overwhelming lead in this technology that it possesses in space sensors. UAVs function much closer to a given surveillance area than satellites can. Thus, a sensor package that offers a 10-meter resolution on a satellite can offer a 0.1-meter resolution when mounted on a UAV. Of course, the field of view from a UAV also is far smaller. But UAVs enjoy the enormous advantage over space-based optical sensors of being able to operate under cloud cover. Given their special capabilities, UAV sensors can identify an object that sensors on a satellite can only spot.
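
The satellite-versus-UAV comparison reflects a simple proportionality: for a sensor of fixed angular resolution, ground resolution scales linearly with range. The altitudes in the sketch below (Python) are assumptions chosen only to illustrate the hundredfold difference.

```python
def ground_resolution_m(reference_resolution_m, reference_range_km, new_range_km):
    """Ground resolution at a new range, assuming fixed angular resolution."""
    return reference_resolution_m * (new_range_km / reference_range_km)

# A sensor giving 10-meter resolution from an assumed 500 km orbit would give roughly
# 0.1-meter resolution from a UAV at an assumed 5 km -- a hundredfold gain, traded
# against a correspondingly narrower field of view.
print(ground_resolution_m(10.0, 500.0, 5.0))    # 0.1
```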

However, UAVs have great disadvantages. Because they violate airspace, they can create political problems when flown in other than wartime circumstances. UAVs are manpower intensive to operate and require that operators be within relatively close range of the battlefield. If spotted, UAVs can be blinded or destroyed. While UAVs can be stealthier than aircraft when flown at night, they are far more observable by day, and the air turbulence they create also may reveal their presence. Unlike stealth aircraft, UAVs are useless if not communicating. Since most UAV communication is through imagery, they need to use fairly high-power, high-bandwidth channels for transmission. This increases the risk of their detection. Alternatively, UAVs can carry film cameras, but that delays the delivery of useful imagery. Current acquisition doctrine seems to favor long loitering times and high levels of stealth. Both increase UAV costs. An alternative doctrine of many, cheap, short-loiter UAVs may prove preferable.

GROUND-BASED SENSORS

Within the next 20 years, ground-based sensors are likely to be greatly improved. Ground-based sensors would be useful for picking up signatures carried through the atmosphere, notably sound and vapors. For example, a bomber flying low to evade radar may leave a very distinct signature pattern among densely placed acoustic sensors.

Very sensitive chemical sensors are under active development for civilian and military purposes alike. "Sniffers" could detect the presence of human soldiers, as well as the emissions of mechanical objects. The movement of metal objects can also be detected through their effect on magnetic fields (in the same way that traffic signals are tripped by vehicle movements). Gravimetric sensors are being developed to differentiate among passing vehicles--for example, empty, lightly loaded, or densely packed trucks.

Long-range and short-range sensors will require different deployment doctrines. Short-range sensors can only provide wide area coverage if they are employed in large numbers. Thus, they must be inexpensive to be cost-effective. Adequate resolution, such as for triangulation of signature sources, requires they be networked in real time. Short-range sensors will work best as adjuncts to other sensors, for confirmation and for complicating the work of opponents charged with eliminating offensive signatures.
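
A small sketch (Python, with invented numbers) shows why networking matters for resolution: two short-range sensors that each measure only a bearing to an emitter can fix its position by intersecting the two bearing lines, something neither sensor can do alone.

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing lines (bearings clockwise from north) on a local
    flat grid in meters; returns the estimated emitter position, or None if
    the bearings are parallel."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t * d1[0], y1 + t * d1[1])

# Two acoustic sensors 1,000 meters apart both detect the same vehicle.
print(triangulate((0.0, 0.0), 45.0, (1000.0, 0.0), 315.0))    # fix near (500, 500)
```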

The human senses coupled to the human brain provide the best ground-based sensor. The soldier of 2015 is likely to be equipped with a portable supercomputer, coupled with digital radio-based communications capable of relatively reliable video data exchange. Such a soldier could conceivably navigate with high-resolution, near-real-time photographic maps, perhaps linked with portable expert systems and might even operate using systems capable of simulating alternative courses of action, allowing the evaluation of possible consequences in advance.

INTEGRATION

It is one thing to gather all sorts of signature information about the battlefield; it is quite another to integrate such information into a coherent picture. Systems integration has gained a notorious reputation for taking longer and working more poorly than previously planned. Many weapon systems, for instance, carry electronics whose system parameters had to be frozen (and thus grew obsolescent) over the many years required to tie all the subsystems together.

Systems integration may fare better in the future than we have a right to expect. Substantial progress has been made in understanding the problems of systems integration. Data fusion, an application which puts great pressure on systems integration, is recognized as a skill that must be mastered to survive on the future battlefield. The great value of reliable software was emphasized by the controversy over the Strategic Defense Initiative. A mix of open-systems design philosophy, object-oriented programming, megaprogramming, tools integration, and Computer Assisted Software Engineering (CASE) may generate reliable techniques for managing complexity. Although the American defense industrial base is being severely reduced, a large number of American defense-oriented systems personnel may be able to exercise their talents in nondefense projects. These range from designing public infrastructure, such as intelligent vehicle-highway systems, and earth observation, to intelligent manufacturing and outfitting the global information infrastructure. Simulation is evolving into a technology through which systems integration concepts can be tested. Finally, the American armed forces are beginning to understand how the failure to interwork information systems from the various warfighting communities hinders the exploitation of the information collected. Continued emphasis on supporting joint and combined operations puts pressure on each service to exchange information with the rest.

The United States is likely to remain the world leader in systems engineering, as indicated by its substantial lead in software exports. A conservative estimate is that the United States will retain a modest advantage over Europe, a notable lead over Japan, and a wide edge over the three other potential great powers: Russia, China, and India. These advantages, however, will be most evident only with very large, world-class systems. Foreign countries are rapidly improving their software capabilities, whether measured in terms of software maturity or clever algorithms. To the extent that useful military systems can be constructed with medium-scale software, American advantages could be less significant by 2015 than at present.

Nonetheless, to a great extent, systems integration can be purchased abroad rather than developed at home. A country does not need an extensive higher educational infrastructure for that purpose if the graduate schools of the United States can educate that state's engineering elite. More than half of engineering doctorates awarded in this country are conferred on non-Americans, most from developing countries. While some remain and strengthen American society, most eventually return home.

Similarly, a nation can buy systems integration technology through purchase of an entire complex. Examples include highly sophisticated process control machinery (e.g., a modern refinery) or air traffic control systems, which resemble tomorrow's command-and-control systems. In 10 to 20 years, countries may be purchasing even more sophisticated energy complexes. A country's engineers may be able to decipher the techniques that make systems integration work at the highest level. After that, if they could match the software and the successively lower levels of aggregation, they could acquire a total systems expertise almost at American levels. Brazil's $1.5 billion Amazon monitoring contract, recently won by Raytheon, may be a prototype of a system that can monitor a nation's entire defense zone.

COMMERCIAL CAPABILITIES

It would be very difficult to maintain the distinct American advantage in information-based warfare over the next 20 years. The relevant technologies are decreasingly the products of classified military-industrial complexes and increasingly the products of the commercial marketplace. Over the next 10 years, a sophisticated opponent will be able to buy or lease a wide panoply of capabilities from around the world: in GPS, surveillance, communications, direct broadcast, systems integration, internetworking, cryptography, and air-based imaging. Furthermore, the costs of such purchases will progressively decrease. While the U.S. military still may enjoy a lead in each of these areas of information-based warfare, our lead may have decreased considerably compared to 1995, despite having been purchased at a large cost multiple.

For example, GPS is now universally available. Signals can be received by devices costing a few hundred dollars. In theory, the American military could degrade GPS signals so that our forces could determine locations far more closely than our adversaries, limited to 100-meter accuracies, can. In practice, three factors militate against this safeguard. First, the U.S. Government has promoted the use of GPS for civilian purposes, most notably commercial aviation. Only a major and prolonged crisis could justify the global degradation of information that we have persuaded others to rely upon for their safety.(Note 6) But accurate GPS data could enable a rocket attack against U.S. forces deployed in smaller contingencies. Since U.S. forces have access to the classified GPS signal, we might find it useful to jam the unclassified signal locally, but this is no panacea either. Second, GPS may be complemented by other navigation systems, such as Glonass and future additions to the Inmarsat and other communications constellations. Europeans are mulling the value of their own navigational capabilities. Third, the development of differential GPS means that if a set of fixed points near a target can be ascertained with precision, the target can be located with similar precision. Differential GPS systems are likely to proliferate throughout North America, Europe, and East Asia by 2000. Their accuracy exceeds that of Military Specification systems without differential correction.
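
The differential-GPS idea can be sketched very simply. In practice corrections are applied to the individual satellite range measurements, but the position-domain simplification below (Python, with invented numbers) conveys the principle: a receiver at a precisely surveyed point measures the current error and broadcasts a correction that nearby receivers apply to their own fixes.

```python
# Local grid coordinates in meters; all values are invented for illustration.

surveyed_position = (1000.00, 2000.00)    # reference station's true, surveyed location
reference_gps_fix = (1062.00, 1955.00)    # what its receiver reports at this moment

# The broadcast correction is simply the error observed at the reference station.
correction = (surveyed_position[0] - reference_gps_fix[0],
              surveyed_position[1] - reference_gps_fix[1])

raw_fix_near_target = (3540.00, 4195.00)  # uncorrected fix taken a few kilometers away
corrected_fix = (raw_fix_near_target[0] + correction[0],
                 raw_fix_near_target[1] + correction[1])

print(correction)       # (-62.0, 45.0): the locally observed GPS error
print(corrected_fix)    # (3478.0, 4240.0): the raw fix shifted by the same amount
```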

GPS coupled with sufficiently good surveillance data theoretically places virtually every fixed facility at risk. Although U.S. forces employ camouflage and other deceptive tactics for many field installations, most logistics dumps, barracks, and command headquarters cannot be well hidden. Many such facilities could be identified and located if someone knew their general vicinity. In fact, if the facility were public, a terrorist with a portable GPS device would suffice to target it.

Overhead surveillance can locate fixed facilities with accuracy to within a meter or two. At present, the United States and Russia alone can perform such detailed surveillance, but with the fall of the Soviet Union, a vigorous market has developed in Russian 2-meter imagery. The decision to permit sales of 1-meter imagery from U.S. satellites would present any government or terrorist organization with the ability to collect a considerable volume of intelligence (until the United States exercises its prerogative of shutting it off). Over the next 20 years, the sale of satellites with such capabilities would permit many countries to acquire and transmit such imagery in near-real time. The obvious advantages offered by imagery to our forces in the Gulf War have prompted many states to consider acquiring better surveillance satellites, notably the French, the Gulf states, and the Japanese--the last under the cover of disaster monitoring.

Another rapidly burgeoning information-gathering capability flows from the use of digital video cameras mounted on UAVs. For the last 10 years, an American company has been exporting digital-imaging systems that can collect high-resolution imagery 50 miles to each side with real-time data-links to ground locations. Video cameras are inexpensive, but the resolution of the mid-1990s (just under 300 lines) is relatively imprecise. The advent of high-definition television will create a market for high-resolution cameras, offering two-to-three times the line density. Their digitization will certainly take place by 2015. Electronic still cameras have been slow coming to market, but they are already being sold, and their popularity should rise sharply within 10 years. Digital cellular telephony is already available through several technologies. By 2015 it will be both ubiquitous and capable of sufficiently high bandwidth to transmit imagery directly. Any number of Asian companies will make such equipment commercially available.

American forces might attempt to deny an enemy such communications capability by blocking access to third-party satellites, but such an attempt could present several problems of a political nature. If every satellite owner were to cooperate, then access by our opponent to satellite links might be blocked, but what if cooperation were denied? Jamming signals to and from geosynchronous satellites frequently requires being in the line of sight. Global low-earth-orbit cellular systems would make it even more difficult to deny communications. A system's managers could refuse to transmit signals into and out of a region, but doing so would eliminate service for nonbelligerent neighboring states. Jamming signals also could limit local use of global cellular systems. However, it could be very difficult to shut down a system used by irregular forces operating inside a friendly country or to interrupt a primitive command and control system based on citizen's band radio. Similar difficulties would arise in interrupting another nation's air traffic control network without also interfering with international air traffic control operations in the general vicinity.

The rapid expansion of cellular communications may complicate targeting even further. At present, cellular systems use mobile terminals but fixed switches. The latter can be targeted easily. Yet, as electronic components continue to be reduced in size, future switches may themselves become mobile. Already, the development of personal communication systems points in this direction. One American company has managed to put the electronics for a modest-size cellular switch inside a large briefcase. The smaller the cell, the less power is required to operate it and the smaller its electronic signature. Thus, by 2015, some nodes may be virtually impossible to find and target.

Even without sensor proliferation, increasing global communications connectivity will decrease the chances for military activity to occur unnoticed. The daylight movement of an infantry platoon past a village can be kept secret from those outside the area if it is not connected electronically with the wider world. But as developing-world hinterlands become tied into the global communications network, such movements are more apt to be reported. Indeed, the marriage of digital video cameras with digital cellular, the products of which should be widespread throughout the Americas and Eurasia by 2015, means that many military movements are potentially liable to detection.

Direct broadcast satellite (DBS) television may become ubiquitous in most areas of the developing world over the next decade. Most such satellite broadcasts reach several countries at once. Thus, jamming a particular channel would entail international complications. Again, unless the ownership of these satellites is heavily concentrated, some owner likely would be willing to provide television signals even for international outlaw states.

The Internet provides another broad conduit of data. An enemy that has retained phone connections can retain Internet service and thus the ability to broadcast messages internationally. As electronic commerce moves onto the Net, even a country facing an "information blockade" could get its software cast into a chip in any one of the world's many silicon foundries. More generally, the ubiquity of Internet connections makes it easier to acquire the world's knowledge of unclassified technical information.

What prevents the United States from destroying an opponent's space systems? One obstacle would be presented by an enemy using a third party's satellites. Unless every country with surveillance capability agrees not to sell its imagery, a route from neutral satellites to recipients hostile to the United States would not be difficult to create. In the Gulf War, it was sufficient to reach agreement with Russia to protect our "left hook" maneuver into Iraq from being reported to Baghdad. But with every passing year there will be more satellite operators, making such embargoes increasingly difficult to maintain. Moreover, we may not always be in conflict with a pariah state like Iraq. Ostensibly neutral powers may surreptitiously help our opponent. The same logic applies to communications satellites. Unless every transponder's owner is careful to avoid uplinks from our foes, transmission will occur. And there are more than 1,000 transponders within sight of every point on the globe. The United States could demand positive proof that the owners of every possible space system are cooperating with us and destroy the satellites of those who refuse--but would we be willing to enforce this dictum in any conflict short of a global war?

HIDING

Although the balance between hiders and seekers is likely to tilt towards seekers, hiders also will be able to take advantage of new technologies such as stealth. Successive generations of U.S. equipment will incorporate increasing degrees of stealth; this is already true in regard to high performance aircraft and submarines. Stealth principles are being extended to helicopters, certain surface ship classes, and even tanks. Current techniques include radar-absorbing materials, specially crafted surfaces, and false signal generators. Soon, noise-suppressing devices, the kind being engineered for vacuum cleaners, will be added to the stealth armory. Although the United States enjoys a strong lead in the area, component technologies already are understood in Europe, Russia and Japan.

Stealth, however, is no panacea. Many platforms use stealth not to hide but to delay detection, lowering their exposure time beneath the enemy engagement cycle. Information technologies are reducing the period required to engage a target, and as sensors of increasing variety, number, and acuity appear, the cost of achieving a given level of stealthiness will increase. It takes considerable maintenance to keep stealth aircraft stealthy. The more expensive the platform, the fewer can be bought, the more they need to be protected, the stealthier and more complex they become, and so on. By 2015 the types of platforms that can be made stealthy in a cost-effective manner may be roughly the same as they are in 1995: submarines, night-attack aircraft, and certain special operations forces equipment.

Other methods of hiding are operational: the proper employment of force elements, taking advantage of terrain, darkness, and weather, and the use of decoys and camouflage. Operational and tactical adaptations to ever-improving and proliferating sensors, and the way sensors will be employed, are difficult to anticipate. Success against sensors will vary by national doctrine, and even from unit to unit. Electronic decoys designed to disguise force concentrations may be employed at the tactical, operational, and strategic level.(Note 7) But platform decoys are likely to decline in effectiveness as sensors improve. Decoys for smaller weapons and sensors may increase in utility by 2015.

Another operational route to stealth is to disguise military platforms as commercial ones. Thus trucks could be used to carry missiles, merchant ships could host hidden naval guns, and bombings could be carried out by wide-bodied aircraft. As information processing elements become smaller, they can more easily fit within such platforms without distorting their shapes into distinctly recognizable profiles. An enemy whose assets are so hidden forces the other side to target assets not exclusively by signature but by more operational attributes (e.g., any truck here must be up to no good). In the end, the other side must engage far more targets, but at the risk of blurring the distinction between what is and is not considered fair game. The more bombs, the heavier the logistics trail, and the greater the number of vulnerabilities associated with supply operations. Trees may come to be better protected by sticking them in the forest rather than by building an obvious brick wall around them.
