DCOM - Chapter 4

Institute for National Strategic Studies


DEFENDING CYBERSPACE AND OTHER METAPHORS

MARTIN LIBICKI


The Retro Revolution

Sections

  • The Vocabulary of Strategic Conflict
  • The Ascendancy of Intelligence Operations
  • Retarding Reform of Acquisition
  • A Concluding Thought

    One of the many ironies of information warfare is its retro nature. On one hand, information warfare reflects the heady advances of information technology and anticipates the rich information infrastructure of the future, one we will all have to cope with and have already become dependent on. On the other hand, because metaphor, rather than experience, is the currency of discussion, the logic of information warfare often harkens back to the darkest days of the Cold War, yielding the following three atavistic features:

      • the return of the vocabulary of strategic conflict,
      • the ascendancy of intelligence operations, and
      • the retarding of acquisition reform.

    These provide yet one more reason why the metaphors of past wars must be scrutinized, so that their application does not obscure, rather than reveal, the essence of information warfare.

    The Vocabulary of Strategic Conflict

    Can information warfare be used strategically? Proponents have argued that a well-placed attack on a nation's information infrastructure might, like Douhet's airplane, permit a nation to go around the other side's forces and strike directly at its infrastructure. The atomic bomb was the reductio ad nihilum of an earlier version of this dictum.

    The appropriateness of Cold War strategic conflict as a metaphor can be judged by examining efforts to apply it. The concepts of deterrence and graduated response were dissected in the last essay. Four other concepts can be considered: (a) indications and warning, (b) minimum essential information infrastructure (MEII), (c) defense conditions (DEFCONs) as applied to information warfare, and (d) reconstitution.

    What would constitute indications and warning of strategic information attack? The United States thought it understood what would precede a Soviet tank surge into Germany (e.g., a mobilization of trucks) or a nuclear attack (e.g., the movement of top officials into prepared bunkers). But a strategic information warfare attack probably would not resemble anything previously experienced or planned for.

    One key difference between an information attack and a physical attack is that the latter requires the expensive, observable maintenance or restoration of military resources to attack status. What, if anything, would constitute attack status for information warriors? Would an information warfare attack be preceded by information probes? Perhaps such feints would force systems administrators to tighten security, only to have that security fall back as users weary of the effort needed to maintain it. Or would feints be avoided because they would induce permanent security measures (such as better software), making systems more impervious to attack?[71]

    In information warfare there is no predetermined lead time between ignition and detonation. Bad code might be inserted into a system years before it is needed, simply because an opportunity to insert it arose unexpectedly. When needed, the code would be activated by external signals. True, bad code cannot sit around forever: software upgrades may clean it out, and the longer the code sits, the greater the odds that it is found or ignites early. Yet the cost of maintaining bad code (e.g., periodically checking on it) is probably low.

    Determining an MEII for carefully defined defense scenarios may be a useful exercise. In the event of complete system failure, a compact minimum capability for each infrastructure may be needed to bootstrap recovery operations. Yet, in general, a list of critical nodes and links that would constitute an MEII will be undefinable per se,[72] unknowable (how can outsiders determine the key processes in a system and ensure that they stay the same from one year to the next?), and obsolete well into its bureaucratic approval cycle (the NII is changing rapidly and has a long way to go before it gels). The government lacks the tools to protect only key nodes. It should instead have policies that encourage system owners to protect themselves; they, in turn, will determine what needs to be protected and how.
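
    One modest example of the kind of self-protection a system owner might adopt, and of how long-dormant bad code tends to be discovered during routine maintenance, is a file-integrity baseline: hash the files that matter, then compare against those hashes at every audit. The sketch below is illustrative only; it assumes Python, and the monitored paths are hypothetical.

        # integrity_baseline.py -- minimal file-integrity audit (illustrative sketch)
        import hashlib
        import json
        import pathlib

        # Hypothetical files a system owner has decided are worth watching.
        CRITICAL = [pathlib.Path(p) for p in ("/opt/dispatch/control.bin", "/opt/dispatch/config.ini")]
        BASELINE = pathlib.Path("baseline.json")

        def digest(path: pathlib.Path) -> str:
            """SHA-256 of a file's current contents."""
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def record_baseline() -> None:
            """Trust what is present now and remember its hashes."""
            BASELINE.write_text(json.dumps({str(p): digest(p) for p in CRITICAL}))

        def audit() -> list[str]:
            """Name every watched file whose contents no longer match the baseline."""
            known = json.loads(BASELINE.read_text())
            return [str(p) for p in CRITICAL if known.get(str(p)) != digest(p)]

        if __name__ == "__main__":
            if BASELINE.exists():
                print("changed since baseline:", audit())
            else:
                record_baseline()

    Such a check says nothing about code that was already present when the baseline was recorded, which is precisely the problem posed by early insertion.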

    Having a DEFCON-like mechanism for hacker attacks makes only a little sense. Organizations can respond to a rising threat of intrusion by increasing the difficulty of access or by restricting who may access which capabilities and files. But without indications and warning, knowing when to call for more stringent security measures is difficult.[73] The notion that an organization can relax most days because on some days it can tighten up is the wrong way to think about information assurance; by the time the threat is obvious, the viruses, worms, and Trojan horses may well have been implanted. Moreover, most of the NII is in private hands; system owners would take a national declaration of an information warfare warning as just one piece of evidence among many in deciding their system policies.
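
    For what it is worth, a DEFCON-like mechanism is simple to state, which is part of its appeal. The sketch below shows what a declared "information condition" typically reduces to: a table mapping alert levels to a handful of access controls. It is written in Python purely for illustration; the levels and controls are hypothetical.

        # infocon.py -- what a DEFCON-like information condition reduces to (illustrative sketch)
        from dataclasses import dataclass

        @dataclass
        class Posture:
            session_timeout_min: int   # idle sessions dropped after this many minutes
            remote_access: bool        # whether off-site logins are permitted
            two_factor_required: bool  # whether a second authentication factor is demanded

        # Hypothetical levels: 5 is routine operations, 1 is maximum alert.
        INFOCON = {
            5: Posture(session_timeout_min=120, remote_access=True, two_factor_required=False),
            3: Posture(session_timeout_min=30, remote_access=True, two_factor_required=True),
            1: Posture(session_timeout_min=10, remote_access=False, two_factor_required=True),
        }

        def posture_for(level: int) -> Posture:
            """Return the controls an organization would apply at a declared alert level."""
            return INFOCON[level]

        if __name__ == "__main__":
            print(posture_for(3))

    The difficulty identified above lies not in the table but in the trigger: nothing in such a scheme tells an organization when to move from level 5 to level 1, and code implanted at level 5 is still there at level 1.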

    The reconstitution[74] concept fails in the opposite direction. Whether few or many decades are needed to recover from a nuclear attack makes no difference to the outcome of a nuclear war likely to be decided over hours, days, or, at most, weeks and months. By contrast, the impact of many types of attack on the NII would be directly proportional to the duration of the outage or, in the case of bad information, to the time required for data reconstitution.[75] An attack on a natural gas distribution system in the middle of winter, for example, that cut off supplies for an hour might force people at home to put on sweaters, while workers in large offices might not notice at all. An interruption that lasted a full day might drive people into buildings with other sources of heat and force offices to shut.

    The Ascendancy of Intelligence Operations

    In the Cold War, the United States's struggle against a closed society raised the need for intelligence and, with that, the status of intelligence agencies. In a more open world (even with an increase in "peace" operations), the need for intelligence would seem logically to shrink -- open sources would mostly suffice. Yet information warfare brings back the need[76]; hence, as noted, its supporters in the intelligence community.

    As struggles over information -- thus, intelligence -- increasingly affect the conduct of conventional conflict, the mindset of intelligence is bound to pervade the warrior's mental constructs. In conventional combat, information on the performance of systems is only the beginning of strategy to counter those systems; a charging tank is terrifying even if the soldier knows its top speed, but data on the other side's information warfare systems constitute much, even most, of what is required to defeat those systems. The United States (and other nations) needs to hide the extent of its true capabilities (and vulnerabilities) and to devote considerable effort to determining counterpart strengths and weaknesses.

    Were knowledge about who can do what to whose information systems peripheral to the outcome of conflict, the public policy debate on defense could be conducted on the assumption that what is knowable both makes the real difference and can be understood (despite what intelligence operatives may think). Yet the more the struggle for information dominance determines the outcome of a war, the more uninformed, and therefore immaterial, public debate becomes. If public debate cannot inform, much less determine, how much information power is enough, how can it address the relative costs and benefits of a particular level of effort? The issue of ends versus means figured prominently in public debate about U.S. involvement in Vietnam and, more recently, its defense of Kuwait. But public influence on the generation and use of military power -- an effective secondary form of civilian control -- is meaningless if insufficient information is public.

    Intelligence is cousin to deception. As hiding and seeking assume larger roles in outcomes, each side will necessarily put more effort into testing the other's capabilities, to see what is and is not detectable. One side may feint, the other may fake (ostensibly responding to false negatives and allowing some positives to seem to move unscathed). Deception and counterdeception have always been part of war, but they were practiced mainly by the few while the majority operated tools of force. Tomorrow, deception and counterdeception could become requirements for all warriors, and many will have trouble thinking in ways such practice demands.

    Beyond tactical deception lies operational deception, which exploits the other side's preconceived notions (e.g., Japan's belief that the United States would invade it from the Aleutians). As Admiral Wylie[77] has pointed out, military campaigns come in two types, cumulative and sequential. In a cumulative campaign (such as the antishipping campaign against Japan during World War II), each successful move has an independent effect, and no single tactical deception counts for much. In a sequential campaign, each successful move permits another (for example, the success of D-Day enabled every subsequent military operation). A successful deception may remove the key stumbling block to a series of moves.

    Retarding Reform of Acquisition

    Despite strong opposition, two great shifts have begun since the mid-1980s in how armed forces are provisioned: from military specification (MILSPEC) to commercial off-the-shelf systems, and from Service-unique systems to systems designed for cross-Service internetworking. Both shifts threaten many of the fiefdoms that characterize defense acquisition.

    Information warfare offers opportunities for retrogression. It presents two obstacles to the Services' use of commercial systems. First, neither commercial hardware nor software is today well protected against painstaking malice. Commercial communications equipment, for instance, is rarely hardened against jamming or otherwise made invulnerable to spoofing (although spread-spectrum technologies in digital cellular phones offer some protection). Commercial software systems, developed for low-threat environments, are poorly protected against rogue code. Commercial networks are penetrated all the time. The military, which needs to operate in contested realms, cannot afford such vulnerability. Yet if it depends on today's commercial systems, it has no good choice but to insert security after the fact; the more security, the more often a proprietary solution is less expensive.

    Second, some in the Armed Services maintain that unless commercial hardware and software are rigorously inspected, no one can be sure they have not been tampered with.[78] Most commercial electronics originate, in whole or in large part, from Asia. What guarantee is there that someone there did not sneak a circuit onto a chip that, on being awakened, will send out a previously unseen signal to disable or corrupt the unit it sits in? Software provides numerous opportunities for planted bugs. RAND's "Day After in Cyberspace" scenario posited many incidents in which systems were made unsafe thanks to rogue code inserted by a contract shop in Bangalore, India. Ought the DOD not to accept software whose source code it has not itself rigorously inspected? Vendors of commercial software, for whom DOD is a minor customer, are likely to balk at such a condition; defense software houses, whose products are often purchased in source code, would have fewer problems with it.

    The threat of hacker warfare discourages internetworking systems across Service lines[79] (not to mention with allies, much less coalition partners). A system run by a single organization can have a single point of contact responsible for ensuring its integrity. But once systems are linked together, who becomes responsible? How does the Navy systems administrator, for instance, know that classified information sent from that office to the Army is treated there with the respect the Navy thinks necessary? How does the Army systems manager know that information received from the Air Force has not been tampered with on its journey? How does the Air Force know that what is coming in from a Navy system does not carry an insidious bug, worm, or Trojan horse? The world is full of trust-nobody security products that sit on the interfaces between systems, but in practice, interconnection is felt to be tantamount to unsafe computing.
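
    Of these worries, the narrowest (whether data were altered on the journey between one Service's system and another's) is the one such interface products handle best, typically by authenticating each message with a key the two sides share. The Python sketch below illustrates the idea; the key handling is deliberately naive and every name in it is hypothetical. It says nothing about the harder questions: whether the sending system was itself compromised, or whether the receiver will treat the data with the care the sender expects.

        # interface_check.py -- authenticate data crossing an organizational interface (illustrative sketch)
        import hashlib
        import hmac

        # Hypothetical pre-shared key; real systems would manage and rotate keys rather than hard-code them.
        SHARED_KEY = b"replace-with-managed-key"
        TAG_LEN = 32  # bytes in an HMAC-SHA256 tag

        def seal(message: bytes) -> bytes:
            """Sender side: append an HMAC-SHA256 tag so tampering in transit is detectable."""
            tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
            return message + tag

        def unseal(blob: bytes) -> bytes:
            """Receiver side: verify the tag before trusting the payload; reject anything altered."""
            message, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
            expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                raise ValueError("message failed integrity check")
            return message

        if __name__ == "__main__":
            payload = b"track report from sensor net"
            assert unseal(seal(payload)) == payload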

    A Concluding Thought

    Information warfare as a policy issue has yet to break the surface into public consciousness. If it does, the media[80] are prepared to argue that the threat of information warfare is completely fabricated -- that is, that it is the national security community's desperate attempt to recreate old threats it knew and loved so well. If the constructs of information warfare are taken from or used to revive earlier practices, its advocates will have only themselves to blame for being regarded as nostalgia buffs, and metaphor will have failed.
