DEFENDING THE NATIONAL INFORMATION INFRASTRUCTURE

Martin C. Libicki
Advanced Concepts, Technologies, and Information Strategies
Institute for National Strategic Studies
National Defense University

With every passing week, the United States appears to grow more vulnerable to attacks on its soft underbelly -- its national information infrastructure (NII). As this vulnerability is recast in military metaphors, the logic of national defense kicks in. Yet, as comforting as such logic feels, ascribing the problem to national defense would be a major mistake.

Systems security, of course, matters. DOD must assume that any enemy it engages will counterattack DOD's computers in order to disrupt military operations. Operators of commercial systems must be cognizant of -- and thus held responsible for -- the harm done to third parties if their systems are compromised. Those who introduce new commercial applications need to think through their potential for malicious misuse.

Yet reasonable prudence is a far cry from the notion that hacker attacks will constitute the 21st century's version of strategic warfare 1. Believing so belies common-sense aspects of both computers and national security. It also leads to policy prescriptions applied throughout society; such prescriptions could easily face opposition that would make proponents pine for the halcyon days of the Clipper chip. The last thing the United States needs is a reaction to computer risks which is (in words once applied to the fin-de-siècle Austro-Hungarian Empire) "desperate but not serious".

This essay argues that the task of securing the NII needs to be put in its proper perspective. Like safety, it is an issue that ought to be taken seriously, but paranoia about the threat is unwarranted. After a description of the nature of the threat, several points are offered.

Do's and Don'ts for Government policy follow.

The Threat Potential to the U.S. Information Infrastructure

Why are people even worried about attacks on the nation's information infrastructure?

These four factors suggest that the challenge of computer security will matter more to America's well-being tomorrow than it does today.

Does this, however, make the protection of the NII a matter of national security? Yes, to some extent.

Everyday Threats Engender Everyday Defenses

Systems abuse comes in many forms. Some are commonplace; they rely on normally present motivations (e.g., greed, thrills) and are no more than a high-tech version of carjacking and joyriding. Others are uncommon, difficult to anticipate, and serious. Systems owners can normally be expected to protect themselves against commonplace threats whose future probability and patterns can be predicted from their past. Uncommon but nonetheless serious threats are less likely to be watched for because they arise from less commonly acted-on human motives.

Deliberate systems abuse, much of which could come in the form of hacker attacks, comes in roughly six forms:

The first four types of attacks are or could be commonplace because they can be undertaken by individuals with something to gain. Because greed is eternal, for instance, the motive for breaking into and robbing a bank is ever-present. Ditto for stealing services. Threats against individuals (see the 1995 movie, "The Net"), although a potential tool of guerrilla war, are more likely to be motivated by private grudges and gripes. A fourth case, stealing data, is the high-tech version of the threat that protecting classified information guards against -- something the DOD already takes seriously every day.

The last two, corruption and disruption, best characterize the unexpected and malevolent nature of information warfare. Attackers require an external goal, and, for the most part, a concerted strategy and the time to carry it out.

Systems that face a known threat pattern (and whose owners would bear most or all of the cost of an attack) can determine an optimal level of protection. There is no reason to believe that they provide less protection against information attacks than they do against other threats to their well-being (e.g., shoplifting, embezzlement, customer lawsuits). The real worry is an attack that is rarely seen in the background environment -- and thus one against which there is less natural protection.

Insidious Attacks Often Put Attackers at Risk

Systems can generally be attacked in one of three basic ways: (1) through corrupted system hardware or software, (2) by using an insider, or (3) by external hacking -- plus combinations thereof (e.g., having an insider reveal vulnerabilities that facilitate subsequent hacking). An information infrastructure can also be attacked through physical means (e.g., jamming, microwaves, shot and shell), but only for the purpose of physical denial and associated blackmail. Such attacks require on-site presence and as such are akin to well-understood acts of terrorism. They carry far greater risks for the attacker.

The first, corrupted hardware or software, may be epitomized by the myth of the rogue employee of a U.S. microprocessor firm corrupting circuits in every PC chip so that all of them go bad simultaneously just when most needed. How simultaneity could be ensured without premature discovery is never explained. A slightly more plausible threat is a planted bug in a specific system (e.g., a computer that is disabled by an external signal or other preset conditions).

The second, the acquiescence or complicity of someone with the right privileges, is more likely. In this downsizing era, there is no shortage of disgruntled employees and ex-employees from whom to recruit 4. Exploiting corruption, whether inside the physical system or among trusted users, has obvious advantages, particularly against systems secured against outsiders but not insiders. The risks of someone getting caught, however, are far higher, because the chain of responsibility is more direct (and the number of suspects is far smaller than the billion-plus people with phone access). Recruiting such individuals from the outside involves risks akin to those of intelligence recruitment; if a recruit turns or is caught, the targeted system's owners are warned that it is being targeted. The fact that no such conspiracy has come to light (yet) suggests the number of attempts to date has been small. The higher the risk, the lower the odds of being able to penetrate a large number of systems undetected; using insiders is a better avenue for opportunistic or intermittent rather than systematic attack.

This leaves the hacker route. Most systems tend to divide the world into at least three parts: outsiders, users, and superusers. One popular route of attack for Internet-like networks is (1) to use a password attack so that the outsider is seen as a user, and (2) to use known weaknesses of operating systems (e.g., Unix) so that users can access superuser privileges. Once granted the privileges of a superuser, a hacker can read or alter files of other users or the system itself; control the system under attack; make it easier to reenter the system (even after tougher security measures are enforced); and insert rogue code (e.g., a virus, logic bomb, Trojan horse, etc.) for later exploitation.

The amount of damage a hacker can do without acquiring superuser privileges depends on how systems allocate ordinary privileges. There is very little that someone who is authorized only to use a phone can do to the system. Computer networks tend to be more vulnerable to abusers, especially when certain privileges are granted without being metered. Indeed, any system with enough users is bound to contain a few who can abuse resources, filch data, or otherwise gum up the works. While mechanisms to keep non-users off the system matter, from a security point of view, limiting what authorized users can do is of greater importance.
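
To make the point concrete, here is a minimal sketch (in Python, with hypothetical account names and limits) of what metering authorized users might look like: every action must be explicitly granted, and even granted actions draw against a quota, so a legitimate user cannot quietly gum up the works at scale.

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        name: str
        grants: set                                 # actions this user may perform
        quota: dict = field(default_factory=dict)   # remaining uses per action

    def authorize(account, action):
        """Deny by default; meter even permitted actions."""
        if action not in account.grants:
            raise PermissionError(f"{account.name} may not {action}")
        remaining = account.quota.get(action, 0)
        if remaining <= 0:
            raise PermissionError(f"{account.name} has exhausted its quota for {action}")
        account.quota[action] = remaining - 1

    clerk = Account("clerk", grants={"read_record"}, quota={"read_record": 100})
    authorize(clerk, "read_record")          # allowed; quota drops to 99
    try:
        authorize(clerk, "alter_record")     # never granted: refused outright
    except PermissionError as err:
        print("denied:", err)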

A variant attack method -- applicable only to communications networks open to the public -- is to flood the system with irrelevant communications or computerized requests for service. Systems where service is free or where accountability can otherwise be evaded tend to be more prone to such an attack. The weakness of such an attack is that it requires multiple separate sources (to minimize discovery) and that its effects last only as long as the calls keep coming in. Because communications channels within the United States tend to be much thicker than those which go overseas, overseas sites are a poor venue from which to start a flooding attack 5.

Systems Can Be Protected

Even though many computer systems run with insufficient regard for security, they can nevertheless be made quite secure. The fundamental idea is that protection is a point to be sought in a two-dimensional space (see table 1). One dimension is the degree of access: from totally closed to totally open. A system that secures itself only by keeping every bad guy out will make it difficult or impossible for good guys to do their work. The second dimension is resources (money, time, attention) versus sophistication. A sophisticated system keeps bad guys out without shutting out so many good guys; a cruder one must trade one for the other.

Table 1. Security Choices

                      Scrimp on Security            Spend on Security

  Tighten Access      Users are kept out or         Users can get in with
                      must alter their work         effort, but no hackers
                      habits.                       can.

  Loosen Access       Systems are vulnerable        Users can get in easily,
                      to attack.                    but most hackers cannot.

To start with the obvious method, a computer system that receives no input whatsoever from the outside world ("air-gapped") cannot be broken into (and no, one cannot spray a virus into the air in the hopes that a computer acquires it). If the original software is trusted (and the National Security Agency [NSA] has developed multilayer tests of trustworthiness), the system is secure (efficiency aside). Such a closed system is, of course, of limited value, but for some systems (e.g., nuclear systems) even the small chance of a security vulnerability more than outweighs the benefits of freer access.

The challenge for most systems, though, is letting them accept external input without putting important records or core operating programs at risk. One way to prevent compromise is to handle all inputs as data to be parsed (a process in which the computer decides what to do by analyzing what the message says) rather than as code to be executed directly. Security then consists of ensuring that no combination of computer responses to messages can affect a core operating program, directly or indirectly (almost all randomly generated data tend to result in error messages when parsed). To pursue a trivial example, there are no button combinations whose pressing will insert a virus into an automatic teller machine.
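
As an illustration of the parse-don't-execute principle, the following sketch (Python, with a hypothetical message format and handlers) accepts external messages only as text to be parsed against a fixed dispatch table; anything unrecognized can produce nothing worse than an error message.

    BALANCES = {"1234": 500}   # illustrative data store

    def handle_balance(acct):
        return f"balance {BALANCES.get(acct, 0)}"

    def handle_deposit(acct, amount):
        BALANCES[acct] = BALANCES.get(acct, 0) + int(amount)
        return "ok"

    # Fixed dispatch table: only these verbs exist; nothing else is callable.
    HANDLERS = {"BALANCE": handle_balance, "DEPOSIT": handle_deposit}

    def process(message):
        parts = message.strip().split()
        if not parts or parts[0] not in HANDLERS:
            return "error: unknown request"        # parse failure; nothing executed
        try:
            return HANDLERS[parts[0]](*parts[1:])
        except (TypeError, ValueError):
            return "error: malformed request"      # wrong arity or a bad number

    print(process("BALANCE 1234"))                 # -> balance 500
    print(process("rm -rf /; DROP TABLE"))         # -> error: unknown request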

Unfortunately, systems need to accept changes to core operating programs all the time. Absent more sophisticated filters, a tight security curtain may be needed around the few applications and superusers allowed to initiate changes (authorized users might have to work from specific terminals hardwired to the network, an option in Digital's VAX operating system). Another method that will cut down on viruses and logic bombs is to operate solely with programs stored on unerasable media such as CD-ROMs. Whenever programs must be altered, they can be rewritten, recompiled in a trusted environment, and fixed onto a new CD-ROM (the equipment to cut a CD-ROM now costs under a thousand dollars) 6.
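
A rough sketch of the unerasable-media idea follows; it assumes a hypothetical manifest of file hashes fixed on a read-only medium, against which a program is checked before it is loaded. The file names and paths are illustrative only.

    import hashlib
    import json
    import tempfile
    from pathlib import Path

    def sha256_of(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def verify_against_manifest(program, manifest):
        """True only if the program's hash matches the one fixed in the manifest."""
        trusted = json.loads(manifest.read_text())   # e.g. {"payroll.py": "<sha256>"}
        return trusted.get(program.name) == sha256_of(program)

    # Self-contained demonstration; in practice the manifest would sit on the
    # read-only medium and the check would run before every load.
    workdir = Path(tempfile.mkdtemp())
    program = workdir / "payroll.py"                 # hypothetical application
    program.write_text("print('run payroll')\n")
    manifest = workdir / "manifest.json"             # stands in for the CD-ROM copy
    manifest.write_text(json.dumps({program.name: sha256_of(program)}))

    print(verify_against_manifest(program, manifest))    # True: safe to load
    program.write_text("print('tampered')\n")            # simulate later alteration
    print(verify_against_manifest(program, manifest))    # False: refuse to run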

The technologies of encryption and especially digital signatures provide other security tools. Encryption is used to keep files from being read and to permit passwords to be sent through insecure channels. Digital signatures permit very strong links of authenticity and responsibility to be established between message and messenger. A digital signature is created by hashing the message and signing the hash with a private key, for which there is only one matching public key. If a user's public key can unlock the hash and the hash is consistent with the message, the message can be considered signed and uncorrupted. Thus computer systems can refuse unsigned messages or ensure that messages really originated from other trusted systems (and rogue insiders can be more easily traced). The private key never has to see the network (where it could be sniffed) or be stored on the system (where the untrustworthy could give it away). Digital signatures are being explored for Internet address generation and for secure Web browsers. Not only users but also machines, and perhaps individual processes, may all come with digital signatures 7.
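
The sign-and-verify sequence described above can be illustrated with a short sketch. It uses the modern third-party Python package "cryptography" (an assumption; the essay predates it, and any comparable library would serve): the message hash is signed with the private key, and only the matching public key will verify it.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.exceptions import InvalidSignature

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()          # the only key that can verify

    message = b"transfer 100 units to account 1234"
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),                           # the message hash that gets signed
    )

    try:
        public_key.verify(
            signature,
            message,                               # alter this and verification fails
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("signed and uncorrupted")
    except InvalidSignature:
        print("signature does not match message")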

Firewalls may also offer a degree of protection. Yet this method, the most popular way to protect computers attached to the Internet, needs a good deal more work before it can be used reliably without a great deal of careful attention to details in setting it up 8.

Most problems of systems security can be parsed into user sloppiness, systems sloppiness, and poor software. User sloppiness includes poorly chosen passwords or passwords left in public places. Systems sloppiness, likewise, includes a security regime that lets users choose their own passwords (or at least does not reject obvious ones), that does not remove default passwords or backdoors, that fails to install security patches, or that permits users enough access to total system resources to read or write files they should not have access to (particularly when those files control important processes). Poor software includes bugs that override security controls, or that permit errant users to crash the system, or, in general, anything that makes security unnecessarily difficult or discretionary.
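
One small piece of curing systems sloppiness -- rejecting obvious passwords at the moment they are chosen -- might look like the following sketch; the word list and rules are illustrative, not a complete policy.

    DEFAULTS = {"password", "admin", "guest", "letmein", "12345678"}

    def acceptable(password, username):
        candidate = password.lower()
        if len(password) < 8:
            return False                       # too short to resist guessing
        if candidate in DEFAULTS:
            return False                       # vendor default or common word
        if username.lower() in candidate:
            return False                       # derived from the account name
        if candidate.isdigit() or candidate.isalpha():
            return False                       # only one character class
        return True

    print(acceptable("password", "jsmith"))        # False
    print(acceptable("jsmith1996", "jsmith"))      # False
    print(acceptable("mK4!vr9#tulip", "jsmith"))   # True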

The head of the Computer Emergency Response Team (CERT) once estimated that well over 90 percent of all reported break-ins were made possible because hackers could exploit known but uncorrected weaknesses of the target system. For instance, the method hackers used to get into Rome Laboratory's computers in 1994 and Los Alamos's computers in 1996 was an unfixed bug in the Unix sendmail program -- the same bug exploited in the infamous Internet Worm incident of 1988. Fewer than one incident in ten fell into the category of no-one-knew-that-could-be-done, and most of these were understood to be theoretically possible, even if the exact method used was not.

Because most PC and workstation operating systems assume a benign world, rewriting them to make them secure against the best hackers is difficult; the more complex the software and security arrangements, the greater the odds of a hole. In security, the primitive is often superior to the sophisticated; there are fewer configurations to test 9.

Nevertheless, against the terrorist, the virtual NASDAQ market can be secured with higher confidence than can the physical NYSE stock floor -- if for no other reason than that technology permits a system's owners to control all of its access points and examine everything that comes through in minute detail. In the physical world, public streets cannot be so easily controlled, moving items cannot be so confidently checked, and proximity and force matter.

Thus, the most misleading guide to protecting information systems is the myth of the superhacker, the evil genius capable of penetrating any system he or she chooses. Militaries have conventionally been built with the understanding that there is no perfect defense or offense: no wall however thick will withstand a battering ram of sufficient size (and no battering ram however strong can take down a sufficiently thick wall). This analogy is specious when applied to computer systems. Systems are entered not because they are forced, but because they have holes amenable to some combination of bytes. The placement and distribution of holes is what counts, not the force used to get through them.

The NII's Vulnerability Should Not Be Exaggerated

How vulnerable is the NII? Sadly, no one really knows. The publicized incidents of phone phreaks, Internet hackers, and bank robberies may or may not be the tip of the iceberg. The common wisdom is that victims do not like to talk about how they have been had, but Citibank's decision to prosecute rather than cover up the perpetrators of a fairly large computer crime ($400,000 transferred, with another $10,000,000 sitting in perpetrator accounts awaiting withdrawal) suggests a change in perception and better prospects for reporting 10.

What does computer crime cost? The FBI's precise estimate is between $500 million and $5 billion -- in the same league as cellular telephone fraud (roughly $1 billion) and PBX toll-call fraud 11. Such estimates must be read carefully. For instance, most embezzlement today is computer crime because that is where financial records are kept; but embezzlement clearly predates the computer. The cost of a stolen phone call is much less than its price (because the call otherwise probably would not have been made; most phone systems have excess capacity; and the price of a phone call includes services, such as billing, that criminals do not need). The cost to a corporation of having its R&D looked at by competitors is almost impossible to assess but easy to assign an outsize figure to.

By the same token, systems security itself is not cheap. As much as three billion dollars a year is spent on anti-virus software, a figure that may well exceed the damage viruses would have caused had no such software been installed.

How frequent are Internet attacks? One way to estimate is to start with the 1,200 reports received by CERT in 1995 12. The Defense Information Systems Agency (DISA) used publicly distributed tools to attack unclassified defense systems; the attacks worked eight times out of nine. Only one in twenty victims knew they had been attacked, and only one in twenty of those reported it as they should have. If this 400:1 ratio is indicative (and Navy tests echo it), then 1,200 reports suggest roughly half a million Internet break-ins (even if very few do real damage). Using other DISA figures, the GAO estimated that DOD computers alone had been attacked 250,000 times in 1995 13.
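
The arithmetic behind that extrapolation is simple enough to show directly; the figures below are those cited in the text, not new data.

    reports = 1200            # CERT reports received in 1995
    detected = 1 / 20         # share of intrusions the victim notices (DISA tests)
    reported = 1 / 20         # share of detected intrusions actually reported

    multiplier = 1 / (detected * reported)          # 400 break-ins per report
    estimate = reports * multiplier
    print(f"~{estimate:,.0f} break-ins implied")    # ~480,000, about half a million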

The Internet, with its benign assumptions, is in any case hardly indicative of systems in general; it is not used for mission-critical tasks (military logistics being perhaps the most glaring exception), and if it becomes a mission-critical system whose compromise is a serious problem, it must evolve and will necessarily become more secure 14. Were some hacker, for instance, to invade and bring down the network here at the National Defense University, it would be difficult to distinguish the effects from the many times that the network is otherwise down (both by accident and for maintenance). Similarly, someone breaking into NDU's computers for information (none of which is classified) would, at best, find draft copies of papers that their authors would have been more than pleased to circulate on request.

One reason computer security lags is that incidents of breaking in have so far not been compelling. Although many facilities have been entered through their Internet gateways, the Internet itself has been brought down only once (by the 1988 Morris worm). No large phone or power distribution outage has been traced to hacking (the most serious incident affecting phones, in the Northeast in January 1991, was traced to a faulty software patch). No financial system has ever had its institutional integrity put at risk by hacker attacks. A parallel may be drawn with the security of the nation's rail system: unprotected rural train tracks are easy to sabotage, with grimmer results than network failure, but until recently no such sabotage had taken place for fifty years.

A system easy to abuse one way may not be easy to abuse in another. It is not the thousands of switches in the U.S. phone system that must be guarded but the few hundred signal transfer point (STP) computers. Phone phreaks attack the system by getting into and altering the many databases that indicate the status of calls and phone numbers. Presumably, with enough alterations, area telephone service can be terminated -- but only so long as the databases remain altered. It is far harder to plant a bug in the computer's operating system. Even though STP computers are interconnected through Internet protocols, serious study suggests it would be difficult for one to infect another.

Can someone destabilize a nation's stock market by scrambling the trading records of the prior day (à la Tom Clancy's Debt of Honor)? Possibly, but it is easy to forget how many separately managed computers record every stock transfer (the exchange's, each of the two brokers' systems, plus perhaps each client's systems, and so on). The simple expedient of archiving every transaction to an occasionally read medium (e.g., CD-ROM or even printouts) will foil most after-the-fact corruption, detect consistent in-process faults, and sometimes reveal deliberately intermittent ones.
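
A sketch of that expedient: chain each archived record to a hash of the one before it, so that altering a trade after the fact breaks the chain when the archive is re-read. The record fields are hypothetical.

    import hashlib
    import json

    def chained_hash(prev_hash, record):
        payload = prev_hash + json.dumps(record, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def archive(trades):
        """Return write-once archive entries, each carrying a chained hash."""
        entries, prev = [], ""
        for trade in trades:
            prev = chained_hash(prev, trade)
            entries.append({"trade": trade, "hash": prev})
        return entries

    def verify(entries):
        prev = ""
        for entry in entries:
            prev = chained_hash(prev, entry["trade"])
            if prev != entry["hash"]:
                return False                 # corruption detected at or before here
        return True

    log = archive([{"sym": "XYZ", "qty": 100}, {"sym": "ABC", "qty": -50}])
    print(verify(log))                       # True
    log[0]["trade"]["qty"] = 100000          # after-the-fact tampering
    print(verify(log))                       # False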

Can an individual's assets be stripped by erasing his bank account? A bank account is essentially an obligation by the bank to repay the depositor. This obligation does not disappear because the bank's record of it cannot be easily found.

Finally, a system's reliability involves not only its holes but its ability to detect its own corruption, the existence of backup data files and capabilities, the overall robustness of the system (including redundancy in routing), as well as the ability to restore its own integrity and raise its own security level on short notice.

All this notwithstanding, computer security is too weak in too many places to withstand systematic attack. Systems were once thought safe because really brilliant hackers were scarce; today, easy-to-use attack tools circulate on informal hacker networks.

Information Attacks Do Not Offer Obvious Strategic Gains

Although important computer systems can be secured against hacker attacks at modest cost, that does not mean that they will be secured. Increasingly common and sophisticated attempts may be the best guarantor that national computer systems will be made secure. If the absence of important incidents lulls systems administrators into inattention, an entrée is created for some group to launch a broad, simultaneous, disruptive attack across a variety of critical systems -- the barn door closes only after the horses are gone. For this reason, a sequence of pinpricks, or even a steadily rising crescendo of attacks, is the wrong strategy for an attacker; it creates its own inoculation. Strategic effectiveness requires that a nation's infrastructure be attacked in force all at once. No such attack has ever happened, but as of 6 December 1941, no country had ever been attacked across the Pacific Ocean either.

A key distinction needs to be made between a purposeless attack and a purposeful one. The problem with Japan's attack on Pearl Harbor was not so much sunk ships and dead sailors as it was U.S. strategic immobility while Japan conquered large chunks of Southeast Asia and Oceania. An attack on the NII which leaves an opening for strategic mischief is of far greater note than one which merely causes damage. A strategic motive for a Digital Pearl Harbor could be to dissuade the United States from military operations (e.g., against the attacking country), or to hinder their execution by disrupting mobilization, deployment, or command and control.

How much damage can a Digital Pearl Harbor cause? Suppose that hackers could shut down all phone service (and, with that, say, credit card purchases) nationwide for a week. The event would certainly be disruptive and costly (and more so every year), but as long as recovery times are measured in hours or even days, it would probably be less disruptive than certain natural events 15, such as a large snowstorm, flood, or earthquake -- indeed, far less so in terms of lost output than a modest-size recession. How much would the U.S. public have to be discomfited before it demanded that the United States, for instance, disengage from a part of the world the attacker cared about? The United States is more plausibly persuaded by such difficulty to desist before opponents whose neighborhoods are judged less worthwhile; it is less likely to withdraw before an opponent whose very power to strike the U.S. economic system suggests why that opponent must be put down.

For instance, it probably would not have been in North Vietnam's interest in 1966 to hire hackers to shut down the U.S. phone system. Doing so would have contravened the message that it was fighting the United States only because the United States was in its (albeit incorrectly defined) territory. Such an attack would have compromised its drive to build support in the United States for the disengagement of U.S. forces. It would also have portrayed North Vietnam as an opponent capable of hurting the United States at home, eroding the cautions that limited U.S. air operations against North Vietnam itself.

How well hacker attacks can delay, deny, destroy, or disrupt military operations is a more open question. An enemy at war should be expected to disrupt U.S. military systems as much as possible. But is there enough military potential in a concerted attack on the civilian infrastructure to merit concern?

Clearly there are vulnerabilities. Today's wars require a high volume of communication between the field and not only the Pentagon (say, from its Checkmate cell in the basement to the Black Hole cell in Riyadh) but also various support bases, control points, logistics depots, contractors, and consultants. A prolonged power, telephone, and/or E-mail cut-off would hurt command and control. Given the multiplicity of communications media and links in the United States, such a disruption would have to be widespread, coordinated, and uniformly successful to have any effect whatsoever. A disruption that lasted hours rather than days would be unlikely to affect outcomes very greatly; many services can be restored in that time unless some hard-to-replace physical item is damaged. Were U.S. commanders to exercise real-time control over operations using commercial telephone lines, a disruption would be more problematic -- but establishing military operations with such long and vulnerable tethers is unwise for many other reasons.

The effect of an extended disruption on troop or supply mobilization is more difficult to gauge; these are processes that typically take weeks or months to reach fruition. Overnight deliveries aside, logistics should be able to withstand minor disruption with little ultimate impact -- otherwise it is badly engineered to begin with (disruptions near the point of use are, of course, an expected feature of warfare).

Is it possible to disrupt communications and thereby retard or confound the nation's ability to respond to a foreign crisis? An enemy with precise and accurate knowledge of how decisions are made and how information is passed within the U.S. military might be able to get inside that cycle and do some non-trivial damage. But how easy is it for an adversary to know this? Not even insiders can count on such paths, and in an age when hierarchical information flow is giving way to networked information flow, the relevance of any one pathway is extremely suspect.

The difficulty of crafting a credible Digital Pearl Harbor is best illustrated by looking at the most widely reported scenario, RAND's "Day After in Cyberspace" 16. Over twenty incidents befall U.S. and allied information infrastructures, many stretching the limits of plausibility (e.g., three separate incidents tied to identical logic bombs, the simultaneous blackout of the Washington area's heterogeneous phone systems, rogue subcontractors affecting what in real life are triple-redundant systems, market crashes following the manipulation of an unspecifiable computer). Yet, in the end, other than some potential for mass panic, facts on the ground (i.e., in the Persian Gulf) are scarcely affected.

Socializing the Provision of Systems Security May Be Unwise

Is systems security a problem whose solution should be socialized rather than remain private? Consider a hypothetical scenario in which a refinery blows up and damages its neighborhood. The responsibility of the refiner for external damage ought logically to vary with what caused the damage in the first place 17.

(Needless to add, none of this at all excuses the perpetrator, who, if caught, is and ought to be subject to the full force of the law. By the same token, if such responsibility can be ascribed to a group or even a country, the justification for similar sanctions thereby exists.)

The force of this example comes from the fact that most of the NII is in private hands; if their owners bear the total costs of system failure they have all the incentives they need to protect themselves. Yet there are a few systems whose disruption carries public consequences: phone lines, energy distribution, funds transfer, and safety. If the threat is great enough, then they have to be secure -- even at the cost of yanking the control systems off networks. Often less costly remedies (e.g., more secure operating systems) suffice. Even the primitive solutions, though, are cheap compared to other steps the country takes to protect itself (e.g., nuclear deterrence). That said, the number of critical sectors is growing. Credit card validation is becoming as critical as funds transfer to hour-to-hour operations of the economy. Automated hospital systems are reaching the importance of mission-critical safety systems.

Should there be a central federal policy maker for guarding the NII; and, if so, who? DOD has both the resources and the national security mission. But its expertise is concentrated in the very agency fighting the spread of one of security's most potent tools, encryption. The military's approach of avoiding new systems that do not meet military specifications is costly when applied to technology with short life cycles and difficult when applied outside hierarchies. NIST, the second choice, has the talent but neither the funding nor the experience at telling other federal agencies what to do. Beyond DOD and NIST, the expertise gets thin and the mission does not quite fit.

Yet, the very concept of a single government commander for information defense is a bit of a reach. Any attempt to "war-room" an information crisis will find the commander armed with buttons that attach to very little outside the Government's immediate control. Repair and prevention will largely be in the hands of system owners, who manage their own systems, employ their own systems administrators, and rarely need to call on each other's or other common resources (so that there is no scarce-resource allocation problem). There is little evidence of any recovery or protection synergy that cuts across sectors under attack (say, power companies and funds transfer systems). Second, in terms of policy, each sector is different, not only in its vulnerabilities and in what an attack might do, but more importantly in how the government can influence its adoption of security measures. Judicious coordination to ensure defensive measures are not cross-threaded is always useful. A high-level coordinator could ensure the various agencies are doing what they are tasked to do; lower-level coordinators could work across-the-board issues (e.g., public key infrastructures). Beyond that, no czar is needed.

Some Things are Worth Doing

Because even the privately owned NII is, in some sense, a public resource, a role for the Government is not entirely unwarranted. But this role must be carefully circumscribed and focused. This section makes ten suggestions.

1. Figure out how vulnerable the NII really is 18. What can be damaged and how easily? What can be damaged through outside attack; what is vulnerable to suborned or even malevolent insiders? For what systems can attacks be detected as they occur, and by what means? What kind of recovery mechanisms are in place to restore operations after a disruption; after an act of corruption? How quickly can systems be patched to make them less vulnerable? A similar set of questions can be asked about the military's dependence on commercial systems. How thorough would outages of the phone-cum-Internet have to be to cripple military operations, and in what way: operations, cognitive support to operations, logistics (and if so, internal to the DOD or external as well), mobilization? What alternative avenues exist for military communications to go through? What suffers when the 95 percent of military communications that go through public networks must travel on the DOD-owned grid? A third set of questions relates to the existing software suites on which the NII runs: does, for instance, today's Unix need replacement, or are known fixes sufficient? How useful are test-and-patch kits for existing systems?

2. Fund research and development on enhanced security practices and tools and promote their dissemination through the economy. The United States spends a hundred million dollars a year in this area (split among DARPA, NSA, and others). Areas of research include more robust operating systems, cryptographic tools, assurance methodologies, tests, and, last but by no means least, standards. We know how to secure systems; what we do not know is how to make such knowledge automatic, interoperable, and easy to use. Cyberspace may need an information-security equivalent of the Underwriters' Laboratory, capable of developing standard tests for systems security.

3. Take the protection of military systems seriously. It should be assumed that any nation at war with the United States will attack military systems (especially unclassified logistics and mobilization systems) any way it can -- and hacker attacks are among the least risky ways of doing so. Assume that foreign intelligence operatives are, or soon will be, probing U.S. systems for vulnerabilities. DOD may also have legitimate concerns over classified systems in contractors' hands and over defense manufacturing facilities. It may be useful to stipulate that contractors to the U.S. military (even phone companies) have a reasonable basis for believing their systems are secure. Perhaps DOD needs some method of validating a vendor's source code while providing reasonable assurances that it will not be commercially compromised.

4. Concentrate on key sectors -- or more precisely, the key functions of key sectors. Since the government cannot protect these systems itself, it may have to persuade their owners (through its various devices, such as contracts, regulation, technology assistance, and the bully pulpit) to take security and backup seriously. Several organizations are useful fora for discussing the threat (e.g., Bellcore or the National Security Telecommunications Advisory Council for phones; the North American Electric Reliability Council or the Electric Power Research Institute for power plants); non-attribution incident recounting may be especially helpful. Odd as it may sound, critical systems should have some way of reverting to manual or at least on-site control in emergencies.

5. Encourage the dissemination of threat data and the compilation of incident data (and not just on the Internet, where CERT does a good job). Raw data may have to be sanitized lest investigations be compromised or innocent systems maligned. Nevertheless, effective protection of the public information infrastructure must inevitably involve public policy, and no public policy that relies on "if you knew what I knew" can be viable for very long.

6. Seek ways of legitimizing the "red-teaming" of critical systems, in part by removing certain liabilities from unintended consequences of authorized testing. Non-destructive testing of security systems may be insufficient until the state of the art improves; that is, only hackers can ensure that a system is hacker-proof. Unfortunately, hackers are not necessarily the most trustworthy examiners, and tests do go wrong (the Morris worm propagated faster than intended because somewhere in its program "N" and "1-N" got confused with each other). Incidentally, such systems should be tested both with on-site access permitted and without it (to better simulate national security threats).

7. Bolster the protection of the Internet's routing infrastructure -- not because the Internet is so important, but because protecting it is relatively cheap. Critical national and international routers should be made secure and the Domain Name Service should be spoof-proof. Note this is not the same as protecting every system on the Internet -- which is expensive and unnecessary.

8. Encourage the technology and use of digital signatures, in part by applying them to security systems and not just electronic commerce. Supporting policies may include research on public-key infrastructures, enabling algorithms, and purchases that create a market for them.

9. Work toward an international consensus on what constitutes bad behavior on the part of a state -- and what a set of appropriate responses may be. A consensus permits the rest of the world to handle states that propagate, abet, or hide information attacks by limiting their access to the international phone and Internet systems -- in much the same way that a similar consensus permits trade restraints. That said, proof that a state has sponsored information attacks will be difficult to establish, and states embargoed on suspicion may often be able to convince themselves and others that they have been singled out for other reasons.

10. Strengthen legal regimes that assign liability for the consequences of hacker attacks so that the primary onus rests with the owners of the system attacked (subject, of course, to whatever can be recovered from the actual perpetrator).

Other Things Should be Avoided

This section details what is more important: seven things to avoid.

1. Avoid harping on information warfare to the extent that warfare becomes the dominant metaphor used to characterize systems attacks (much less all systems failures). Porting the precepts of inter-state conflict to computer security tends to remove responsibility for self-defense from those whose systems have been attacked. It is not at all obvious that protection from attacks in cyberspace should be yet one more entitlement.

Why? Promoting paranoia is poor policy -- especially when systems still crash often enough on their own. Once something is called war, a victim's responsibility for the consequences of its acts dissipates. A phone company that may have to recompense customers for permitting hackers to harm service should not be able to claim force majeure on the grounds that it was a war victim. Characterizing hacker attacks as acts of war also creates pressure to retaliate against hackers real or imagined. Reasonable computer security is not so expensive that the United States should be forced to go to war to protect its information systems. If, though, the United States needs an excuse to strike back (say, to forestall nuclear proliferation), the supposition that the target has sponsored information terrorism can be summoned as needed.

2. Don't waste much more effort on traditional intelligence collection for hacker warfare. Crime requires means, motive, and opportunity. Means -- cadres of hackers with some access to connectivity (e.g., not sitting in Pyongyang) -- may be easily assumed. Sixty percent of all Ph.D.s awarded in computer security by U.S. universities went to citizens of Islamic or Hindu countries. Put some effort into motive, to understand plausible patterns of attack by other nations (so as to know what needs security work most urgently). Spend the rest of the time on opportunity, which is to say, finding vulnerabilities so that they can be fixed.

3. Don't waste time looking for a Minimum Essential Information Infrastructure for the NII as a whole 19. Such a list will be undefinable (minimum to do what -- conduct a nuclear war, protect a two-MRC mobilization, staunch panic?), unknowable (how can outsiders determine the key processes in a system and ensure that they stay the same from one year to the next?), and obsolescent well into its bureaucratic approval cycle (the NII is changing rapidly and has a long way to go before it gels). More to the point, the government has no tools to protect only the key nodes; what it might have are policies that encourage system owners to protect themselves (and they in turn will determine what needs to be protected first).

4. Don't sacrifice security to other equities. It is difficult, for instance, to see how the NII will be secure without the use of encryption; yet the Government is loath to encourage its proliferation (hence the Clipper chip and export controls). The controversy is undermining the credibility of Government attempts to secure the NII.

5. Don't put so much emphasis on getting commercial systems to adopt existing security practices that they are unable to take advantage of tomorrow's innovations -- particularly those that enable collaborative computing. Yes, some key systems (e.g., systems that control dangerous devices) must be secure regardless and, yes, many expected innovations have security problems that must be attended to. The entire systems field, though, is too dynamic for a straitjacket approach.

6. Don't eliminate heterogeneity unnecessarily; it makes coordinated disruption harder and preserves alternative paths. Common industry approaches to security matter less than standard protocols and application portability interfaces across industries.

7. Don't try to make policy without a detailed understanding of how information systems are used. Strategic nuclear policy is a field where engineering details matter little (that weapons explode is far more important than how they explode). With systems security, the very details are the portals to, or barriers against, attack.

Conclusions

Will 21st-century warfare consist of alternating attacks on enemy information infrastructures? Such attacks may happen -- even if their perpetrators come to understand how little is to be gained and how much is to be lost by carrying them out. The more important point is that they need not happen if a modicum of attention -- and a modicum probably suffices -- is paid to the possibility.

So, who should guard the NII? If it's yours, then you should. The alternative is to have the Government protect systems, which, in turn, requires knowing the details of everyone's operating systems and administrative practices -- an alternative which, even if it did not violate commonly understood boundaries between private and public affairs, is in any case impossible. Forcible entry in cyberspace does not exist -- unless misguided policy mandates it.

ENDNOTES

1. On 25 June 1996, CIA Director John M. Deutch testified before the Senate Permanent Subcommittee on Investigations that hacker attacks ranked, in his mind, as the second most worrisome threat to U.S. national security -- just below the threat posed by weapons of mass destruction. In response he had drawn up plans for a roughly thousand-person office, located at NSA, that would focus on the risks that foreign hackers pose to U.S. computers. He also supported plans for a "real-time response center" in the Justice Department to work against widespread hacker attacks. He noted that the intelligence community had assessed the risks to the United States of such an attack but that the results were classified. (return to text)

2. Yet, even today, three-quarters of all real-time transactions run on mainframe-based networks (source: Salvatore Salamone, "How to Put Mainframes on the Web," Byte 21, 6 [June 1996], 53).(return to text)

3. Even though Java's creators paid careful attention to security issues when designing the language (essentially stripping C++ of some dangerous features), its use is still problematic for systems with hard outer shells (to keep intruders from posing as users) but soft innards (so that users can wreak havoc on the system). Java code picked up from the Net can do almost anything a user can. Thus an unpatched bug (e.g., in sendmail) that lets users access system administration files can also let Java-based agents do the same.(return to text)

4. Yet, not every bad egg will harm society. During the Gulf War, sensitive war plans were left in a car and stolen; they were expeditiously returned with the comment that the perpetrator, while a thief, was by no means a traitor.(return to text)

5. There are two minor exceptions to this rule. One is that a flooder may wish to curtail communications from the United States to a foreign nation using the same router links. Two is to aim the flooding attack at large known reflector sites. The latter can be filtered out if such attacks are repetitive.(return to text)

6. In practice, operational software has to be complemented by dynamic data files. Data files, used properly, however cannot host viruses; they can contain incorrect information but this is a simpler problem to deal with (make sure all changes are signed by (return to text)

7. Unfortunately, a secure digital signature key needs to be 512 to 1024 bits long -- and is thus hard to memorize; human use may require hardware-encoded schemes coupled with PINs so that stealing the hardware does not reveal the entire password.(return to text)

8. See Lee Bruno, "Internet Security: How Much is Enough?", Data Communications 25, 5 (April 1996): 60-72.(return to text)

9. Security research is focusing less on how to make systems secure and more on proving that systems are secure. Detecting failure modes and developing tools, metrics, simulations, and formal models are being emphasized. It would be nice if systems could be developed that prove software to be secure, but today's experience suggests that a large degree of effort is required to verify even a small program. A meta-model of a software system written to highlight its security features may be useful, but the software designer may also be called upon to produce other meta-models (e.g., to rigorously state architectural assumptions for later integration into other systems), and all such efforts compete. Fortunately, the access aspects of a program (how outsiders get in, and what privileges insider processes have) tend to be a small fraction of the program itself. If access points are compact and well identified, they may be easier to test. Another approach is to hire in-house hackers, give them the source code (this puts them at a great advantage over outside hackers who lack this information -- except for Internet systems, where the source code is publicly available), and see how far they get. Alternatively, offer a reward for breaking in (as Netscape has done for its security software) while the product is in beta.(return to text)

10. On 2 June 1996, the London Times reported that banks in London, New York, and Tokyo had paid roughly half a billion dollars in blackmail to "cyber terrorists." These terrorists had demonstrated to their victims that they could bring computer operations to a halt; in a three-year period they had conducted more than forty such attacks. This report, however, has been unusually difficult to verify, as neither the victims nor the alleged perpetrators (nor anyone else quoted, for that matter) are identified by name. Presumably banks are extremely reluctant to confess to such matters.(return to text)

11. By comparison the total cost of all credit card fraud is $5 billion.(return to text)

12. An analysis of CERT reports by John Howard of CMU suggests that after rising apace with the Internet, the number of incidents peaked in late 1993 and has remained relatively constant thereafter.(return to text)

13. U.S. General Accounting Office, Computer Attacks at Department of Defense Pose Increasing Risks, GAO/AIMD-96-84, May 1996.(return to text)

14. It must be imagined that most people would be loath to entrust their credit cards to the Internet. In the 1950s, only twenty percent of Americans polled were willing to fly in aircraft. The industry quickly realized that its future prospects were tied directly to safety concerns. Boeing developed and implemented its "single-failure" philosophy, designed to ensure that no single failure in an aircraft could bring it down. Aircraft accidents fell from a dozen a year in the 1950s to a handful today, despite a tenfold increase in aircraft takeoffs and landings.(return to text)

15. By shutting down the Northeast for half a week, the January 1996 snowstorm cost the economy $15 billion. Hurricane Andrew cost roughly $25 billion. The Northridge, California earthquake caused roughly $10 billion in damage.(return to text)

16. See Roger Molander, Andrew Riddile, and Peter A. Wilson, Strategic Information Warfare: A New Face of War, RAND: MR-661-OSD, 1996.(return to text)

17. In practice, insurance would pay, but insurance rates would come to reflect insurers' judgements about their clients' information security programs. The effect is the same.(return to text)

18. The Department of Justice has recently initiated an effort to do exactly this. If researchers are diligent, skeptical, apolitical, and well-funded they should make good progress.(return to text)

19. Determining the minimum for defense operations is probably worthwhile, though. A very compact minimum capability for each infrastructure may also be needed to bootstrap recovery operations in the event of a complete system failure.(return to text)

