DEFENDING THE NATIONAL INFORMATION INFRASTRUCTURE
With every passing week, the United States appears to grow more vulnerable to attacks on its soft underbelly -- its national information infrastructure (NII). As this vulnerability is metamorphosed into military metaphor, the logic of national defense kicks in. Yet, as comforting as such logic feels, the ascription would be a major mistake. Martin C. Libicki
Advanced Concepts, Technologies, and Information Strategies Institute for National Strategic Studies National Defense University
Systems security, of course, matters. DOD must assume that any enemy it engages will, in turn, attack DOD's computers in order to disrupt military operations. Operators of commercial systems must be cognizant of -- and can thus be held responsible for -- the harm done to third parties if their systems are compromised. Those who introduce new commercial applications need to think through their potential for malicious misuse.
Yet reasonable prudence is a far cry from the notion that hacker attacks will constitute the 21st century's version of strategic warfare 1. To believe so belies common-sense aspects of both computers and national security. It also leads to policy prescriptions applied throughout society; such prescriptions could easily face opposition that would make proponents pine for the halcyon days of the Clipper chip. The last thing the United States needs is a reaction to computer risks that is (in words once applied to the fin-de-siecle Austro-Hungarian Empire) "desperate but not serious".
- Everyday threats engender everyday defenses.
- The more subversive the attack, the greater the risk to the attacker.
- Defense is possible if taken seriously.
- Vulnerability, although too high, ought not be exaggerated.
- The strategic benefits of an information attack are unclear.
- System owners have to protect themselves; if they punt, the Government cannot substitute.
Do's and Don'ts for Government policy follow.
The Threat Potential to the U.S. Information Infrastructure
Why are people even worried about attacks on the nation's information infrastructure?
These four factors suggest that the challenge of computer security will matter more to America's well-being tomorrow than it does today.
- The U.S. economy and society are growing more dependent on information systems. Analog systems are becoming digital; digital systems are replacing humans (e.g., automated teller machines, voice mail systems). Staring at video screens -- the portals to the infotainment face of the NII -- may come to dominate America's non-business hours as well.
- Information systems are becoming interconnected via phone and E-mail. Interconnection saves man-hours, promotes workplace collaboration, and permits remote management (e.g., supervisory control and data acquisition [SCADA] systems), but it also permits havoc to seep in from outside or even from abroad. When supposedly trusted systems can infect each other, malevolence is harder to contain.
- More responsibility for serious work is being shifted to PCs and Unix machines and away from mainframes and minicomputers 2. The latter two, designed to carry a company's jewels, tend to make users second-class citizens, limit their access to software, and take security more seriously. PCs were designed to have everything accessible; Unix workstations were designed for information sharing. Both are more vulnerable.
- Many innovations carry new security risks. Some Web browsers and spreadsheet macros let the unwary download viruses into their system 3. Distributed objects over networks and software agents may introduce similar problems. If systems reconfigure themselves based on learning, a tried-and-true response to suspicions of corruption -- starting afresh with original media -- will set back the system's capabilities.
Does this, however, make the protection of the NII a matter of national security? Yes, to some extent.
- The more a nation depends on the integrity of its information infrastructure, the more it can be held at risk by attacks there. The threat of information warfare's massive disruption has been posited as a potential successor to nuclear warfare's massive destruction. A milder variation holds that an asymmetric competitor may keep the U.S. out of its backyard (and thus stymie our advantage at conventional warfare) by holding the NII at risk.
- Information warfare is terrorism writ less bloody but with wider effect. Porous as the United States is to bad people, it is even more porous to bad bitstreams. A phone or E-mail connection suffices to access a wide variety of computers, hop-scotching from one node to another until an important vulnerability is found and exploited. Because the risk of detection is low and the risk of apprehension and punishment is even lower, a cyberspace attack is relatively cheap and risk-free.
- The DoD is growing dependent on the NII (95 percent of its communications go outside its own systems at some point) as its assets are repatriated and off-the-shelf becomes the rule. An unprotected infrastructure permits foes to undermine conventional military operations.
Everyday Threats Engender Everyday Defenses
Systems abuse comes in many forms. Some are commonplace; they rely on normally present motivations (e.g., greed, thrills) and are no more than a high-tech version of carjacking and joyriding. Others are uncommon, difficult to anticipate, and serious. Systems owners can normally be expected to protect themselves against commonplace threats, whose future probability and patterns can be predicted from their past. Uncommon but nonetheless serious threats are less likely to be watched for because they arise from less commonly acted-on human motives.
Deliberate systems abuse, much of which could be carried out by hacker attack, comes in roughly six forms:
- theft of service (e.g., cellular phone call fraud),
- acquisition of objective data (e.g., research results),
- acquisition or alteration of subjective data (e.g., a person's credit history),
- theft of assets (e.g., embezzlement),
- corruption of data in storage or motion (e.g., sabotage), and
- disruption of information services (e.g., telephony) or attached services (e.g., electric power distribution) for its own sake or for secondary purposes (e.g., corrupting medical data to hurt individuals, seeking control for the purpose of blackmail).
The first four types of attack are or could be commonplace because they can be undertaken by individuals with something to gain. Because greed is eternal, for instance, the motive for breaking into and robbing a bank is ever-present. Ditto for stealing services. Threats against individuals (see the 1995 movie "The Net"), although a potential tool of guerrilla war, are more likely to be motivated by private grudges and gripes. A fourth case, stealing objective data, poses the high-tech version of protecting classified information -- a problem the DoD already takes seriously every day.
The last two, corruption and disruption, best characterize the unexpected and malevolent nature of information warfare. Attackers require an external goal, and, for the most part, a concerted strategy and the time to carry it out.
Systems that face a known threat pattern (and whose owners would bear most or all of the cost of an attack) can determine an optimal level of protection. There is no reason to believe that they provide less protection against information attacks than they do against other threats to their well-being (e.g., shoplifting, embezzlement, customer lawsuits). The real worry is an attack that rarely is seen in the background environment -- and thus one for which there is less natural protection.
Insidious Attacks Often Put Attackers at Risk
Systems can generally be attacked in one of three basic ways: (1) through corrupted system hardware or software, (2) by using an insider, or (3) by external hacking -- plus combinations thereof (e.g., having an insider reveal vulnerabilities that facilitate subsequent hacking). An information infrastructure can also be attacked through physical means (e.g., jamming, microwaves, shot and shell), but only for the purpose of physical denial and associated blackmail. Such attacks require on-site presence and as such are akin to well-understood acts of terrorism. They carry far greater risks for the attacker.
The first, corrupted hardware or software, may be epitomized by the myth of the rogue employee of a U.S. microprocessor firm queering some circuits in every PC chip, all of which then go bad simultaneously just when most needed. How to ensure simultaneity without premature discovery is never explained. A slightly more plausible threat is a planted bug in a specific system (e.g., a computer that is disabled by an external signal or other preset conditions).
The second, the acquiescence or complicity of someone with the right privileges, is more likely. In this downsizing era, there is no shortage of disgruntled employees and ex-employees from whom to recruit 4. Exploiting corruption, whether inside the physical system or among trusted users, has obvious advantages, particularly against systems secured against outsiders but not insiders. Yet the risks of someone getting caught are far higher, because the chain of responsibility is more direct (and the number of suspects is far smaller than the billion-plus people with phone access). Recruiting such individuals from the outside also involves risks akin to those of intelligence recruitment; if a recruit turns or is caught, a system's owners are warned that it is being targeted. The fact that no such conspiracy has come to light (yet) suggests the number of attempts to date has been small. The higher the risk, the lower the odds of penetrating a large number of systems undetected; using insiders is a better avenue for opportunistic or intermittent attack than for systematic attack.
A variant attack method -- applicable only to communications networks open to the public -- is to flood the system with irrelevant communications or computerized requests for service. Systems where service is free, or where accountability can otherwise be evaded, tend to be more prone to such an attack. The weakness of such an attack is that it requires multiple separate sources (to minimize discovery) and that its effects last only as long as the calls keep coming in. Because communications channels within the United States tend to be much thicker than those that go overseas, overseas sites are a poor venue from which to start a flooding attack 5.
Systems Can Be Protected
Even though many computer systems run with insufficient regard for security, they can nevertheless be made quite secure. The fundamental idea is that protection is a point to be sought in a two-dimensional space (see table 1). One dimension is the degree of access: from totally closed to totally open. A system that secures itself only by keeping out every bad guy will make it difficult or impossible for good guys to do their work. The second dimension is the resources (money, time, attention) spent on sophistication. A sophisticated system keeps bad guys out without keeping so many good guys out; a crude one must sacrifice one goal for the other.
Table 1. Security Choices

                     Scrimp on Security            Spend on Security

  Tighten Access     Users are kept out or must    Users can get in with
                     alter their work habits.      effort, but no hackers can.

  Loosen Access      Systems are vulnerable        Users can get in easily,
                     to attack.                    but most hackers cannot.
To start with the obvious method, a computer system that receives no input whatsoever from the outside world ("air-gapped") cannot be broken into (and no, one cannot spray a virus into the air in the hopes that a computer acquires it). If the original software is trusted (and the National Security Agency [NSA] has developed multilayer tests of trustworthiness), the system is secure (efficiency aside). Such a closed system is, of course, of limited value but for some systems the benefits of freer access are more than outweighed by even the small chance of security vulnerabilities (e.g., nuclear systems).
The challenge for most systems, though, is letting them accept external input without putting important records or core operating programs at risk. One way to prevent compromise is to handle all inputs as data to be parsed (a process in which the computer decides what to do by analyzing what the message says) rather than as code to be executed directly. Security then consists of ensuring that no combination of computer responses to messages can affect a core operating program, directly or indirectly (almost all randomly generated data result in error messages when parsed). To pursue a trivial example: no combination of button presses will insert a virus into an automatic teller machine.
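To make the parsing principle concrete, here is a minimal sketch (in Python, with hypothetical command names; an illustration of the idea, not any particular system's code). Every incoming message is decoded and matched against a closed set of commands; nothing received is ever executed, so hostile input can produce nothing worse than an error message:

    # Minimal sketch of the parse-don't-execute principle (hypothetical).
    ALLOWED_COMMANDS = {"BALANCE", "DEPOSIT", "WITHDRAW"}

    def handle_message(raw: bytes) -> str:
        """Treat external input strictly as data to be parsed, never as code."""
        try:
            text = raw.decode("ascii")
        except UnicodeDecodeError:
            return "ERROR: undecodable input"   # random bytes yield an error message
        parts = text.strip().upper().split()
        if not parts or parts[0] not in ALLOWED_COMMANDS:
            return "ERROR: unknown command"     # parsed, rejected, discarded
        # Dispatch only to a fixed handler; core programs are never touched.
        return "OK: " + parts[0] + " accepted"

    print(handle_message(b"deposit 50"))   # OK: DEPOSIT accepted
    print(handle_message(b"rm -rf /"))     # ERROR: unknown command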
Unfortunately, systems need to accept changes to core operating programs all the time. Absent more sophisticated filters, a tight security curtain may be needed around the few applications and superusers allowed to initiate changes (authorized users might have to work from specific terminals hardwired to the network, an option in Digital's VAX operating system). Another method that will cut down on viruses and logic bombs is to operate solely with programs kept on unerasable storage media such as CD-ROMs. Whenever programs must be altered, they can be rewritten, recompiled in a trusted environment, and fixed onto a new CD-ROM (the equipment needed to cut a CD-ROM now costs less than a thousand dollars) 6.
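A sketch in the spirit of the unerasable-media approach (file names are hypothetical; in practice the manifest of known-good hashes would itself live on the read-only disc): at startup, the system hashes each program and refuses to load anything that no longer matches the hash recorded when the CD-ROM was cut.

    # Sketch: verify a program against a hash recorded on unerasable media.
    import hashlib

    def sha256_of(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Simulate cutting the disc: record the known-good hash of a program.
    with open("payroll.bin", "wb") as f:
        f.write(b"trusted program image")
    manifest = {"payroll.bin": sha256_of("payroll.bin")}

    # Later, at startup: refuse to load anything whose hash has changed.
    for path, good_hash in manifest.items():
        if sha256_of(path) != good_hash:
            print(path, "has been altered; restore from original media")
        else:
            print(path, "matches the fixed manifest; safe to load")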
The technologies of encryption and especially digital signatures provide other security tools. Encryption keeps files from being read and permits passwords to be sent through insecure channels. Digital signatures permit very strong links of authenticity and responsibility to be established between message and messenger. To sign a message, the sender hashes it and encrypts the hash with a private key, to which there corresponds exactly one public key. If a user's public key can unlock the hash, and the hash matches the message, the message can be considered signed and uncorrupted. Thus computer systems can refuse unsigned messages or ensure that messages really originated from other trusted systems (and rogue insiders can be more easily traced). The private key never has to see the network (where it could be sniffed) or be stored on the system (where the untrustworthy could give it away). Digital signatures are being explored for Internet address generation and for secure Web browsers. Not only users but also machines, and perhaps individual processes, may all come with digital signatures 7.
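The sign-and-verify flow can be sketched in a few lines. This example uses the third-party Python "cryptography" package -- one library among many that would serve, named here only for concreteness; the message and key size are illustrative:

    # Sketch of digital signing: hash the message, sign with the private key,
    # verify with the corresponding public key.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.exceptions import InvalidSignature

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()   # freely distributable

    message = b"release shipment no. 42"

    # The private key never leaves the signer's hands.
    signature = private_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

    # Anyone with the public key can check that the hash "unlocks" and
    # matches the message; any alteration raises InvalidSignature.
    try:
        public_key.verify(
            signature,
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        print("signed and uncorrupted")
    except InvalidSignature:
        print("forged or altered message")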
Firewalls may also offer a degree of protection. Yet this method, the most popular way to protect computers attached to the Internet, needs a good deal more work before it can be used reliably without a great deal of careful attention to the details of its setup 8.
Because most PC and workstation operating systems assume a benign world, rewriting them to be secure against the best hackers is difficult; the more complex the software and security arrangements, the greater the odds of a hole. In security, the primitive is often superior to the sophisticated; there are fewer configurations to test 9.
The NII's Vulnerability Should Not Be Exaggerated
How vulnerable is the NII? Sadly, no one really knows. The publicized incidents of phone phreaks, Internet hackers, and bank robberies may or may not be the tip of the iceberg. The common wisdom is that victims do not like to talk about how they have been had, but Citibank's decision to prosecute rather than cover up the perpetrators of a fairly large computer crime ($400,000 transferred, with another $10,000,000 sitting in perpetrator accounts waiting to be withdrawn) suggests a change in perception and prospects for better reporting 10.
What does computer crime cost? The FBI's precise estimate is between $500 million and $5 billion; in the same league as cellular telephone fraud (roughly $1 billion) and PBX toll-call fraud 11. Many such estimates must be carefully understood. For instance, most embezzlement today is computer crime because that is where financial records are kept; but embezzlement clearly predates the computer. The cost of a stolen phone call is much less than its price (the call otherwise probably would not have been made; most phone systems have excess capacity; and the price of a call includes services, such as billing, that criminals do not need). The cost to a corporation of having its R&D looked at by competitors is almost impossible to assess but easy to assign an outsize figure to.
How frequent are Internet attacks? One way to calculate is to start with the 1,200 reports received by CERT in 1995 12. When the Defense Information Systems Agency used publicly distributed tools to attack unclassified defense systems, the attacks succeeded eight times out of nine. Only one in twenty victims knew they had been attacked, and only one in twenty of those reported it as they should have -- a 400:1 ratio of incidents to reports. If this ratio is indicative (and Navy tests echo it), then 1,200 reports suggest the Internet suffers roughly half a million break-ins a year (even if very few do real damage). Using other DISA figures, the GAO estimated that DOD computers alone were attacked 250,000 times in 1995 13.
The Internet, with its benign assumptions, is in any case hardly indicative of systems in general; it is not used for mission-critical tasks (military logistics is perhaps the most glaring exception), and if it becomes a mission-critical system whose compromise is a serious problem, it must evolve and will necessarily become more secure 14. Were some hacker, for instance, to invade and bring down the network here at the National Defense University, it would be difficult to distinguish the effects from the many times the network is otherwise down (both by accident and for maintenance). Similarly, someone breaking into NDU's computers for information (none of which is classified) would, at best, find draft copies of papers that their authors would have been more than pleased to circulate on request.
Information Attacks Do Not Offer Obvious Strategic Gains
Although important computer systems can be secured against hacker attacks at modest cost, that does not mean that they will be secured. Increasingly common and sophisticated attempts may be the best guarantor that national computer systems will be made secure. If the absence of important incidents lulls systems administrators into inattention, an entrée is created for some group to launch a broad, simultaneous, disruptive attack across a variety of critical systems. The barn door closes, but the horses are gone. For this reason, a sequence of pinpricks, or even a steadily building crescendo of attacks, is the wrong strategy for an attacker; it creates its own inoculation. Strategic effectiveness requires that a nation's infrastructure be attacked in force all at once. No such attack has ever happened, but as of 6 December 1941, no country had ever been attacked across the Pacific Ocean either.
A key distinction needs to be made between a purposeless attack and a purposeful one. The problem with Japan's attack on Pearl Harbor was not so much sunk ships and dead sailors as it was U.S. strategic immobility while Japan conquered large chunks of Southeast Asia and Oceania. An attack on the NII that leaves an opening for strategic mischief is of far greater note than one that merely causes damage. A strategic motive for a Digital Pearl Harbor could be to dissuade the United States from military operations (e.g., against the attacking country) or to hinder their execution by impeding mobilization, deployment, or command-and-control.
How much damage can a Digital Pearl Harbor cause? Suppose that hackers could shut down all phone service (and, with that, say, credit card purchases) nationwide for a week. The event would certainly be disruptive and costly (and more so every year), but as long as recovery times are measured in hours or even days, it would probably be less disruptive than certain natural events 15, such as a large snowstorm, flood, or earthquake -- indeed, far less so in terms of lost output than a modest-size recession. How much would the U.S. public have to be discomfited before it demanded that the United States, for instance, disengage from a part of the world the attacker cared about? The United States might more plausibly desist before an opponent whose neighborhood is judged not worth the difficulty; it is less likely to withdraw before an opponent whose very power to strike the U.S. economic system suggests why that opponent must be put down.
The difficulty of crafting a credible Digital Pearl Harbor is best illustrated by the most widely reported scenario, RAND's "Day After in Cyberspace" 16. Over twenty incidents befall U.S. and allied information infrastructures, many stretching the limits of plausibility (e.g., three separate incidents tied to identical logic bombs, the simultaneous blackout of the Washington area's heterogeneous phone systems, rogue subcontractors affecting what in real life are triple-redundant systems, market crashes following the manipulation of an unspecifiable computer). Yet, in the end, other than some potential for mass panic, facts on the ground (i.e., in the Persian Gulf) are scarcely affected.
Socializing the Provision of Systems Security May Be Unwise
Is systems security a problem whose solution should be socialized rather than remain private? Consider a hypothetical scenario in which a refinery blows up and damages its neighborhood. The responsibility of the refiner for external damage ought logically to vary with what caused the damage in the first place 17.
- What if the refinery was damaged because it was shelled by an enemy military? The refiner's responsibility should be minimal. Refineries are not designed to withstand wartime attacks. It is far more cost-effective to socialize the problem of such incidents by providing a common national defense.
- What if a sniper hits a refinery tower to the same effect? This problem is partially socialized through public law enforcement. Yet, a refiner should make reasonable provision so that a single-point failure does not create an uncontrollable cascade of disaster.
- Changing the sniper to a random pistol wielder widens the responsibilities of the refiner. Owners of dangerous equipment should be expected to take reasonable precautions (e.g., perimeter fencing, security guards) to protect the public from the occasional nut.
- Finally, what if a hacker off site were to access the refiner's system and command a valve to stay open, causing the same explosion? Because a refiner should know everything about its information systems (whereas the government may know absolutely nothing), it has all it needs to protect its internal systems from outsiders and to ensure that software-generated events (including bugs) cannot wreak havoc.
Some Things are Worth Doing
Because even the privately owned NII is, in some sense, a public resource, a role for the Government is not entirely unwarranted. But this role must be carefully circumscribed and focussed. This section makes ten suggestions.
1. Figure out how vulnerable the NII really is 18. What can be damaged, and how easily? What can be damaged through outside attack; what is vulnerable to suborned or even malevolent insiders? For what systems can attacks be detected as they occur, and by what means? What recovery mechanisms are in place to restore operations after a disruption; after an act of corruption? How quickly can systems be patched to make them less vulnerable? A similar set of questions can be asked about the military's dependence on commercial systems. How thorough would outages of the phone-cum-Internet have to be to cripple military operations, and in what way: operations themselves, cognitive support to operations, logistics (and, if so, internal to the DOD or external as well), mobilization? What alternative avenues exist for military communications? What suffers when the 95 percent of military communications that go through public networks has to travel on the DOD-owned grid? A third set of questions relates to the existing software suites on which the NII runs: does, for instance, today's Unix need replacement, or are known fixes sufficient? How useful are test-and-patch kits for existing systems?
Other Things Should be Avoided
This section details what is more important: seven things to avoid.
1. Avoid harping on information warfare to the extent that warfare becomes the dominant metaphor used to characterize systems attacks (much less all systems failures). Porting the precepts of inter-state conflict to computer security tends to remove responsibility for self-defense from those whose systems have been attacked. It is not at all obvious that protection from attacks in cyberspace should be yet one more entitlement.
Why? Promoting paranoia is poor policy -- especially when systems still crash often enough on their own. Once something is called war, a victim's responsibility for the consequences of its acts dissipates. A phone company that might otherwise have to recompense customers for permitting hackers to harm service should not be able to escape by claiming force majeure as a war victim. Characterizing hacker attacks as acts of war also creates pressure to retaliate against hackers real or imagined. Reasonable computer security is not so expensive that the United States should be forced to go to war to protect its information systems. If, though, the United States needs an excuse to strike back (say, to forestall nuclear proliferation), the supposition that the target has sponsored information terrorism can be summoned as needed.
2. Don't waste much more effort on traditional intelligence collection for hacker warfare. Crime requires means, motive, and opportunity. Means -- cadres of hackers with some access to connectivity (e.g., not sitting in Pyongyang) -- may be easily assumed. (Sixty percent of all Ph.D.s awarded in computer security by U.S. universities went to citizens of Islamic or Hindu countries.) Put some effort into motive, to understand plausible patterns of attack by other nations (so as to know what needs security work most urgently). Spend the rest of the time on opportunity -- that is, finding vulnerabilities so that they can be fixed.
3. Don't waste time looking for a Minimum Essential Information Infrastructure for the NII as a whole 19. Such a list will be undefinable (minimum to do what -- conduct a nuclear war, protect a two-MRC mobilization, stanch panic?), unknowable (how can outsiders determine the key processes in a system and ensure that they stay the same from one year to the next?), and obsolescent well into its bureaucratic approval cycle (the NII is changing rapidly and has a long way to go before it gels). More to the point, the government has no tools to protect only the key nodes; what it might have are policies that encourage system owners to protect themselves (and they, in turn, will determine what needs to be protected first).
Conclusions
Will 21st-century warfare consist of alternating attacks on enemy information infrastructures? Such attacks may happen -- even if their perpetrators come to understand how little is to be gained and how much is to be lost by conducting them. The more important point is that they need not happen if a modicum of attention -- and a modicum probably suffices -- is paid to the possibility.
So, who should guard the NII? If it's yours, then you should. The alternative is to have the Government protect systems, which in turn requires knowing the details of everyone's operating systems and administrative practices -- an alternative that, even if it did not violate commonly understood boundaries between private and public affairs, is in any case impossible. Forcible entry in cyberspace does not exist -- unless misguided policy mandates it.
ENDNOTES

1. On 25 June 1996, CIA Director John M. Deutch testified before the Senate Permanent Subcommittee on Investigations that hacker attacks ranked, in his mind, as the second most worrisome threat to U.S. national security -- just below the threat posed by weapons of mass destruction. In response he had drawn up plans for a roughly thousand-person office located at the NSA that would focus on the risks foreign hackers pose to U.S. computers. He also supported plans for a "real-time response center" in the Justice Department to work against widespread hacker attacks. He noted that the intelligence community has assessed the risks to the United States of such an attack but that the results were classified. (return to text)
2. Yet, even today, three-quarters of all real-time transactions run on mainframe-based networks (source: Salvatore Salamone, "How to Put Mainframes on the Web," Byte 21, 6 [June 1996], 53). (return to text)
3. Even though Java's creators paid careful attention to security when designing the language (essentially a version of C++ with some dangerous features disabled), its use is still problematic for systems with hard outer shells (to keep intruders from posing as users) but soft innards (which let users wreak havoc on the system). Java code picked up from the Net can do almost anything a user can. Thus an unpatched bug (e.g., in sendmail) that lets users access system administration files can also let Java-based agents do the same. (return to text)
4. Yet, not every bad egg will harm society. During the Gulf War, sensitive war plans were left in a car and stolen; they were expeditiously returned with the comment that the perpetrator, while a thief, was by no means a traitor. (return to text)
5. There are two minor exceptions to this rule. One is that a flooder may wish to curtail communications from the United States to a foreign nation that uses the same router links. The other is to aim the flooding attack at large, known reflector sites; such attacks can be filtered out if they are repetitive. (return to text)
6. In practice, operational software has to be complemented by dynamic data files. Data files, however, cannot host viruses if used properly; they can contain incorrect information, but that is a simpler problem to deal with (make sure all changes are signed by a responsible party). (return to text)
7. Unfortunately, a secure digital signature key needs to be 512 to 1,024 bits long -- and is thus hard to memorize; human use may require hardware-encoded schemes coupled with PINs, so that stealing the hardware does not reveal the entire password. (return to text)
8. See Lee Bruno, "Internet Security: How Much Is Enough?", Data Communications 25, 5 (April 1996), 60-72. (return to text)
9. Security research is focusing less on how to make systems secure and more on proving that systems are secure. Detecting failure modes and developing tools, metrics, simulations, and formal models are being emphasized. It would be nice if systems could be developed that prove software to be secure, but experience to date suggests that a large degree of effort is required to verify even a small program. A meta-model of a software system written to highlight its security features may be useful, but the software designer may also be called upon to produce other meta-models (e.g., to state rigorously the architectural assumptions for later integration into other systems), and all such efforts compete. Fortunately, the access aspects of a program (how outsiders get in, and what privileges insider processes have) tend to be a small fraction of the program itself. If access points are compact and well identified, they may be easier to test. Another approach is to hire in-house hackers, give them the source code (this puts them at a great advantage over outside hackers, who lack this information -- except on Internet systems, where the source code is publicly available), and see how far they get. Alternatively, offer a reward for breaking in (as Netscape has done for its security software) while the product is in beta. (return to text)
10. On 2 June 1996, the London Times reported that banks in London, New York, and Tokyo had paid roughly half a billion dollars in blackmail to "cyber terrorists." These terrorists had demonstrated to their victims that they could bring computer operations to a halt; over a three-year period they had conducted more than forty such attacks. The report, however, has been unusually difficult to verify, as neither the victims nor the alleged perpetrators (nor anyone else quoted, for that matter) are identified by name. Presumably banks are extremely reluctant to confess to such matters. (return to text)
11. By comparison, the total cost of all credit card fraud is $5 billion. (return to text)
12. An analysis of CERT reports by John Howard of CMU suggests that after rising apace with the Internet, the number of incidents peaked in late 1993 and has remained relatively constant thereafter.(return to text)
13. U.S. General Accounting Office, Computer Attacks at Department of Defense Pose Increasing Risks, GAO/AIMD-96-84, May 1996.(return to text)
14. It must be imagined that most people would be loath to entrust their credit cards to the Internet. In the 1950s, only twenty percent of Americans polled were willing to fly on aircraft. The industry quickly realized that its future prospects were tied directly to safety concerns. Boeing developed and implemented its "single-failure" philosophy, designed to ensure that no single failure in an aircraft could bring it down. Aircraft accidents fell from a dozen a year in the 1950s to a handful today, despite a tenfold increase in takeoffs and landings. (return to text)
15. By shutting down the Northeast for half a week, the January 1996 snowstorm cost the economy $15 billion. Hurricane Andrew cost roughly $25 billion. The Northridge, California, earthquake caused roughly $10 billion in damage. (return to text)
16. See Roger Molander, Andrew Riddile, and Peter A. Wilson, Strategic Information Warfare: A New Face of War, RAND MR-661-OSD, 1996. (return to text)
17. In practice, insurance would pay, but insurance rates would come to reflect insurers' judgements about their clients' information security programs. The effect is the same.(return to text)
18. The Department of Justice has recently initiated an effort to do exactly this. If researchers are diligent, skeptical, apolitical, and well-funded they should make good progress.(return to text)
19. Determining the minimum for defense operations is probably worthwhile, though. A very compact minimum capability for each infrastructure may also be needed to bootstrap recovery operations in the event of a complete system failure. (return to text)