Chapter 1

Institute for National Strategic Studies


DEFENDING CYBERSPACE AND OTHER METAPHORS

MARTIN LIBICKI


Perspectives on Defending Cyberspace

Sections

  • Potential Threats to the NII
  • Everyday Threats Engender Everyday Defenses
  • Deep Threats Focus the Risk on Attackers
  • Systems Can Be Protected
  • The NII's Vulnerability Should Not Be Exaggerated
  • Information Attacks Offer Few Obvious Strategic Gains
  • The Provision of Systems Security Is Inescapably Private
  • Some Things Are Worth Doing
  • Things to Avoid
  • Conclusions

    With every passing week, the United States appears to grow more vulnerable to attacks on its national information infrastructure (NII). As this vulnerability is cast in military metaphor, the logic of national defense is often borrowed to think about security,[11] but as comforting as that logic may feel, it is the wrong way to consider the problem.

    Systems security does matter. The Department of Defense (DOD) must assume that any enemy it engages will attack the DOD's computers to disrupt military operations. Operators of commercial systems must be alert to, and thus bear responsibility for, harm done to third parties if their systems are compromised. Those who introduce new commercial applications need to think through the potential for malicious use.

    Yet prudence is not the same as the notion that hacker attacks will be the twenty-first century's version of strategic warfare. That notion goes against common-sense aspects of both computers and national security. It can also lead to policy prescriptions so potentially controversial that proponents would pine for the halcyon days of the Clipper chip. The United States certainly does not need a response to computer risks that (in words applied to the fin-de-siècle Austro-Hungarian Empire) is "desperate but not serious."

    This essay argues that the task of securing the NII must be put into perspective. The subject has attracted a wide range of opinions, but the real nature of the threat remains undefined. At this point one can only go by what has happened or not happened to date[12] and reason about the nature of information systems -- how they work, what they do, and why they may be at risk. Doing so at least culls the fantastic from the plausible. In so doing this essay argues that:

  • everyday threats engender everyday defenses;
  • deep threats focus the risk on attackers;
  • systems can be protected;
  • the NII's vulnerability should not be exaggerated;
  • information attacks offer few obvious strategic gains; and
  • the provision of systems security is inescapably private.

    What follows are recommendations for government policy.

    Potential Threats to the NII

    Why do people worry about attacks on the NII?

    These four factors suggest that the challenge of computer security will matter more to America's well-being tomorrow than it does today.

    But do they make protection of the NII a matter of national security? To some extent, yes:

    Everyday Threats Engender Everyday Defenses

    Abuse of systems comes in many forms. Commonplace abuses rely on common motivations (e.g., greed, thrills), and many are only high-tech versions of carjacking and joyriding. Others, less common, are serious and difficult to anticipate. The owners of systems can be expected to protect themselves (to an economically optimal level) against commonplace threats, the probability and patterns of which can be predicted from experience. Less common but serious threats are less likely to be watched for because they arise from motives that surface less often.

    Deliberate abuse can take roughly six forms:

  • stealing money;
  • stealing services;
  • threatening individuals;
  • stealing data;
  • corrupting systems or data; and
  • disrupting systems.

    The first four types listed here are or could be commonplace, because they can be undertaken by individuals for gain. For example, because greed is eternal, the motive for robbing a bank electronically is ever present. Ditto for stealing services. Threats against individuals (as in the 1995 movie "The Net"), although a potential tool of guerrilla warfare, are more probably motivated by private grudges. The fourth case, the theft of data, is simply a high-tech version of espionage -- something the DOD already takes seriously every day. The fifth and sixth, corruption and disruption, however, best characterize the unexpectedness and malevolence of information warfare: attackers require an external goal, a concerted strategy, and the time to carry it out.

    Systems that face a known pattern of threat (and whose owners would bear most or all the cost of an attack) can determine an optimal level of protection. There is no reason to believe that these owners provide less protection against information attacks than they do against other threats to their well-being (e.g., shoplifting, embezzlement, customer lawsuits). The real worry for such systems is an attack rarely seen in the background environment -- thus one for which there is less recognition and hence less protection.

    Deep Threats Focus the Risk on Attackers

    Systems can generally be attacked by errant bits[15] in one of three basic ways: (a) through corruption of a system's hardware or software; (b) through using an insider with access privileges; or (c) through external hacking, as well as through combinations of these (e.g., through having an insider reveal vulnerabilities that facilitate subsequent hacking). The closer the attack source is to the system's core, the more troublesome defense becomes; but deep threats focus suspicion on fewer potential attackers.

    A typical tale of corruption could involve a rogue employee of a U.S. microprocessor firm queering some circuits in every PC chip so that they all go bad simultaneously at just the right time. How simultaneity is ensured without premature discovery is never explained. A slightly more plausible threat is the planting of a bug in a specific system (so that an external signal or specified condition makes the system go awry).[16]

    From 70 to 85 percent of all serious hacker attacks involve insiders. In the era of downsizing, there is no shortage of disgruntled employees or ex-employees capable of initiating an attack or being recruited to do so.[17] Exploiting corruption, whether inside the physical system or among trusted users, offers obvious advantages, particularly for use against systems secured only against outsiders, not insiders. The risks of getting caught are greater, though, because the chain of responsibility is direct (and in either case the number of suspects is smaller than the billion-plus people with phone access). The risks involved in recruiting such individuals from the outside resemble those involved in intelligence recruitment; if someone turns or is caught, a system is warned that it is targeted. The more people have to be recruited, the lower the odds of penetrating many systems undetected. Until such a conspiracy comes to light, the presumption must be that no sufficiently large attempt has yet been made. Insiders are, therefore, more liable to be the source of opportunistic or intermittent, rather than systematic, attack.

    Last is the hacker route. Most systems divide the world into at least three parts: outsiders, users, and superusers. One popular route of attack on Internet-like networks is (a) systematically guessing someone's password, so that the outsider is seen as a user, and then (b) exploiting the known weaknesses of operating systems (e.g., Unix), so that users can access superuser privileges. Once granted superuser privileges, a hacker can read or alter the files of other users or those of the system; can control the system under attack; can make reentering the system easier (even when tougher security measures are subsequently enforced); and can insert rogue code (e.g., a virus, logic bomb, Trojan horse, etc.) for later exploitation.
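
    The first step of that route, systematic password guessing, runs into an everyday defense: counting failures per account and locking the account after a handful. A minimal sketch in Python -- the thresholds and bookkeeping are illustrative assumptions, not anything prescribed here -- shows how cheaply guessing can be throttled:

      import time

      failures = {}                        # account -> (failure count, time of last failure)
      MAX_TRIES, LOCKOUT_SECS = 5, 900     # assumed policy: 5 tries, 15-minute lockout

      def login_allowed(account: str) -> bool:
          """Refuse logins to an account still inside its lockout window."""
          count, last = failures.get(account, (0, 0.0))
          if count >= MAX_TRIES and time.time() - last < LOCKOUT_SECS:
              return False                 # locked out: guessing becomes slow and noisy
          return True

      def record_failure(account: str) -> None:
          """Note a failed attempt; in practice this would also be logged."""
          count, _ = failures.get(account, (0, 0.0))
          failures[account] = (count + 1, time.time())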

    The damage a hacker can do without acquiring superuser privileges depends on the way systems allocate ordinary privileges. A phone user per se can do little damage to the phone system. Computer networks are especially vulnerable to abusers when certain privileges are granted without being metered. Any system with enough users will contain at least one who would abuse resources, filch data, or otherwise gum up the works. Although mechanisms to keep nonusers off the system matter, from a security point of view, limiting what authorized users can do may be more important.
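
    What metering might look like can be sketched in a few lines of Python. The actions and daily limits below are hypothetical, but the principle is the one just stated: deny any single authorized user the resources to gum up the works:

      from collections import defaultdict

      DAILY_QUOTA = {"send_message": 500, "export_records": 20}   # assumed limits

      usage = defaultdict(int)             # (user, action) -> count so far today

      def permit(user: str, action: str) -> bool:
          """Allow a privileged action only while the user is under quota."""
          if usage[(user, action)] >= DAILY_QUOTA.get(action, 0):
              return False                 # over quota: deny and, in practice, alert
          usage[(user, action)] += 1
          return True

      # The 501st message from one user is refused, whoever that user is.
      for _ in range(501):
          allowed = permit("alice", "send_message")
      print(allowed)                       # False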

    Another method of attack -- applicable only to communications networks open to the public -- is to flood the system with irrelevant communications or computerized requests for service. Systems in which either service is free or accountability can be evaded in some other way are prone to such attacks. The weakness of such attacks is that they often require multiple sources (to tie up enough lines) and separate sources (to minimize discovery), and their effects last only as long as calls come in. Because communications channels within the United States are much thicker than those that go overseas, overseas sites are a poor venue from which to launch a flooding attack.[18]
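
    The standard counter to flooding is per-source rate limiting. The token-bucket sketch below is a generic illustration (the rates are arbitrary assumptions): each source may burst briefly, but a sustained flood from any one origin is shed while ordinary callers proceed:

      import time

      class TokenBucket:
          """Each source earns tokens at a fixed rate; a request costs one token."""
          def __init__(self, rate: float, capacity: float):
              self.rate, self.capacity = rate, capacity    # tokens/second, burst size
              self.tokens, self.last = capacity, time.monotonic()

          def allow(self) -> bool:
              now = time.monotonic()
              self.tokens = min(self.capacity,
                                self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1.0:
                  self.tokens -= 1.0
                  return True
              return False                  # source exceeds its rate: shed the request

      buckets = {}                          # one bucket per calling source
      def admit(source: str) -> bool:
          bucket = buckets.setdefault(source, TokenBucket(rate=5.0, capacity=10.0))
          return bucket.allow()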

    Systems Can Be Protected

    Although many computer systems run with insufficient regard for security, they can be made quite secure. The theory is that protection is a point to be sought in a two-dimensional space (see Table 1). One dimension is the degree of access, from totally closed to totally open. A system that is secured only by keeping out every bad guy makes it difficult -- or impossible -- for good guys to do their work. The second dimension is resources (money, time, attention) spent on sophistication. A sophisticated system keeps bad guys out without great inconvenience to authorized users.

    Table 1

    Security Choices

                      Scrimp on Security         Spend on Security

    Tighten Access    Users are kept out or      Users can get in with
                      must alter their work      effort, but no hackers
                      habits.                    can.

    Loosen Access     Systems are vulnerable     Users can get in easily
                      to attack.                 but most hackers cannot.

    To start with the obvious method, a computer system in a secure location that receives no input whatsoever from the outside world ("air-gapped") cannot be broken into (and, no, a computer virus cannot be sprayed into the air like a living virus, in the hope that a computer will acquire it). If insiders[19] and the original software are trustworthy (and the NSA has developed multilayer tests for the latter), the system is secure (although often hard to use). Such a closed system is, of course, of limited value, but the benefits for some systems (e.g., nuclear systems) of freer access are outweighed by even the smallest chance of security vulnerabilities.

    The challenge for most systems, however, is to allow them to accept external input without putting their important records or core operating programs at risk. One way to prevent compromise is to handle all input as data to be parsed (the process in which the computer decides what to do by analyzing what the message says) rather than as code to be executed directly. Security, then, consists of ensuring that no combination of computer responses to messages can affect the core operating program, indirectly or directly (when parsed, almost all randomly generated data result in error messages). To pursue a trivial example, there are no button combinations that can be pressed that would insert a virus into an ATM. Less trivially, it is very hard to write a virus in a database manipulation language such as structured query language.
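
    The parse-don't-execute principle can be made concrete with a toy message handler (the vocabulary here is hypothetical). Every input is matched against a fixed grammar; anything unrecognized parses to an error message, and no input ever reaches the core program as executable code:

      ALLOWED = {"BALANCE", "DEPOSIT", "WITHDRAW"}    # the entire input vocabulary

      def handle(message: str) -> str:
          """Treat the message purely as data: parse it, never execute it."""
          parts = message.split()
          if not parts or parts[0] not in ALLOWED:
              return "ERROR: unrecognized request"    # random input -> error message
          verb, args = parts[0], parts[1:]
          if verb == "BALANCE" and not args:
              return "balance follows"                # dispatch to a fixed routine
          if verb in {"DEPOSIT", "WITHDRAW"} and len(args) == 1 and args[0].isdigit():
              return f"{verb.lower()} {args[0]} accepted"
          return "ERROR: malformed arguments"

      print(handle("DEPOSIT 100"))          # deposit 100 accepted
      print(handle("eval(os.system)"))      # ERROR: unrecognized request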

    Unfortunately, systems must accept changes to core operating programs all the time. In the absence of sophisticated filters, a tight security curtain may be needed around the few applications and superusers allowed to initiate changes (authorized users might need to work from specific terminals hardwired to the network, an option in Digital's VAX/VMS operating system). Another method to cut down on viruses and logic bombs is to operate solely with programs found on unerasable storage media, such as CD-ROMs. When programs must be altered,[20] they can be rewritten, recompiled in a trusted environment, and fixed onto new CD-ROMs (by 1996 the cost of equipment to cut a CD-ROM had fallen below $500).

    The technologies of encryption and, especially, of digital signatures provide other security tools. Encryption is used to keep files from being read and to permit passwords to be sent over insecure channels. Digital signatures permit the establishment of very strong links of authenticity and responsibility between message and messenger.[21] A digital signature is created by hashing the message and encoding the hash with a private key for which only one public key exists. If a user's public key can unlock the hash and if the hash is compatible with the message, the message can be considered signed and uncorrupted. Computer systems can refuse unsigned messages or ensure that messages really originated from other trusted systems. The private key never has to see the network (where it might be sniffed) or be stored on the system (where the untrustworthy might give it away). The use of digital signatures is being explored for Internet address generation and for secure Web browsers. Users as well as machines, and maybe even individual processes, may in the future all come with digital signatures.[22]
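
    The sign-and-verify flow just described can be sketched with Python's cryptography package (one assumed choice of library; any digital-signature implementation follows the same pattern of private-key signing and public-key verification):

      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
      from cryptography.exceptions import InvalidSignature

      private_key = Ed25519PrivateKey.generate()     # never leaves the signer
      public_key = private_key.public_key()          # freely distributed

      message = b"transfer 100 to account 42"
      signature = private_key.sign(message)          # hash-and-sign in one call

      try:
          public_key.verify(signature, message)              # passes: signed, uncorrupted
          public_key.verify(signature, b"transfer 9999")     # raises: altered message
      except InvalidSignature:
          print("message was altered or was not signed by this key")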

    Firewalls offer some protection, but, even though they are the most popular method for protecting computers attached to the Internet, they need a good deal of work before they can be used reliably and without considerable attention to detail when being set up.[23] Anti-virus software also offers some protection against known viruses, but whether the $3 billion a year spent on such products has been worthwhile is a different issue.

    Most problems of security for systems come from careless users, poor systems administration, or buggy software. Users often choose easily guessed passwords and leave them exposed. Poorly administered systems include those that let users choose their own passwords (notably easily guessed ones), keep default passwords or backdoors in operation, fail to install security patches, or give users the run of system resources -- particularly files that control important processes -- that they should be barred from reading or writing. Common bugs include those that override security controls, permit errant users to crash the system, or in general make security unnecessarily difficult or discretionary.
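
    One of those administrative lapses -- letting easily guessed or default passwords stand -- can be screened for mechanically. A hedged sketch, with word lists that are illustrative only:

      COMMON = {"password", "123456", "qwerty", "letmein", "admin"}   # assumed dictionary
      DEFAULTS = {"admin", "root", "guest"}                           # assumed vendor defaults

      def acceptable(username: str, password: str) -> bool:
          """Reject passwords an outsider could guess without effort."""
          p = password.lower()
          if p in COMMON or p in DEFAULTS:
              return False                   # easily guessed or never changed
          if p == username.lower() or len(password) < 8:
              return False                   # trivially derived or too short
          return True

      print(acceptable("carol", "letmein"))     # False
      print(acceptable("carol", "x9!fLq27"))    # True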

    Client-server architectures suggest a second-best approach to security. Absent constant vigilance by all users, client computers are hard to protect. They are as numerous as their users (and often as variegated); they often travel or sit in unsecured locations, and they tend to run common applications over commercial operating systems. Client computers are "owned" by their users, who tend to upload their own software, use their own media, and roam their favorite Web sites. This helps propagate viruses (by one account half of the client computers used by the U.S. Army in Bosnia were infected). Traditionally, viruses infected the computers they ran on and little else; but tomorrow's more intelligent versions may learn to flood or otherwise disable networks, and seek out specific information on servers in order to pass it along or corrupt it. Servers, for their part, hold the core objects (information bases, processing algorithms, and system control functions) from which clients can be refreshed. Servers are few in number (which facilitates auditing and monitoring), and they rarely travel. They can be secured behind physical walls and semantic firewalls. They are "owned" by their institutions and thus unlikely to host unnecessary applications. They are also more likely to run proprietary or heavyweight operating systems, which are inherently more secure. A strategy that solves the easier problem of protecting servers may provide information assurance; however, network servers must also be protected for assured service, and they tend to run commercial network operating systems, which are inherently more vulnerable.

    The head of the Computer Emergency Response Team (CERT) once estimated that well over 90 percent of reported break-ins involved exploitation of known and uncorrected weaknesses of the target system.[24] Most of the remainder used methods understood to be theoretically possible, even if the precise algorithm was unknown.

    Because the operating systems of most PCs and workstations assume a benign world, rewriting them to secure them against the best hackers is difficult; the more complex the software and security arrangements, the greater the odds of a hole. In security, the primitive is often superior to the sophisticated: there are fewer configurations to test.[25]

    Yet a virtual stock exchange (e.g., NASDAQ) may be secured from attack with more confidence than the real one can be (e.g., the floor of the New York Stock Exchange). In the virtual world, technology permits owners of a system to control all its access points and examine in detail everything that comes through. In the physical world, public streets cannot be so easily controlled, moving items cannot be so confidently checked, and proximity and force matter.

    Perhaps the most misleading guide to protecting information systems is the myth of the superhacker, the evil genius capable of penetrating any system. Militaries have conventionally been built on the understanding that there is no perfect defense or offense: no wall, however thick, will withstand a battering ram of sufficient size (and no battering ram, however strong, can go through a sufficiently thick wall). The analogy to computer systems is specious. Systems are entered because they have holes open to some combination of bytes. What matters is the placement and distribution of those holes, not how persistently or creatively an attacker forces them.

    The NII's Vulnerability Should Not Be Exaggerated

    How vulnerable is the national information infrastructure? No one really knows. Are publicized incidents of phone "phreaks," Internet hackers, and bank robbers the tip of the iceberg? Common wisdom is that victims do not talk about being had, but Citibank's decision to prosecute the perpetrators of, rather than cover up, a fairly large computer crime ($400,000 was transferred to and withdrawn from the accounts of the perpetrators, and another $10 million had been waiting in them for withdrawal) suggests a change in perception as well as prospects for public reporting.[26]

    What does computer crime cost? The Federal Bureau of Investigation's (FBI) best estimate is between $500 million and $5 billion -- in the same league as cellular telephone fraud (roughly $1 billion) and private branch exchange (PBX) toll-call fraud.[27] One should not make too much of any such estimates. Most embezzlement in the 1990s is computer crime, because computers are where financial records are kept, but embezzlement predates the computer. The cost of a stolen phone call is much less than its price (most phone systems have excess capacity; the price of a call includes services, such as billing, that do not apply to stolen calls; and many callers would have foregone the call if they had to pay). The cost to a corporation of having its research and development (R&D) looked at by competitors may be impossible to assess, but it is easy to assign an outsize figure to it.[28]

    How frequent are Internet attacks? One way to calculate is to start with the 1,200 reports CERT received in 1995.[29] In the early 1990s, the Defense Information Systems Agency (DISA) used publicly distributed tools to attack unclassified defense systems and succeeded 90 percent of the time. Only five percent of all victims knew they were being attacked, and of those that knew, only two percent reported the attack. If this 1000:1 ratio is indicative (and Navy tests echo it), then 1,200 reports suggest that the Internet suffers a million break-ins a year (even if few do real damage). Using similar methodology, DISA estimated that in 1995 DOD computers alone were attacked a quarter million times.[30]
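
    The arithmetic behind that ratio is worth making explicit; a few lines suffice, using the figures quoted above:

      detection = 0.05                 # share of victims who notice the attack
      reporting = 0.02                 # share of those who then report it
      report_rate = detection * reporting        # 0.001: one report per 1,000 incidents

      cert_reports = 1_200             # CERT reports in 1995
      print(round(cert_reports / report_rate))   # 1,200,000 -- "a million break-ins"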

    The Internet, with its benign assumptions, is hardly indicative of systems in general. It is rarely used for mission-critical tasks (with military logistics perhaps the most glaring exception), and if it were to become a mission-critical system for which compromise would be a serious problem, the Internet would need to evolve and would necessarily become more secure.[31] Were a hacker to get on the Internet and, through it, bring down the network at NDU, where I work, the consequences would be indistinguishable from the many outages occasioned by accident or maintenance problems. Anyone breaking into NDU's computers for information (none of it is classified) would find, at best, only draft copies of papers that their authors would be more than pleased to have circulate on request.

    One reason computer security lags is that so far incidents of breaking in have not been compelling. Although many facilities have been entered through their Internet gateways, the Internet itself was brought down only once (by the 1988 Morris worm). No large phone or power distribution outage has been traced to hacking (the most serious incident affecting telephones occurred in the Northeast and Los Angeles in 1991, and it was traced to a faulty software patch). There is no evidence that any financial system has ever had its financial integrity put at risk by a hacker attack. A parallel may be drawn with the security of the United States' rail system: unprotected rural train tracks are easy to sabotage, and with grimmer results than virtually any network failure, but until the Arizona train crash in 1995, such sabotage had not occurred in fifty years.

    A system that is easy to abuse in one way may be difficult to abuse in another. In the U.S. phone system, it is not the thousands of switches that must be guarded but the few hundred signal transfer point (STP) computers. Phone phreaks attack by getting into and altering the many databases that indicate the status of calls and phone numbers. Presumably, with enough alterations, area telephone service could be terminated, but only as long as the databases remain altered. Planting a bug in the computer's operating system is harder. Even though STP computers are interconnected through Internet protocols, serious study suggests that one STP computer would have difficulty infecting another.

    Can a nation's stock market be destabilized by scrambling the trading records of the prior day (as in Tom Clancy's novel Debt of Honor[32])? Possibly, but it is easy to forget how many separately managed computers record most stock transfers (e.g., the exchange's, each client's, each broker's, the company's own). Archiving every transaction to an occasionally read archival medium (CD-ROM or even printouts) could foil most after-the-fact corruption, detect consistent in-the-process faults, and perhaps reveal deliberately intermittent error.
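
    The archiving idea is simple enough to sketch: hash each day's transactions onto write-once media, then during an audit re-hash the live records and compare. The record formats and names below are illustrative assumptions:

      import hashlib, json

      def digest(transactions: list) -> str:
          """A single hash over a canonical rendering of the day's records."""
          canonical = json.dumps(transactions, sort_keys=True).encode()
          return hashlib.sha256(canonical).hexdigest()

      day_trades = [{"acct": 42, "buy": "XYZ", "shares": 100}]
      archived = digest(day_trades)          # burned to CD-ROM or printed out

      # Later audit: recompute over the live database and compare.
      live = [{"acct": 42, "buy": "XYZ", "shares": 100}]
      assert digest(live) == archived, "records were altered after the fact"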

    Can an individual's assets be stripped by erasing a bank account? A bank account is essentially an obligation by the bank to repay the depositor. That obligation persists even when the bank's record of an account cannot be found.

    Finally, the reliability of a system involves factors other than its security holes: the system's ability to detect its own corruption, the existence of backup data files and capabilities, its overall robustness (including redundancy in routing), and its ability to restore its own integrity and raise its own security level on short notice.

    Yet, in spite of all the measures sketched here, and measured against plausible rather than mythical dangers to systems, the truth is that computer security remains too weak in too many places to withstand systematic attack. Systems were once thought safe because really brilliant hackers were scarce. By 1995, easy-to-use attack tools were circulating on informal public networks for hackers to find and use.

    Information Attacks Offer Few Obvious Strategic Gains

    Although important computer systems can be secured against hacker attacks at reasonable cost, that does not mean that they will be secured. Increasingly common and sophisticated attempts may be the best guarantor of the security of national computer systems. If the absence of important incidents lulls systems administrators into inattention, an entrée is created for some group to launch a broad, simultaneous, disruptive attack across a variety of critical systems. The barn door closes, but the horse is gone. For this reason, a sequence of pinpricks or even a steady increase of attacks is the wrong strategy: it creates its own inoculation. Strategic effectiveness requires attacking an infrastructure in force and all at once.[33]

    A key distinction is between a purposeless attack and a purposeful one. Japan's attack on Pearl Harbor was successful (at least in the short run) not because so many ships were sunk and sailors killed or wounded but because the United States had been immobilized while Japan conquered large chunks of Southeast Asia and the Pacific. An attack on the NII that left an opening for strategic mischief would be far more damaging than one that merely caused damage. A strategic motive for a digital Pearl Harbor could be to dissuade the United States from military operations (perhaps against the attacking country) or to hinder their execution by disrupting mobilization, deployment, or command and control.

    How much damage could a digital Pearl Harbor cause? Suppose hackers shut down all phone service (and, say, all credit card purchases) nationwide. That would certainly prove disruptive and costly, but as long as recovery times are measured in hours or even days, such an attack would be less costly than such natural events[34] as a hurricane, snowstorm, flood, or earthquake -- events that have yet to bring the country to its knees.[35] How much would the public need to be discomfited before demanding that the United States disengage from the part of the world the attacker cared about? More plausibly, the United States might desist before opponents whose neighborhoods were judged less worthwhile in the face of the difficulty of protecting them. The United States is less likely to withdraw before an opponent whose power to strike the U.S. economic system provides a rationale for why the opponent must be put down.

    Would it have been in North Vietnam's interest to hire hackers to shut down the U.S. phone system in 1966? Doing so would have contravened the message that it was fighting the United States only because the United States was in Vietnamese territory. Such an attack could have compromised support in the U.S. for the disengagement of U.S. forces. It would have also portrayed North Vietnam as an opponent capable of hurting the United States at home, which would have eroded the cautions that limited U.S. air operations against North Vietnam.

    A more pertinent question than how much damage a digital Pearl Harbor might cause is how well hacker attacks can delay, deny, destroy, or disrupt military operations. An enemy in war should be expected to disrupt U.S. military systems as much as possible. But is there enough military gain from a concerted attack on the civilian infrastructure to warrant the risks?

    Clearly, some military functions are vulnerable to attacks on certain portions of the NII. Today's wars require a large volume of communications from the field both to the Pentagon (say, to its Checkmate cell in the basement from the Black Hole cell in Riyadh) and to various support bases, control points, logistics depots, contractors, and consultants. A prolonged power, telephone, or e-mail cut-off would hurt broad command and control. Given the many communications media and dense links in the United States, such a disruption would need to be nearly complete, that is, widespread, coordinated, and largely successful, to have any effect whatsoever -- and only if the DOD had little capacity to transfer vital traffic onto its own systems. Were U.S. commanders to exercise real-time control over operations that depended on commercial telephone lines, then a disruption would be a bigger problem, but establishing military operations with such long and vulnerable tethers is unwise for many other reasons.

    The effect of an extended disruption on troop or supply mobilization is more difficult to gauge; these processes typically take weeks or months to bear fruit.[36] A disruption that lasted hours, rather than days, would probably affect outcomes imperceptibly. Many services can be restored in that time, unless some hard-to-replace physical item was damaged. If a logistics system cannot withstand minor disruption (overnight deliveries aside) with little ultimate impact, it can only have been badly engineered to begin with (disruptions near the point of use being, of course, an expected feature of warfare).

    Can communications be sufficiently disrupted to retard or confound the nation's ability to respond to a crisis overseas? An enemy with precise and accurate knowledge of how decisions are made and how information is passed within the U.S. military might get inside the cycle and do real damage -- but the enemy must understand the cycle very well. Even insiders can rarely count on knowing how information is routed into a decision; in an age in which hierarchical information flow is giving way to networked information flow, the importance of any one predesignated route is doubtful.

    The difficulty of crafting a credible linkage between an NII attack and national security is best illustrated by looking at the most widely quoted scenario, RAND's "Day After in Cyberspace."[37] More than twenty incidents befall U.S. and allied information infrastructures, many stretching the limits of plausibility (e.g., three separate incidents tied to identical logic bombs, the simultaneous blackout of the Washington area's heterogeneous phone systems, rogue subcontractors affecting what in real life are triply redundant systems, market crashes following manipulation of a hypothetical central computer). Yet, in the end, except for a potential for mass panic, facts on the field of combat (in this case, in the Persian Gulf) are scarcely affected.

    The Provision of Systems Security Is Inescapably Private

    Is systems security a problem whose solution should be socialized, rather than remain private? Consider a hypothetical scenario in which a refinery blows up and damages its neighborhood. The responsibility of the refiner for external damage ought reasonably to vary according to what caused the original damage[38] (even if the perpetrator and any supporting nations or institutions can be identified and subjected to the force of law or state action).

    Most of the NII is in private hands; if its owners bear the total costs of system failure, they have all the incentives they need to protect themselves. But public consequences would follow the disruption of certain systems: phone lines, energy distribution, funds transfer, and safety. If the threat is great enough, then they have to be secure -- even at the cost of yanking the control systems off publicly accessible networks. Often, less costly remedies (e.g., more secure operating systems) suffice. Even primitive solutions are cheap compared with other steps the United States takes to protect itself (e.g., nuclear deterrence). That said, the number of critical sectors is growing. Credit-card validation is becoming as critical as funds transfer to the hour-to-hour operation of the economy. Automated hospital systems are evolving toward mission-critical safety systems.

    Should there be a central federal policymaker to guard the NII? If so, who? The DOD has both the resources and the national security mission, but its expertise is concentrated in the very agency fighting the spread of one of the most potent tools of security, encryption.[39] The military's approach -- avoiding new systems that fail to meet military specifications -- is costly when applied to technology with short life cycles and difficult when applied outside command-and-control hierarchies. The National Institute of Standards and Technology (NIST), the second choice, has the talent but neither the funding nor the experience in telling other federal agencies what to do. Beyond the DOD and NIST, expertise thins and the mission fits poorly.

    The concept of a single government commander for information defense is, anyway, a stretch. Any attempt to "war-room" an information crisis will find the commander armed with buttons attached to little outside immediate government control. Repair and prevention are largely in the hands of system owners, who manage their own systems, employ their own systems administrators, and rarely need to call on shared resources (so there is little need for central allocation). Little evidence exists of recovery or protection synergy which cuts across sectors under attack (say, power companies and funds transfer systems). The other problem with a single set of security policies is that each sector differs greatly not only in its vulnerabilities and in what an attack might do but, more important, in how government can influence its adoption of security measures (e.g., some sectors are regulated monopolies). Seeing to it that various private efforts to defend themselves are not at odds can help. A high-level coordinator could ensure that the various agencies do what they are tasked to do; lower level coordinators could work across-the-board issues (e.g., public key infrastructures). Beyond these, no czar is needed.

    Some Things Are Worth Doing

    Because even the privately owned NII is, in a sense, a public resource, a role for the government may be warranted, but this role must be both circumscribed and focused. Here are ten suggestions for ways of doing so, all of which can be addressed simultaneously.

    1. Figure out how vulnerable the NII really is.[40] What can be damaged and how easily? What can be damaged by outside attack; what is vulnerable to suborned or even malevolent insiders? For which systems might attacks be detected as they occur and by what means? What recovery mechanisms are already in place to recover operations after a disruption -- or after an act of corruption? How quickly can systems be patched to make them less vulnerable? Similar questions can be asked about the military's dependence on commercial systems. How thorough would outages of the phone and Internet need to be to cripple military system operations, and how would they do so: by affecting operations, cognitive support to operations, logistics (if so, only internal to the DOD or also external), mobilization? What alternative avenues exist for military communications to go through? What suffers when the 95 percent of military communications that otherwise go through public networks have to travel on the DOD-owned grid? Further questions concern the software suites on which the NII runs: for instance, does today's Unix need replacement, or are known fixes sufficient? How useful are test-and-patch kits for current systems?

    2. Fund R&D for enhanced security practices and tools and promote their dissemination through the economy. The United States spends $100 million a year in this area of R&D (divided among the Defense Advanced Research Projects Agency [DARPA], NSA, and other agencies) to make operating systems more robust and to develop cryptographic tools, assurance methodologies, tests and, last but not least, standards. The technology to secure systems already exists; what does not is the knowledge of how to make it automatic, interoperable, and easy to use. Cyberspace may need an equivalent of Underwriters Laboratories, capable of developing standard tests for the security of information systems.

    3. Take the protection of military systems seriously. Any nation at war with the United States should be assumed to want to attack military systems (especially unclassified systems for logistics and mobilization) in any way it can -- and hacker attacks are among the least risky ways of doing that. The government should assume that foreign intelligence operatives are, or soon will be, probing U.S. systems for vulnerabilities. The DOD should also be concerned about systems in contractors' hands and defense manufacturing facilities. The government could stipulate by contract that those who supply critical goods and services for the U.S. military (even phone companies) should have a reasonable basis for believing their systems are secure. Perhaps the DOD needs methods to validate a hardware or software vendor's source code that would also assure that the vendor's commercial secrets are safe.

    4. Concentrate on key sectors -- or on the key functions of key sectors (telecommunications, energy and funds distribution, and safety systems). Because the government cannot protect these systems, it may have to persuade their owners (through technology assistance, or its bully pulpit) to take security and backup seriously. Several organizations are useful for discussing mutual security concerns: Bellcore or the National Security Telecommunications Advisory Council for phones; the North American Electric Reliability Council or the Electric Power Research Institute for power plants. Odd as it may sound in a digital age, critical systems should have ways to revert to manual or at least on-site control in emergencies.

    5. Encourage dissemination of data on threats and compilation of data on incidents (CERT already does a good job for the Internet). Raw data may need to be sanitized so that investigations are not compromised and innocent systems are not maligned. Effective protection of the public information infrastructure inevitably involves public policy, and public policy that relies on "if you knew what I knew" cannot long be viable.

    6. Seek ways to legitimize "red-teaming" of critical systems, in part by removing certain liabilities from the unintended consequences of authorized testing. Nondestructive testing of security systems may be insufficient until the state of the art improves; that is, only hackers can ensure that a system is hacker-proof. Unfortunately, hackers are neither the most trustworthy nor the most systematic examiners, and tests can go wrong (the Morris worm propagated more quickly than its author intended because somewhere in its program "N" got confused with "1-N"). Such systems should be tested both with and without on-site access permitted (the latter to simulate national security threats).

    7. Bolster protection of the Internet's routing infrastructure -- not because the Internet is itself important but because protecting it is relatively cheap. Critical national and international routers should be made secure, and the Domain Name System should be spoof-proof. This is different from protecting every system on the Internet, which would be both very expensive and the proper purview of system owners.

    8. Encourage the technological development and application of digital signatures, in part by applying them to security systems and not just to electronic commerce. Supportive policies may include research on public key infrastructures, enabling algorithms, and purchases that create a market for them.

    9. Work toward international consensus on what constitutes bad behavior on the part of a state and what appropriate responses might be. Consensus would permit the rest of the world to adopt a common policy against states that propagate, abet, or hide information attacks by limiting those states' access to the international phone and Internet system, much as international consensus permits trade restraints. That said, proof that a state has sponsored information attacks will be difficult to establish, and a state embargoed on suspicion may often be able to convince itself and others that it has been singled out for sanctions for other reasons.

    10. Strengthen legal regimes that assign liability to systems owners for the secondary consequences of hacker attacks. Needless to add, the owners should be able, if at all possible, to recover costs from perpetrators.[41] At the current state of technology, however, it would have a chilling effect on networks if their owners were held responsible for attacks unwittingly perpetrated through their systems (e.g., a hacker gets into one network in order to penetrate a second one).

    Things to Avoid

    Perhaps more important than figuring out what to do is figuring out what not to do. Here are six things to avoid.

    1. Avoid harping on information warfare to the extent that warfare becomes the dominant metaphor used to characterize systems attacks (much less systems failures). Porting precepts of interstate conflict to computer security can remove responsibility for self-defense from those whose systems are attacked. Protection from attack in cyberspace should not be yet one more entitlement from the government.

    Once something is called war, the responsibility of the victim for the consequences of its negligence is dissipated. A phone company that may need to compensate customers for permitting hackers to harm service should not be able to claim force majeure as a victim of information warfare. Characterizing hacker attacks as acts of war creates pressure to retaliate against the hackers, real or imagined. Reasonable computer security is sufficiently affordable that the United States should never be forced to go to war to protect its information systems. Finally, promoting paranoia is poor policy, especially when systems crash often enough on their own.

    2. Limit the resources expended on looking for a threat. Crime requires means, motives, and opportunity. Means -- the cadres of hackers with some access to connectivity -- can be assumed. Of all Ph.D. degrees awarded in computer security by U.S. universities, 60 percent went to citizens of Islamic or Hindu countries. The United States needs to put some effort into specific motives, so as to forecast plausible patterns of attack by other nations (in order to know which security tasks are most urgent). Most of the information-collection effort should go toward opportunity -- assessing U.S. vulnerabilities so that they can be fixed.

    3. Ignore the seductive appeal of automatic retribution software. Militaries are built to hit back rather than prosecute; by this logic DOD systems could protect themselves by downloading a disabling virus into a hacker's computer system. Yet, assume, despite serious technical obstacles, that the approach works. Imagine, then, a hacker breaking into, say, CNN's computers, and from there into DOD. A DOD system instantly retaliates by dropping a virus into CNN, which, understandably, objects. Consequences ensue.

    4. Don't sacrifice security to other equities.[42] It is difficult to see how the NII can be secure without the use of encryption, yet the government is loath to encourage the proliferation of encryption (witness the Clipper chip and export controls). Controversy over encryption has compromised the government's credibility in securing the NII.

    5. Remember that too great an emphasis on adopting today's security practices may keep systems from taking advantage of tomorrow's innovations (e.g., for collaborative computing). Some systems (e.g., those that control dangerous devices) must be secure regardless and, yes, many anticipated innovations have security problems that must be attended to. But the systems field is too dynamic for a straitjacket approach.

    6. Respect heterogeneity; it makes coordinated disruption harder to achieve and preserves alternative paths. Common industry approaches to security matter less than standard protocols and software hooks to algorithms for standard security functions.

    Conclusions

    Who should defend cyberspace? The case for assigning cyberspace defense to the DOD arises from the ill-considered prediction that cyberspace attacks could become the predominant feature of 21st-century warfare (it is difficult enough to construct a scenario in which such attacks have little more than nuisance value).

    Is cyberspace, in fact, a space that can be defended -- or is it a set of largely private spaces that traffic in bytes from other largely private spaces? No good alternative exists to having system owners attend to their own protection. By contrast, having the government protect systems requires it to know details of everyone's operating systems and administrative practices -- an alternative impossible to implement, even if it did not violate commonly understood boundaries between private and public affairs. In cyberspace, forcible entry does not exist, unless mandated by misguided policy.
