DCOM - Chapter 2
DEFENDING CYBERSPACE AND OTHER METAPHORS MARTIN LIBICKI
Deterring Information Attacks
Sections
- Elements of Deterrence
- Defining the Incident
- Determining the Perpetrator
- Certainty of Response
- Conclusions

A nation can defend its information infrastructure by denial, detection (with prosecution), and deterrence. Denial frustrates attacks by preventing them or limiting their effects. Detection followed by prosecution of the attacker inhibits attacks and takes the attacker out of circulation. Deterrence is the threat that a nation (or analogous entity[43]) can be punished for sponsoring such an attack.
Denial and detection are straightforward. No one argues that computer systems ought to be vulnerable to attack and penetration. Most detected cases of hacker warfare are crimes and therefore merit punishment.[44] Denial and detection may be less than satisfactory responses, however. Defenses, from one perspective, are good, but only up to a point. Although they can deny casual attacks, they fall before full-scale ones backed by the resources only a nation or some similarly financed transnational criminal organization (TCO) could provide. The ease with which hackers can attack a system from anywhere around the globe without leaving detectable virtual fingerprints suggests that the risk of punishment is low.[45] Hackers supported by foreign governments may be detected but later hidden (perhaps by allied TCOs), or discovered but beyond the reach of extradition.
Should deterrence be part of a nation's information defense strategy? At a workshop sponsored by the Center for Advanced Concepts and Technologies,[46] more than two-thirds of participants replied with a strong "yes" to the question, "Should the United States have a declarative policy about its response to information warfare attacks?"
The term "deterrence" and its cousin "graduated response" appear to be leftovers from the Cold War, and if information warfare is regarded as an aspect of strategic warfare, they may well be. During the Cold War, the United States developed and adopted a policy of strategic nuclear deterrence, in essence, a warning to those who would attack to expect an attack in return.[47] Deterrence is commonly believed (if impossible to prove[48]) to have worked -- at any rate, the homeland of the United States was not attacked by a foreign force using either nuclear or conventional weapons. By analogy, analysts have wondered whether a strategy similar to deterrence could ward off attacks on critical U.S. information systems.
The argument here is that an explicit strategy of deterrence against attacks on the nation's information infrastructure is problematic and that little would be gained from making any such policy at all specific.
Need the United States declare that it reserves the right to strike back against an information attack? Any state that perpetrates harm to the U.S. homeland can already expect retaliation. After the bombing in Oklahoma City, an early false lead suggested a tie to radical Islamic states. In the Middle East the consensus was that the United States would retaliate in force if the lead were solidified by evidence: had Iran, for example, attacked an information system and caused casualties (e.g., an induced Federal Aviation Administration [FAA] outage, a badly set switch in a rail system), the United States would have retaliated as well. A destructive attack without casualties also could invite retaliation. Who would believe an attacker's protestation that reprisals were unwarranted because information terrorism was never officially listed as an actionable incident?
The United States has never made clear its equation for how much harm from a violent incident merits how much retaliation. Sometimes the identity of the perpetrator makes a difference. The attack by the United States on Libya in 1986 would have incurred a greater risk if executed against a nation equipped with nuclear weapons (China) or capable of causing considerable mischief (North Korea). By contrast, Cold War U.S. nuclear retaliatory policy could be applied against any foe; designed for use against the Soviet Union, it could easily have been applied to a lesser aggressor.
Richard Hayes has outlined several prerequisites to the success of a strategy of deterrence.[49] Three concern explicit deterrence:
- The incident must be well defined.
- The identity of the perpetrator must be clear.
- The will and ability to carry out punishment must be believed (and cannot be warded off).
Two concern deterrence in kind:
- The perpetrator must have something of value at stake.
- The punishment must be controllable.[50]
Should information attacks be punished by information counterattacks? Several factors argue yes. First, punishment in kind makes obvious what is being responded to. Second, it obviates difficult questions of moral equivalence (e.g., how many lives are equal to disruption of a credit-card validation system?). Third, restricting the response to the same channel limits the action-reaction cycle (and might keep the damage below what a conventional war, much less a nuclear war, could cause). If there were an information-warfare agency to handle retaliation (as in a spy-for-spy exchange, or the expulsion of someone else's diplomat in retaliation for expulsion of one's own), that might keep more powerful and dangerous institutions out of the game. Yet hacking computers to punish computer hacking would erode any moral argument the United States might make about the evils of hacking[51] -- even if it did satisfy the desire to render "a taste of your own medicine."
The two factors against retaliation in kind are asymmetry and controllability. If a nation that sponsored an attack on the U.S. infrastructure itself lacked a reliable infrastructure to attack, it could not be substantially harmed in kind and therefore would not be deterred by equal and opposite threat. North Korea, for example, does not have a stock market to take down; phone service in many Islamic terror-sponsoring states is already hit-or-miss. Controllability -- the ability not just to achieve effects but to predict their scope -- is difficult. To predict what an attack on someone's information system will do requires good intelligence about how to get into it, what to do inside, and what secondary effects might result. The more complex systems become, the harder predicting secondary effects becomes -- not only effects inside the system but also outside it or even outside the country. Retaliation may produce nothing, may produce a nothing that can be made to look like something, may produce something, may produce everything, or may affect third parties, including neutrals, friends, or U.S. interests. The NII, after all, is growing increasingly globalized.[52] Without the ability to control the size or nature of effects, graduated response is almost meaningless.
The difficulties involved in the three issues remaining to be discussed here -- defining the incident, determining the perpetrator, and delivering retaliation -- can be illustrated by eight vignettes. Note that retaliation against physical terrorism is a cleaner concept to apply (at least based on the first two criteria[53]) than retaliation against information attacks; yet it has been less than clearly successful as a policy.
What criteria should differentiate an actionable information warfare attack from one that is ignored? Nuclear events (even the smallest ones) are obvious (and rare); any hostile nuclear event can be declared actionable. Hacker attacks -- information warfare in microcosm -- are numerous and for the most part trivial. There may be a million break-ins on the Internet every year (see page 24). Most are home-grown, although some originate overseas -- a fraction of which may be state-sponsored. Most of the million are pranks and do no damage. Even if damage is done, usually it is scarcely more than an annoyance. And even if either is grounds for individual punishment, it does not necessarily follow that they are sufficiently grave grounds for international retaliation. To retaliate against every break-in (even every state-sponsored break-in) would tax the principle of proportionality. Defining an actionable incident means determining how much harm is enough.[54]
Loss of life might be one threshold -- clearly, a hacker attack on a railroad switch that caused a fatal collision would be actionable. Yet fatalities are often only indirect results of the intended damage.
Should economic loss beyond a certain threshold (e.g., stock trades muddled) trigger retaliation? A threshold may be arbitrary, and no measure of the effect of an incident may be exact. What is the cost of preventing credit card purchases for a day? If forced to use other means of payment, some customers might use cash, others might come back another day, and still others might never make the intended purchase. Which result best measures the loss to the economy? The sum of all salaries of people not working? Or of those not working productively? How would one measure the loss of corrupted data? Would it be the time required to restore the integrity of the data, or the damage to the integrity of the system corrupted? Two vignettes illustrate some of the potential problems involved.
Vignette 1: What types of information warfare are actionable? A U.S. company bids against an Asian company to supply a telephone system to a third party. A member of the Asian country's intelligence service hacks into the computer of the U.S. company, determines the amount of the U.S. bid, and tells its own country's company, which undercuts the bid, takes the contract, and costs the United States thousands of potential jobs. Is this an actionable instance of information warfare -- and, if so, in what domain (e.g., is it spy-versus-spy)? When French intelligence officials were suspected of spying on U.S. firms, the United States retaliated by using its agents to acquire information about French firms (and got caught doing so).[55] During recent trade talks with Japan on automobiles, it was revealed that U.S. signals intelligence found valuable information on the Japanese negotiation strategy.[56] Was this information warfare? Had the tables been turned, how would the United States measure damage done to its interests in order to determine whether a threshold that could trigger retaliation had been crossed?
Vignette 2: Can damage be measured? The control centers of the FAA suffer from serious service outages that cause increasing flight disruptions and thus economic loss. At some point, someone checks the integrity of the FAA computer system and finds signs of hacker intrusion. The hackers are identified unambiguously and so -- as is rarely possible -- is the time of first penetration. Even after the operating software is cleaned up, considerable controversy surrounds any attempt to determine what damage, if any, was caused by the intrusion. If the 1990s are any indication, the FAA's current system is susceptible to increasing outages. Until an outage is linked to specific alterations in the system's code, the only way to gauge the independent effect of the attack on system uptime is to use statistics. Statistical methods can produce a range of conclusions that vary with the model used to estimate downtime in the absence of attack. If the outage did not cause an accident, but might have, would creation of a potentially life-threatening hazard be grounds for retaliation?
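The attribution problem in Vignette 2 can be made concrete with a toy calculation. All figures below are hypothetical: the point is only that the estimate of attack-attributable outages depends heavily on which baseline model one assumes for what downtime would have occurred absent the intrusion.

```python
# Hypothetical sketch: two baseline models for pre-intrusion outage rates
# yield very different estimates of "excess" (attack-attributable) outages,
# illustrating why statistical attribution gives a range, not an answer.

pre = [3, 4, 4, 5, 6, 6]   # monthly outages before first penetration (hypothetical)
post = [8, 9, 9, 10]       # monthly outages after first penetration (hypothetical)

# Model A: flat baseline -- assume the pre-intrusion mean would have held.
mean_pre = sum(pre) / len(pre)
excess_flat = sum(post) - mean_pre * len(post)

# Model B: trend baseline -- outages were already rising, so project the
# pre-intrusion linear trend forward (least-squares fit done by hand).
n = len(pre)
xs = range(n)
x_bar = sum(xs) / n
slope = sum((x - x_bar) * (y - mean_pre) for x, y in zip(xs, pre)) \
        / sum((x - x_bar) ** 2 for x in xs)
intercept = mean_pre - slope * x_bar
projected = [intercept + slope * (n + i) for i in range(len(post))]
excess_trend = sum(post) - sum(projected)

print(f"Excess outages, flat baseline:  {excess_flat:.1f}")
print(f"Excess outages, trend baseline: {excess_trend:.1f}")
```

With these made-up numbers, the flat baseline attributes roughly 17 outages to the attack while the trend baseline attributes fewer than 5 -- the kind of spread that fuels the "considerable controversy" the vignette describes.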
If an information attack were distinguished from background noise, the perpetrator caught, and an obvious chain of evidence found pointing to command by, or at least assistance from, a foreign government, then something actionable would have occurred. But how often can an attack be traced unambiguously? Perpetrators rarely leave anything as identifiable as fingerprints. Criminals often have habits that increase the chance of their being caught -- they brag, they return to the scene of the crime, they inflexibly adopt a particular method, they do not clean up their signatures -- but these are not hallmarks of professional operators. Because cold, professional hacking incidents are rare (or known ones are), the chance of detecting a carefully laid plan is unknowable. Even were the perpetrators caught, tracing them to a government is hardly guaranteed: hackers neither wear uniforms nor require enormous resources or instruments hard to find outside government hands (e.g., a tank).[57]
Vignette 3: Can the United States reliably tag an obvious foe as perpetrator? Jordan comes under military pressure from Iraq, and the United States ponders intervention. Suddenly, a series of mysterious, hacker-caused blackouts plagues major U.S. cities. No perpetrator is identified, but both Hamas and Hezbollah take credit for the blackouts. It seems clear that the attacks were motivated by Iraq, as a warning to the United States not to become involved on Jordan's side. Or was it Iraq? Iran, to whom the United States is still the "Great Satan," might have a double motive -- to hurt the United States and draw it into conflict with its own rival, Iraq. Jordan would want the United States to take the crisis seriously and intervene on its side. Israel could want the United States to support Jordan (e.g., to see a greater U.S. presence over the horizon). Adding a wild card, North Korea, having just engineered a peace offensive, could have reason to create incidents that make it look benign in contrast to those the U.S. government looks likely to blame. Or maybe Hamas or Hezbollah were telling the truth after all. By analogy, after Pan Am Flight 103 exploded over Lockerbie, Scotland, in 1988, Libya, Syria, and Iran were all suspected of being responsible, until Libya's refusal to extradite suspects focused attention on its possible role. The United States lacks the luxury of a single foe assumed to be lurking behind every information warfare attack.
Vignette 4: How reliably can state sponsorship be determined? As anti-Western sentiment increases in Moscow and Russia seeks to define a foreign policy independent from the West, the U.S. telephone system is hit by disruptive outages. The hackers who are caught prove to be recent immigrants from Russia connected to the Mafiya, which is, in turn, connected to the government in Moscow.[58] Should the government in Russia be held accountable? Many governments have ties to transnational criminal organizations. To some extent this reflects the corruption of government by crime, but governments could also use criminal organizations in lieu of their own official organs.[59] If a government could choose between perpetrating an attack through its own organs and contracting it out, most would take the latter option quite seriously.[60] A contractor's reliability might be questionable, but contractors often have effective ways to keep their own employees in line.
Vignette 5: Can state sponsorship be assumed even when evidence leads back to state officials? The KGB admits that the Mafiya (Vignette 4) is linked to a KGB unit. The Russian government concedes it is having difficulty reestablishing control over the unit (which says, but cannot prove, that it was acting on KGB orders). History is replete with examples of free-lancing intelligence units being tolerated because official involvement would complicate deniability. Russia's rationale looks plausible, so the incident is not considered actionable -- but is this view accurate? If a rogue commander were to launch a nuclear weapon, a government could be held responsible for near-criminal negligence in the command and control of dangerous equipment. Yet do computers used for hacking qualify as dangerous equipment? Shrugging off a rogue battalion that is invading its neighbor is more difficult, because an attack that large cannot be undertaken without government complicity. But must a serious hacking incident be that resource-intensive? A few bright hackers might suffice.
A policy of deterrence presumes incident and response are tightly linked. But is it wise policy to promise a response, regardless of the identity of the perpetrator? One would not want a retaliatory policy with no flexibility whatsoever; yet clarity is the hallmark of deterrence and sophistication tends to cause blurriness.
U.S. strategic retaliation policy designed during the Cold War was aimed at a single tough adversary; other potential attackers were lesser cases. In information warfare, there is no canonical foe and no lesser case. Ordinarily, retaliation serves to deter the recurrence of incidents, yet the United States is vulnerable to attacks because systems security is weak, and weak systems security reflects the perception that potentially damaging attacks are rare. A sufficiently nasty attack might catch people's attention and promote security; a second attack would therefore be harder to pull off.
The next three vignettes differ only in the identity of the perpetrator, as a way of exploring how that identity shapes the nature of the response and, with it, the certainty of a sufficiently serious response.
Vignette 6: Can retaliation be perceived as a mere excuse for military action by the United States? A hacker attack on the primary U.S. funds transfer system causes it to shut down while system faults that led to corrupted records are traced and eradicated. Before order is restored, the extended shutdown of the system leads to widespread layoffs, bankruptcies, and cascading panic. The crime is traced to agents of the Iranian government, and the United States retaliates with air strikes against Iran's nuclear infrastructure, setting back the presumed Iranian weapons program by ten years. In retaliation, Iran attempts to close the Straits of Hormuz, which the United States reopens, but only after some fighting and a steep hike in oil prices at home. When the dust settled, retaliation was judged worthwhile because it deterred further attacks on the funds transfer system. Yet the United States has a long history of worrying about Iran's nuclear program and its potential threat to oil flows near the Straits of Hormuz. Retaliation for the attack on the funds transfer system seemed to provide a convenient pretext for doing what was otherwise useful.
Vignette 7: Are there countries against which the United States should hold its fire even if provoked? Consider that North Korea was responsible for the attack on the funds transfer system. If sufficiently irked by U.S. retaliation, North Korea is in a position to cause considerable damage to South Korea. North Korea has artillery overlooking Seoul and forces it can send (and has sent) southward; it probably has nuclear weapons. Is the United States willing to risk a second Korean War over an incident that might have been thwarted had a few million dollars more been invested in security? Would the United States be comfortable having to explain that calculus to anyone else? Would investing to secure the nation's critical systems be less costly and risky than planning for a retaliatory act whose consequences cannot be controlled? In the end, the United States does little (just as South Korea did little in response to the assassination of its top officials in Rangoon in 1983, and to the destruction of one of its airliners in 1987). Inaction was rationalized by the perception that the government of North Korea was probably declining as a threat to its neighbors and would fall in due course.
Vignette 8: Are there countries whose activities ought to be ignored as short-term irritants? This time Serbians are responsible. Again, retaliation is considered and again rejected. Pressure is put on Serbia to extradite those responsible, but few in the United States expect this request to be given any higher priority than the search for war criminals was. Officials conclude that Serbia's enmity toward the United States will fade as the former Yugoslavia sorts itself out; there is no geostrategic rationale for risking an armed conflict that might result in a cycle of retaliation. Officials are relieved they did not institute a deterrence policy that would have required them to make good on a promise of retaliation.
An explicit specification requires a nation to respond to what, in the case of information attacks, could prove to be gauzy circumstances. Lack of a specification does not prevent ad hoc retaliation.
It is difficult to see how an explicitly declared deterrence policy could be made to work, but it is easier to see what the problems are in trying. A declared policy that could not be reliably instantiated would soon lack credibility. If thresholds were too low or the proof that a nation sponsored terrorism not sufficiently convincing, then retaliation would make the United States appear the aggressor. If thresholds were too high and standards of proof too strict, a policy of retaliation would prove hollow. If the United States were to retaliate against nations regardless of other political considerations, it would risk unwanted confrontation and escalation; if its responses were seen as too expedient, retaliation would seem merely a cover for more cynical purposes.
Is it even obvious that the United States should react vigorously to information attacks? To do so might tell others that they have hit a nerve and raise the possibility that the United States could be hurt enough to be dissuaded from acting in its interests or could become distracted in a crisis. The opposite view, that information attacks are problems only for those too negligent to secure their own systems, would suggest that such attacks are unlikely to alter U.S. foreign policy or its defense posture. This stance might persuade potential opponents that the results of an attack would be of no official concern to the United States -- it could neither affect U.S. policy nor give cheer to U.S. enemies -- and thus would offer them no political gain.[61]