Chapter 5. Security and Integrity Problems

Security used to be
an inconvenience sometimes,
but now it is a necessity
all the time.

MARTINA NAVRATILOVA,
AFTER THE STABBING OF
MONICA SELES BY A FAN
OF STEFFI GRAF, 1993

WE NEXT CONSIDER A VARIETY OF PROBLEMS relating to system security, including confidentiality, integrity, and availability, among other requirements. In Section 5.1, we examine various malicious and other purposeful computer misuses that relate to security and integrity. In subsequent sections, we examine security accidents (Section 5.2) and computer misuse as it relates to applications—including spoofs (Section 5.3), denials of service (Sections 5.4 and 5.5), financial misuse and accidents (Sections 5.6 and 5.7), electronic voting (Section 5.8), and incarceration of inmates (Section 5.9). Privacy issues are deferred until Chapter 6.

5.1 Intentional Misuse

Every system has vulnerabilities.

Every system can be compromised.

There have been many widely publicized cases of computer misuse, including attacks by the West German Wily Hackers, various computer-based credit-card and telephone frauds, increasingly many types and instances of viruses in personal computers, and the Internet Worm. A few perpetrators have received jail sentences.

Just a few of the most interesting cases of malicious computer misuse are noted here. Many others are given in the on-line Risks Forum (see Appendix A) and the quarterly issues of Software Engineering Notes—for example, see [113].

5.1.1 Misuse by Nonauthorized or Unexpected Users

A few representative cases of misuse are included in this section.

The Internet Worm

Late on the evening of November 2, 1988, a program was released that spread itself iteratively across Berkeley Unix systems throughout the Internet (the logical extension of what used to be the ARPAnet, encompassing many other networks) [35, 57, 138, 150, 159]. The worm program exploited three different trapdoors: the debug option of sendmail, the unchecked gets library routine (used in the implementation of the finger daemon, where it permitted a buffer overflow), and remote logins exploiting .rhosts files. It also made a somewhat gratuitous attempt to use a few hundred passwords previously obtained by selected preencryptive matching attacks. The result was a self-propagating worm with viruslike infection abilities that was able to copy itself into many systems and then to run. It moved from system to system, entering by whichever of these flaws it found first.
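
The finger flaw deserves a closer look, because it has become the textbook buffer overflow. The fragment below is a minimal sketch of the unsafe idiom—not the actual fingerd code: the historic gets routine copies its input into a fixed-size buffer with no bounds check, so an overlong request can overwrite adjacent memory, including the return address, which is what allowed the Worm to execute code of its own choosing.

    #include <stdio.h>
    #include <string.h>

    /* Unsafe idiom (sketch): gets() has no way to know the buffer size,
       so overlong input overwrites whatever follows the buffer on the
       stack -- including the saved return address. (gets() was finally
       removed from the C standard in C11.) */
    void handle_request_unsafe(void) {
        char line[512];
        gets(line);                            /* no bounds check */
        printf("request: %s\n", line);
    }

    /* The repair is to bound every read explicitly. */
    void handle_request_safe(void) {
        char line[512];
        if (fgets(line, sizeof(line), stdin) != NULL) {
            line[strcspn(line, "\n")] = '\0';  /* strip trailing newline */
            printf("request: %s\n", line);
        }
    }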

The Internet Worm provides an example of the intentional propagation of effects from system to system throughout a network (but, as noted in Section 4.2, the proliferation within each system was not intended); access to one system was used iteratively to gain access to other systems. Although multiple weak links were exploited, there was essentially only one agent that was causing the iterative propagation, and the four weak links were the same throughout.

The perpetrator, Robert Tappan Morris, was indicted on a felony count, and found guilty. He was sentenced to community service with a fine of $10,000 (SEN 14, 6; 15, 3; 16, 2). At his trial, Morris testified that about 50 percent of his time in developing the worm had been devoted to preventing it from proliferating within any given system, while trying to maintain its presence in case a copy was detected and deleted.

The Internet Worm’s exploitation of sendmail, finger, and .rhosts required no authorization! Debate over whether Morris exceeded authority is therefore somewhat moot.

Seldom has the computer community been so divided by one incident. In the final analysis, there seem to be relatively widespread moderate sentiments that the Worm was a badly conceived experiment that should never have been attempted, and that the sentence of its creator was essentially fair—neither meting out excessive punishment nor failing to provide a deterrent to would-be emulators. But many diverse positions have also been expressed. On one side are people who think that Morris should have received an extensive jail term, or perhaps should have been barred from the computer profession for life, or that companies willing to employ him should be boycotted. On the other side are people who think that Morris became a sacrificial lamb to his country in demonstrating a diverse collection of flaws that had long cried out for greater recognition, or that he was an accidental whistleblower whose most serious mistake was a programming error that enabled the Worm to replicate out of control within each penetrated system, contrary to what he had intended. Then there are people who think that the ARPAnet was a sandbox anyway, whose main original reason for existence was to conduct experiments with networking!

Incidentally, once it was evident that the Worm had run amok, Morris asked a friend to post a message explaining the Worm and how to provide an antidote. Unfortunately, that message did not spread widely because of the network congestion; even worse, the Department of Defense Milnet authorities cut that network off from the rest of the Internet, so that Milnet system administrators were in the dark as to the problem and its potential remediation.

Doug McIlroy [89] expressed concerns that many of the potential lessons may have gone unnoted: “Those who repudiate only the final misdeed implicitly absolve the prior lapses in ethics and oversight that made the worm fiasco possible. They signal that you can get away with building, promoting, distributing, and selling deceitful software as long as you are not caught exploiting it.” The debate still continues, while similar risks persist.

West German cracking activities

West German crackers planted Trojan horses, attacked NASA systems, and exploited flaws in a new operating-system release. A perpetrator was arrested in Paris. After detecting breakins at the Lawrence Berkeley Laboratory, Cliff Stoll expended considerable effort to track down one of the perpetrators, Markus Hess [162, 163]. Hess and the others were also accused of computer espionage for the Soviet spy agency, the KGB. Three of the Wily Hackers were indicted on espionage charges, and received mild convictions for espionage but none for the system attacks.1

Dutch crackers

The Netherlands, which previously had not been particularly active in arresting people for malicious hacking, took two young men into custody on January 27, 1992. They were charged with forgery (changing files), racketeering (masquerading), and vandalism, with a motive of “fanatical hobbyism.”2

“Mad Hacker” jailed

Nicholas Whiteley (self-styled as the “Mad Hacker”) was sentenced on June 7, 1990, to 4 months in jail for malicious damage to property (tapes), perhaps because the British Computer Misuse Act did not take effect until August 29, 1990, and therefore could not be applied to his computer-breakin activities. His appeal failed (SEN 15, 5; 16, 2).

Cracking NASA

Richard G. Wittman of Colorado was accused of two felonies in cracking into NASA computers and of five misdemeanors for interfering with government operations. He was sentenced to a 3-year probation period and to mental health treatment (SEN 17, 1; 17, 3).

Other breakins

There were numerous other reports of arrests for breakins. Nine high-school students in Pennsylvania were arrested along with other U.S. students for breaking into computers and for obtaining millions of dollars’ worth of goods and services using credit-card numbers gained through telephone taps. In the same week in July 1987, two other people were arrested for breakins at Stanford and Berkeley (SEN 12, 4).

Australian activities

Three Australians were indicted for breaking into and tampering with Internet computers in the United States and with other computers in Australia.3 Another was fined $750 for unauthorized access (SEN 15, 3). A 20-year-old computer-science student was charged with unauthorized access and with 47 counts of misusing the Australian telephone system for unauthorized computer access to U.S. computers (SEN 16, 4).

Wollongong but not forgotten

A support engineer who had her employment with the Wollongong Group of Palo Alto, California, terminated for “nonperformance” in November 1987 subsequently used her password (which surprisingly was still valid 2 months later) to offload proprietary source code.4

“Terminus”

Computer consultant Leonard Rose pleaded guilty to federal felony wire-fraud charges for stealing Unix source code. He was also accused of having distributed a Trojan horse program that would allow him to gain unauthorized system access.5

Telephone phreaking and related misuse

From the early days of tone generators to spoof the coin-box bongs for nickels, dimes, and quarters, to “blue boxes” that emit appropriate internal operator tones, to credit-card scams and computer penetrations that enable free telephone calls, the telephone phreakers have been notably successful over the years. Recently, law enforcement has been a little more active in prosecuting the phreakers. An 11-count federal indictment accused five metropolitan New York City youths of computer tampering and fraud, wire fraud, wiretapping, and conspiracy related to misuse of telephones, computers, and credit (SEN 17, 4). Various other indictments have also been reported (for example, SEN 15, 3). Telephone-system penetrations have taken place, involving misuse of switching systems and information systems. Herbert Zinn (“Shadow Hawk”) was accused of breaking into AT&T and government computer systems, and of stealing software (SEN 12, 4; 14, 2). Kevin Mitnick was arrested when he was 17 for breaking into Pacific Bell computer systems and switches, altering telephone bills, and stealing data. He was arrested 8 years later for breaking into a Digital Equipment Corporation computer, stealing computer-security software, and making free long-distance telephone calls.6 There are numerous other cases as well.

Cellular-telephone fraud

Capturing and replaying cellular-telephone identifiers to avoid charges is amazingly simple, and is increasingly popular. The identifying electronic serial numbers (ESNs) can be acquired by using scanners; the stolen ESNs are implanted in “cloned” cellular telephones whose call charges are then billed to the unknowing victim of the stolen ESN. In one early example, 18 people were arrested for altering cellular mobile telephones (SEN 12, 2). The problem of cellular-telephone fraud has become significantly more severe in recent years, and the losses are now on the order of $0.5 billion each year.

Prestel

A British reporter was given a demonstration of how to penetrate British Telecom’s Prestel Information Service. He was shown what appeared to be Prince Philip’s electronic mailbox (which was actually a dummy) and watched a financial market database being altered (again, possibly a demo version). The event was widely reported.7 The subsequent prosecution was the first such case in Britain (SEN 11, 3), but the conviction was reversed by the Appeal Court and the House of Lords (SEN 13, 3).

TV or not TV

A television editor who had been hired away from another station was charged with unlawfully entering his former employer’s computer system. He had originally helped to create the system’s security (SEN 14, 2).

Fox Television discovered that their computers had been penetrated by a journalist who had gotten access to sensitive files (SEN 15, 3).

Debit-card copying

There have been reports of the magnetic stripes on public-transit fare cards (for example, for the Metro in Washington, D.C., and for the San Francisco Bay Area Rapid Transit, BART) being copied repeatedly from high-value cards onto cards with almost no remaining value. Even though the information is encrypted, the copy/playback attack works.
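
The lesson is that encryption alone provides no protection against replay: a bit-for-bit copy of a valid encrypted stripe is itself a valid encrypted stripe, and the copier never needs to decrypt anything. Here is a minimal sketch of the attack logic, with a hypothetical stripe layout (real fare-card formats differ):

    #include <string.h>

    /* Hypothetical stripe image, for illustration only. */
    typedef struct {
        unsigned char encrypted_data[32];   /* encrypted remaining value */
    } stripe_t;

    /* Cloning is verbatim copying: valid ciphertext stays valid. */
    void clone_card(const stripe_t *high_value, stripe_t *exhausted) {
        memcpy(exhausted, high_value, sizeof(*exhausted));
    }

Defenses therefore have to add freshness or external state—for example, a per-card transaction counter checked against a central database—so that a replayed stripe image is recognized as stale.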

Database misuse

A 14-year-old boy from Fresno, California, used “secret” (!) access codes obtained from an on-line bulletin board, and accessed the TRW credit information system, from which he acquired credit-card numbers, with which he charged at least $11,000 worth of merchandise (SEN 15, 1). A 12-year-old boy from Grosse Ile, Michigan, was arrested for tapping into TRW credit files and posting credit-card numbers on electronic bulletin boards (SEN 15, 3). Another 14-year-old from Prairie Village, Kansas, browsed through confidential files of at least 200 companies, and was also able to access an Air Force satellite-positioning system (SEN 14, 6). The ease with which such database accesses can be accomplished suggests the expression “child’s play.”

Voice-mail cracking

The Certified Grocers of California (CerGro) had their voice-mail system cracked. Two hundred voice-mailboxes were reprogrammed for illicit purposes, including a database for stolen credit-card numbers, cocaine prices, and prostitution (SEN 13, 4). The Houston City Hall voice-mail system had been configured with no passwords required; someone rerouted confidential voice-mail messages to nonintended recipients (SEN 16, 4).

Trial by BBoard

In the aftermath of an article he wrote on malicious hacking activities, Newsweek reporter Richard Sandza was subjected to an electronic bulletin-board trial (in absentia, of course), and pronounced guilty. The retribution involved someone accessing the TRW credit database to obtain and post Sandza’s credit-card numbers. As a result, $1100 in merchandise was charged to him, and his home computer was crashed remotely via his unlisted telephone number (SEN 10, 1).

Politicians’ computers get hit

In their Washington, D.C., offices, Republican U.S. Representatives Zschau and McCain each had computer systems penetrated via dialup lines, with letters to constituents deleted and addresses deleted from mailing lists (SEN 11, 2). In New Jersey, a Republican state legislature staffer admitted breaking into the Democrats’ computer and obtaining thousands of documents (SEN 16, 1). Information on about 17,000 of Ross Perot’s supporters disappeared from a campaign computer, although it was not known for sure whether the deletion was done maliciously or inadvertently (SEN 17, 4).

Student grade fraud

In various cases, students have gained access to computers to change their course grades—at Stanford around 1960 and, more recently, in Alaska, at Columbia University, and at the University of Georgia (SEN 8, 5; 16, 3).

5.1.2 Pest-Program Exploitations

There have been many instances of Trojan horses, including time bombs, logic bombs, and personal-computer virus attacks. Indeed, new strains of viruses are continuing to emerge at an alarming rate.

Trojan Horse Exploitations

A few of the more interesting cases involving Trojan horses are included here.

Trojan dog

Richard Streeter of CBS, Inc., downloaded onto his PC a bulletin-board program that was advertised as providing enhanced graphics. Instead, the program wiped out all of his files and displayed “Arf! Arf! Got You!” on his screen.8

Trojan turkey

A program passed around the ARPAnet reportedly drew a nice picture of a turkey on the screen when compiled and run. The program also deleted all unprotected files (SEN 13, 3).

Password-catching Trojan horses

Beginning in fall 1993, Trojan horses appeared in the network software of numerous Internet computers. In particular, telnet, a program that permits logins to and from remote sites, was altered so that it could capture all passwords presented to it. This type of attack is an increasingly serious threat, and is ongoing.9

In the PC-VAN network joining NEC PC9800 computers in Japan, user passwords were captured by a Trojan horse and recorded in encrypted form, to be decrypted later by the attacker (SEN 13, 4).

Emergency system Trojan horsed

A former employee maliciously modified the software of the Community Alert Network installed in New York and in San Jose, California. The software did not fail until it was needed in response to a chemical leak at Chevron’s refinery in Richmond, California. The emergency system was then down for 10 hours (SEN 17, 4).

Beware of smart telephones

A scam was detected involving third-party pay telephones that could capture and record credit-card numbers for later illegal reuse (SEN 16, 3). This type of Trojan horse attack is also seen in a teller-machine fraud noted in Section 5.6.

Time-Bomb and Logic-Bomb Exploitations

Several specific forms of Trojan horses are considered next—namely, a few cases of intentionally inserted Trojan horse attacks involving time bombs or logic bombs. Several of these cases involved attempted blackmail or extortion.

General Dynamics logic bomb

A programmer, Michael John Lauffenberger, was convicted of logic bombing General Dynamics’ Atlas rocket database. He had quit his job, but was hoping to be rehired at a premium when the time bomb went off, because he thought he would be the only one who could fix it. It was discovered accidentally by another programmer.10

New York insurance database logic bomb

While working as a consultant to a New York City law firm to upgrade its medical-payments database system, Donald R. Lewis planted a logic bomb intended to crash the system when presented with a claim numbered 56789. The law firm paid him an extra $7000 to do the repair job (removing a conditional statement from the program), but then must have discovered what happened, because the bottom line was that he had to pay them back the $25,000 he had gotten for the entire job (SEN 17, 4).

Time bomb deletes brokerage records

Donald Gene Burleson was prosecuted on felony charges for planting a time bomb that, shortly after he was fired, deleted more than 168,000 brokerage records of USPA & IRA in Fort Worth, Texas. He was convicted and fined; the offense carried a penalty of 2 to 10 years and a fine of up to $5000.11

Logic bomb in Lithuanian nuclear-power plant

Section 2.10 notes the case of a power-plant employee who installed a pest program in the control software, hoping to be paid to repair the damage.

Typesetter disables employer’s system?

A British freelance printer was accused of taking revenge on a company that refused to pay him £2000, by locking up its computer with a password that only he knew. The company eventually went bankrupt and sued him. He denied the charges.12

Pandair logic bomb

A contract programmer, James McMahon, was accused of planting logic bombs in a Pandair Freight system in the United Kingdom. One locked up terminals, and the other was set to wipe out memory. However, he was cleared of all charges because of insufficient evidence (SEN 13, 1; 13, 2).

Viruses

There are many different types of viruses, and a continual emergence of variants. The number of distinct personal-computer virus strains increased from five at the beginning of 1988 to over 1000 early in 1992, and has continued to grow dramatically since then; numbers over 2000 were commonly cited by mid-1993. The increase in the number of virus strains and the growth of the antivirus industry are both enormous.

Self-modifying (polymorphic) viruses and stealth viruses are emerging that can conceal their existence in a variety of ways, including by distributing themselves in smaller pieces.

Personal-computer viruses would be much less of a problem if personal-computer operating systems had any serious security permitting them to protect themselves against viruses. Note that a virus executes as though it were the legitimate user of the system, and thus is simultaneously a masquerader and a Trojan horse, operating with whatever privileges the user has (that is, normally omnipotent!).

The on-line RISKS archives tracked many of the early reports of personal-computer viruses. However, it rapidly became clear that continuing to do so would be a losing battle. Fortunately, at about that time, the VIRUS-L newsgroup appeared (known as comp.virus on USENET—see Appendix A).13

We list here only a few of the highlights of the virus phenomenon.

Jerusalem Virus attacks

A strain of the Jerusalem (a.k.a. Israeli) virus that triggers each Friday the Thirteenth infected the U.S. Government Printing Office (SEN 15, 2). On Friday the Thirteenth of April, 1990, there were attacks all over China, which was also plagued by attacks on government computers in more than 90 departments in September of 1991 (SEN 15, 2; 15, 3). The World Bank was attacked in that same month (SEN 16, 4).

“Clearing up” the Italian judiciary

The Gp 1 virus was found throughout many Italian judicial computer systems. Its effect was to award the maximum security clearance to uncleared users (SEN 17, 3).

The AIDS Trojan horse

Joseph Popp allegedly mailed more than 20,000 floppy disks from the “PC Cyborg Corp” that supposedly contained information on AIDS, but actually harbored a Trojan horse, which, when executed, demanded £200 sterling to eradicate the pest program (SEN 15, 1). Popp was subsequently extradited from the United States to Britain and accused on 11 charges, including attempted blackmail of medical institutes (SEN 17, 2).

Contaminated products

One of the stranger aspects of viral propagation is that several of the products designed to detect and remove viruses were themselves contaminated, resulting in further contaminations when they were used. Recorded instances included a contaminated antiviral program called flushot, which itself contained a Trojan horse.

In addition, Leading Edge shipped 6000 IBM-compatible personal-computer systems infected with the Michelangelo virus, which wipes the hard disk on the artist’s birthday, March 6. Intel shipped 800 copies of LANSpool 3.01 computer network software, also infected with Michelangelo. DaVinci Systems shipped 900 copies of eMAIL 2.0 demonstration disks that were infected with Michelangelo. Prior to the birthday, Norton released a free utility to detect Michelangelo; unfortunately, it had undocumented “features” that could wipe anything after the first partition. On the birthday, various Michelangelo attacks were reported, in New Zealand, Australia, Japan, China, Poland, Germany, South Africa, and Canada. There were also some instances in the United States, although the enormous media hype that preceded the day may well have raised the levels of awareness sufficiently to minimize further damage.

Novell accidentally shipped a self-masking “stealth” Stoned III virus to as many as 3800 of its customers (SEN 17, 2). The Jerusalem virus noted previously was found in a commercial release of game software (SEN 15, 3). Appendix 9 of Ferbrache [44] lists 17 distinct vendor-software packages that were contaminated in the shrinkwrap; MS-DOS, Macintosh, Atari, and Amiga viruses are included in the list.

Even a bogus scare can cause troubles

The day before a Friday the Thirteenth, a rumor spread widely that a disgruntled employee of Sun Microsystems had planted an attack in the SunOS system, set to go off that Friday. The rumor had almost no substance (there had actually been a clock-related bug fix). However, in the ensuing panic one installation set its clock back so that Friday would appear to be Thursday—and its screenblank program then had to wait 24 hours before it could check for further activity (SEN 13, 3).

5.1.3 Misuse by Authorized Personnel

In various cases, misuse has been perpetrated by a user with some authorized access, possibly exploiting system vulnerabilities to gain additional access as well.

Insider frauds

Computer-based financial frauds committed by insiders are included in Section 5.6. Election frauds are discussed in Section 5.8.

Insider misuse of databases

Examples of insider misuse relating to privacy violations in law-enforcement and government databases are given in Section 6.2. There have also been reports of insiders altering law-enforcement databases to delete references to citations and arrest warrants.

Evidence tampering

The ease with which fingerprint evidence can be faked is demonstrated by a New York State Police scandal in which, over a period of 10 years from 1982 to 1992, all five investigators in the identification unit of Troop C in Sidney, New York, were implicated in repeated cases of evidence tampering, particularly to provide incriminating “evidence” where none existed. They were responsible for investigations in five upstate counties. Various prison sentences were meted out.14

Digital evidence is also subject to tampering, as illustrated by the novel and movie, Rising Sun. Even if great care is exerted to protect the integrity of digitally recorded images, those images can be altered with relative ease by appropriate insiders. The same is true of electronic mail, as noted in Section 5.3. On the other hand, such alterations can be used constructively, as in the editing of facial expressions to produce the proper smiles on the Tyrannosaurus Rex in Jurassic Park and the removal of a page-turner’s hand from a video of a Yo-Yo Ma performance.

5.1.4 Other Cases

An enumeration of computer security problems is never complete. Here are two miscellaneous types.

Theft of computers

Several cases of stolen computers have been noted, including potential misuse of data. In one strange case, a thief purloined the computer of a tax preparer, but later returned the floppy disks (SEN 13, 3).

Inference

In many cases, information that is not directly accessible to users—whether or not they are authorized—can be inferred, by logical inference or by exploitation of covert channels (see Section 3.4). An example is provided by someone taking a test who noticed a reverse symmetry in the emerging answers; by intuiting that the mask used to score the first half of the test would also be used for the second half when the mask was flipped over from top to bottom, the test taker was able to guess correctly the answers to the questions he had not yet answered, and to detect incorrect answers where there was a mismatch in those already answered (SEN 16, 3). (Other examples of inference possibilities are given in Challenges C5.2 and C5.3 at the end of the chapter.)
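
The test taker’s reasoning can be made concrete. If one physical mask scores both halves of the test, flipped top to bottom for the second half, then the key for each question in the second half must mirror the key for the corresponding question in the first half. A sketch of the inference follows (assuming a single-column answer sheet; real mask geometries vary):

    /* key[0..n/2-1] holds the (known) first-half answer key.
       Flipping the mask top to bottom maps question i to question n-1-i,
       so the second-half key can be inferred by mirroring. */
    void infer_second_half(const int key[], int inferred[], int n) {
        for (int i = 0; i < n / 2; i++)
            inferred[(n - 1) - i] = key[i];
    }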

5.1.5 Comments on Intentional Misuse

The increased prevalence of breakins, misuse by insiders, and personal-computer viruses may be considered a symptom of problems to come. Computer systems and networks often do not enforce sufficient security, and, in many cases, they are intrinsically vulnerable. The existing laws are not adequately applicable. (What do “access” and “authorized access” mean?) Ethical teachings are desirable, but may not have much effect on determined misusers, especially in a society in which fraud and creative malpractices are commonplace—and are even considered successes by some people.

Easy access is not a virtue in systems in which critical functionality (for example, life preserving) can be compromised. Security technology has been advancing in recent years. However, attack techniques have escalated accordingly—somewhat similar to strains of bacteria that continually mutate to develop immunity to new antibiotics. Preencryptive dictionary attacks on password files, described in 1979 by Bob Morris and Ken Thompson [96], have been used, for example, in penetrations of U.S. computers from Australia and Europe and within the United States. Other password weaknesses are also being exploited. Electronic capture of vital information is becoming easier.
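
The attack Morris and Thompson described is easy to sketch: because the hashed entries in a password file were traditionally readable, an attacker hashes (“preencrypts”) each word of a dictionary and compares the results against every stored entry, never needing to invert the hash. A minimal sketch using the traditional Unix crypt routine (entry parsing and file handling are omitted; link with -lcrypt on many systems):

    #define _XOPEN_SOURCE
    #include <string.h>
    #include <unistd.h>   /* declares crypt() on traditional Unix systems */

    /* The first two characters of a traditional crypt() hash are the salt. */
    int guess_matches(const char *word, const char *stored_hash) {
        char salt[3] = { stored_hash[0], stored_hash[1], '\0' };
        const char *h = crypt(word, salt);
        return h != NULL && strcmp(h, stored_hash) == 0;
    }

The per-entry salt was introduced precisely to blunt such attacks—the dictionary must be rehashed for each distinct salt rather than once for the whole file—but salting makes dictionary attacks expensive rather than impossible; unreadable (shadowed) password files and better password choices remain necessary.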

The security provided by many computer systems is fundamentally flawed. In some cases, valid security techniques exist but are not being used properly. We need hardware and software systems that can hinder misuse by insiders as well as by penetrators. People are the ultimate problem. Despite improving technology, serious risks remain.

Although cases of security penetrations and misuses by authorized users continue, many such cases remain unreported while further vulnerabilities lurk—waiting to be abused. Some of the deeper implications of the overall security problem remain obscured from general view.

5.2 Security Accidents

In addition to security problems caused by malicious misuse, we also consider accidental security problems, such as the following.

The password file and the message of the day were interchanged

One day in the early 1960s on the MIT Compatible Time-Sharing System (CTSS), the entire unencrypted password file appeared as the message of the day whenever anyone logged in. This mishap was the result of two different people using the same editor in the same directory, one editing the password file and the other editing the message-of-the-day file, with the editor program using fixed names for the temporary files. The original design had made the implicit assumption that the editor would never be used more than once at the same time in the same directory. See [96, 28] and a note from F.J. Corbató in SEN 15, 2, who added this insight: “The tale has at least two morals: First, design bugs are often subtle and occur by evolution with early assumptions being forgotten as new features or uses are added to systems. Second, even system programmers make mistakes so that prudent system management must be based on expecting errors and not on perfection.”
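
The underlying failure mode—two programs accidentally agreeing on the same temporary-file name—is still easy to reproduce, and the modern remedy is to let the system pick a unique name atomically. A sketch of both patterns (the fixed name here is illustrative, not the actual CTSS name):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Unsafe: every instance of the editor running in a directory shares
       this name, so two concurrent edits silently overwrite each other's
       scratch data -- the forgotten CTSS assumption that only one editor
       runs at a time in a directory. */
    FILE *open_scratch_unsafe(void) {
        return fopen("EDTEMP", "w+");
    }

    /* Safer: mkstemp() fills in the XXXXXX with a unique suffix and opens
       the file atomically, so concurrent instances cannot collide. */
    FILE *open_scratch_safe(void) {
        char name[] = "scratch-XXXXXX";
        int fd = mkstemp(name);
        if (fd < 0) return NULL;
        unlink(name);          /* scratch file disappears when closed */
        return fdopen(fd, "w+");
    }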

Discovery software exposure

In the preparations at Rockwell International for the shuttle Discovery’s return on August 22, 1988, the protection provided by IBM’s access-control facility RACF was accidentally disabled, leaving the software capable of being modified by programmers who were not supposed to access the system except in “browse” mode. Changes to the software could have been made either accidentally or intentionally at that time (SEN 13, 3).

Incomplete deletions and other residues

The RISKS archives include quite a few cases of information residues resulting from improper deletion of data. The Air Force sold off more than 1200 surplus digital tapes and almost 2000 analog tapes, many with sensitive data that had not been erased. Five out of the seven installations investigated had inadvertently released tapes with secret data.15 The confidential files of a federal prosecutor in Lexington, Kentucky, remained on disks of a broken computer that were sold off for $45 to a dealer in used government equipment. The data included sealed federal indictments and employee personal data. Because the machine was broken, normal deletion did not work, and the attempted magnetic erasure had evidently failed (SEN 15, 5). A similar case involved FBI data (SEN 16, 3). A British military laptop was stolen, containing “extremely sensitive” information on the Gulf War operations (SEN 16, 2). Because many systems perform deletion by setting a deleted flag or removing a pointer, rather than by actually doing the physical deletion of the data, this problem is much more widespread than is generally realized.
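
At bottom, most systems implement “delete” as unlinking a name; the data blocks persist until they happen to be reused. A minimal sketch of the difference, overwriting the contents before deletion (a single zero pass is shown for illustration; sanitization standards call for more, and remapped sectors or other media quirks may defeat overwriting entirely):

    #include <stdio.h>
    #include <string.h>

    /* remove(path) alone discards the name, not the data. */
    int scrub_then_delete(const char *path) {
        FILE *f = fopen(path, "r+b");
        if (f == NULL) return -1;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);
        char zeros[4096] = {0};
        for (long done = 0; done < size; done += (long)sizeof(zeros)) {
            long n = size - done;
            if (n > (long)sizeof(zeros)) n = (long)sizeof(zeros);
            fwrite(zeros, 1, (size_t)n, f);   /* overwrite old contents */
        }
        fclose(f);
        return remove(path);
    }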

Damning residues

Whether such residues are good or bad may depend on who you are. For example, White House backup computer files remained undeleted after the Irangate saga (SEN 12, 2). A similar case involved Brazilian president Fernando Collor de Mello (SEN 18, 1, 7). The residual presence of Gulf War battle plans provided incriminating evidence in a case involving the theft of $70,000 worth of computers (SEN 17, 4).

After-effects from residues

In a further twist, a New Zealand man bought old disks from Citibank that apparently contained residual details of overseas accounts and money-laundering activities. His death under mysterious circumstances was suspected to be linked to his possession of the damaging data (SEN 18, 1, 13).

5.3 Spoofs and Pranks

This section considers a few mischievous spoofs and pranks.16

5.3.1 Electronic-Mail Spoofs

There have been several now-classical cases of April Fools’ Day electronic mail (E-mail). Of particular note are the ground-breaking 1984 message allegedly from Chernenko and the 1988 message allegedly from Gene Spafford. (Perhaps leap years are special?)

Chernenko

The 1984 Chernenko Spoof announced that the USSR had joined the Internet world, and looked forward to open discussions and peaceful coexistence. (The message is shown in Figure 5.1, with the header information slightly simplified.) The USENET return path for the message ended with moskvax!kremvax!chernenko; the original address information also contained cia and nsa, to add to the humor. Piet Beertema, the creator of the spoof, had rigged the mailer tables so that not only did the outgoing mail have the bogus address, but also responding mail was delivered to him. Perhaps most fascinating was the collection of responses that ensued. Some people took the message as genuine, and were either indignant at its content or joyously supportive. Others recognized the spoof, and typically expressed either delight or hostility (for example, indignation at the rampant misuse of network resources). (See SEN 9, 4, 6-8, for Piet’s description of the event and the responses he received.)

Figure 5.1 Chernenko Spoof


Spafford

The 1988 Spafford Spoof was a serious discussion on how to recognize April Fools’ Day messages—for example, a date of April 1, a time of 00:00:00, subtle “mispellings,” use of nonexistent sites and/or Russian computer names such as kremvax or moscvax (sic), funky message IDs, and names of “well-known” people. There were also warnings of impending risks and a discussion of the dangers of bogus messages. Attentive reading of the message (Figure 5.2) shows that each of the proffered warning signs is present in the message itself, suggesting that the message was a self-referential hoax.17 Although not publicly acknowledged at the time, the perpetrator was Chuq von Rospach.

Figure 5.2 Spafford Spoof


An Aporkryphal Tail?

E-mail tends to spread widely around the Internet. Figure 5.3 illustrates the question of whether a particular piece of E-mail is genuine, and whether its sender is authentic.

Figure 5.3 Pig-farm E-mail (Zeke)


In this example, every litter bit counts. This item appears to be bogus—although nicely hammed up. IPFRC was not a known site address (noted by Michael Wagner), recalling Chernenko@moskvax. Furthermore, reference to “pig” instead of “hog” is highly suspect, according to “Billy Bob” Somsky. Perhaps my SRI colleagues and I can help get to the bottom of this story. As part of a system for detecting and analyzing anomalies, we have developed a Security Officer User Interface, known as SOUI, which might have been of help in bringing in the sow.

Various conclusions are relevant here. E-mail spoofs are still relatively easy to perpetrate in some systems—for example, using trapdoors or overly powerful privileges to insert bogus “From” addresses or to alter existing messages. Authentication of mail can be achieved by the addition of cryptographic integrity seals [66], but these have not yet become widely used. Some spoofs could be dangerous if people were to take them seriously. However, explicitly clueing in the audience tends to spoil the fun on the more obvious and less harmful spoofs.
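
One form such a seal can take is a keyed message-authentication code: sender and recipient share a secret key, the sender appends a MAC computed over the headers and body, and any alteration in transit yields a mismatch. The sketch below uses OpenSSL’s HMAC as one possible primitive (key distribution, which is the hard part, is ignored here; mail seals in practice more often use digital signatures, which need no shared secret):

    #include <string.h>
    #include <openssl/hmac.h>    /* link with -lcrypto */

    /* Seal the claimed headers together with the body, so that neither a
       forged "From" line nor an edited body passes verification. */
    unsigned int seal(const unsigned char *key, int keylen,
                      const char *headers_and_body,
                      unsigned char mac[EVP_MAX_MD_SIZE]) {
        unsigned int maclen = 0;
        HMAC(EVP_sha256(), key, keylen,
             (const unsigned char *)headers_and_body,
             strlen(headers_and_body), mac, &maclen);
        return maclen;
    }

The recipient recomputes the seal over the message as received and compares it with the transmitted value; a message whose purported sender holds no valid key, such as the Chernenko or Spafford spoofs, would fail such a check.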

5.3.2 Pranks with Electronic Displays

We include two cases of tampering with electronically controlled public displays.

A Rose Bowl Weevil

Students at the California Institute of Technology are famous for their high-tech pranks. Here is a relevant example that affected the 1984 Rose Bowl Game.18 The victimized organizing committee might be called the lesser of weevils, that is, bowl weevils.

For a CalTech course named Experimental Projects in Electrical Circuits, Dan Kegel and Ted Williams devised a scheme to take over the Rose Bowl scoreboard for the 1984 Rose Bowl game between UCLA and Illinois. This project was sanctioned in the abstract by their professor, although he apparently did not know until just before the game that the Rose Bowl scoreboard had been selected as the target.

Situated 2 miles away from the stadium, they used a radio link to a microcomputer that piggybacked onto the scoreboard control circuits, enabling them to control the scoreboard. Their initial messages (“GO CIT!” and a graphic of the CalTech beaver) went largely unnoticed, so in the final quarter they changed the team names to CalTech and MIT. Unable to figure out what was happening, officials shut down the scoreboard, with CalTech leading MIT 38 to 9.

Newspapers of March 27, 1984, reported that the two CalTech seniors had pleaded nolo contendere to a misdemeanor trespassing charge and were fined $330 each. Their explanation was broadcast around the Internet, as shown in Figure 5.4. The City Prosecutor claimed that the actual damages to the scoreboard were $4200, but that the city had found a temporary way to fix it.

Figure 5.4 Rose Bowl scoreboard takeover (Kegel and Williams)


Electronic information-board spoofed

The intended message on a display board at Connecticut and L Street NW in Washington, D.C., was superseded by an offensive bogus message, which flashed five times in 25 minutes. No one among the operations staff seemed to know how the bogus display could have been perpetrated. No one took credit for it, and no one was able to identify the perpetrator(s) (SEN 15, 1).

5.3.3 Risky Spoof-Like Scams

Although closely related in their causes, cases with more evil intent and more serious effects have occurred, and are considered in other parts of this book because they cannot be classified as spoofs or pranks. Two serious cases involving impersonations of air-traffic controllers are noted in Section 2.4.2. Several scams are considered in Section 5.9 relating specifically to prison escapes, one of which involves a bogus E-mail message. There have also been a variety of scams in which users were requested by E-mail or other communications to give their passwords or to change their passwords temporarily to a given string, allegedly in the interests of catching a would-be thief or system breaker or detecting a security flaw (SEN 16, 3).

5.3.4 Risks of Minimizing Entropy

A reverse danger exists—namely, that a message dated April First really is genuine, but that it appears to be so bogus or so commonplace that no one believes it. This danger represents a form of wolf-wolf syndrome. The following case might be an example.

In his President’s Letter in the Communications of the ACM, December 1981, Peter Denning debunked the prediction that other intelligent life would be confirmed during 1982 (as claimed by some researchers involved in the search for extraterrestrial intelligence). Denning’s statement is reproduced in Figure 5.5.

Figure 5.5 Intelligent life in space (Peter J. Denning)


In the April 1991 Inside Risks column of the CACM, we noted that we had just learned that researchers at a major NASA center claimed to have confirmed extraterrestrial intelligence. They told us that the key is to couple Denning’s contention (that intelligent transmissions are hidden in pure white noise) with the well-known statistical fact that, whereas some real noise is approximately white, none is purely white. The researchers found a beam of pure white noise coming from the direction of Alpha Centauri. Cryptologists told us they do not hold much hope that they will decipher the signal successfully, citing a theorem that a perfectly unbreakable encoding process produces a signal that appears to be pure white noise.

5.3.5 Defensive Spoofs

Trojan-horse spoofs have also been used for defensive purposes. Here are two examples.

Cable freeloaders caught by free offer

Continental Cablevision of Hartford offered viewers a free T-shirt during the broadcast of the Holyfield-Bowe fight on November 14, 1992. However, the offer and its free telephone number were seen only by those people using illegal decoders; 140 freeloaders called the 800 number within minutes of the ad’s broadcast. Continental sent the callers T-shirts by certified, return-receipt mail, with a follow-up letter reminding them of the federal law (fines up to $10,000) and demanding a $2000 fine.19

Cadsoft detects illegal programs

Cadsoft offered a free demonstration program that stealthily searched the hard disk for the use of illegal copies—which were seriously cutting into their business. Whenever the program found any, it invited the user to print out and return a voucher for a free handbook. About 400 users responded, and received in return a letter from the company’s lawyers, who sought to collect at least 6000 deutschmarks from each of them. Employees of IBM, Philips, and German federal offices were among them, innocent or not.20

5.4 Intentional Denials of Service

Recognizing that intentional and unintentional problems may be closely related, we next consider denials of service that were caused maliciously—in addition to those caused by pest programs noted in Section 5.1. Denials of service that occurred accidentally (that is, nonmaliciously) are discussed in Section 5.5.21

Satellite TV interference and spoofing

A Home Box Office program was preempted in April 1986 by someone identifying himself as “Captain Midnight,” who overwhelmed the programmed movie (The Falcon and the Snowman)22 with his own message protesting HBO’s scrambling. The spoofing lasted for about 5 minutes (SEN 11, 3; SEN 11, 5; Associated Press, April 27, 1986). The intruder was later identified and apprehended.23

Similar interruptions of regular programming have occurred several times since, including a bogus program being inserted into the Playboy Channel (SEN 12, 4; SEN 16, 1), and a pirate broadcast that took over WGN-TV and WTTW in Chicago (SEN 14, 2). Video pirates also disrupted a Los Angeles cable broadcast of the 1989 Super Bowl (SEN 14, 2).

Newspaper advertisement altered

A coffee advertisement in the Italian newspaper La Notte was sabotaged, exploiting the lack of user authentication in the paper’s computer system. The Italian word for coffee was replaced with an Italian four-letter word. The following day, a crossword puzzle and the horoscope were similarly modified (SEN 17, 1).

Sabotage of Encyclopaedia Britannica database

An ex-employee, disgruntled about being fired, changed words in the text of the 1988 edition in preparation—for example, substituting “Allah” for “Jesus Christ”; the alterations were caught before publication (SEN 11, 5).

Analyst locks out others

In a dispute over alleged wrongdoings of his superiors, a Washington, D.C., finance analyst changed the password on the city’s computer and refused to share it. The Deputy Mayor (who was himself under investigation by a federal grand jury at the time) called him “a nerd and an imbecile” (SEN 11, 2).

Insurance computer taken hostage

The former chief financial officer of an insurance company, Golden Eagle Group Ltd, installed a password known only to himself and froze out operations. He demanded a personal computer that he claimed was his, his final paycheck, a letter of reference, and a $100 fee—presumably for revealing the password (SEN 12, 3).

Town’s data deleted

All of the computer-based financial records of Prescott Valley, Arizona, were wiped out, just before the new year’s budget was about to be prepared. Sabotage was strongly suspected (SEN 12, 2).

Company data deleted

A former cost estimator for Southeastern Color Lithographers in Athens, Georgia, was convicted of destroying billing and accounting data on a Xenix system, based on an audit trail linking the delete commands to his terminal—but not necessarily to him! The employer claimed damages of $400,000 in lost business and downtime.24

Bogus cable chips zapped

Denials of service may also be used constructively. American Cablevision of Queens, New York, had its revenge on users of illegally installed chips. It created a signal that zapped just the bogus chips, and then waited to catch 317 customers who called to complain that their screens had gone dark (SEN 16, 3).

Bringing the system down

Each computer system seems to have its own ways of being crashed. In the early days, people sought to find new ways to crash systems—although this effort was certainly antisocial. (The artificial intelligence folks at MIT solved this problem by installing a crash command, which neatly removed the challenge.) The MIT Compatible Time-Sharing System (CTSS) in the 1960s was brought to its knees by a command file (runcom, like a Unix shell script) that recursively invoked itself. Each recursion created a new saved file, and a hitherto undetected bug in the system permitted the saved files to exhaust the disk space. (This incident was reported in SEN 13, 3 by Tom van Vleck—who with Noel Morris diagnosed and fixed the problem.)

5.5 Unintentional Denials of Service

Outages have been caused (for example) by improper system design, programming, and maintenance, and “acts of God”—although almost always with underlying human causes (a theme that is discussed in Section 9.1). Vulnerabilities in many other existing systems could also result in serious loss of system survivability. This section considers various cases of accidental denials of service, along with a few related events.

5.5.1 Hardware, Software, and Communications Problems

We first consider computer and communication systems with survivability requirements—that is, systems that are expected to continue to perform adequately in the face of various kinds of adversity. Fault-tolerant and nonstop (for example, Tandem) systems are designed to survive specific types of hardware malfunctions. Secure systems are intended to withstand certain types of misuse—such as malicious denial-of-service attacks that can impair functional survivability or diminish performance. Survivable systems may need to be both fault tolerant and secure—for example, if the perceived threats include hardware malfunction, malicious misuse, power failures, and electromagnetic (or other) interference. There may be critical hard real-time requirements as well, in which stringent time deadlines must be met.

We summarize several illustrative past problems that suggest the pervasive nature of the survivability problem; there are many diverse causes and potential effects.

The 1980 ARPAnet collapse

The 1980 ARPAnet collapse resulted in a network-wide 4-hour total outage, as discussed in Section 2.1, due to both hardware and software problems.

Various telephone cable cuts

Cable cuts resulted in long-term outages, as noted in Section 2.1, including the 1986 isolation of New England from the rest of the ARPAnet and shutdowns of airports.

The 1990 AT&T slowdown

The 1990 AT&T slowdown resulted in a 9-hour collapse of long-distance services, as discussed in Section 2.1, due primarily to a flawed software implementation of the crash-recovery algorithm.

Phobos

The Soviet Phobos 1 probe was doomed by a faulty software update, which caused a loss of solar orientation, which in turn resulted in irrevocable discharge of the solar batteries. Phobos 2 encountered a similar fate when the automatic antenna reorientation failed, causing a permanent loss of communications (SEN 13, 4; SEN 14, 6).

Automobile-traffic control

Section 2.11.2 notes the case in which a child was killed and another injured when computer-controlled traffic lights failed at 22 school crossings in Colorado Springs, Colorado—because of the failure of radio communications in receiving the time from an atomic clock. Other traffic problems were also reported, for example, one in Orlando, Florida, where cabling was left undone after a system upgrade (SEN 14, 1), a second due to a centralized single-computer outage in Lakewood, Colorado (SEN 15, 2), and a third in Austin, Texas, due to an untested upgrade (SEN 15, 3).

Automated teller machines

There have been numerous reports of automated teller machines (ATMs) accidentally gobbling up perfectly legitimate ATM cards (SEN 9, 2; 10, 2; 10, 3; 10, 5; 12, 2; 12, 4), as well as cases of entire bank ATM operations being shut down by system malfunctions (SEN 14, 1; 14, 2).

Business shutdowns

The archives include reports of numerous business outages, including gas stations and fast-food stores (SEN 11, 5; 12, 1). A newly opened supermarket in Aylesford was closed because a breakdown in its centralized computer prevented the bar-code scanners from working at all 32 checkout stations (SEN 17, 1).

Hi-tech theater controls

In 1985, the computer-controlled stage turntable for the musical Grind ground to a halt 30 minutes into the first act in Baltimore. The star of the show, Ben Vereen, gave an extemporaneous speech on how he felt about “the new mechanized world we live in” (SEN 10, 2).

The rotating stage for the American premiere of Les Miserables failed 30 minutes into the matinee performance on December 28, 1986, at Kennedy Center in Washington, D.C., necessitating $120,000 in ticket refunds (plus parking refunds) for the matinee and the evening performance because of “glitches in the . . . controls.” The turntable could be operated only at full speed, which was much too fast to be safe (SEN 12, 2).

The opening performance of The Who’s Tommy in Boston on December 1, 1993, had to be canceled because the computers controlling the high-tech musical did not function properly after the trip from the previous week’s performances in Louisville.25

The Amsterdam Stopera (a combined town hall and music theater) had installed a modern computer-controlled door system. Unfortunately, the doors did not work properly, and people were locked inside (Piet van Oostrum in SEN 14, 1).

In a somewhat less dramatic case, the Theatre Royal in Newcastle installed a new ticket-reservation system in time for the Royal Shakespeare Company’s visit. The system was down for several days, and no tickets could be sold. Yes, there was no backup (as noted by Robert Stroud in SEN 12, 2).

DBMS failure kills giraffes

The air-cargo database management system at the Amsterdam air cargo terminal failed completely. According to Ken Bosworth of Information Resources Development, it took several days and several dead giraffes before the problem was solved.26

Coke Phone Home

Telephone records of a city building in Fayetteville, North Carolina, supposedly unoccupied on nights and weekends, showed hundreds of telephone calls from two extensions in January 1985. Because all the calls were to the local Coca-Cola Bottling Company, it was clear that the Coke machines were trying to contact their headquarters—behavior that was eventually traced to a program bug (SEN 10, 2). In 1992, Peter Scott reported on someone with the 82nd Airborne assigned to an office at Fort Bragg. When the telephone was plugged in, it began to ring incessantly; the calls were traced to a Coke machine whose default telephone number (perhaps in a different area code) had never been changed (SEN 17, 2).

Library catalogs

The New York Public Library lost the computer-based reference numbers for thousands of books when information was transferred from one computer to another. Apparently, there was no backup (SEN 12, 4). Chuck Weinstock noted an electrical failure that shut down the computer center at Carnegie-Mellon University for a few days; the library was out of business—because their catalogs existed only in on-line form (SEN 12, 2).

The Monitor and the Merry Mac?

This case actually involves IBM PCs, but might be applicable to Macs as well. John B. Nagle observed that it is possible to burn out a monochrome monitor under software control, simply by stopping the horizontal sweep (SEN 13, 3).

5.5.2 Ambient Interference

Interference from electronic or other signals is a nasty problem, because it is difficult to prevent and is difficult to diagnose when it does occur.

Tomahawk

On August 2, 1986, a Tomahawk missile suddenly made a soft landing in the middle of an apparently successful launch. The abort sequence had accidentally been triggered as a result of a mysterious dropped bit. “Cosmic radiation” was one hypothesis. (On December 8, 1985, a Tomahawk cruise missile crashed on launch because its midcourse program had been accidentally erased during loading.) (See SEN 11, 2; 11, 5; 12, 1.)

Black Hawk

In tests, radio waves generated false electronic commands that triggered a complete hydraulic failure of a UH-60 Black Hawk helicopter. Twenty-two people were killed in five Black Hawk crashes before shielding was added to the electronic controls. Subsequently, Black Hawks were not permitted to fly near about 100 transmitters throughout the world (SEN 13, 1; 15, 1).

The Sheffield radar was jammed by its own communications

There were various erroneous reports about the cause of the Sheffield sinking during the Falklands War, which killed 20 crew members on May 4, 1982. Initial reports suggested the French-made Exocet missile fired by an Argentine plane was not on the list of enemy missiles, and therefore was not expected to be used against the British. That explanation was then denied by the British Minister of Defence, Peter Blaker. A later report27 contributed by Martin Minow indicated that the electronic antimissile defenses on the British frigate Sheffield were either jammed or turned off during the attack because of attempted communications between the captain and naval headquarters. The ship’s transmitter was on the same frequency as the homing radar of the Exocet (SEN 11, 3).

Cosmic rays and the Challenger

The communication channel capacity of the space shuttle Challenger was cut in half for about 14 hours on October 8, 1984, due to “a heavy cosmic burst of radiation.” The “cosmic hit” erased part of the memory of the stationary-orbit Tracking and Data Relay Satellite (TDRS). Communications were resumed after TDRS was reprogrammed from the ground. An earlier report on a study by J.F. Ziegler at IBM and W.A. Lanford at Yale University showed that densities in 1980 chip technology were such that about one system error per week could be attributable to cosmic-ray interference at the electron level. As altitude increases, the likelihood increases. As chip density increases, the likelihood increases. (See SEN 10, 1.)

Incidentally, the first TDRS experienced interference from extraneous radio transmissions. This problem was intrinsic to the design. The discovery resulted in cancellation of the March 7, 1985, liftoff of the shuttle Challenger, which was intended to place a second TDRS in orbit.28

Other interference in air and space

Other cases are worth noting, such as the sunspot activity that affected the computers and altered Skylab’s orbit in 1979 (SEN 13, 4), and the Atlas-Centaur whose program was altered by lightning (SEN 12, 3; 15, 1), noted in Section 2.2.2. Osaka International Airport’s radar screens were jammed by electromagnetic interference from a television aerial booster (SEN 12, 3). A Boeing 707 struck by lightning over Elkton, Maryland, in 1963 is noted in Section 2.4. The interference problems during the 1986 raid on Libya are noted in Section 2.3. In a strange case, a cellular telephone in someone’s luggage in the cargo hold of a commercial airliner was reported to have received a telephone call whose emitted radio-frequency signals apparently triggered a fire alarm in the hold (SEN 14, 6, reported on August 15, 1989). The apparent discovery of a pulsar in 1989 was later attributed to interference from a television camera (SEN 16, 3). (Although that case of interference did not result in a denial of service in the usual sense, there was a consequent denial of the pulsar’s existence.) There have been bans on in-flight use of computers with mouse devices (SEN 17, 3), and more recently bans on hand-held cellular telephones, because of perceived interference with navigational systems (RISKS 14, 33, February 18, 1993).29

Lightning wallops rockets

Lightning hit the rocket launch pad at the NASA launch facility at Wallops Island, Virginia, on June 10, 1987, igniting three small rockets and accidentally launching two of them on their preplanned trajectories. Somewhat ironically, the rockets were scheduled to have been launched to study the effects of night-time thunderstorms (SEN 12, 3).

Lightning strikes twice

On July 31, 1988, lightning struck the drawbridge between Vineyard Haven and Oak Bluffs on Martha’s Vineyard, Massachusetts, paralyzing the three-phase controls and then ricocheting into the elevated transformer. As a result, 40 sailboats and tall powerboats were locked into the Lagoon Pond for almost 3 days. (That weekend was the one when the ferry Islander ripped a hole in its belly when it ran aground, and had to be taken out of service for many weeks; 500 cars were backlogged that day alone.) A previous lightning strike, only 3 weeks before, had closed the same drawbridge for 24 hours.30

Mining equipment

A miner was killed when two pieces of equipment were accidentally set to be controlled by the same frequency. The miner was knocked off a ledge by the unintended machine, a scoop tram (SEN 14, 5).

Big Mac attacked

McDonald’s experienced a strange interference problem; electrical appliances (toasters) and timekeeping were affected. As a result of the special timers introduced for McMuffin products, time clocks were gaining 2 to 4 hours each day, inflating employees’ paychecks accordingly. The new toasters’ voltage-control circuits were inducing voltage spikes in the power line, disrupting the clocks—which were driven by the same power circuit. McDonald’s had installed over 5000 toasters before the problem was diagnosed. Bogus food orders were also emerging, such as 11 hamburgers, 11 fries, and 11 Cokes, replicating the items of previous orders. This duplication wreaked havoc with the inventory and financial systems as well. In a separate McProblem, interference from new higher-power local police radios caused McDonald’s cash drawers to open.31

Garage doors for openers

Years ago, signals from Sputnik, the first Soviet orbiter, opened and closed garage doors as it went overhead.

When President Reagan’s airborne command plane (an E-4B, a modified 747) was parked at March Air Force Base, thousands of remote-control garage-door openers in the San Bernardino (California) area failed, apparently due to jamming from the plane’s radio transmissions (Associated Press, April 4, 1986).

In 1987, the U.S. Army installation at Fort Detrick jammed garage-door remote controls near Frederick, Maryland (SEN 13, 1).

In 1989, garage doors in the Mt. Diablo area in California were disabled by temporary transmissions between the Alameda Naval Air Station and a Navy ship (SEN 14, 5). (Thus, each of the three main U.S. military services was a culprit!)

Nuclear-power plants

The Nine Mile Point 2 nuclear reactor in Oswego, New York, was knocked off-line by a transmission from a two-way radio in the control room, because the radio signals were picked up by the turbine-generator monitoring system; this interference triggered an automatic shutdown (SEN 14, 5). A similar event occurred at least twice 4 years later at the Palo nuclear-power plant in Iowa (SEN 18, 1, 12).

Sunspots in Quebec

On March 13, 1989 (the unlucky Friday the Thirteenth fell on a Monday that month, as Pogo might have noted), intense solar activity knocked out a remote substation, resulting in 6 million people in Canada’s Quebec province being without electricity for almost 12 hours (SEN 14, 5).

Interference in hospitals

Edward Ranzenbach reported that, on a visit to a local hospital intensive-care unit, he noticed a respirator repeatedly triggering an alarm and having to be manually reset. He noted that this problem occurred each time there was traffic on his portable radio around 800 megahertz. Such radios are now banned from that hospital (SEN 14, 6).

Risks in pacemakers

In Section 2.9, we note the case of the 62-year-old man being given microwave therapy for arthritis, and dying after microwave interference reset his pacemaker to 214 beats a minute. We also cite the case of a man who died when his pacemaker was reset by a retail-store antitheft device.

Interference in the theater

During a performance of A Chorus Line attended by President Ford, a Secret Service man received a call on his walkie-talkie. His push-to-talk switch wiped out the CMOS memory of the lighting board, plunging the entire theater into darkness (SEN 12, 2).

Sir Andrew Lloyd Webber’s $5-million musical Sunset Boulevard had its London opening delayed 13 days because the elaborate scenery kept shifting unexpectedly on the Adelphi Theater stage. The cause was discovered by the composer himself when he was visiting the theater. “I made a call on my mobile telephone, and the set moved. I made a second call and it moved again.” Hydraulic valves controlling the sets were triggered by the telephone transmissions.32 (This was a real setback!)

Hats off for Johnny Carson!

On August 31, 1989, Johnny Carson did an impression of Abe Lincoln as a budding young stand-up comic, wearing a 2-foot top-hat. The end of the skit involved triggering a radio-controlled device that blew the hat upward off his head. A long delay arose during the taping of the show when the hat kept blowing off Johnny’s head backstage before his entrance. His technicians figured out that radio-frequency interference from nearby kids with remote-control robot cars kept triggering the hat. The obvious solution was to get the kids to stop transmitting during the skit. Honest, Abe had some good lines, but Johnny’s hair-raising gimmick top-hatted them all (SEN 14, 6).

Interference with robots

Section 2.8 notes at least six deaths in Japan attributed to robots affected by stray electromagnetic interference.

Clocks run too fast

The village of Castle Donington, Leicestershire, United Kingdom, installed new street-lighting systems with automatic timers that caused certain types of alarm clocks to race ahead. Mark Huth noted that the older clocks with red (light-emitting-diode) displays count the cycles in the AC power, whereas the newer clocks with green (electroluminescent) displays count the cycles generated by a quartz-crystal oscillator isolated from the power line by a DC power supply.33 The racing clocks were evidently the line-counting type, presumably because the lighting timers injected spurious pulses onto the mains, which the clocks tallied as extra cycles.
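
The arithmetic behind such racing clocks is easy to check with a minimal sketch, purely illustrative, in which the spurious-pulse rate is an assumed figure (the actual interference was never quantified in the report):

```python
# Minimal illustrative sketch: a line-frequency clock advances one tick
# per cycle of the AC mains. Pulses injected onto the line -- here,
# hypothetically, by street-lighting timers -- are indistinguishable
# from genuine cycles, so the clock runs fast.

MAINS_HZ = 50              # U.K. mains frequency
SECONDS_PER_DAY = 86_400

def hours_gained_per_day(spurious_pulses_per_second: float) -> float:
    """Hours gained daily by a clock that counts mains cycles."""
    extra_pulses = spurious_pulses_per_second * SECONDS_PER_DAY
    extra_seconds = extra_pulses / MAINS_HZ   # each pulse counts as 1/50 s
    return extra_seconds / 3600.0

# About 4.2 spurious pulses per second would explain a 2-hour daily gain.
print(hours_gained_per_day(4.17))   # ~2.0
```

A crystal-driven clock, isolated from the mains by its DC supply, counts none of these pulses, consistent with Huth's observation.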

Automobile microprocessor interference

There have been numerous reports of microprocessor-based automobile control systems being sensitive to electromagnetic interference, particularly from nearby citizens-band transmitters. Effects have included the sudden speeding up of cars on automatic cruise control (SEN 11, 1).

Visible and invisible light

In the late 1970s, the BBC was filming a documentary in the U.S. Library of Congress. Howard Berkowitz reported that, occasionally, all of the computer tape drives would stop, rewind, and unload, and that the operating system would crash. The cause was traced to electronic flashes used for still pictures, which triggered the end-of-tape sensors (SEN 12, 4). Dan Klein reported that ultraviolet-erasable programmable read-only memories (EPROMs) are susceptible to photographic flashes (SEN 14, 1).

Power anomalies

The scoreboard clock kept failing during a Seattle-Denver National Basketball Association game. The problem was eventually traced to reporters having hooked up their portable computers to the same circuit that the scoreboard was using (SEN 13, 3). Lindsay Marshall reported that the computer room at the University of Newcastle upon Tyne lost all power when a circuit breaker blew at 13:50 on July 23, 1986, just as the television coverage of the Royal Wedding ended (SEN 11, 5). Other strangely coincidental events, such as plumbing floods during commercial breaks in television spectaculars and the dramatic increase in the birth rate 9 months after the Northeast power blackout, have also been reported.

Other interference cases

There have been various additional reports of strange system behavior in which stray electromagnetic or cosmic radiation was suspected, but was not always confirmed as the cause. Railway switches were allegedly triggered by video-game machines. Train doors opened in transit between stations. In Japan in 1986, 42 people were killed when two supposedly crashproof roller coasters collided, with electromagnetic interference apparently the cause. A bomb dropped by an F-16 on rural west Georgia on May 4, 1988, was thought to have been triggered by stray signals (SEN 14, 5). The Air Force’s PAVE PAWS radar has reportedly triggered airplane ejection seats and fire extinguishers (SEN 15, 1). The B-1B bomber had electronic countermeasures that jammed its own signals (SEN 14, 2). Such episodes are particularly scary when the cause cannot be confirmed unequivocally, because recurrences are then harder to prevent.

5.5.3 Other Environmental Factors

The physical world around us also contributes a variety of other effects.

Barometer up, planes down

Barometric pressure reached 31.85 inches at Northway, Alaska, on February 1, 1989, then the highest reading ever recorded in North America and the third highest in the world. (Temperatures were reported unofficially at −82 degrees Fahrenheit; the official weather bureau thermometer gave up at −80.) Because most aircraft altimeters could not provide accurate readings, the FAA grounded all air traffic that would require instrument approaches.34

Fires and floods

The fire and the ensuing water damage in Hinsdale, Illinois, on May 8, 1988, seriously affected computers and communications throughout the area; 35,000 telephone subscribers were out of operation and a new telephone switch had to be brought in to replace the old one. At least 300 ATMs were inoperative. Car telephones became hot items for conducting commerce (SEN 13, 3).

The Minneapolis Federal Reserve Bank flooded when an air-cooling pipe burst. A serious security vulnerability was opened up during the backup operation (SEN 16, 3).

Other fires have had a variety of devastating effects, including the May 1988 Dreyer’s Ice Cream plant fire in which the entire collection of secret formulae for ice-cream flavors was lost. It came at a terrible time; the flavor team had not yet completed its plan to have computer backups of everything, just in case (SEN 13, 3).

Snow

The snow-laden roof of a computer center collapsed in Clifton, New Jersey, on March 13, 1993. As a result, 5000 ATMs throughout the United States were shut down. The preplanned backup site for that computer center was unable to take over the operation because it was already doing emergency processing for computer systems that had been affected by the World Trade Center bombing the month before (SEN 18, 3, A-4).

Air conditioning

Dave Horsfall reported from Australia that a noisy air-conditioning system was seriously disrupting a council meeting, so the mayor ordered it shut off. Of course, no one remembered to turn it on again; by the next morning, one of the computers had been cooked (SEN 13, 4).

5.5.4 Prevention of Denials of Service

Attempting to attain any level of guaranteed service is a fundamental problem in real-time control systems, communication systems, medical systems, and many other types of applications. In particular, with their a priori emphasis on alternate routing, both the ARPAnet and the AT&T network long represented outstanding examples of highly survivable systems. The fault modes that caused the 1980 and 1990 collapses were nevertheless unanticipated.

The generalized form of nondenial of service—that is, survivability—clearly encompasses many requirements, such as security (prevention of penetration and misuse), reliability, availability, and real-time responsiveness. Assuring it necessitates extreme care in development and operation, even greater than that required for systems that must merely be secure or fault tolerant.

5.6 Financial Fraud by Computer

Pursuing the duality between intentional and unintentional events, we next turn to financial problems. In this section, we consider financial fraud, both successful and thwarted. In Section 5.7, we consider accidental losses.

5.6.1 Perpetrated Fraud

Public reports include only a fraction of the known cases of fraud; private reports suggest that many others have occurred. This section gives a few cases that are acknowledged.

Volkswagen scam

Volkswagen lost almost $260 million as the result of an insider scam that created phony currency-exchange transactions and then covered them with real transactions a few days later, pocketing the float as the exchange rate was changing (SEN 12, 2, 4). This case is an example of a salami attack—albeit with a lot of big slices. Four insiders and one outsider were subsequently convicted, with the maximum jail sentence being 6 years, so their efforts were not entirely successful!

ATM fraud

Losses from ATMs are numerous. The RISKS archives include a $350,000 theft that bypassed both user authentication and withdrawal limits, $140,000 lost over a weekend because of exploitations of a software bug, $86,000 stolen via fabricated cards and espied personal identifying numbers (PINs), $63,900 obtained via the combination of a stolen card and an ATM program error, and various smaller scams.

Bogus ATM captures PINs

Alan Scott Pace, 30, and Gerald Harvey Greenfield, 50, were arrested on charges of credit-card fraud, wire fraud, interstate transportation of stolen property, and conspiracy to commit a felony. Mr. Greenfield was also charged with bank fraud. They had planted a bogus ATM in the Buckland Hills Mall in Manchester, Connecticut, which captured account numbers and PINs from the unsuspecting customers, whose requests were then rejected (although there were reports that the machine might have occasionally given out some money just to appear legitimate). They then fabricated bogus bank cards using the account numbers and PINs that their Trojan-horse ATM had captured. Their arrest on June 29, 1993, was based on routine films of their having used genuine ATMs from which they allegedly withdrew more than $100,000. Also seized were software, three handguns, bank-network stickers, a police scanner, credit cards and passports, and equipment to make the phony bank cards.35

New Hampshire subsequently informed Connecticut that Pace was wanted in New Hampshire for a string of nine jewelry scams in 1987. He had been under indictment in 1989 for running a bogus jewelry store, but never showed up for arraignment.36

Bogus transactions

First National Bank of Chicago had $70 million in bogus transactions transferred out of client accounts. One transaction exceeded permissible limits, but the insiders managed to intercept the telephone request for manual authorization. However, that transaction then overdrew the Merrill-Lynch account, which resulted in the scam being detected. Seven men were indicted, and all the money was recovered (SEN 13, 3, 10).

Two people used direct-access facilities to transfer BFr 245 million from Belgian BNP accounts to their own accounts in other banks, through an apparent breach in the bank’s security. The transactions were detected by a series of audits, and the funds were recovered (SEN 19, 1, 6-7).

A 23-year-old employee of Ceska Sporitelna, the Czech Republic’s biggest savings bank, transferred 35 million crowns ($1.19 million) from various accounts over 8 months. He wrote his own software to do the transfers, but only after warning the bank officials of their weak computer security. The theft was not detected until he withdrew almost one-half of the money; he was arrested as he was stuffing the money into a briefcase (SEN 19, 1, 7).

Diversion of funds

At the University of Texas, an employee used a dean’s password to divert $16,200 over a 1-year period. He awarded fellowship stipends to students who were not eligible. The diversion was detected when a student wrote to the dean of recruiting to thank the dean for his generosity (SEN 17, 3).

Other frauds

Other frauds include a collaborative scam that acquired 50 million frequent-flier miles, an individual effort that gained 1.7 million miles, a collaborative effort involving millions of dollars worth of bogus airline tickets, and a bank computer-system employee who snuck in an order to Brinks to deliver 44 kilograms of gold to a remote site, collected the gold, and then disappeared.

Detective detected

A Pinkerton’s detective-agency employee embezzled $1 million by transferring the funds to her own account and to accounts of two fictitious companies. This and another scam applying for refunds from false tax returns were detected after the employee left Pinkerton’s. At the time that this case was reported, she was facing up to 30 years imprisonment and huge fines (SEN 16, 4). The RISKS archives do not include the final disposition of the case.

5.6.2 Thwarted Attempts

Attempts that were not successful also frequently remain unreported. Here are a few cases that managed to surface.

Bogus transfer requests

The First Interstate Bank of California came within a whisker of losing $70 million as the result of a bogus request to transfer funds over the automated clearinghouse network. The request came via computer tape, accompanied by phony authorization forms. It was detected and canceled only because it overdrew the debited account (SEN 17, 3).

The Union Bank of Switzerland received a seemingly legitimate request to transfer $54.1 million (82 million Swiss francs). The automatic processing was serendipitously disrupted by a computer system failure, requiring a manual check—which uncovered the attempted fraud. Three men were arrested (SEN 13, 3, 10).

Bogus lottery ticket

The Pennsylvania state lottery was presented with a winning lottery ticket worth $15.2 million that had been printed after the drawing by someone who had browsed through the on-line file of still-valid unclaimed winning combinations. The scam was detected because the ticket had been printed on card stock that differed from that of the legitimate ticket (SEN 13, 3, 11).

Gambling fraud

In 1983, a multiple slot-machine progressive payoff had reached $1.7 million at Harrah’s Tahoe. The details remain sketchy, but it appears that a group of insiders figured out how to rig the microchip to trigger the payoff. The group tried to get a collaborator to collect, but he panicked when confronted with photographers, and the scam was exposed (SEN 8, 5).

Two-person authorization of bogus transfers

On Christmas Eve 1987, a Dutch bank employee made two bogus computer-based transfers to a Swiss account, for $8.4 million and $6.7 million. Each required two-person authorization, which was no obstacle because the employee knew someone else’s password. The first transaction was successful. The second one failed accidentally (due to a “technical malfunction”), and was noted the following working day. Suspicions led to the arrest of the employee (SEN 13, 2, 5).

Counterfeit ATM cards

An ATM-card counterfeiting scam planned to make bogus cards with a stolen card encoder, having obtained over 7700 names and PINs from a bank database. An informant tipped off the Secret Service before the planned mass cash-in, which could have netted millions of dollars (SEN 14, 2, 16).

Disappearing checks

An innovation in check fraud that is only marginally technology related involves the use of a chemical that causes bogus checks to disintegrate shortly after being deposited (SEN 14, 1, 16).

In general, computer misuse is getting more sophisticated, keeping pace with improvements in computer security. Nontrivial authentication (for example, more sophisticated than fixed passwords) can hinder outsiders, although systems with open networking and dialup lines are potentially more vulnerable to penetrations than are systems with no possible outside access. Authentication within a system or network among supposedly trusted components is vital, but often lacking. Fraud by insiders remains a problem in many commercial environments (often not even requiring technology, as in the U.S. savings and loan fiasco, now estimated to exceed $1.5 trillion in apparent losses). High-tech insider fraud can be difficult to prevent if it blends in with legitimate transactions.

Most of the thwarted attempts noted here were foiled only by chance, a situation that is not reassuring—particularly because more cautious perpetrators might have been successful. We do not know the extent of successful frauds. Financial institutions tend not to report them, fearing losses in customer confidence and escalations in insurance premiums. This lack of reporting leaves us wondering how many successful cases have not been detected, or have been detected but not reported. More comprehensive system security, authentication (of users and systems), accountability, auditing, and real-time detectability would help somewhat. More honest reporting by corporations and governmental bodies would help to reveal the true extent of the problems, and would be beneficial to all of us in the long term. In any event, computer-aided fraud will continue. The higher the stakes in terms of funds available for scamming or skimming, the greater the incentives for committing computer-aided financial crimes.

5.7 Accidental Financial Losses

In the previous section, we considered intentional computer-aided financial fraud. In this section, we consider accidental financial mishaps; some of these were subsequently reversible, whereas others entailed irrevocable losses.

5.7.1 Cases of Accidental Financial Loss

The following cases exhibit amazing diversity; they are listed roughly in order of decreasing loss.

My BoNY loss, she spilleth!37

One of the most dramatic examples was the $32 billion overdraft experienced by the Bank of New York (BoNY) as the result of the overflow of a 16-bit counter that went unchecked. (Most of the other counters were 32 bits wide.) BoNY was unable to process the incoming credits from security transfers, while the New York Federal Reserve automatically debited BoNY’s cash account. BoNY had to borrow $24 billion to cover itself for 1 day (until the software was fixed), the interest on which was about $5 million. Many customers were also affected by the delayed transaction completions (SEN 11, 1, 3–7).
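
The failure mode itself is mundane. A minimal sketch of the arithmetic follows; the field widths match the published account, and everything else is illustrative:

```python
# Minimal sketch: a 16-bit transaction counter wraps silently at 65,536,
# whereas a 32-bit counter has ample headroom. The field widths match
# the published accounts; everything else here is illustrative.

MASK_16 = 0xFFFF        # 16-bit field: values 0..65,535
MASK_32 = 0xFFFFFFFF    # 32-bit field: values 0..4,294,967,295

count16 = 65_535
count32 = 65_535

count16 = (count16 + 1) & MASK_16   # wraps to 0 -- the unchecked overflow
count32 = (count32 + 1) & MASK_32   # 65,536 -- still well within range

print(count16, count32)             # 0 65536
```

An explicit range check before the increment, or simply the 32-bit width used elsewhere, might have turned the $5 million interest bill into an error message.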

Making Rupee!

Due to a bank error in the currency-exchange rate, an Australian man was able to purchase Sri Lankan rupees for (Australian) $104,500, and then to sell them to another bank the next day for $440,258. (The first bank’s computer had displayed the Central Pacific franc rate in the rupee position.) Because of the circumstances surrounding the bank’s error, a judge ruled that the man had acted without intended fraud, and could keep his windfall of $335,758 (SEN 12, 4, 10).

Racetrack losses

“A series of computer malfunctions” was blamed for the Los Alamitos racetrack losing $26,000 in excess payouts when the final results were posted incorrectly (SEN 16, 2, 5).

Bank generosity

A Norwegian bank cashpoint computer system (ATM) consistently dispensed 10 times the amount requested. Many people joyously joined the queues as the word spread (SEN 15, 3, 7).

Replicated payments

A software flaw caused a bank in the United Kingdom to duplicate every transfer-payment request for a period of half an hour, totaling over £2000 million (£2 billion, as billions are counted in the United States). Even though the bank claimed to have recovered all of the funds, Brian Randell speculated on possible lost interest—with a potential maximum around £0.5 million a day (SEN 15, 1, 5-6).
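
A standard defense against such replication is to make transfer processing idempotent, so that replaying a request has no further effect. The following sketch is hypothetical; its interfaces are invented for illustration, and it is certainly not the affected bank's actual design:

```python
# Hypothetical sketch of idempotent transfer processing: each request
# carries a unique identifier, and a request already seen is ignored.
# In practice the identifier set would live in durable storage.

processed_ids: set[str] = set()

def process_transfer(request_id: str, amount_pence: int) -> bool:
    """Execute a transfer at most once; return False for a duplicate."""
    if request_id in processed_ids:
        return False                   # replay -- suppress and audit
    processed_ids.add(request_id)
    # ... perform the actual funds movement here ...
    return True

assert process_transfer("xfer-0001", 10_000) is True
assert process_transfer("xfer-0001", 10_000) is False   # duplicate caught
```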

Replicated transactions

During a test of its computers, the Federal Reserve Bank accidentally reran 1000 transactions from the previous working day, transferring $2 billion to 19 different financial institutions. However, the error was caught quickly and no losses were incurred (SEN 11, 2, 9).

Replicated transfers

Udo Voges reported on a bank in Munich that accidentally mounted the wrong tape, redoing all the monthly transfers that had already been processed at the end of the month. The repeated transfers were later reversed (SEN 14, 2, 9).

Hot wire transfer

A high-flying wire-transfer organization had one group that dealt in multiples of one thousand, while another group dealt in actual amounts. The inevitable result was that a $500 thousand Federal Reserve transaction was converted into $500 million—although the unusually large transaction was questioned manually and then reversed (anonymously contributed to SEN 10, 3, 9-10 by someone in that organization).
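
Mismatches between an amounts-in-thousands convention and an actual-amounts convention are a classic units failure. A minimal sketch (with hypothetical function names) of the standard remedy, converting every entry to one canonical unit at the point of capture:

```python
# Minimal sketch: convert each department's entries to a single
# canonical unit (cents) immediately, so the two conventions can
# never be confused downstream. Names are hypothetical.

def cents_from_thousands(amount_in_thousands: float) -> int:
    """Entry convention: multiples of one thousand dollars."""
    return round(amount_in_thousands * 1_000 * 100)

def cents_from_dollars(amount_in_dollars: float) -> int:
    """Entry convention: actual dollar amounts."""
    return round(amount_in_dollars * 100)

# Entered correctly, both conventions agree on $500 thousand:
assert cents_from_thousands(500) == cents_from_dollars(500_000)

# The reported failure: a figure routed through the wrong convention
# yields a result off by exactly a factor of 1000.
assert cents_from_thousands(500_000) == 1_000 * cents_from_dollars(500_000)
```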

Winning lottery tickets bought after closing

Software was blamed for allowing six winning tickets to be purchased after the New England Tri-State lottery drawing was announced. The flaw was caught before payouts were made (SEN 16, 1, 19).

5.7.2 Prevention of Financial Losses

Preventing financial losses involves defending against intentional and accidental events. Some systems do well against one mode, but not against the other. The cases cited here and in the previous section suggest that there are risks in both modes. Although different techniques may be involved, the manifestations of these two modes of loss have much in common. Generalizing the system design and operational practice somewhat can permit coordinated detection and prevention of both cases. Indeed, there are losses that occurred accidentally that could have been caused with malicious intent, and a few others that were intentionally caused that could have occurred accidentally, as suggested in Chapter 4. The distinction between the two modes of loss is in general blurred, and it is wisest to anticipate both modes.

Controls for authentication (of users and systems), system integrity (for example, prevention of accidental or malicious system alterations), and database integrity (for example, external consistency with the real world, as well as internal consistency within the computer systems) are particularly important. The Clark-Wilson model [25] for application integrity is appropriate, embodying both good accounting practice and good software-engineering practice. It can be used profitably in conjunction with well-conceived approaches for increased system security, to help prevent both intentional and accidental losses. Many of the problems cited in this section and the previous section could have been prevented, or detected, or at least minimized, with a combination of good system design, enforcement of authentication and finer-grain access controls, and sensible administrative controls. However, some systems will always be susceptible to insider fraud, and systems with intentionally wide-open connectivity and remote users will always have some susceptibility to outsider fraud.
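
To make the Clark-Wilson flavor concrete: constrained data items may be changed only by certified transformation procedures, each bracketed by an integrity check and logged for accountability. A minimal sketch of one such well-formed transaction follows; it illustrates the idea only, and is not an implementation of the full model:

```python
# Minimal sketch of a Clark-Wilson-style well-formed transaction: the
# ledger may be modified only through a procedure that preserves a
# stated invariant (debits and credits sum to zero, as in double-entry
# bookkeeping) and that appends to an audit log.

audit_log: list[str] = []

def balanced(ledger: dict[str, int]) -> bool:
    """Integrity-verification procedure: entries must sum to zero."""
    return sum(ledger.values()) == 0

def transfer(ledger: dict[str, int], src: str, dst: str, cents: int) -> None:
    """Transformation procedure: move funds, preserving the invariant."""
    assert balanced(ledger), "integrity violated before transaction"
    ledger[src] -= cents
    ledger[dst] += cents
    audit_log.append(f"transfer {cents} cents {src} -> {dst}")
    assert balanced(ledger), "integrity violated after transaction"

accounts = {"operating": 0, "payroll": 0}
transfer(accounts, "operating", "payroll", 2_500)
print(accounts, audit_log)
```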

5.8 Risks in Computer-Based Elections

Errors and alleged fraud in computer-based elections have been recurring Risks Forum themes. The state of the computing art continues to be primitive. Punch-card systems are seriously flawed and easily tampered with, but are still in widespread use. Direct recording equipment is also suspect, with no ballots, no guaranteed audit trails, and no real assurances that votes cast are properly recorded and processed. Computer-based elections are being run or considered in many countries, including some countries notorious for past riggings. The risks discussed here exist worldwide.

5.8.1 Erroneous Election Results, Presumably Accidental

Computer-related errors occur with alarming frequency in elections. In 1989, there were reports of uncounted votes in Toronto and doubly counted votes in Virginia and in Durham, North Carolina. Even the U.S. Congress had difficulties when 435 Representatives tallied 595 votes on a Strategic Defense Initiative measure. An election in Yonkers, New York, was reversed because of the presence of leftover test data that accumulated into the totals. Alabama and Georgia also reported irregularities. After a series of mishaps, Toronto has abandoned computer-based elections altogether. Most of these cases were attributed to “human error” rather than to “computer error,” and were presumably due to operators and not to programmers; however, in the absence of dependable accountability, who can tell?

In 1992, there were numerous further goofs. In Yamhill County, Oregon, votes for the District Attorney candidates were reversed: the computer program assumed that the candidates were listed in alphabetical order (apparently they were not), and a two-to-one landslide went to the wrong candidate (at least until the error was caught).
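
The Yamhill County failure mode is easy to recreate in miniature. A hypothetical sketch (invented names and totals) of totals indexed by ballot position being paired with candidates under a false ordering assumption:

```python
# Hypothetical sketch: vote totals arrive indexed by ballot position,
# but the reporting step pairs them with an alphabetically sorted
# candidate list. When ballot order is not alphabetical, the totals
# are attached to the wrong names.

ballot_order = ["Smith", "Jones"]      # order as printed on the ballot
totals_by_position = [12_000, 6_000]   # position 1: 12,000; position 2: 6,000

# Buggy pairing: assumes alphabetical order matches ballot order.
buggy = dict(zip(sorted(ballot_order), totals_by_position))
print(buggy)     # {'Jones': 12000, 'Smith': 6000} -- reversed!

# Correct pairing: keep each count bound to its candidate's name.
correct = dict(zip(ballot_order, totals_by_position))
print(correct)   # {'Smith': 12000, 'Jones': 6000}
```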

In Ventura County, California, yes and no votes were reversed on all the state propositions, although the resulting errors did not alter the statewide outcomes.

Ray Todd Stevens reported on voting machines that were misaligned so that it was possible to vote for Bush but not for Clinton. (A voting official said that “they only deal with the totals and it would all average out.”)

5.8.2 Election Fraud

If wrong results can occur accidentally, they can also happen intentionally. Rigging has been suspected in various elections. In other cases, fraud might easily have taken place. Numerous experts have attested to the ease with which close elections could be rigged. However, lawsuits have been unsuccessful, particularly given the absence of trustworthy audit trails.

The opportunities for rigging elections are manifold, including the installation of trapdoors and Trojan horses—child’s play for vendors and knowledgeable election officials. Checks and balances are mostly placebos, and are easily subverted. Incidentally, Ken Thompson’s oft-cited Turing address [167] noted in Section 3.3.2 reminds us that tampering can occur even without any source-code changes; thus, code examination is not sufficient to guarantee the absence of Trojan horses.

For many years in Michigan, manual system overrides were necessary to complete the processing of noncomputer-based precincts, according to Lawrence Kestenbaum.

Doug Hardie cited a personal experience in observing a flagrant case of manual intervention with a punched-card voting system at San Jose State University—albeit in only a small election. (He also had a cute comment on the linguistic distinction between the election machine being fixed and its being repaired.)

In the St. Petersburg, Florida, mayoral election on March 23, 1993, some computer printouts “released inadvertently by election officials showed 1429 votes had been cast in Precinct 194 [which has zero registered voters], raising questions of vote tampering.” This anomaly was explained as a compilation of “summary votes” that were legitimate but that had been erroneously allocated in copying the results from one computer system to another. Coincidentally (?), the margin of victory was 1425 votes. A subsequent recount seemed to agree with the original totals.38 On the other hand, controversy continued for months afterward. By the way, a consistent Trojan horse somewhere in the process would of course give reproducible results; the concurrence of a recount is by itself not enough.
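
The point about recounts deserves emphasis. A deterministic tampering routine is a pure function of its input: run it twice on the same ballots and it produces the same wrong totals twice. A deliberately simplified, hypothetical illustration, for analysis only:

```python
# Deliberately simplified, hypothetical illustration: a deterministic
# "Trojan" tally shifts a fixed fraction of one candidate's votes.
# Because the function is deterministic, a recount performed with the
# same software reproduces the identical wrong totals -- so agreement
# between count and recount demonstrates nothing about correctness.

def honest_tally(ballots: list[str]) -> dict[str, int]:
    totals: dict[str, int] = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

def trojan_tally(ballots: list[str]) -> dict[str, int]:
    totals = honest_tally(ballots)
    shift = totals.get("A", 0) // 10     # quietly move 10% of A's votes
    totals["A"] -= shift
    totals["B"] = totals.get("B", 0) + shift
    return totals

ballots = ["A"] * 1_000 + ["B"] * 950
assert trojan_tally(ballots) == trojan_tally(ballots)   # "recount" agrees
print(honest_tally(ballots))   # {'A': 1000, 'B': 950}
print(trojan_tally(ballots))   # {'A': 900, 'B': 1050} -- outcome flipped
```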

In addition to various cases of fraud having been alleged in the use of electronic voting systems (see, for example, Ronnie Dugger’s New Yorker article [39]), paper-ballot systems are of course vulnerable as well. A recent example involves Costilla County in Colorado, where absentee ballots had been cast in the names of ineligible voters (friends, relatives, and dead persons) as far back as 1984, and had been decisive. However, that hardly seems like news here. The conclusion is that, in principle, all voting systems are easy to rig—given appropriate access to whatever it takes (computer systems, bogus paper ballots, and so on). Trojan horses can in essence be arbitrarily subtle.

John Board at Duke University expressed surprise that it took more than 1 day for the doubling of votes to be detected in eight Durham precincts. Lorenzo Strigini reported in November 1989 on a read-ahead synchronization glitch and an operator pushing for speedier results, which together caused the computer program to declare the wrong winner in a city election in Rome, Italy (SEN 15, 1). Many of us have wondered how often errors or frauds have remained undetected.

5.8.3 Reduction of Risks in Elections

The U.S. Congress has the constitutional power to set mandatory standards for federal elections, but has not yet acted. Existing standards for designing, testing, certifying, and operating computer-based vote-counting systems are inadequate and voluntary, and provide few hard constraints, almost no accountability, and no independent expert evaluations. Vendors can hide behind a mask of secrecy with regard to their proprietary programs and practice, especially in the absence of controls. Poor software engineering is thus easy to hide. Local election officials are typically not sufficiently computer-literate to understand the risks. In many cases, the vendors run the elections. (See [144].)

Providing sufficient assurances for computer-based election integrity is an extremely difficult problem. Serious risks will always remain, and some elections will be compromised. The alternative of counting paper ballots by hand is not promising, and is not likely to be any less subvertible. But we must question more forcefully whether computer-based elections are really worth the risks. If they are, we must determine how to impose more meaningful constraints. Section 7.9 considers what might be done to avoid some of the problems discussed here.39

5.9 Jail Security

Iron bars and silicon chips do not a prison make.

This section summarizes a collection of technology-based problems related to prison and jail security. Several of these cases involved scams that permitted inmates to escape. Section 6.5 notes a different type of law-enforcement problem relating to false imprisonments that resulted from naming problems and mistaken identities.

Jailed drug kingpin freed by bogus E-mail

An alleged cocaine dealer, William Londono, was released from Los Angeles County Jail on August 25, 1987, on the basis of an E-mail message that had apparently been fabricated by someone with access to the jail’s computer system. About 70 people had legitimate access to the system, but it was not known whether the perpetrator was an insider or an outside masquerader. However, Londono’s release appeared to have involved insider accomplices; for example, he and his street clothes were missing, and his departure was not discovered for 6 days! This case followed two previous escapes from the same jail involving inmates switching identification wristbands.40

Jailbreak by fax

Jean Paul Barrett, a convicted forger serving a 33-year term, was released from a Tucson, Arizona, jail on December 13, 1991, after a forged fax had been received ordering his release. A legitimate fax had been altered to bear his name.41

Seven inmates escaped in Santa Fe

On July 4, 1987, in Santa Fe, New Mexico, a prisoner kidnapped a guard, shot another, commandeered the control center, and released six other prisoners. All seven went through an emergency door on the roof, pole-vaulted over a barbed-wire prison fence, and disappeared. The guard tower was being staffed only during daylight hours “because of financial restrictions.”42 (It was subsequently staffed 24 hours a day.) The prison computer-control system was down at the time; had it been running, it would have monitored the motion detectors and might thereby have prevented the escape. Apparently, the monitoring software had been disabled because of too many false alarms. (See a report by James Lujan in SEN 12, 4, which noted that only one of the seven escapees had been apprehended.)

Woman escapes from Oregon jail

Diane Downs, a convicted murderer of some notoriety (she shot her three children, killing one, allegedly to rid herself of the responsibility for them), escaped from the medium-security Oregon women’s prison on July 11, 1987. While in the recreation yard, she scaled two fences and walked away. Budget money was tight, so there was no guard assigned to watch inmates in the yard; instead, the jail depended on an alarm system in the outer fence. The alarm did go off, but no one paid attention to it because it had been going off every day, usually because of strong winds or birds. (See SEN 12, 4, reported by Andrew Klossner.) Both this and the previous case remind us of the boy who cried “wolf!” When a system gives so many false alerts, it is time to do something to improve the false-positive discrimination.

Dutch computer system sets criminals free, arrests innocent people

One day after a new Dutch computer system was installed at the Central Investigation Information Department (CRI) in The Hague in the Netherlands, it began to behave strangely. Criminals were set free, and innocent people were arrested. The computer system was taken out of service. Unfortunately, the backup system had been decommissioned. The vendor blamed the police for using the system incorrectly.43

Computer breakin for jail breakout

A computer breakin was used in an attempted jail breakout in the Santa Clara County jail in California. A prison inmate gained access to the on-line prison information system and managed to alter the data for his release date, from December 31 to December 5. (He wanted to be home for Christmas.) However, the alteration was detected by a suspicious deputy comparing the on-line entry with manual records, after the inmate had bragged about how he was going to get out early.44

Faulty locks delay prison opening

A new El Dorado County jail could not be opened for some weeks because the computer-controlled cell doors would not lock. The vendor was charged a daily penalty of $1250.45

San Joaquin County jail doors unlocked

On the evening of December 27, 1992, the new San Joaquin (California) County jail computer system automagically unlocked all of the cell doors in a high-risk area, with a highly audible series of loud clicks, releasing about 120 potentially dangerous inmates who were being held in an “administrative segregation pod.” Fortunately, the pod was itself isolated by other doors that remained locked. The glitch was attributed to a spurious signal from the encoder card, whose responsibilities include opening those doors in emergencies.46

Pickable high-tech locks

Less than 1 year after the opening of the supposedly escape-proof Pelican Bay State Prison near Crescent City, California, inmates learned how to pop open the pneumatic cell doors at will. A similar system in the Santa Rita Jail in Alameda County was also found to be pickable.47

Oklahoma power outage freezes jail doors

Oklahoma County opened a new jail in November 1991, with a comprehensive new computer system developed in Colorado. Toward the end of February 1993, the software failed, leaving each of the doors in a state that could not be changed manually. Some prisoners remained locked in their cells, some doors remained wide open. Twenty-two jailers were trapped in a control room for an entire shift when the computer system shut down due to a 5-minute power outage. An attempted fix 4 days later failed.48

5.10 Summary of the Chapter

This chapter considers several of the more illustrative security problems, the intent being to help you gain a greater appreciation for the range of vulnerabilities presented in Chapter 3 and the ways in which those vulnerabilities have been and can be exploited. Such an appreciation is essential to preventing the occurrence of similar problems in the future, in existing systems as well as in new systems.

Table 5.1 provides a brief summary of the causes of the problems cited in this chapter. The abbreviations and symbols are as in Table 1.1, with one addition: a column headed Misuse indicates explicit, willful system misuse, as opposed to other problems arising in the course of normal system use.

Table 5.1 Summary of security problems and their causes

[Table 5.1 is not reproduced here.]

Each of the first five cases in the table indirectly reflects pervasive inadequacies throughout system development and operation, although the direct cause is external system misuse. For example, in the case of personal-computer viruses, the unwitting victim is the importer of the virus (typically, via floppy disk) and sometimes is its trigger; the original source of the virus may be long ago and far away. However, the existence of the virus on a given system relies on the inability of the personal-computer operating system to protect itself against itself; that flaw may alternatively be attributed to unspecified requirements, poor system design, weak software implementation, and, in some cases, inadequate hardware.

The situation has improved only somewhat since the Internet Worm, which demonstrated how vulnerable a certain class of systems (Berkeley Unix) was at that time. Today’s computer systems and networks are still generally vulnerable to attack, externally by penetrators and internally by supposedly authorized users. Many of the classical vulnerabilities of Chapter 3 are still present, in one guise or another.

Designing systems and networks that are secure and reliable is a difficult task, and requires an overall system perspective, as suggested in Chapter 4. Many techniques that could be used are known in the research and prototype development communities, but those techniques have been relatively slow to reach the commercial marketplace and thence common use. Some of these techniques are considered further in Chapter 7, along with criteria and other desiderata for secure and reliable systems. The criteria outlined in Section 5.8 give a cursory indication of the breadth of scope required for one particular application area, but those criteria are, in most cases, generally applicable to many other types of applications. Considerable experience is also required to operate and administer those systems and networks.

In certain applications, it is possible to seal off a system from outside users; however, the circumstances in which that is practical are generally co-opted by pressing needs for increased interconnectivity and interoperability. For example, the need for remote maintenance of telephone switching systems requires that telephone switch controllers be available remotely. For many years, it was relatively easy for outsiders to penetrate the telephone system via the maintenance interfaces. The recent incorporation of user authentication that does not rely on fixed passwords has reduced the frequency of such attacks.
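
The flavor of such non-fixed-password authentication can be conveyed by a simple challenge-response exchange, in which no reusable secret ever crosses the line. The sketch below is schematic, with modern library primitives standing in for whatever the switch vendors actually deployed:

```python
# Schematic sketch of challenge-response authentication: the system
# issues a fresh random challenge, and the user's token returns a keyed
# hash of it. A wiretapper who records one exchange cannot replay it,
# because the next challenge will differ. This is illustrative only,
# not any particular vendor's maintenance-interface protocol.

import hashlib
import hmac
import secrets

SHARED_KEY = b"provisioned-out-of-band"   # hypothetical pre-shared secret

def make_challenge() -> bytes:
    return secrets.token_bytes(16)        # fresh nonce per login attempt

def respond(challenge: bytes, key: bytes) -> str:
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, key: bytes) -> bool:
    return hmac.compare_digest(response, respond(challenge, key))

challenge = make_challenge()
response = respond(challenge, SHARED_KEY)   # computed by the user's token
assert verify(challenge, response, SHARED_KEY)
```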

Even if security were to be considerably improved to combat intentional misuse, Sections 5.2, 5.5, and 5.7 suggest that accidental misuse by authorized users and unexpected events related to modes of system unreliability would continue to undermine security.49

Challenges

Warning: As noted at the end of Chapter 3, carrying out experimental attacks on computer and communication systems is not recommended; it could result in felony or misdemeanor charges, the resultant need to pay serious lawyers’ fees, and a first-hand opportunity to study the use of computer systems in jails. Nevertheless, anyone involved in designing and implementing a system, or merely in using a system, should expand his or her horizons by trying to anticipate the types of problems described in this chapter. It may surprise you to discover how many such attacks could be perpetrated on systems that you believe are sound.

C5.1 Do you believe the statement at the beginning of Section 5.1, that every system can be compromised? Defend your conclusion.

C5.2 What inferences can you draw from the following information? Salaries associated with individuals are stored in a database, but are not accessible to unprivileged users. However, a query exists that provides the average salary of an arbitrary collection of named individuals, as long as more than 10 people are included. Can you, as an unprivileged user, derive the salaries of every individual? If you can, describe how. If you cannot, what would you require to do so?

C5.3 A particular directory is protected so that wildcarding is not permitted; that is, ls * or its equivalent does not provide a listing of the directory entries. However, escape completion is permitted. That is, if an initial character string of an entry name is long enough to determine the following characters uniquely, then, if that string is typed followed by an escape character, the system will provide the rest of the characters in the entry name; for example, typing ls ris followed by an escape character would result in the system completing the command line as, say, ls risks, if no other entry name were to begin with ris. Can you obtain a list of the entire set of directory entry names—that is, the equivalent of ls *? If you can, describe how. If you cannot, explain why not.

C5.4 The telephone book for Agency X is classified; in particular, the list of employees’ names is classified, as is the association of names with telephone numbers. The main switchboard number is publicly known. In some cases, a switchboard operator will give you the number and connect you, if you know the name of an individual. In other cases, a computer-automated touchtone service may field your call. In either case, how might you obtain a list of names and numbers?

C5.5 In Challenge C5.4, suppose that the operator will not give you the telephone number for a named individual, but will connect you. Suppose that direct in-dialing is also possible, bypassing the switchboard. Can you obtain a list of names? Can you obtain the associated numbers? Explain your answers.

C5.6 Analyze two arguments for and two against the reporting of cases of system misuse that have violated security. Are these arguments the same as those for and against the reporting of known or newly detected system vulnerabilities? Why? Is the discussion of types of vulnerabilities in Chapter 3 consistent with your analysis? Explain your answer.

C5.7 Reconsider your design of the simple alarm system in C2.4, knowing what you now know about security and the vulnerabilities detected in C4.3. Add security features to your design to make it resistant to malicious attacks, such as attempts to cut the primary power or the backup-battery wires. Identify circumstances under which your design might fail to operate properly, including both single-point and multiple-source failure modes.
