CHAPTER 17

MOBILE CODE

Robert Gezelter

17.1 INTRODUCTION

17.1.1 Mobile Code from the World Wide Web

17.1.2 Motivations and Goals

17.1.3 Design and Implementation Errors

17.2 SIGNED CODE

17.2.1 Authenticode

17.2.2 Fundamental Limitations of Signed Code

17.2.3 Specific Problems with the ActiveX Security Model

17.2.4 Case Studies

17.3 RESTRICTED OPERATING ENVIRONMENTS

17.3.1 Java

17.4 DISCUSSION

17.4.1 Asymmetric, and Transitive or Derivative, Trust

17.4.2 Misappropriation and Subversion

17.4.3 Multidimensional Threat

17.4.4 Client Responsibilities

17.4.5 Server Responsibilities

17.5 SUMMARY

17.6 FURTHER READING

17.7 NOTES

17.1 INTRODUCTION.

At its most basic, mobile code is a set of instructions that are delivered to a remote computer for dynamic execution. The problems with mobile code stem from its ability to do more than just display characters on the remote display.

It is this dynamic nature of mobile code that causes policy and implementation difficulties. A blanket prohibition on mobile code is secure, but that prohibition would prevent users of the dynamic Web from performing their tasks. It is this tension between integrity and dynamism that is at the heart of the issue.

The ongoing development of computer-based devices, particularly personal digital assistants (PDAs) and mobile phones, has broadened the spectrum of devices that use mobile code, and therefore are vulnerable to related exploits. The advent of the Apple iPhone in 2007 highlighted this hazard.1

Several definitions, as used by United States military forces but applicable to all, are useful in considering the content of this chapter:

Enclave. An information system environment that is end to end under the control of a single authority and has a uniform security policy, including personnel and physical security. Local and remote elements that access resources within an enclave must satisfy the policy of the enclave.

Mobile code. Software obtained from remote systems outside the enclave boundary, transferred across a network, and then downloaded and executed on a local system without explicit installation or execution by the recipient. Mobile code is a powerful software tool that enhances cross-platform capabilities, sharing of resources, and Web-based solutions. Its use is widespread and increasing in both commercial and government applications…. Mobile code, unfortunately, has the potential to severely degrade…operations if improperly used or controlled.

Malicious mobile code. Mobile code software modules designed, employed, distributed, or activated with the intention of compromising the performance or security of information systems, increasing access to those systems, providing the unauthorized disclosure of information, corrupting information, denying service, or stealing resources.2

17.1.1 Mobile Code from the World Wide Web.

On the World Wide Web, the phrase “mobile code” generally refers to executable code, other than Hypertext Markup Language (HTML) and related languages (e.g., Extensible Markup Language, XML), supplied by a Web server or delivered by e-mail for execution on the client's computer. The most common packaging technologies for mobile code are ActiveX, Java, and JavaScript (also known as ECMAScript). Mobile code can directly perform covert functions on a client system, accessing information or altering the operation or persistent state of the system; it can also create accidental or deliberate vulnerabilities that can be exploited at a later time. The widespread use of mail clients that support HTML e-mail, with either embedded or referenced program code, has become a major source of vulnerability. So-called pop-ups can also be a source of vulnerability in several ways. In a technical sense, pop-ups can invoke other Web sites and pages. In a legal sense, they may give rise to log entries that can cause legal problems, as in the case of Julie Amero, a Norwich, Connecticut, substitute teacher accused and subsequently convicted of using a classroom computer to access inappropriate material.3

Although malicious software such as viruses, worms, and Trojan horse programs written in compiled, interpreted, or scripting languages such as Visual Basic also might be considered mobile code, these pests are not generally labeled as such; this chapter deals only with ActiveX controls, Java applets, and JavaScript programs.

Today's trend toward increasing dynamism, with its attendant increase in the use of Ajax4 and other technologies that rely on JavaScript and other mobile code technologies, increases the scope of the threat while also making it more difficult to ban the use of the vulnerable technologies.

The most spectacular problems with mobile code involve system or application crashes, which disrupt user sessions and workflow. However, silent covert access to or modification of client system data are far more serious problems. For example, some Web sites covertly obtain e-mail addresses from users' browsers, resulting later in unwanted commercial e-mail. In the past, antimalware tactics have relied heavily on widespread distribution of threats. The emergence of designer mobile code, specifically targeted to a particular system or a small number of systems, is a dangerous trend.5

The 2005 digital assault against Varda Raziel-Jacont and Amnon Jacont that placed material from a then-unpublished manuscript, “L for Lies” on various Internet sites, was not an aberration. The investigation into this affair uncovered a covert Trojan horse that provided remote access to the Jaconts' computer. Following leads from this investigation, Israeli police investigators uncovered a far larger computer-based information-gathering enterprise.6 The investigation culminated in the arrest of Raziel-Jacont's former son-in-law and his current wife. In the end, the private data of three major private investigation companies, several purchasers of the information, and apparently dozens of victim companies were compromised. This was not an isolated incident. The trend of targeted attacks against senior personnel, rather than random attacks, has accelerated.7

Investigative agencies have also entered the fray. In July 2007, an affidavit filed by the Federal Bureau of Investigation in connection with a series of bomb threats described the use of spyware to infiltrate a suspect's computer and return information to investigators.8

Mobile code presents a complex set of issues to information technology (IT) professionals. Allowing mobile code into an operating environment compromises any enclave; however, even commercial off-the-shelf (COTS) programs breach the integrity of an enclave. The differences between mobile code and other forms of code are primarily in the way these external programs are obtained, installed, documented, and controlled. In an enterprise computing environment, COTS acquisition normally involves conscious and explicit evaluation of the costs and benefits of installing a particular named and documented program. In contrast, mobile code programs are installed largely without notification to the user, and generally without documentation, change control, or review of any kind. Unless client-system firewalls are set to reject mobile code automatically, system administrators cannot be certain exactly which software has been executed on client machines under their nominal control. The use of Secure Sockets Layer (SSL)–based technologies such as HTTPS to secure application connections also has the side effect of preventing the detection of mobile code at the firewall level. Although such control is often illusory, due to user circumvention of restrictions on installation of unauthorized software, the use of mobile code, installed by external Web sites, seriously compromises any remaining control over employee software configurations.

Mobile code has also been used in some cases to enforce proprietary rights in content, as was the case in a 2005 affair involving Sony Music.9 The Sony Music case involved software contained on music CD-ROMs; precisely the same effect could have occurred with a downloaded file or Web page. The covert installation of software of any kind is a serious hazard to integrity, security, and privacy. Malfunction or misuse of such software would likely fit within the criminal statutes defining illegal, unauthorized alteration of systems. The attorney general of Texas filed suit against Sony,10 and private class actions were filed in New York11 and California.12 All of these actions were settled by Sony BMG in December 2006.13 In January 2007, the U.S. Federal Trade Commission announced a settlement with Sony BMG on the charges of installing software without permission.14 Investigations were also opened by the attorneys general of Massachusetts and Florida as well as overseas in Italy and Canada.

There are also reports that the Sony Root Kit was exploited by the Back-door.IRC.Snyd.A exploit15 and others to hide files from malware scans. The widespread nature of this induced vulnerability should also give pause. A widespread vulnerability provides an ecological niche ready for exploitation.

17.1.2 Motivations and Goals.

The motivations and goals of malware propagators have continued an evolutionary trend from the unintentionally destructive to the vengeful, vindictive, and criminal.

In the beginning, many incidents were randomly damaging, or pranks with unintended side effects. This is no longer the case. Now malevolent mobile code is often code with a purpose. That purpose may be embarrassment, it may be blackmail, it may be corporate espionage, or it may be out-and-out theft. In a different dimension, the goal may be the subordination of otherwise innocent computer resources for a criminal enterprise against unrelated third parties.

The change in goals also has a dramatic impact on counterstrategies. When the goal was mass publicity, the same infection was widely distributed, and scanning technologies could be used to identify known threats. When publicity is no longer the goal, widespread infection is maladaptive. Covert infection is then a far more attractive strategy than mass distribution. Custom mobile code designed to achieve selective covert infections is unlikely to quickly appear in the crosshairs of scanning software. This follows the evolutionary trajectory common in the biological world, where pathogens tend to mutate into less fatal forms over time. It is maladaptive for a parasite, which is what most malware is, to fatally damage its host. The downside of this effect is that chronic infections with no apparent side effects are often overlooked.

Going forward, technologies and operational routines that make it difficult for unauthorized code to take up residence in or compromise the persistent state of the system are far more desirable counterstrategies than approaches based on scanning for known infections.

17.1.3 Design and Implementation Errors.

Design and implementation errors take a variety of forms. The simplest cases involve software that malfunctions on a constant predictable basis. More pernicious and more dangerous are those errors that silently compromise the strict containment of a multiuser environment. Errors in such prophylactic layers, known as brick walls or sandboxes, compromise the integrity of the protection scheme. In the worst cases, they permit unfettered access to system-level resources by unprivileged user programs.

Design and implementation errors can occur within any program or procedure, and mobile code is no exception. Sandboxes (nonprivileged, restricted operating environments) are intended to prevent unauthorized operations. Authentication determines which organization takes responsibility for such errors.

This chapter looks at a security model based on authentication of mobile code and then examines how restricted operating environments help to limit damage from harmful mobile code.

These concerns are appropriate to both widely distributed and targeted attacks. The challenge of targeted attacks lies in their small population; targeted attacks are unlikely to appear “on the radar” of general distribution scanning programs.

17.2 SIGNED CODE.

Authentication technologies are designed to ensure that information supplied by an organization has not been altered without authorization. The technology used to implement this assurance is based on use of the Public Key Infrastructure (PKI), as discussed in detail in Chapter 37. Code authenticated using PKI-based mechanisms often is referred to as signed. Signed code generally is immune to unauthorized modification; however, a signature guarantees integrity only from the point in time that the code is signed; the signing process does not imply safety or quality of the code prior to the point of signing.
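The distinction between integrity and safety can be sketched in a few lines of Python. Here an HMAC over the code bytes stands in for a real PKI signature (actual code signing uses an X.509 certificate and asymmetric cryptography; the key value below is purely hypothetical). The point the sketch makes is the one in the text: a hostile payload signs just as cleanly as a benign one, while any post-signing change is detected.

```python
import hashlib
import hmac

# Hypothetical stand-in for a publisher's signing key; real code signing
# uses the private key of an X.509 certificate, not a shared secret.
PRIVATE_KEY = b"publisher-secret"

def sign_code(code: bytes) -> bytes:
    """Produce a signature over the code bytes at signing time."""
    return hmac.new(PRIVATE_KEY, code, hashlib.sha256).digest()

def verify_code(code: bytes, signature: bytes) -> bool:
    """Verify that the code is unaltered since it was signed."""
    expected = hmac.new(PRIVATE_KEY, code, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

# Signing guarantees integrity, not safety: both payloads verify.
benign = b"print('hello')"
hostile = b"delete_all_files()"
assert verify_code(benign, sign_code(benign))
assert verify_code(hostile, sign_code(hostile))

# Any modification after signing is detected.
assert not verify_code(benign + b" tampered", sign_code(benign))
```

Verification answers only “is this the code that was signed?”; it says nothing about what the code did, or was designed to do, before the signature was applied.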

Once signed, a file cannot be altered without the cooperation of someone holding access to the private key associated with the creating organization's X.509 certificate. (An X.509 certificate, digitally signed by an authorized user, authenticates the binding between a user's name and the user's public key.) Looking below the surface, such precautions do not address a variety of vulnerabilities:

  • Access to private keys
  • Access to the code base prior to signing
  • Fraudulent certificates
  • Design and implementation errors

17.2.1 Authenticode.

Microsoft's Authenticode technology is an example of an authentication-based approach.16 Developers wishing to distribute code obtain an appropriate digital certificate from a Certification Authority (CA) and use the digital certificate to sign the code. The signature is then checked by the client system each time that the code is executed.

Authenticode relies on several components:

  • PKI and the X.509 certificates issued by a Certification Authority.
  • Limited access to the private keys associated with the issuing organization's X.509 certificate. In Microsoft terminology, the term “Software Publishing Certificate” or “SPC” refers to a PKCS #7 object, which in turn contains a collection of X.509 certificates used to sign code.
  • The integrity of the processes used by the CA to ensure that requests for X.509 certificates are legitimate.

Authenticode does not address issues relating to the safety or accuracy of signed code, merely that it is authentic and unaltered since signing. For example, signing does not provide any guard against employee malfeasance.

17.2.2 Fundamental Limitations of Signed Code.

Signing technologies, regardless of the context (e.g., e-mail, applets, and archives), do not directly address questions of accuracy or correctness; they merely address questions of legitimacy. The biggest danger in signing schemes is the all-or-nothing approach taken to trust. Signed items are presumed to be trustworthy to the fullest extent of the requesting user's authority. The signed item can perform any operation that the user would be permitted to execute. There is no concept of partial trust. In an attorney's words, such an acceptance would be a general power of attorney. In the words of the CERT/Coordination Center (CERT/CC)–sponsored “Security in ActiveX Workshop”: “A digital signature does not, however, provide any guarantee of benevolence or competence.”17
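The all-or-nothing model can be contrasted with a least-privilege alternative in a short, hypothetical Python sketch. The operation names and policy structures are illustrative only, not any real ActiveX or browser API; the sketch simply shows that accepting a signature under the signed-code model confers the user's entire authority, whereas a partial-trust model would check each operation against an explicit grant.

```python
# Hypothetical set of operations the requesting user is authorized to perform.
USER_AUTHORITY = {"read_file", "write_file", "open_network", "delete_file"}

def signed_code_model(operation: str, signature_accepted: bool) -> bool:
    # All-or-nothing: once the signature is accepted, the code may
    # perform any operation the user could perform.
    return signature_accepted and operation in USER_AUTHORITY

def partial_trust_model(operation: str, granted: set) -> bool:
    # Least privilege: each operation is checked against a per-control
    # grant, regardless of any signature.
    return operation in granted

# Accepting the signature accepts everything, including deletion:
assert signed_code_model("delete_file", signature_accepted=True)

# Under partial trust, a control granted only read access cannot delete:
assert not partial_trust_model("delete_file", granted={"read_file"})
assert partial_trust_model("read_file", granted={"read_file"})
```

The signed-code model has no place to express “this control may read but not write”; that is precisely the general power of attorney described above.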

At the same time, the inherent power and apparent legitimacy of a digital signature place a heavy burden on signers and the higher levels of the PKI to ensure the integrity of the mechanisms and secrets.

The key to the integrity of signed code is the signing process and the process that generates the object to be signed; the security of the secret keys required for its implementation determines the degree of trust in attribution of the signed code. In the truest sense, the private keys associated with the X.509 certificate represent the keys to the kingdom, as valuable as a signature chop in the Far East or a facsimile signature plate for a bank account.

On a practical level, accepting code signed by an organization is an explicit acceptance that the signing organization has good controls on the use of its signing keys. Organizations that take security seriously, segregating access to privileged accounts and controlling access to systems, are well positioned to manage the procedures for signing code.

Thus, the procedures and systems used for signing code should be treated with the same caution as is used for the aforementioned signing plates or the maximum security cryptographic facilities familiar to those in the national security area.

Unfortunately, despite years of publicity about the dangers of shared passwords and accounts, in many IT installations shared accounts and passwords remain common. There is little reason to assume that the secrets relating to PKI are better protected, despite extensive recommendations that those details be well guarded.

17.2.3 Specific Problems with the ActiveX Security Model.

The CERT/CC workshop on Security in ActiveX summarized the security issues in three major areas: importing and installing controls, running controls, and the use of controls by scripts.18 The next sections summarize key findings from this report.

17.2.3.1 Importing and Installing Controls.

As discussed, the sole basis for trusting a signed control is its presumed origin. However, the originator of the code may have incorporated a design flaw in the control or may not have done adequate software quality assurance to prevent serious bugs.

A trusting user may install a signed control that contains a vulnerability making it useful for attackers simply because it is signed.

On Windows systems with multiple users, once a control has been permitted by one user, it remains available for all users, even if their security stances differ.

17.2.3.2 Running Controls.

An ActiveX control has no limitations on what it can do on the client machine, and it runs with the same privileges as those of the user process that initiated the control.

Although ActiveX security measures are available in Internet Explorer, other client software may run controls without necessarily implementing such security. Internet Explorer security levels tend to be all or nothing, making it difficult to allow a specific control without allowing all controls of that type. Remote activation of controls can bypass normal security perimeters such as those imposed by firewalls.

There is no basis for deciding whether a particular control is safe to execute or not, in any particular context.

17.2.3.3 Scripting Concerns.

Lacking a general basis for limiting the actions of controls, ActiveX programmers must effectively determine their own precautions to prevent harmful actions. It is difficult enough to develop a good set of boundaries on program activity, even if one uses a general model such as the sandbox described later; it is extremely difficult to see how individual developers can be expected to create their own equivalent of the sandbox for each individual control or whether they can be trusted to do so. In light of these hazards, the authors of the CERT/CC report stated that “there is a large number of potential failure points.”

17.2.4 Case Studies.

Several security breaches or demonstrations mediated through ActiveX have occurred since the introduction of this technology in the mid-1990s.

17.2.4.1 Internet Exploder.

In 1996, Fred McLain wrote Internet Exploder, an ActiveX control designed to illustrate the broad degree of trust conferred on an ActiveX control by virtue of its having been “signed.” When downloaded for execution by Internet Explorer, Exploder shuts down the computer running the browser (the equivalent of the Shut down / Shut down sequence from the Start menu on a Windows system). This operation is operationally disruptive but does not actually corrupt the system. McLain notes in his frequently asked questions (FAQ) on Exploder that it is easy to build destructive or malicious controls.19

Exploder raises an important question: What are the limits of trust when using signed code? In normal commercial matters, there is a large difference between an inauthentic signature (a forgery) and a properly signed but unpayable check. In software, the difference between an inauthentic control and a dangerous one is far less clear.

17.2.4.2 Chaos Computer Club Demonstration.

On January 27, 1997, a German television program showed members of the Chaos Computer Club demonstrating how they could use an ActiveX control to steal money from a bank account. The control, available on the Web, was written to subvert the popular accounting package Quicken. A victim need merely visit a site and download the ActiveX control in question; it automatically checked to see if Quicken was installed. If so, the control ordered Quicken to issue a transfer order to be saved in its list of pending transfers. The next time the victim connected to the appropriate bank and sent all pending transfer orders to the bank, all the transfers would be executed as a single transaction. The user's personal identification number (PIN) and transaction authorization number (TAN) would apply to all the transfers, including the fraudulent one in the pile of orders. Most victims would be unaware of the theft until they received their next statement—if then.20

Dan Wallach of Princeton University, commenting on this case, wrote:

When you accept an ActiveX control, you're allowing completely arbitrary code to rummage around your machine and do anything it pleases. That same code could make extremely expensive phone calls, to 900 numbers or over long distances, with your modem; it can read, write, and delete any file on your computer; it can install Trojan horses and viruses. All without any of the subterfuge and hackery required to do it with Java. ActiveX hands away the keys to your computer.21

Responding to criticisms of the ActiveX security model, Bob Atkinson, architect and primary implementer of Authenticode, wrote a lengthy essay explaining his point of view. Among the key points:

  • Microsoft never claimed that it would certify the safety of other people's code.
  • Authentication is designed solely to permit identification of the culprits after malicious code is detected.
  • Explorer-based distribution of software is no more risky than conventional purchases through software retailers.22

Subsequent correspondence in the RISKS Forum chastised Mr. Atkinson for omitting several other key points, such as:

  • Interactions among ActiveX controls can violate system security even though individual controls appear harmless.
  • There is no precedent in fact for laying liability at the feet of software developers even when you can find them.
  • Under attack, evidence of digital signature is likely to evaporate from the system being damaged.
  • Latency of execution of harmful payloads will complicate identification of the source of damage.
  • Malice is not as important a threat from code as incompetence.
  • Microsoft has a history of including security-threatening options, such as automatic execution of macros in Word, without offering any way of turning off the feature.
  • A Web site can invoke an ActiveX control that is located on a different site or that already has been downloaded from another site, and can pass, by means of that control, unexpected arguments that could cause harm.23

17.2.4.3 Certificates Obtained by Imposters.

In January 2001, VeriSign issued two Class 3 Digital Certificates for signing ActiveX controls and other code to someone impersonating a Microsoft employee. As a result, users receiving code signed using these certificates would receive a request for acceptance or rejection of a certificate apparently signed by Microsoft on January 30 or 31, 2001. As Russ Cooper commented on the NTBUGTRAQ Usenet group when the news came out in March 2001:

The fact that unless you actually check the date on the Certificate you won't know whether or not its [sic] one you can trust is a Bad Thing(tm)[sic], as obviously not everyone (read: next to nobody) is going to check every Certificate they get presented with.

You gotta wonder how VeriSign's issuance mechanism could be so poorly designed and/or implemented to let something like this happen.

Meanwhile, Microsoft are [sic] working on a patch that will stick its finger in this dam.

Basically, VeriSign Code–Signing Certificates do not employ a Certificate Revocation List (CRL) feature called CDP, or CRL Distribution Point, which causes the Certificate to be checked for revocation each time its read. Even if you have CRL turned on in IE, VeriSign Code–Signing Certificates aren't checked.

Microsoft's update is going to shim in some mechanism which causes some/all Code–Signing Certificates to check some local file/registry key for a CRL, which will (at least initially) contain the details of these Certificates. Assuming this works as advertised, any attempt to trust the mis-issued Certificates should fail.24

Roger Thompson, Chief Technical Officer for Exploit Prevention Labs, explained that the imposters' motives would determine how bad the results would be from the fraudulent certificates. “If it was someone with a purpose in mind, then six weeks is a long time to do something,” he said. “If the job was to install a sniffer, then there could be a zillion backdoors as a result of it.” Published reports indicated that the failure of authentication occurred due to a flaw in the issuing process at VeriSign: The certificates were issued before receiving verification by e-mail that the official customer contact authorized the certificates. This case was the first detected failure of authentication in over 500,000 certificates issued by VeriSign.25
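The revocation shim Cooper describes amounts to consulting a locally held revocation list before honoring an otherwise valid signature. A minimal Python sketch, with hypothetical serial numbers standing in for the mis-issued certificates, shows why the signature check alone is insufficient:

```python
# Hypothetical serial numbers representing the two mis-issued certificates.
LOCAL_CRL = {"1B51-90F3", "750E-40FF"}

def is_revoked(serial: str, crl: set) -> bool:
    """Check a certificate serial number against a locally held CRL."""
    return serial in crl

def accept_certificate(serial: str, signature_valid: bool) -> bool:
    # A cryptographically valid signature is necessary but not sufficient;
    # without a revocation check, mis-issued certificates remain trusted.
    return signature_valid and not is_revoked(serial, LOCAL_CRL)

# The mis-issued certificates verify correctly, yet must be rejected:
assert not accept_certificate("1B51-90F3", signature_valid=True)

# An unrevoked certificate with a valid signature is accepted:
assert accept_certificate("AAAA-0001", signature_valid=True)
```

Because the VeriSign code-signing certificates in question carried no CRL Distribution Point, clients had no automatic way to perform this check until the local list was shimmed in.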

17.3 RESTRICTED OPERATING ENVIRONMENTS.

From a Web perspective, the term “sandbox” defines what could be referred to as a restricted operating environment. Restricted operating environments are not new; they have existed for nearly 50 years in the form of multiuser operating systems, including MULTICS, OS/360 and its descendants, OpenVMS, UNIX, and others. See Chapter 24 in this Handbook for an overview of operating systems security.

In simple terms, a restricted, or nonprivileged, operating environment prohibits normal users and their programs from executing operations that can compromise the overall system. In such an environment, normal users are prohibited from executing operations such as HALT that directly affect hardware. User programs are prevented from executing instructions that can compromise the operating system's memory allocation and processor state and from accessing or modifying files belonging to the operating system or to other users. Implemented and managed carefully, such systems are highly effective at protecting information and data from unauthorized modification and access. The National Computer Security Center (NCSC) Orange Book contains criteria for classifying and evaluating trusted systems.26
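One of the simplest restrictions such an environment enforces is mediation of file access: every path a nonprivileged program requests is checked against the boundary of its permitted tree. A minimal Python sketch of a sandbox-style path check (the directory names are illustrative, and a real sandbox enforces this in the operating system, not in application code):

```python
import os

def sandboxed_path(requested: str, sandbox_root: str) -> bool:
    """Permit access only to files inside the sandbox directory tree."""
    root = os.path.realpath(sandbox_root)
    target = os.path.realpath(os.path.join(root, requested))
    # commonpath rejects any resolved path that escapes the root,
    # including attempts at "../" traversal.
    return os.path.commonpath([root, target]) == root

# A file inside the sandbox is permitted:
assert sandboxed_path("data/user.txt", "/tmp/sandbox")

# An attempt to reach a system file via traversal is denied:
assert not sandboxed_path("../../etc/passwd", "/tmp/sandbox")
```

The effectiveness of the scheme rests entirely on the check being unbypassable, which is why, as the text notes, errors in these prophylactic layers are so serious.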

The strengths and weaknesses of protected systems are well understood. Permitting ordinary users unrestricted access to system files compromises the integrity of the system. Privileged users (i.e., those with legitimate access to system files and physical hardware) must be careful that the programs they run do not compromise the operating system. Most protected systems contain a collection of freestanding programs that implement useful system functions requiring some form of privilege to operate. Often these programs have been the source of security vulnerabilities. This is the underlying reasoning behind the universal recommendation that programs not run as root or Administrator, or with similar privileges unless absolutely necessary.

17.3.1 Java.

Java is a language developed by Sun Microsystems for platform-independent execution of code, typically within the context of a Web browser. The basic Java environment includes a Java Virtual Machine (JVM) and a set of supporting software referred to as the Java Runtime Environment. Applets downloaded via the World Wide Web (intranet or Internet) have strict limitations on their ability to access system resources. In particular, these restrictions prevent the execution of external commands and read or write access to files.

The Java environment does provide for signed applets that are permitted wider access to files. Dynamically downloaded applets also are restricted to initiating connections to the system that supplied them, theoretically limiting some types of third-party attacks.
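The applet connection restriction amounts to a same-origin check: an unsigned applet may open connections only back to the host that supplied it. A minimal Python sketch of that rule (the host names are hypothetical):

```python
from urllib.parse import urlparse

def connection_allowed(applet_origin_url: str, target_host: str) -> bool:
    """Unsigned applets may connect only to the host that supplied them."""
    origin_host = urlparse(applet_origin_url).hostname
    return origin_host is not None and origin_host == target_host

# Connecting back to the supplying host is permitted:
assert connection_allowed("http://www.example.com/applet.jar", "www.example.com")

# Connecting to an unrelated third party is denied:
assert not connection_allowed("http://www.example.com/applet.jar", "bank.example.net")
```

This is the restriction that theoretically limits third-party attacks, and also why DNS spoofing, which makes an attacker's host answer for the origin's name, undermines it.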

In concept, the Java approach, which also includes other validity tests on the JVM pseudocode, should be adequate to ensure security. However, the collection of trusted applets found locally on the client system and signed downloaded applets represent ways in which the security system can be subverted. Without signatures, the Java approach is also vulnerable to attack by domain name system (DNS) spoofing.

Multiuser protection and virtual machine protection schemes also are totally dependent on the integrity of the code that separates the nonprivileged users from privileged, system-compromising, operations. Java has not been an exception to this rule. In 1996, a bug in the Java environment contained in Netscape Navigator Version 2.0 permitted connections to arbitrary Universal Resource Locators (URLs).27 Later, in 2000, errors were discovered in the code that protected various resources.28 Although the Java environment is less widely exploitable than ActiveX, vulnerabilities continue to be uncovered. In 2007, at least two vulnerabilities were reported by US-CERT.29 Significantly, both of these reported vulnerabilities involved the ability of untrusted applets to compromise the security envelope.

Additionally, since unsigned code can take advantage of errors in underlying signed code, there is no guarantee that complex combinations of untrusted and trusted code will not lead to security compromises.

17.4 DISCUSSION.

Mobile code security raises important issues about how to handle relationships in an increasingly interconnected computing environment.

17.4.1 Asymmetric, and Transitive or Derivative, Trust.

It is common for cyberrelationships to be asymmetric with regard to the size or power of the parties. This fact increases the potential for catastrophic interactions. It also creates opportunities for mass infection across organization boundaries. Large or critical organizations often can unilaterally impose limitations on the ability of partner organizations to defend their information infrastructure against damage.

The combination of a powerful organization and insufficient controls on signing authority, or, alternatively, the obligatory execution of unsigned (or self-signed) ActiveX controls, is a recipe for serious problems. The powerful organization is able to obligate its partners to accept a low security level, such as would result, for example, from using unsigned ActiveX controls, while abdicating responsibility for the resulting repercussions.

All organizations should, for security and performance reasons, use the technology that requires the least degree of privilege to accomplish the desired result. JavaScript/ECMAScript can provide many functions, without the need for the functionality provided by Java, much less ActiveX. It remains common for large organizations to force the download of ActiveX controls for purposes that do not require the power of ActiveX, merely using the justification that they perceive Internet Explorer to be the more prevalent browser. Often this requires running the installation script from an account with Administrator privileges, a second security violation. This is particularly surprising since these same organizations often offer parallel support for Firefox, Opera, Safari, and other non-ActiveX supporting browsers on Linux, Apple, and other platforms. This “Trust Me” concept forces the risk and burden onto the end user, who is far less able to deal with the consequences.


In Chapter 30 in this Handbook, it is noted that protecting Web servers requires that the contents of the servers be managed with care. It is appropriate and often necessary to isolate Web servers on separate network segments, separated from both the Internet and the organizational intranet by firewalls. These precautions are even more necessary when servers are responsible for supplying executable code to clients.

Security practitioners should carefully examine the different functions performed by each server. In some cases, such as OpenVMS hosts, where network servers commonly run as unprivileged processes in separate contexts and directory trees, it is feasible to run multiple services on a single server. In other systems, such as UNIX and Windows, where it is common for application services to execute as privileged, with full access to all system files, a logic error in a network service can compromise the security of the entire server, including the collection of downloadable applets.

Far more serious and equally subtle is transitive (or derivative) trust: Alpha trusts Beta, who trusts Gamma. A security compromise—for example, an unsigned Java applet or a malfunctioning or malevolent ActiveX control supplied by Gamma—compromises Beta. Beta then causes problems with Alpha's systems. This cascade can continue, leading to numerous compromised Web services and systems far removed geographically and organizationally from the original incident.

17.4.2 Misappropriation and Subversion.

The threat space has mutated over the last several years. Where the main danger from mobile code was once attacks on the target machine, today's threat is far more diverse. In November 2007, John Schiefer of Los Angeles pled guilty to installing software designed to capture usernames and passwords. According to news reports, he was also involved in running networks of compromised computers, often referred to as “bots,” which are often used to initiate distributed denial-of-service (DDoS) and other attacks.30 In this particular case, the announcement by the U.S. Department of Justice31 mentions two specific episodes: 250,000 machines infected with spybots to obtain users' usernames and passwords for PayPal and other systems, and a separate scheme involving a Dutch Internet advertising company in which a network of 150,000 infected computers was used to “sign up” for one of the advertising company's programs.

This was one of the cases stemming from Bot Roast II,32 an FBI operation against several botnets.

17.4.3 Multidimensional Threat.

Mobile code is a multidimensional threat, with several different aspects that must each be treated separately. Signing code, such as Java applets or ActiveX controls, addresses the problem of authenticity and authority to release the code. However, the integrity of the signature mechanism requires that the public key infrastructure (PKI) behind it be beyond reproach. In a very real sense, that infrastructure is beyond the control of the organization itself. Any compromise or procedural slip on the part of the Certificate Authority or signer invalidates the presumptions of safety.
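The verification step itself can be sketched in Java using the platform's own jar-verification machinery; the class and method names below are illustrative, not taken from this chapter. A jar opened with verification enabled throws an exception when a tampered entry is read, and an entry's signers can be inspected only after the entry has been fully read. Note what a passing check does and does not establish: it identifies who signed the code; it says nothing about whether the code is safe or correct.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Path;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class SignatureCheck {
    /** Returns true only if every code-bearing entry in the jar carries
     *  at least one verified signature. Opening the JarFile with
     *  verify=true causes reads to fail on tampered entries. */
    public static boolean allEntriesSigned(Path jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath.toFile(), true)) {
            Enumeration<JarEntry> entries = jar.entries();
            while (entries.hasMoreElements()) {
                JarEntry entry = entries.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/")) {
                    continue;  // the signature files themselves are not signed
                }
                // Signers are populated only after the entry is fully read.
                try (InputStream in = jar.getInputStream(entry)) {
                    byte[] buf = new byte[8192];
                    while (in.read(buf) != -1) { /* drain to trigger verification */ }
                }
                if (entry.getCodeSigners() == null) {
                    return false;  // unsigned entry: reject the download
                }
            }
        }
        return true;
    }
}
```

A client-side loader might apply such a check before admitting downloaded code, while still enforcing the sandbox restrictions discussed earlier; passing the check resolves authenticity, not safety.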

Signing, however much it contributes to resolving the question of authenticity, does not address safety or validity. As an example, the Windows Update ActiveX control, distributed by Microsoft as part of the various Windows operating systems, has as its underlying purpose the update of the operating system. A failure of that control would be catastrophic. Fortunately, Microsoft gives users the choice of using the automatic update facility or doing updates manually. Many Web applications are not so accommodating.

The problem is not solely a question of malfunctioning applets. It is possible that a collection of applets involved in a client's overall business activities may collide in some unanticipated fashion, from attempting to use the same Windows registry key in contradictory ways, to inadvertently using the same temporary file name. Similar problems often occur with applications that presume they have a monopoly on the use of the system, an all-too-common syndrome.
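The temporary-file collision just described is avoidable with well-known techniques. The following Java sketch (with illustrative names) contrasts a collision-prone fixed file name with the runtime's guaranteed-unique alternative:

```java
import java.io.File;
import java.io.IOException;

public class SafeTempFiles {
    /** Collision-prone: every instance of the applet fights over one name. */
    public static File fixedName() {
        return new File(System.getProperty("java.io.tmpdir"), "applet.tmp");
    }

    /** Safer: the runtime guarantees a fresh, uniquely named file, so two
     *  applets (or two copies of one) cannot clobber each other's data. */
    public static File uniqueName(String appletId) throws IOException {
        File f = File.createTempFile(appletId + "-", ".tmp");
        f.deleteOnExit();  // avoid litter when the applet is torn down
        return f;
    }
}
```

The same discipline applies to registry keys and other shared namespaces: qualify every name with an identifier unique to the application instance rather than presuming a monopoly on the system.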

These issues are, for the most part, completely unrelated to each other. A solution in one area would neither improve nor worsen the situation with regard to the other issues.

17.4.4 Client Responsibilities.

The expanding threat presents a challenge to those responsible for ensuring the integrity of desktop computing. Put simply, there is a complex, multidimensional threat, and it is not easily defended against using the techniques of portals, firewalls, and scanners.

The danger from browsing the World Wide Web is that the browser will permit an attacker, directly or indirectly, to modify the persistent state of the system. The simplest step in the right direction is not to browse the World Wide Web from within a protection context that has access to critical system files and settings. Limiting this access by using a nonprivileged user account for browsing significantly decreases the hazard, provided, of course, that the system files are protected from access by such an account.

The mass availability of Virtual Machine technology presents an additional alternative. Virtual Machine technology, pioneered by IBM in the 1960s on mainframes, has emerged in a new guise on platforms down to the desktop level. The general availability of this capability in the desktop world opens up a whole new defensive strategy against mobile malware: the expendable Web browser.

An expendable Web browser is a desktop instantiated within a virtual machine environment from a known system image. If it is compromised, it is simply recreated from that known, uncompromised image. This allows one to create a low-security, at-risk browsing enclave within an otherwise higher-security environment. This approach has been used, in a physical sense to be sure, by some organizations providing public-access personal computers. Rather than attempting to fortify the machines against compromise or attack, they are reinitialized from a known image after each user. This allows the end user to indulge the foibles of trading partners' attempts to impose unsafe computing practices in an expendable environment that can be isolated. Using Windows as an example, while it is an unsafe practice to install software as Administrator, it is far less damaging to do so in a virtual machine, where the machine can be deleted at will with little side effect.

17.4.5 Server Responsibilities.

As noted earlier in this chapter, Web servers represent an attractive vector for attacks. Signing (authentication) methods are a way to control damage potential, provided the mechanisms used for admitting executable code are properly controlled. Failure to control these mechanisms leads to severe side effects.

The concept of minimum necessary privilege applies to mobile code. There is little reason to impose the use of ActiveX for the purpose of changing the color of a banner advertisement. JavaScript/ECMAScript is capable of many powerful, display-related operations with a high degree of safety. Using Java to maintain a shopping cart (price, quantities, and contents) is reasonable and does not require the use of a signed applet, with its attendant greater capabilities and risks. At the other end of the scale, it is plausible that a system update function (e.g., the Windows Update function, which automatically downloads and installs changes to the Windows operating system) requires the unbridled power of a signed ActiveX control.

When the power of signed applets or controls is required, good software engineering practice provides excellent examples of how to limit the potential for damage and mischief, as discussed in Chapter 38 in this Handbook.

Good software implementation isolates functions and limits the scope of operations that require privileged access or operations. Payroll applications do not directly manipulate printer ports, video display cards, network adapters, or disk drives. Privileged operating system components, such as device drivers and file systems, are responsible for the actual operation. This separation, together with careful parameter checking by the operating system kernel and the privileged components, ensures safety.

The same techniques can be used with applets and controls. Because they require more access, they should be programmed carefully, using the same defensive measures as are used when implementing privileged additions to operating systems. As an example, there is little reason for a Simple Mail Transfer Protocol (SMTP) server to be privileged. An SMTP server requires privileges for a single function, the delivery of an individual electronic mail message to a recipient's mailbox. This can be accomplished in two ways:

  1. Implement the application in a nonprivileged way, by marking users' e-mail files and directories with the access permissions needed for the mail delivery program to create and modify them. Such a mechanism fully conforms to the NCSC Orange Book's C2 level of security.
  2. Implement a separate subcomponent whose sole responsibility is the actual delivery of the message. The subcomponent must be written defensively, checking all of its parameters, and must not provide an interface for the execution of arbitrary code. This approach is used by HP's OpenVMS operating system.
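A minimal sketch of the second approach, written in Java with illustrative names rather than as a definitive implementation, shows the essential discipline: a subcomponent that performs exactly one operation and defensively validates every parameter before touching the file system.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.regex.Pattern;

/** Single-purpose delivery component: it does exactly one thing --
 *  append a message to a mailbox under the spool directory -- and it
 *  validates every parameter before acting. */
public class MailDropper {
    // Accept only plain lowercase account names, nothing else.
    private static final Pattern SAFE_USER = Pattern.compile("[a-z][a-z0-9_]{0,31}");

    public static void deliver(Path spoolDir, String recipient, byte[] message)
            throws IOException {
        // Reject anything that is not a plain account name; this
        // forecloses path traversal such as "../../etc/passwd".
        if (!SAFE_USER.matcher(recipient).matches()) {
            throw new IllegalArgumentException("bad recipient: " + recipient);
        }
        Path mailbox = spoolDir.resolve(recipient).normalize();
        if (!mailbox.getParent().equals(spoolDir)) {
            throw new IllegalArgumentException("escapes spool directory");
        }
        Files.write(mailbox, message,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```

Because the component exposes no general-purpose interface, a caller that is tricked or compromised can, at worst, append mail; it cannot execute arbitrary code with the component's access rights.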

The UNIX sendmail program, by contrast, is a large, multifunctional program that executes with privileges. sendmail has been the subject of numerous security problems for over a decade and has spawned efforts to produce more secure replacements.33

17.5 SUMMARY.

Mobile code provides many flexible and useful capabilities. The different mechanisms for implementing mobile code range from the innocuous (HTML), to the fairly safe (JavaScript/ECMAScript), through increasing degrees of power and risk with Java and ActiveX.

Ensuring security and integrity with the use of mobile code requires cooperation on the part of both the provider and the client. Clients should not accept random signed code and controls. Providers have a positive responsibility to:

  • Follow good software engineering practices.
  • Grant minimum necessary privileges and access.
  • Use defensive programming.
  • Limit privileged access, with no open-ended interfaces.
  • Ensure the integrity of the signing process and the associated private keys.

With appropriate caution, mobile code can be a constructive, powerful part of intranet and Internet applications, both within an organization and in cooperation with its customers and other stakeholders.

17.6 FURTHER READING.

Carl, Jeremy. “ActiveX Security: Under the Microscope,” Web Week, 2, No. 17, November 4, 1996; www.Webdeveloper.com/activex/activex_security.html

CERT. “NIMDA Worm,” September 11, 2001, www.cert.org/advisories/CA-2001-26.html

CERT. “sadmind/IIS Worm,” May 8, 2001, www.cert.org/advisories/CA-2001-11.html

CERT. “Unauthentic ‘Microsoft Corporation’ Certificates,” March 22, 2001, www.cert.org/advisories/CA-2001-04.html

Dormann, Will, and Jason Rafail. “Securing Your Web Browser,” CERT, January 23, 2006, www.cert.org/tech_tips/securing_browser/index.html (retrieved December 2, 2007).

Evers, J. “FAQ: JavaScript Insecurities,” http://www.xml.org/xml/news/archives/archive.07282006.shtml#4

Felten, Edward. “Security Tradeoffs: Java vs. ActiveX,” last modified April 28, 1997, www.cs.princeton.edu/sip/faq/java-vs-activex.html.

Felten, E., and J. Halderman. Lessons from the Sony CD DRM Episode, Center for Information Technology Policy, Department of Computer Science, Princeton University, February 14, 2006, http://itpolicy.princeton.edu/pub/sonydrm-ext.pdf.

Felten, E., and G. McGraw. Securing Java: Getting Down to Business with Mobile Code. New York: John Wiley & Sons, 1999. Also free and unlimited Web access from www.securingjava.com

Gehtland, J., B. Galbraith, and D. Almaer. Pragmatic Ajax. Raleigh, NC: Pragmatic Bookshelf, 2006.

Grossman, J., and T. C. Niedzialkowski. “Hacking Intranet Websites from the Outside,” Black Hat (USA), Las Vegas, August 3, 2006.

Hensing, R. “W32/HLLP.Philis.bq, Chinese Gold Farmers and What You Can Do about It,” http://blogs.technet.com/robert_hensing/archive/2006/12/04/w32-hllp-philis-bq-chinese-gold-farmers-and-what-you-can-do-about-it.aspx (retrieved December 2, 2007).

Holzman, S. Ajax Bible. Hoboken, NJ: John Wiley & Sons, 2007.

Java Security. Frequently Asked Questions, revision March 29, 2001, java.sun.com/sfaq/index.html

Keizer, G. “FBI Planted Spyware on Teen's PC to Trace Bomb Threats,” Computerworld, July 19, 2007.

McGraw, G., and E. W. Felten. Java Security: Hostile Applets, Holes and Antidotes—What Every Netscape and Internet Explorer User Needs to Know. New York: John Wiley & Sons, 1997.

Microsoft. “Introduction to Code Signing” (with appendix), 2001, msdn.microsoft.com/workshop/security/authcode/intro_authenticode.asp; msdn.microsoft.com/workshop/security/authcode/appendixes.asp.

Rhoads, C. “Web Scammer Targets Senior U.S. Executives,” Wall Street Journal, November 9, 2007, http://online.wsj.com/public/article_print/SB119456922698387317.html (retrieved November 14, 2007).

Schwartz, J. “iPhone Flaw Lets Hackers Take Over, Security Firm Says,” New York Times, July 23, 2007.

VeriSign, “Microsoft Security Bulletin MS01-017: Erroneous VeriSign-Issued Digital Certificates Pose Spoofing Hazard,” March 22, 2001, www.microsoft.com/TechNet/security/bulletin/MS01-017.asp.

VeriSign, “VeriSign Security Alert Fraud Detected in Authenticode Code Signing Certificates,” March 22, 2001, www.VeriSign.com/developer/notice/authenticode/index.html.

Zakas, N., J. McPeak, and J. Fawcett. Practical Ajax, 2nd ed. Hoboken, NJ: John Wiley & Sons, 2007.

17.7 NOTES.

1. J. Schwartz, “iPhone Flaw Lets Hackers Take Over, Security Firm Says,” New York Times, July 23, 2007.

2. Adapted from Memorandum, November 7, 2000, from Arthur M. Money, Assistant Secretary of Defense for C3I and CIO, to Secretaries of the Military Departments, Chairman of the Joint Chiefs of Staff, Chief Information Officers of the Defense Agencies, et al. SUBJECT: Policy Guidance for Use of Mobile Code Technologies in Department of Defense (DoD) Information Systems; see www.c3i.osd.mil/org/cio/doc/mobile-code11-7-00.html.

3. J. Penny, “40 Years Too Long in Norwich Porn Case?” Norwich Bulletin, January 9, 2007; G. Smith, “Teacher Facing Porn Charges” Norwich Bulletin, November 11, 2004.

4. Jesse James Garrett, “AJAX: A New Approach to Web Applications,” http://adaptivepath.com/ideas/essays/archives/000385.php.

5. D. Izenberg, “Trojan Horse Developers Indicted,” Jerusalem Post, March 5, 2006.

6. Glenn Frankel, “18 Arrested in Israeli Probe of Computer Espionage,” Washington Post, Tuesday, May 31, 2005.

7. J. Kirk, “Hackers Target C-level Execs and Their Families,” Network World, July 2, 2007.

8. G. Keizer, “FAQ: What We Know (Now) about the FBI's CIPAV spyware,” Computerworld, July 29, 2007.

9. Mark Russinovich, “Sony, Rootkits and Digital Rights Management Gone Too Far,” December 2, 2007, http://blogs.technet.com/markrussinovich/archive/2005/10/31/sony-rootkits-and-digital-rights-management-gone-too-far.aspx.

10. The State of Texas v. Sony BMG Music Entertainment, LLC, Case GV-505065, District Court of Travis County, Texas, 126th Judicial District.

11. James Michaelson and Ori Edelstein v. Sony BMG Music, Inc. and First 4 Internet, Case 05 CV 9575, United States District Court, Southern District of New York.

12. Alexander William Guevara v. Sony Music Entertainment, et al, Case BC342359, Superior Court of the State of California, County of Los Angeles.

13. R. McMillan, “Sony Pays $ 1.5M to Settle Texas, California Root Kit Suits,” Computerworld, December 20, 2006.

14. www.ftc.gov/opa/2007/01/sony.htm.

15. Backdoor.IRC.Snyd.A, December 2, 2007, www.bitdefender.com/VIRUS-1000058-en-Backdoor.IRC.Snyd.A.html.

16. Advanced Software Logic, “What Is Authenticode?” www.Webcomponentdeployment.com/faq.htm.

17. CERT/CC, Results of the Security in ActiveX Workshop, Pittsburgh, Pennsylvania, August 22–23, 2000; PDF download available at: www.cert.org/archive/pdf/activeX_report.pdf.

18. CERT/CC, pp. 6–9.

19. F. McLain, “The Exploder Control Frequently Asked Questions (FAQ),” last updated February 7, 1997, www.halcyon.com/mclain/ActiveX/Exploder/FAQ.htm.

20. D. Weber-Wulff, “Electronic Funds Transfer without Stealing PIN/TAN,” RISKS 18, No. 80 (1997), catless.ncl.ac.uk/Risks/18.80.html.

21. D. Wallach, “RE: Electronic Funds Transfer without Stealing PIN/TAN,” RISKS 18, No. 81 (1997), catless.ncl.ac.uk/Risks/18.8.html.

22. B. Atkinson, “Comments and Corrections Regarding Authentication,” RISKS 18, No. 85 (1997).

23. RISKS 18, No. 86 (1997), et seq.

24. R. Cooper, “Alert: Microsoft Security Bulletin MS01-017,” NTBUGTRAQ list server, 2001, archive at: archives.neohapsis.com/archives/ntbugtraq/2001-q/0046.html.

25. R. Lemos, “Microsoft Says Beware of Stolen Certificates,” ZDNet News, March 22, 2001, news.cnet.com/news/0-1003-200-5222484.html.

26. For full text, see www.radium.ncsc.mil/tpep/library/rainbow/5200.28-STD.html.

27. CERT, “Java Implementations Can Allow Connection to an Arbitrary Host,” www.cert.org/advisories/CA-1996-05.html.

28. CERT, “Netscape Allows Java Applets to Read Protected Resources,” www.cert.org/advisories/CA-2000-15.html.

29. Vulnerability Note VU#336105, Sun Java JRE vulnerable to unauthorized network access, www.kb.cert.org/vuls/id/336105; Vulnerability Note VU#102289, Sun Java JRE vulnerable to privilege escalation, www.kb.cert.org/vuls/id/102289.

30. J. Serjeant, “‘Botmaster’ Admits Infecting 250,000 Computers,” Reuters, November 9, 2007.

31. United States Attorney's Office, Central District of California, “Computer Security Consultant Charged with Infecting up to a Quarter Million Computers that Were Used to Wiretap, Engage in Identity Theft, Defraud Banks,” Press Release No. 07-143, November 9, 2007, www.usdoj.gov/usao/cac/news/pr2007/143.html.

32. “‘Bot Roast II’: Cracking Down on Cyber Crime,” www.fbi.gov/page2/nov07/botnet112907.html (retrieved December 2, 2007); more extensive details in “‘Bot Roast II’ Nets 8 Individuals,” www.fbi.gov/pressrel/pressrel07/botroast112907.htm (retrieved December 2, 2007).

33. For example, the National Vulnerability Database (NVD, http://nvd.nist.gov/) shows that the Common Vulnerabilities and Exposures (CVE) database includes a total of 29 unique vulnerabilities involving sendmail, of which 15 are dated 2000 and 2001. This trend continues, with the NVD Version 2.0 showing an additional 16 sendmail-related issues from 2002 through 2007.
