1.6 Ethical Issues in Security Analysis

Security analysis exists to improve system security. Students of security analysis must ensure that they themselves do not pose a risk to the systems they review. When an organization requests a security assessment, the security analyst’s situation is clear:

  • ■   The analyst needs written authorization from the organization confirming that the assessment should take place.

  • ■   The analyst should use the appropriate tools to perform the assessment.

  • ■   When finished, the analyst should collect the results and report them to the appropriate people in the organization.

The assessment results could pose a risk to the organization, so they are treated as confidential and shared only with the appropriate people in the organization. The analyst is obligated to protect all working notes and to ensure that information about the analysis doesn’t leak to others. When finished, the analyst should securely erase all information not needed for business purposes.

Laws, Regulations, and Codes of Conduct

The guidance provided here may be affected by local laws and regulations, by obligations to employers, or by other secrecy agreements. Here is a brief review of obligations that might affect the handling and disclosure of security vulnerabilities.

  • ■   Legal restrictions—There are “antihacking” laws in some jurisdictions. In the United States, the Digital Millennium Copyright Act (DMCA) outlaws “circumvention” of technical measures intended to protect copyrighted data, often called digital rights management (DRM) mechanisms. The DMCA can make it a crime to find a vulnerability in a DRM technique and publish it.

  • ■   National security information—If the vulnerability involves a “classified” system, then the information may fall under defense secrecy restrictions. People who handle such information sign secrecy agreements; violating such an agreement could even be treated as criminal espionage.

  • ■   Nondisclosure agreements—Employees and people working with sensitive information on behalf of various enterprises or other organizations often sign an agreement not to disclose information the enterprise considers sensitive. Violations may lead to lost jobs and even lawsuits.

  • ■   Codes of conduct—Members of many professional organizations and holders of professional certifications agree to a professional code of conduct. Violators may lose their certification or membership. In practice, however, such codes of conduct aren’t applied punitively; instead, they reinforce accepted standards of just and moral behavior.

Legal and ethical obligations may push a security practitioner toward both secrecy and disclosure. Professional standards and even laws recognize an obligation to protect the public from danger. An extreme case often arises in medical science: When people get sicker during a drug trial, is it caused by the tested drug or by something else? The researchers may be obliged to keep their research secret because of agreements with the drug company. On the other hand, the drug company may be legally and morally liable if people die because they weren’t informed of this newly discovered risk.

1.6.1 Searching for Vulnerabilities

With practice, it becomes easy to identify security weaknesses in everyday life. One sees broken fences, broken locks, and other security measures that are easy to circumvent. Such observations are educational when studying security. However, these observations can pose risks both to the security analyst and to people affected by the vulnerabilities.

In general, an analyst does not pose a danger by passively observing things and making mental notes of vulnerabilities. The risks arise if the analyst makes a more overt search for vulnerabilities, like searching for unlocked homes by rattling doorknobs. Police officers and other licensed security officers might do this, but students and other civilians should not engage in suspicious behavior.

For example, there are security scanning tools that try to locate computers on a network. Some of these tools also check computers for well-known vulnerabilities. Most companies that provide computer or network services require their users and customers to comply with an acceptable use policy (AUP). Most network AUPs forbid using such tools, just as some towns might have laws against “doorknob rattling.” Moreover, some tools may have unfortunate side effects, as the incident below illustrates.
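To make “scanning” concrete, here is a minimal sketch of what such a tool does at its simplest: it attempts a TCP connection to each address and port of interest and records which ones answer. The target address and port list are hypothetical placeholders, and a probe like this should only ever be run on a network where it has been explicitly authorized.

    # Minimal sketch of a TCP "connect" scan, the simplest form of the
    # network scanning described above. The target address and ports are
    # hypothetical placeholders; run such probes only with authorization.
    import socket

    def scan(host: str, ports: list[int]) -> list[int]:
        """Return the ports on host that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(0.5)                    # don't hang on silent hosts
                if s.connect_ex((host, port)) == 0:  # 0 means the connection opened
                    open_ports.append(port)
        return open_ports

    print(scan("192.168.1.10", [22, 80, 443]))  # e.g., [22, 80] if SSH and HTTP answer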

An Incident: A security analysis team was scanning a network. Part of the scan was interpreted as attempts to log in as an administrative user on the main server computer. The numerous failed logins caused the system to “lock out” the administrative user. The system had to be restarted to reestablish administrative control.
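The lockout in that incident reflects a common defensive control: count failed logins and disable the account once a threshold is crossed. Here is a minimal sketch of the idea; the threshold and account name are hypothetical, not those of any real product.

    # Minimal sketch of a failed-login lockout policy like the one the
    # scan tripped. The threshold and account name are hypothetical.
    from collections import defaultdict

    LOCKOUT_THRESHOLD = 5            # failed attempts allowed before lockout
    failed_attempts = defaultdict(int)
    locked_accounts = set()

    def record_login(username: str, success: bool) -> None:
        """Track login results and lock the account after too many failures."""
        if username in locked_accounts:
            return
        if success:
            failed_attempts[username] = 0       # a good login resets the count
        else:
            failed_attempts[username] += 1
            if failed_attempts[username] >= LOCKOUT_THRESHOLD:
                locked_accounts.add(username)
                print(f"Account {username!r} locked out")

    # A scan whose probes look like failed logins trips the policy:
    for _ in range(5):
        record_login("admin", success=False)

Because the policy can’t tell a scan from a password-guessing attack, even a well-intentioned probe can deny service to a legitimate account.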

Overt and systematic security scanning should take place only when it has been explicitly authorized by those responsible for the system. It is not enough to be a professional with a feeling of responsibility for security and the expertise to perform the scan.

In 1993, Randal Schwartz, a respected programming consultant and author of several books on programming, was working for an Intel Corporation facility in Oregon. Schwartz claimed to be concerned about the quality of passwords used at the site, particularly by managers. Without formal authorization, Schwartz made a copy of a password file and used a “password cracking” program to extract the passwords. Intel did not see this as a favor: Schwartz was prosecuted and convicted under Oregon’s computer crime law.
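A “password cracking” program of this kind works offline: it hashes candidate passwords and compares the results against entries copied from the password file. Here is a minimal sketch of the idea; the hash algorithm, word list, and accounts are hypothetical stand-ins, and real tools try millions of guesses against the system’s actual hash format.

    # Minimal sketch of an offline dictionary attack on hashed passwords.
    # The hash choice, word list, and stolen entries are hypothetical.
    import hashlib

    def hash_password(password: str) -> str:
        """Stand-in for the system's real password hash (assumed SHA-256)."""
        return hashlib.sha256(password.encode()).hexdigest()

    # Entries copied from a password file: username -> stored hash.
    stolen_entries = {
        "manager1": hash_password("letmein"),
        "manager2": hash_password("Tr0ub4dor&3"),
    }

    # A tiny candidate word list; real attacks try millions of guesses.
    word_list = ["password", "123456", "letmein", "qwerty"]

    for user, stored_hash in stolen_entries.items():
        for guess in word_list:
            if hash_password(guess) == stored_hash:
                print(f"{user}: password is {guess!r}")
                break
        else:
            print(f"{user}: not found in word list")

The sketch shows why weak passwords matter: the guessable password falls to a small word list, while the stronger one survives it.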

Cybersecurity can’t flourish as a grassroots effort. If upper management doesn’t want to address security issues, we won’t gain their trust and support by locating and announcing existing security weaknesses. Not only does this increase risks by highlighting weaknesses, it also carries an implicit criticism of management priorities. Security improves when the appropriate managers and system administrators apply resources to improve it. We must keep this in mind when we take on roles as managers and administrators.

1.6.2 Sharing and Publishing Cyber Vulnerabilities

Vulnerabilities are not necessarily found by security analysts working on behalf of a site or vendor. Many are found through a form of “freelance” security testing. Some people have earned a few minutes of fame and publicity by finding a security vulnerability, reporting it publicly, and seeing the report repeated in the national news. Others have found ways to profit financially from vulnerabilities.

The cybercrime community promotes an active trade in vulnerability information and in software to exploit those vulnerabilities. The most prized is a zero-day vulnerability: a flaw that hasn’t been reported to the software’s vendor or to the general public. Victims can’t protect themselves against zero-day attacks because the attack is new and unknown. The NSA has contributed to this trade by collecting zero-day vulnerabilities itself, and keeping them secret, for use in its intelligence-gathering and cyber-warfare operations. This poses a national security dilemma: an attacker could use the same vulnerabilities against targets in the United States.

Cyber vulnerabilities became a public issue in the 1990s as new internet users struggled to understand the technology’s risks. Initially, vulnerability announcements appeared in email discussion groups catering to security experts. Several such groups arose in the early 1990s to discuss newly uncovered vulnerabilities in computer products and internet software. Many participants in these discussions believed that “full disclosure” of security flaws would tend to improve software security over time. Computer product vendors did not necessarily patch security flaws quickly; in fact, some did not seem to fix such flaws at all. Some computing experts hoped that the bad publicity arising from vulnerability reports would force vendors to patch security flaws promptly.

By the late 1990s, the general public had come to rely heavily on personal computers and the internet, and security vulnerabilities had become a serious problem. Reports of security vulnerabilities were often followed by internet-borne attacks that exploited those new vulnerabilities. This called into question the practice of full disclosure. Today, the general practice is as follows (a short sketch after the list turns these deadlines into concrete dates):

  • ■   Upon finding a vulnerability in a product or system, the finder reports the vulnerability to the vendor or system owner. The finder should provide enough information to reproduce the problem.

  • ■   The vendor should acknowledge the report within 7 days and provide the finder with weekly updates until the vendor has resolved the problem.

  • ■   The vendor and the finder should jointly decide how to announce the vulnerability.

  • ■   If the vendor and finder cannot agree on the announcement, the finder will provide a general announcement 30 days after the vendor was informed. The announcement should notify customers that a vulnerability exists and, if possible, make recommendations on how to reduce the risk of attack. The announcement should not include details that allow an attacker to exploit the vulnerability. Some organizations that handle vulnerabilities wait 45 days before making a public announcement.

  • ■   Publication of the details of the vulnerability should be handled on a case-by-case basis.
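The sketch below, assuming the 7-, 30-, and 45-day figures from the list above, turns a report date into the corresponding deadlines.

    # Minimal sketch computing the coordinated-disclosure deadlines
    # described in the list above from the date the finder files a report.
    from datetime import date, timedelta

    def disclosure_schedule(report_date: date) -> dict[str, date]:
        """Key deadlines under the disclosure guidelines listed above."""
        return {
            "vendor acknowledgment due": report_date + timedelta(days=7),
            "default public announcement": report_date + timedelta(days=30),
            "extended announcement (some organizations)": report_date + timedelta(days=45),
        }

    for milestone, due in disclosure_schedule(date(2024, 3, 1)).items():
        print(f"{milestone}: {due}")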

Although many organizations recognize these guidelines, they are not the final word on vulnerability disclosure. At the Black Hat USA 2005 Briefings, a security conference, security researcher Michael Lynn described a vulnerability in Cisco’s internet router products. Routers form the backbone that moves messages across the internet; they are the glue that connects local networks and long-distance networks into a single, global internet.

Lynn did not provide the details of how the vulnerability worked, but he did demonstrate the vulnerability to the audience at the conference. Cisco had released a patch for the vulnerability 4 months earlier. However, Lynn’s presentation still unleashed a furor, including court actions and restraining orders. As Lynn noted in his talk, many routers on the internet would still contain the vulnerability if their owners hadn’t bothered to update the router software.

Although few student analysts are likely to uncover a vulnerability with widespread implications, many will undoubtedly identify weaknesses in systems they encounter every day. Students should be careful when discussing and documenting these perceived weaknesses.

Students who are analyzing systems for training and practice should also be careful. A student exercise that examines a real system could pose a risk if it falls into the wrong hands. Therefore, it is best to avoid discussing or sharing cybersecurity class work with others, except for fellow cybersecurity students and the instructors.

Moreover, some countries and communities may have laws or other restrictions on handling security information. Be sure to comply with local laws and regulations. Even countries that otherwise guarantee free speech may have restrictions on sharing information about security weaknesses. In the United States, for example, the DMCA makes it illegal to distribute certain types of information on how to circumvent copyright protection mechanisms.

The security community continues to debate whether, when, and how to announce security vulnerabilities. In the past, a tradition of openness and sharing in the internet community biased many experts toward full disclosure. Today, however, security experts tend to keep vulnerability details secret and, at most, share them with the owners of the vulnerable system. The challenge is to decide how much to tell, and when.
