Chapter 7. Information Asset Protection

THE OBJECTIVE OF THIS CHAPTER IS TO ACQUAINT THE READER WITH THE FOLLOWING CONCEPTS:

  • Threats to security, perpetrators, and attack methods

  • Administrative management controls used to promote security

  • Implementing data classification schemes to specify appropriate handling of records

  • Physical security protection methods

  • Perimeter security designs, firewalls, and intrusion detection

  • Logical access controls for identification, authentication, and restriction of users

  • Changes in wireless security, including the robust security network

  • Encryption systems using symmetric and asymmetric public keys

  • Dealing with malicious software, viruses, worms, and other attacks

  • Storage, retrieval, transport, and disposition of confidential information

  • Controls and risks with the use of portable devices

  • Security testing, monitoring, and assessment tools

Information Asset Protection

The goal of information asset protection is to ensure that adequate safeguards are in use to store, access, transport, and ultimately dispose of confidential information. The auditor must understand how controls promote confidentiality, integrity, and availability.

We will discuss a variety of technical topics related to network security, data encryption, design of physical protection, biometrics, and user authentication. This chapter represents the most significant area of the CISA exam.

Understanding the Threat

Protecting information assets is a significant challenge. The very subject of security conjures up a myriad of responses. This chapter provides you with a solid overview of practical information about security. The unfortunate reality is that concepts of security have not evolved significantly over the last 2,000 years. Let us explain.

The medieval design of security is still pervasive. Most of your customers will view security as primarily a perimeter defense. History is riddled with failed monuments attesting to the folly of overreliance on perimeter defenses. Consider the castle walls to be equivalent to the office walls of the client's organization. Fresh water from the creek would be analogous to our modern-day utilities. The castle observation towers provide visibility for internal affairs and awareness of outside threats. The observation tower is functionally equal to network management and intrusion detection. A fortress drawbridge provides an equivalent function of the network firewall, allowing persons we trust to enter our organization. The castle courtyard serves as the marketplace or intranet. This is where our vendors, staff, and clients interact. During medieval times, it was necessary for our emissaries to enter and exit the castle fortress in secret. Confidential access was accomplished via a secret tunnel. Our modern-day equivalent to the secret tunnel is a virtual private network (VPN). Consider these thoughts for a moment while you look at Figure 7.1, concerning the medieval defensive design.

It is possible that security has actually regressed. In medieval times, royalty would use armed guards as an escort when visiting trading partners. In the modern world, the princess is given a notebook computer, PDA, cell phone, and airline ticket with instructions to check in later. Where is the security now?


Figure 7.1. Medieval defensive design

Medieval castles fell as a result of infiltration, betrayal, loss of utilities such as fresh water, and brute force attacks against the fortress walls. This example should make it perfectly clear why internal controls must go beyond the perimeter. The only viable defensive strategy uses multiple layers of security with constant vigilance by management. Anything less is just another castle waiting to fall.

Let's take a quick look at some examples of computer crime and threats to the information assets.

Examples of Threats and Computer Crimes

There is nothing new about the threats facing organizations. History shows that these threats and crimes date back almost 4,000 years (over 130 generations), so none of them should be a surprise. Let's take a quick look at the threats and crimes that must be mitigated with administrative, physical, and technical controls:

Theft

The theft of information, designs, plans, and customer lists could be catastrophic to an organization. Consider the controls in place to prevent theft of money or embezzlement. Have equivalent controls in place to prevent the theft of valuable intellectual property.

Fraud

Misrepresentation to gain an advantage is the definition of fraud. Electronic records may be subject to remote manipulation for the purpose of deceit, suppression, or unfair profit. Fraud may occur with or without the computer. Variations of fraud include using false pretenses, also known as pretexting, for any purpose of deceit or misrepresentation.

Sabotage

Sabotage is defined as willful and malicious destruction of an employer's property, often during a labor dispute or to cause malicious interference with normal operations.

Blackmail

Blackmail is the unlawful demand of money or property under threat to do harm. Examples are to injure property, make an accusation of a crime, or to expose disgraceful defects. This is commonly referred to as extortion.

Industrial espionage

The world is full of competitors and spies. Espionage is a crime of spying by individuals and governments with the intent to gather, transmit, or release information to the advantage of any foreign organization. It's not uncommon for governments to eavesdrop on the communications of foreign companies. The purpose is to uncover business secrets to share with companies in their country. The intention is to steal any perceived advancements in position or technology. Telecommunications traveling through each country are subject to legal eavesdropping by governments. Additional care must be taken to keep secrets out of the hands of a competitor.

Unauthorized disclosure

Unauthorized disclosure is the release of information without permission. The purpose may be fraud or sabotage. For example, unauthorized disclosure of trade secrets or product defects may cause substantial damage that is irreversible. The unauthorized disclosure of client records would cause a violation of privacy laws, not to mention details that would be valuable for a competitor.

Loss of credibility

Loss of credibility is the damage to an organization's image, brand, or executive management. This can severely impact revenue and the organization's ability to continue. Fraud, sabotage, blackmail, and unauthorized disclosure may be used to destroy credibility.

Loss of proprietary information

The mishandling of information can result in the loss of trade secrets. Valuable information concerning system designs, future marketing plans, and corporate formulas could be released without any method of recovering the data. Once a secret is out, there is no way to make the information secret again.

Legal repercussions

The breach of control or loss of an asset can create a situation of undesirable attention. Privacy concerns have created new requirements for public disclosure following a breach. Without a doubt, the last thing an organization needs is increased interest from a government regulator. Stockholders and customers may have grounds for subsequent legal action in alleging negligence or misconduct, depending on the situation.

According to the U.S. Federal Bureau of Investigation (FBI), the top three losses in 2005–2006 were due to virus attack, unauthorized access, and theft of proprietary information. There is a trend of dramatic increase in unauthorized access and theft of proprietary information. So, the auditor may ask, who is doing this?

Identifying the Perpetrators

There is one fundamental difference between a victim and a perpetrator: the victim did not act with malice. The perpetrators of crime may be casual or sophisticated. Their motive may be financial, political, thrill seeking, or a grudge against the organization. The damage is usually the same regardless of the perpetrator's background or motive. A common trait is that the perpetrator has the time, access, or skills necessary to execute the offense.

Today's computer criminal does not require advanced skills, although they would help. A person with mal-intent needs little more than access to launch an attack. For this reason, strong access controls are mandatory. The FBI reported that the number of internal attacks has been approximately equal to the number of external attacks since 2005. So, who is the attacker?

Hackers

The term hacker has a double meaning. The honorable interpretation of hacker refers to a computer programmer who is able to create usable computer programs where none previously existed. In this Study Guide, we refer to the dishonorable interpretation of a hacker—an undesirable criminal. The criminal hacker focuses on a desire to break in, take over, and damage or discredit legitimate computer processing. The first goal of hacking is to exceed the authorized level of system privileges. This is why it is necessary to monitor systems and take swift action against any individual who attempts to gain a higher level of access. Hackers may be internal or external to the organization. Attempts to gain unauthorized access within the organization should be dealt with by using the highest level of severity, including immediate termination of employment.

Crackers

The term cracker is a variation of hacker, with the analogy equal to a safe cracker. Some individuals use the term cracker in an attempt to differentiate from the honorable computer programmer definition of hacker. The criminal cracker and criminal hacker terms are used interchangeably. Crackers attempt to illegally or unethically break into a system without authorization.

Script Kiddies

A number of specialized programs exist for the purpose of bypassing security controls. Many hacker tools began as well-intentioned tools for system administration. The argument would be the same if we were discussing a carpenter's hammer: used for the right purpose, it is a constructive tool; used for a nefarious purpose, the same tool is a weapon. A script kiddie is an individual who executes computer scripts and programs written by others. Their motive is to hack a computer by using someone else's software. Examples include password decryption programs and automated access utilities. Several years ago, a login utility was created for Microsoft users to get push-button access into a Novell server. This nifty utility was released worldwide before it was recognized that the utility bypassed Novell security. The utility was nicknamed Red Button and became immensely popular with script kiddies. Internal controls must be put in place to restrict the possession or use of administration utilities. Violations should be considered severe and dealt with in the same manner as hacker violations.

Employee Betrayal

There is a reason why the FBI report cited the high volume of internal crimes. A person within the organization has more access and opportunity than anyone else. Few persons would have a better understanding of the security posture and weaknesses. In fact, an employee may be in a position of influence to socially engineer coworkers into ignoring safeguards and alert conditions. This is why it is important to monitor internal employee satisfaction. The great medieval fortresses fell by the betrayal of trusted allies.

Ethical Hacker Gone Bad

The term ethical hacker, or white hat, is a relatively new one in computer security. An ethical hacker is authorized to test computer hacks and attacks with the goal of identifying an organization's weaknesses. Some individuals participate in special training to learn about penetrating computer defenses. This will usually result in one of two outcomes.

In the first outcome, a few of the ethical white-hat technicians will exercise extraordinary restraint and control. The objective of ethical hacking is to exercise hacker techniques only in a highly regimented, totally supervised environment. The white-hat technician will operate from a prewritten test plan, reviewed by internal audit or management oversight. The slightest deviation is grounds for termination. This additional level of control is to protect the organization from error or personal agenda by the white-hat technician.

Tip

Separation of duties requires the white hat (ethical hacker) to operate under the management of internal audit or an equivalent audit department. Forced separation of duties provides evidence that protects both management and the technician. The ethical hacker must not have any operational duties or otherwise be involved in daily IT operations.

The second outcome is that a white-hat technician will direct their own efforts. Some individuals will demonstrate great pride in their ability to circumvent required controls. These self-directed hacking techniques create an unacceptable level of risk for multiple reasons including organizational liability. The series of movies about Jason Bourne, The Bourne Identity (2002), The Bourne Supremacy (2004), and The Bourne Ultimatum (2007) illustrate the risk of self-directed activity. After management loses control, there is no way back. These fictional movies are, in part, based on facts from real events.

Note

As professional auditors, we've been engaged on several occasions to determine whether the internal staff has been using hacker techniques and tools without explicit test plans and approval. Proper test documentation requires keystroke-level detail combined with specific steps to capture corresponding evidence. In each event except one, the technician was fired for violating internal controls. Additional controls are necessary when a white-hat technician is employed by the organization. Honest people may be kept honest with proper supervision.

Third Parties

External persons are referred to as third parties. Third parties include visitors, vendors, consultants, maintenance personnel, and the cleaning crew. These individuals may gain access and knowledge of the internal organization.

Note

You would be surprised by how many times auditors have been invited to join the client in a meeting room with internal plans still visible on the whiteboards. The client's careless disregard is obvious by the words important—do not erase emblazoned across the board. You can bet this same organization allows their vendors to work unsupervised. In the evening, the cleaning crew will unlock and open every door on the floor for several hours while vacuuming and emptying waste baskets. We seriously doubt the cleaning crew would challenge a stranger entering the office. In fact, a low-paid cleaning crew may be exercising their own agenda.

Ignorance

The term ignorance is simply defined as the lack of knowledge. An ignorant person may be a party to a crime and not even know it. Even worse, the individual may be committing an offense without realizing the impact of their actions. Management may be guilty of not understanding their current risks and corresponding regulations. The statement "We/I did not know" is the fastest route to a conviction. Every judge will agree that ignorance of the law is not an excuse. Every manager is expected to research the regulations bearing upon their business practice. There is no legal excuse for ignorance or apathy. Fortunately, ignorance can be cured by training. This is the objective of user training for internal controls. By teaching the purpose of internal security controls, the organization can reduce their overall risk.

Overview of Attack Methods

Your clients will expect you to have knowledge about the different methods of attacking computers. We will try to take the boredom out of the subject by injecting practical examples. Computer attacks can be implemented with the computer or against a computer. There are basically two types of attacks: passive and active. Let's start with passive attacks.

Passive Attacks

Passive attacks are characterized by techniques of observation. The intention of a passive attack is to gain additional information before launching an active attack. Three examples of passive attacks are network analysis, traffic analysis, and eavesdropping:

Network analysis

The computer traffic across a network can be analyzed to create a map of the hosts and routers. Common tools such as HP OpenView or OpenNMS are useful for creating network maps. The objective of network analysis is to create a complete profile of the network infrastructure prior to launching an active attack. Computers transmit large numbers of requests that other computers on the network will observe. Simple maps can be created with no more than the observed traffic or responses from a series of ping commands. The network ping command provides a simple communications test between two devices by sending a single request, also known as a ping. The concept of creating maps by using network analysis is commonly referred to as painting or footprinting.
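
The ping-sweep idea above can be sketched in a few lines of Python. This is a minimal illustration rather than a real mapping tool: the 192.0.2.0/24 prefix is a reserved documentation range, and the ping flags assume a Unix-like system (Windows uses -n and a millisecond -w).

```python
# A minimal ping-sweep sketch: enumerate a /24 network and shell out to
# the system "ping" utility for each candidate address.
import subprocess

def sweep_targets(prefix):
    """Candidate host addresses for a /24 network, e.g. prefix '192.0.2'."""
    return [f"{prefix}.{host}" for host in range(1, 255)]

def is_alive(address, timeout_s=1):
    """Send a single echo request; True if the host answered."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), address],
        capture_output=True,
    )
    return result.returncode == 0

targets = sweep_targets("192.0.2")  # 192.0.2.0/24 is a documentation range
```

Mapping which of the 254 candidates answer, and which routers appear in the replies, is enough to paint a rough picture of the network.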

Host traffic analysis

Traffic analysis is used to identify systems of particular interest. The communication between host computers can be monitored by the activity level and number of service requests. Host traffic analysis is an easy method used to identify servers on the network.

Specific details on the host computer can be determined by using a fingerprinting tool such as Nmap. The Nmap utility is active software that sends a series of special commands, each command unique to a particular operating system type and version. For example, a Unix system will not respond to a NetBIOS type 137 request. However, a computer running Microsoft Windows will answer. The exact operating system of the computer can usually be identified with only seven or eight simple service requests. Host traffic analysis will provide clues to a system even if all other communication traffic is encrypted. This is an excellent tool for tracking down a rogue IP address. The Nmap utility provides information as to whether the destination address is a Unix computer, Macintosh computer, computer running Windows, or something else like an HP printer. This fingerprinting technique is also popular with hackers for the same reason.
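
The rule-based flavor of fingerprinting can be illustrated with a toy lookup table. This is a sketch only: real tools such as Nmap compare responses to dozens of crafted probes rather than simply noting open ports, and the port-to-OS pairings below are just well-known service defaults.

```python
# Illustrative only: a toy fingerprint table mapping tell-tale sets of
# open ports to an operating system guess.
FINGERPRINTS = [
    ({135, 137, 139, 445}, "Windows (NetBIOS/SMB services answering)"),
    ({22, 111}, "Unix-like (SSH plus RPC portmapper)"),
    ({9100, 515}, "Network printer (raw/LPD print services)"),
]

def guess_os(open_ports):
    """Return the first label whose tell-tale ports are all open."""
    for telltale, label in FINGERPRINTS:
        if telltale <= open_ports:  # subset test: every tell-tale port open
            return label
    return "unknown"
```

A rogue IP address that answers on ports 135, 137, 139, and 445 would be flagged as Windows-like even when all of its application traffic is encrypted.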

Eavesdropping

Eavesdropping is the traditional method of spying with the intent to gather information. The term originated from a person spying on others while listening under the roof eaves of a house. Computer network analysis is a type of eavesdropping. Other methods include capturing a hidden copy of files or copying messages as they traverse the network. Email messages and instant messaging are notoriously vulnerable to eavesdropping because of their insecure design. Computer login IDs, passwords, and user keystrokes can be captured by using eavesdropping tools. Encrypted messages can be captured by eavesdropping with the intention of breaking the encryption at a later date. The message can be read later, after the encryption is compromised. Eavesdropping helped the Allies crack the secret code of radio messages sent using the German Enigma machine in World War II. Network sniffers are excellent tools for capturing communications traveling across the network.

Now let's move on to discuss the active attacks.

Active Attacks

Passive attacks tend to be relatively invisible, whereas active attacks are easier to detect. The attacker will proceed to execute an active attack after obtaining sufficient background information. The active attack is designed to execute an act of theft or to cause a disruption in normal computer processing. Following is a list of active attacks:

Social engineering

Criminals can trick an individual into cooperating by using a technique known as social engineering. The social engineer will fraudulently present themselves as a person of authority or someone in need of assistance. The social engineer's story will be woven with tiny bits of truth. All social engineers are opportunists who gain access by asking for it. For example, the social engineer may pretend to be a contractor or employee sent to work on a problem. The social engineer will play upon the victim's natural desire to help.

Phishing

A newer social engineering technique called phishing (pronounced fishing) is now in widespread use. The scheme uses fake emails sent to unsuspecting victims that contain a link to the criminal's counterfeit website. Anyone can copy the images and format of a legitimate website by using an Internet browser. A phishing criminal copies legitimate web pages into a fake email or onto a fake website. The message tells the unsuspecting victim to enter personal details such as a Social Security number, credit card number, bank account information, or online user ID and password. Phishing attacks can also be used to install spyware on unprotected computers. Many phishing attacks can be avoided through user education.

Dumpster diving

Attackers will frequently resort to rummaging through the trash for discarded information. The practice is also known as dumpster diving. Dumpster diving is perfectly legal under the condition that the individuals are not trespassing. This is the primary reason why proper destruction is mandatory. Most paper records and optical disks are destroyed by shredding.

Virus

Computer viruses are a type of malicious program designed to self-replicate and spread across multiple computers. The purpose of the computer virus is to disrupt normal processing. A computer virus may commence damage immediately or lie dormant, awaiting a particular circumstance such as a specific date (April Fools' Day is a perennial favorite). Viruses automatically attach themselves to outgoing files. The first malicious computer virus came about in the 1980s during prototype testing for self-replicating software. Antivirus software will stop known attacks by detecting the behavior demonstrated by the virus program (signature detection) or by appending an antivirus flag to the end of a file (inoculation, or immunization). New virus attacks can be detected if any program tries to append data to the antivirus flag. Not all antivirus software works by signature scanning; heuristic scanning, integrity checking, and activity blocking are all valid virus detection methods.
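
Signature detection, the first method mentioned above, can be sketched as a substring search. The only signature used here is the industry-standard EICAR test string, a harmless file that antivirus vendors agree to flag; a production engine would hold millions of patterns and combine them with the other detection methods.

```python
# A minimal signature scanner sketch: flag any byte stream containing a
# known signature.
SIGNATURES = {
    # The EICAR test string, which antivirus products detect by agreement
    # as if it were malware (it is harmless).
    "EICAR-Test-File":
        rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*",
}

def scan(data):
    """Return the names of every known signature found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```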

Worm

Computer worms are destructive and able to travel freely across the computer network by exploiting known system vulnerabilities. Worms are independent and will actively seek new systems on their own.

Logic bomb

The concept of the logic bomb is designed around dormant program code that is waiting for a trigger event to cause detonation. Unlike a virus or worm, logic bombs do not travel. The logic bomb remains in one location, awaiting detonation. Logic bombs are difficult to detect. Some logic bombs are intentional, and others are the unintentional result of poor programming. Intentional logic bombs can be set to detonate after the perpetrator is gone.
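
The trigger-event idea can be shown in a few lines. The function name and trigger date here are hypothetical, and the "payload" is just a returned flag; the point is that the code sits dormant inside a routine job until the condition is met.

```python
# A logic-bomb sketch: dormant code that checks a trigger condition on
# every run and only "detonates" when the condition is met.
import datetime

TRIGGER = datetime.date(2025, 4, 1)  # hypothetical trigger date

def payroll_run(today):
    """Normal processing until the trigger date arrives."""
    if today >= TRIGGER:  # dormant branch, waiting for the trigger event
        return "detonate"
    return "normal processing"
```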

Trapdoor

Computer programmers frequently install a shortcut, also known as a trapdoor, for use during software testing. The trapdoor is a hidden access point within the computer software. A competent programmer will remove the majority of trapdoors before releasing a production version of the program. However, several vendors routinely leave a trapdoor in a computer program to facilitate user support. The commercial version of PGP encryption software contained a trapdoor designed to recover lost encryption keys and to allow the government to read the encrypted files, if necessary. Trapdoors compromise access controls and are considered dangerous.

Root kit

One of the most threatening attacks is the secret compromise of the operating system kernel. Attackers embed a root kit into downloadable software. This malicious software will subvert security settings by linking itself directly into the kernel processes, system memory, address registers, and swap space. Root kits operate in stealth to hide their presence. Hackers designed root kits to never display their execution as running applications. The system resource monitor does not show any activity related to the presence of the root kit. Once installed, the hacker has control over the system. The computer is completely compromised.

Brute force attack

Brute force is the use of extreme effort to overcome an obstacle. For example, an amateur could discover the combination to a safe by dialing all of the 63,000 possible combinations. Mathematically, the actual combination will be found, on average, after trying about half of the possible combinations. Brute force attacks are frequently used against user logon IDs and passwords. In one particular attack, the encrypted computer passwords are compared against an encrypted list of every word in a language dictionary. When a match is found, the attacker uses the unencrypted word that created it. This is why it is important to use passwords that do not appear in any language dictionary.
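
The dictionary attack described above can be sketched with a tiny, invented word list and unsalted SHA-256 hashes. Real password files use salts and deliberately slow hash functions precisely to defeat this precomputed comparison.

```python
# A dictionary-attack sketch: hash every word in a (tiny, hypothetical)
# dictionary and compare against the stolen password hash.
import hashlib

DICTIONARY = ["password", "dragon", "letmein", "sunshine"]  # illustrative

def crack(stolen_hash):
    """Return the dictionary word whose hash matches, or None."""
    for word in DICTIONARY:
        if hashlib.sha256(word.encode()).hexdigest() == stolen_hash:
            return word
    return None

stolen = hashlib.sha256(b"letmein").hexdigest()  # simulate the stolen entry
```

Any password that appears in the list falls instantly; a password outside every dictionary forces the attacker back to raw brute force.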

Denial of service (DoS)

Attackers can disable the computer by rendering legitimate use impossible. The objective is to remotely shut down service by overloading the system and thereby prevent the normal user from processing anything on the computer.

Distributed denial of service (DDoS)

The denial of service attack has evolved to use multiple compromised systems in a coordinated attack designed to crash a single target. This type of distributed attack is also known as a reflector attack. Your own computer may be used by the hacker to launch remote attacks against someone else. Hackers start the attack from unrelated systems that they have already compromised. The attacking computers and the target are drawn into the battle, similar in concept to starting a vicious rumor between two strangers that leads them to fight each other. The hackers sit safely out of the way while the battle wages.

IP fragmentation attack

One of the common Internet attack techniques is to send a series of fragmented service requests to a computer through a firewall. The technique is successful if the firewall fails to examine each packet. For this reason, firewalls are configured to discard IP fragments.
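
The discard rule can be expressed as a simple predicate over two IP header fields. The field names are simplified here, and some real firewalls reassemble fragments for inspection rather than dropping them outright; the sketch shows the conservative policy the text describes.

```python
# A sketch of the firewall rule above: drop any IP packet that is part of
# a fragment train, so the filter never has to judge a partial request.
def is_fragment(frag_offset, more_fragments):
    """A packet is a fragment if it has a nonzero offset or the MF flag set."""
    return frag_offset > 0 or more_fragments

def filter_packet(frag_offset, more_fragments):
    return "DROP" if is_fragment(frag_offset, more_fragments) else "PASS"
```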

Crash-restart

A variation of attack techniques is crash-restart. An attacker loads malicious software onto a computer or reconfigures security settings to the attacker's advantage. Then the attacker crashes the system, allowing the computer to automatically restart (reboot). The attacker can take control of the system after it restarts with the new configuration. The purpose is to install a backdoor for the attacker.

Maintenance accounts

Most computer systems are configured with special maintenance accounts. These maintenance accounts may be part of the default settings or created for system support. An example is the user account named DBA for database administrator, or tape for a tape backup device. All maintenance accounts should be carefully controlled. It is advisable to disable the default maintenance accounts on a system. The security manager may find an advantage in monitoring access attempts against the default accounts. Any attempted access may indicate the beginnings of an attack.

Remote access attacks

Most attackers will attempt to exploit remote access. The goal of the attack is often personal satisfaction or political gain. There is less personal risk involved in gaining remote access. The common types of remote access attacks are as follows:

War dialing

The attacker uses an automated modem-dialing utility to launch a brute force attack against a list of phone numbers. The attack generates a list of telephone numbers that were answered by a computer modem. The next step of the attack is to break in through an unsecured modem. This is why it's necessary for modems to reject inbound calls or to be protected by a telephone firewall such as TeleWall by SecureLogix. Remote access servers (RASs) provide better authentication than a modem. RAS logging capability can be used to identify attacks, if properly configured. Using RAS modem pools combined with telephone-firewall products such as TeleWall will reduce the chance of an attacker making a successful penetration.

War driving/walking

Wireless access is known to be insecure. Wireless manufacturers have seriously compromised security in an effort to improve Plug and Play capabilities for users. This trade-off of fewer user support issues for less security gives casual attackers the opportunity to gain remote access by walking or driving past wireless network transmitters. Attackers use chalk symbols to mark the unsuspecting organization's property, showing other attackers that wireless access is available at that location. This marking technique is referred to as war chalking. War chalk maps of insecure access points are available for download on the Internet.

Note

We discuss the latest standards in wireless security later in this chapter. Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA) have been officially classified as insecure since April 2005 because of implementation failures. Existing equipment using WEP/WPA is designated as insecure. The current standard is IEEE 802.11i known as Robust Security Network.

Cross-network connectivity

Interconnected networks are effective in business. The connectivity across networks provides an avenue for more-efficient processing by the user. Computer networks are cross-connected internally, and even across the Internet. It is not uncommon for a business partner to have special access. All of these connections can be exploited by an attacker. Business partner connections can provide an opportunity for the attacker to remotely compromise the systems of a partner organization with little chance of detection. The purpose of internal and external firewalls is to block attacks. The implementation of internal firewalls is an excellent practice that dates back to the Great Wall of China. Better-run organizations recognize the need. Since 2005, the deployment of internal firewalls has accelerated. Figure 7.2 shows the threat of attackers entering through business partner connections.


Figure 7.2. Cross-network connectivity

Source routing

As stated earlier, useful system administration tools can be implemented as weapons. In the early days of networking, it was necessary to send data across a network without any reliance on the network configuration itself. Therefore, a special network protocol known as source routing was developed. Source routing is designed to ignore the configuration of the network routers and follow the instructions designated by the sender (the source). Source routing is a magnificent diagnostic tool for reaching remote networks. As you can imagine, source routing also is popular with hackers, because it allows a hacker to bypass routing configurations used for firewall security. For this reason, every firewall and most routers must be configured to disable source routing.

Salami technique

The salami technique is used for the commission of financial crimes. The key here is to make the alteration so insignificant that in a single case it would go completely unnoticed—for example, a bank employee inserts a program into the bank's servers that deducts a small amount of money from the account of every customer. No single account holder will probably notice this unauthorized debit, but the bank employee will make a sizable amount of money every month.
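
The bank example works out numerically like this sketch. The balances and interest rate are invented, and Decimal is used so the sub-cent arithmetic stays exact; each slice is a fraction of a cent, invisible on any single statement.

```python
# A salami-slicing sketch: skim the sub-cent remainder of each interest
# calculation into the perpetrator's running total.
from decimal import Decimal, ROUND_DOWN

accounts = [Decimal("1000.00"), Decimal("2537.50"), Decimal("814.99")]
RATE = Decimal("0.00125")  # hypothetical monthly interest rate

skimmed = Decimal("0")
for balance in accounts:
    interest = balance * RATE  # exact interest owed
    credited = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    skimmed += interest - credited  # a fraction of a cent per account
```

Across millions of accounts and monthly cycles, those fractions compound into a sizable sum while every individual statement still looks correct to the penny.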

Packet replay

Network communications are sent by transmitting a series of small messages known as packets. The attacker captures a series of legitimate packets by using a capture tool similar to a network sniffer. The packets are retransmitted (replayed) within a short time window to trick a computer system into believing that the sender is a legitimate user. This technique can be combined with a denial of service technique to compromise the system. The legitimate user is knocked off the network by using denial of service, and the attacker attempts to take over communications. This can be effective for hijacking sessions in single sign-on systems such as Kerberos. We discuss Kerberos single sign-on later in this chapter.

Message modification

Message modification can be used to intercept and alter communications. The legitimate message is captured before receipt by the destination. The content of the message, address, or other information is modified. The modified message is then sent to the destination in a fraudulent attempt to appear genuine.

This technique is commonly used for a man-in-the-middle attack: a third party inserts themselves between the bona fide sender and receiver, pretending to each to be the other party. If encryption is used, the person in the middle tricks the sender into encrypting with a key the attacker knows. After reading the message, the attacker re-encrypts it with the true recipient's key and retransmits it to the intended recipient. The man in the middle is able to eavesdrop without detection; neither the sender nor the receiver is aware of the security compromise.

Email spamming and spoofing

You are probably aware of email spamming. Spamming refers to sending a mass mailing of identical messages to a large number of users. The current laws governing email allow a business to send mass emails as long as the recipient is informed of the sender's legitimate address and the recipient is provided a mechanism to stop the receipt of future emails.

Email spamming is a common mechanism used in phishing attacks. The term spoofing refers to fraudulently altering the information concerning the sender of email. An example of email spoofing is when an attacker sends a fake notice concerning your eBay auction account. The spoofed email address appears as if it were sent by eBay. Email spamming is illegal in some countries, and email spoofing is prosecuted as criminal fraud and/or electronic wire fraud.

A variety of other technical attacks may be launched against the computer. A common attack is to send the computer an impossible request, or a series of requests that cannot be serviced. These cause the system to exhaust its CPU, memory, or communication buffers, and the computer crashes. An example is the old Ping of Death command (ping -l 65510), which sent a packet exceeding the computer's maximum input size for a communication buffer.

The first step in preventing the loss of information assets is to establish administrative controls. Let's begin the discussion on implementing administrative safeguards.

Using Administrative Protection

Throughout this Study Guide, we have discussed the importance of IT governance over internal controls. The first step for a protection strategy is to establish administrative operating rules. Information security management is the foundation of information asset protection. Let's discuss some of the administrative methods used to protect data: information security management, IT security governance, data retention, documenting access paths, and other techniques. We will begin with information security management.

Information Security Management

The objective of information security management is to ensure confidentiality, integrity, and availability of computing resources. To accomplish this goal, it is necessary to implement organizational design in support of these objectives. Let's discuss some of the job roles in information security management:

Chief security officer

The chief security officer (CSO) is a role developed to grant the highest level of authority to the senior information systems security officer. Unfortunately, this tends to be a position of title more than a position of real corporate influence. The purpose of the CSO position is to define and enforce security policies for the organization.

Chief privacy officer

New demands for client privacy have created the requirement for a chief privacy officer (CPO). This position is equal to or directly below the chief security officer. The CPO is commonly a position of title rather than genuine corporate authority. The CPO is responsible for protecting confidential information of clients and employees.

Information systems security manager

The information systems security manager (ISSM) is responsible for the day-to-day process of ensuring compliance for system security. The ISSM follows the directives of the CSO and CPO for policy compliance. The ISSM is supported by a staff of information systems security analysts (ISSA) who work on the individual projects and security problems. An ISSM supervises the information systems security analysts and sets the daily priorities.

IT Security Governance

The concept of IT governance for security is based on security policies, standards, and procedures. For these administrative controls to be effective, it is necessary to define specific roles, responsibilities, and requirements. Let's imagine that an information security policy and matching standard have been adopted. The next step would be to determine the specific level of controls necessary for each piece of data. Data can be classified into groups based on its value or sensitivity. The data classification process will define the information controls necessary to ensure appropriate confidentiality.

The federal government uses an information classification program to specify controls over the use of data. High-risk data is classified top secret, and the classifications cascade down to data available for public consumption. Every organization should utilize an information classification program for its data. International Standard 15489 (ISO 15489) sets forth the requirements to identify, retain, and protect records used in the course of business. A classification program of this kind is the most reliable way to ensure that records receive proper handling, and thereby retain their integrity. Let's take a look at the typical classifications used in business:

Public

Information approved for public consumption. It is important to understand that data classified as public needs to be reviewed and edited to ensure that the correct message is conveyed. Examples of public information include websites, sales brochures, marketing advertisements, press releases, and legal filings. Most information filed at the courthouse is a public record, viewable by anyone.

Sensitive

There is a particular type of data that needs to be disclosed to certain parties but not to everyone. We refer to this data as sensitive. This data may be a matter of record or legal fact. However, the organization would not want to go about advertising the details. Examples of sensitive information include client lists, product pricing structure, contract terms, vendor lists, and details of outstanding litigation.

Private, internal use only

The classification of data for internal use only is commonly applied to operating procedures and employment records. The details of operating procedures are usually provided on a need-to-know basis to prevent a person from designing a method for defeating the procedure. Examples of private records include salary data, health-care information, results of background checks, and employee performance reviews.

Confidential

This is the highest category of general security classification outside the government. It may be subdivided into confidential and highly confidential trade secrets. Confidential data is anything that must not be shared outside of the organization. Examples include buyout negotiations, secret recipes, and specific details about the inner workings of the organization. Confidential data may be exempt from certain types of legal disclosure.

The overall purpose of using an information classification scheme is to ensure proper handling based on the information content and context. Context refers to the usage of information.
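The four business classifications above can be encoded as a simple lookup table that data-handling tooling might consult. The specific controls in this sketch are illustrative assumptions, not prescriptive requirements; note the defensive default, which treats unlabeled data as the most restrictive class.

```python
# Hypothetical mapping of classification labels to handling controls.
HANDLING = {
    "public":       {"encrypt_at_rest": False, "review_before_release": True},
    "sensitive":    {"encrypt_at_rest": True,  "share_outside": "named parties only"},
    "private":      {"encrypt_at_rest": True,  "share_outside": "never",
                     "need_to_know": True},
    "confidential": {"encrypt_at_rest": True,  "share_outside": "never",
                     "need_to_know": True, "legal_disclosure_exempt": True},
}

def controls_for(label: str) -> dict:
    # Unlabeled data defaults to the most restrictive handling.
    return HANDLING.get(label, HANDLING["confidential"])

print(controls_for("sensitive")["encrypt_at_rest"])          # True
print(controls_for("unlabeled") is HANDLING["confidential"]) # True
```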

Two major risks are present in the absence of an information classification scheme. The first is that information will be mishandled. The second is that all of the organization's data may be subject to scrutiny during legal proceedings, because no records have been designated as protected. An information classification scheme safeguards the organization's knowledge; failing to implement one invites both mishandling and legal exposure.

Authority Roles over Data

To implement policies, standards, and procedures, it is necessary to identify persons by their authority. Three levels of authority exist in regard to computers and data. The three levels of authority are owner, custodian, and user.

Data Owner

The data owner refers to executives or managers responsible for the data content. The role of the data owner is to do the following:

  • Assume responsibility for the data content

  • Specify the information classification level

  • Specify appropriate controls

  • Specify acceptable use of the data

  • Identify users

  • Appoint the data custodian

As an IS auditor, you will review the decisions made by the data owner to evaluate whether the actions were appropriate.

Data User

The data user is the business person who benefits from the computerized data. Data users may be internal or external to the organization. For example, some data is delivered for use across the Internet. The role of the data user includes the following tasks:

  • Follow standards of acceptable use

  • Comply with the owner's controls

  • Maintain confidentiality of the data

  • Report unauthorized activity

You will evaluate the effectiveness of management to communicate their controls to the user. The auditor investigates the effectiveness and integration of policies and procedures with the user community. In addition, the auditor determines whether user training has been effectively implemented.

Data Custodian

The data custodian is responsible for implementing data storage safeguards and ensuring availability of the data. The custodian's role is to support the business user. If something goes wrong, it is the responsibility of the custodian to deal with this promptly. Sometimes the custodian's role is equivalent to a person holding the bag of snakes at a rattlesnake roundup or the role of a plumber when fixing a clogged drain. The duties of the data custodian include the following:

  • Implement controls matching information classification

  • Monitor data security for violations

  • Administer user access controls

  • Ensure data integrity through processing controls

  • Back up data to protect from loss

  • Be available to resolve any problems

Now we have identified the information classification and the job roles of owner, user, and custodian. The next step is to identify data retention requirements.

Identify Data Retention Requirements

Data retention specifies the procedures for storing data, including how long to keep particular data and how the data will be disposed of. All records follow a life cycle similar to the SDLC model in Chapter 5, "Life Cycle Management." The requirements for data retention can be based on the value of data, its useful life, and legal requirements. For example, financial records must be accessible for seven years. Medical records are required to be available indefinitely or at least as long as the patient remains alive. Records regarding the sale or transfer of real property are to be maintained indefinitely, as are many government records.

The purpose of data retention is to specify how long a data record must be preserved. At the end of the preservation period, the data is archived or disposed of. The disposal process frequently involves destruction. We will discuss storage and destruction toward the end of this chapter.
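A retention schedule can be reduced to a simple disposition check: given a record's type and creation date, is it now due for archival or destruction? The seven-year financial figure comes from the text above; the record types and dates in this sketch are invented for illustration.

```python
# Disposition check driven by a hypothetical retention schedule.
from datetime import date

RETENTION_YEARS = {
    "financial": 7,
    "medical": None,        # None = retain indefinitely
    "real_property": None,
}

def disposition_due(record_type: str, created: date, today: date) -> bool:
    years = RETENTION_YEARS.get(record_type)
    if years is None:
        return False                      # retained indefinitely
    expiry = created.replace(year=created.year + years)
    return today >= expiry

print(disposition_due("financial", date(2015, 3, 1), date(2023, 1, 1)))  # True
print(disposition_due("medical",   date(2015, 3, 1), date(2023, 1, 1)))  # False
```

In an audit, the question is whether the organization's actual schedule (however it is recorded) is consistently applied and whether expired records are in fact disposed of through a controlled destruction process.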

Now the authority roles and data retention requirements have been identified. So, the next administrative step is to document the access routes (paths) to reach the data.

Document Access Paths

It would be extremely difficult to ensure system security without recognizing common access routes. One of the requirements of internal controls is to document all of the known access paths. A physical map is useful. The network administrator or security manager should have a floor plan of the building. The locations of computer systems, wiring closet, and computer room should be marked on the map. Map symbols should indicate the location of every network jack, telephone jack, and modem. The location of physical access doors and locking doors should also be marked on the map. This process would continue until all the access paths have been marked. Even the network firewall and its Internet communication line should appear on the map.

Next, a risk assessment should be performed by using the map of access paths. Attackers can strike the facility via the Internet or from within an unsupervised conference room. Special attention should be given to areas with modem access. Modems provide direct connections that bypass the majority of IT security. Computer firewalls are effective only if the data traffic passes directly through the firewall; a firewall cannot protect any system with an independent, direct Internet connection.

The purpose of documenting access paths and performing a risk assessment is to ensure accountability. Management is held responsible for the integrity of record keeping. Guaranteeing integrity of a computer system would be difficult if nobody could guarantee that access restrictions were in place.

The change control process should include oversight for changes affecting the access paths. For example, a change in physical access security may introduce another route to the computer room. Persons entering and leaving through the side door, for example, would have a better opportunity to reach the computer room without detection.

The next step to ensure security is to provide constant monitoring. Physical security systems can be monitored with a combination of video cameras, guards, and alarm systems. Badge access through locked doorways provides physical access control with an audit log. A badge access system can generate a list of every identification badge granted access or denied access through the doorway. Unfortunately, a badge access system will have difficulty ensuring that only one person passed through the doorway at a particular time. A mantrap system of two doorways may be used to prevent multiple persons from entering and exiting at the same time. A mantrap allows one person to enter and requires the door to be closed behind the person. After the first door is closed, a second door can be opened. The mantrap allows only one person to enter and exit at a time.
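The mantrap described above is, at heart, a two-door interlock: either door may open only while the other is closed. This hypothetical sketch models that rule as a small state machine, which is essentially what the door controller enforces.

```python
# Two-door mantrap interlock, modeled as a minimal state machine.
class Mantrap:
    def __init__(self):
        self.open_door = None          # None, "outer", or "inner"

    def request_open(self, door: str) -> bool:
        """Grant the request only if the other door is closed."""
        if self.open_door is None:
            self.open_door = door
            return True
        return False                   # interlock: one door at a time

    def close(self, door: str) -> None:
        if self.open_door == door:
            self.open_door = None

trap = Mantrap()
print(trap.request_open("outer"))   # True  -- person steps in
print(trap.request_open("inner"))   # False -- outer door still open
trap.close("outer")
print(trap.request_open("inner"))   # True  -- person may now pass through
```

A real controller adds badge verification and often a weight or camera check to catch two people entering on one badge, but the interlock rule is the core control.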

To support the increased security, it will be necessary to train the personnel.

Personnel Management

Everyone in the organization should undergo a process of security awareness training. Education is the best defense. Computer training and job training are commonplace. The organization should introduce a training program promoting IT governance in security to generate awareness. Let's consider the possible training programs:

  • New-hire orientation, which should include IT security orientation

  • Physical security safeguards and asset protection

  • Reeducating existing staff about IT security requirements

  • Introducing new security and safety considerations

  • Email security mechanisms

  • Virus protection

  • Business continuity

Every organization should have a general IT security training program to communicate management's commitment for internal controls.

A good training program can run in 20 minutes or less. The intention is to improve awareness and understanding. This objective does not require a marathon event. Training can occur in combination with normal activities. A favorite technique of ours is to place a 20-minute video presentation on the back end of HR benefits and orientation sessions. This ensures that the audience will be present. HR will provide time and attendance reporting for the participants. Each person on staff will be tracked through a series of presentations, leading to cumulative awareness training. Other methods include a brown-bag lunch event, followed by a contest giveaway to promote attendance.

Physical Access

Physical access is a major concern to IT security. As an IS auditor, you need to investigate how access is granted for employees, visitors, vendors, and service personnel. Which of these individuals are escorted and which are left unattended? What is the nature of physical controls and locking doors? Are there any internal barriers to prevent unauthorized access?

The following is a list of the three top concerns regarding physical access:

Sensitive areas

Every IS auditor is concerned about physical access to sensitive areas such as the computer room. The computer room and network wiring closets are an attractive target. Physical access to electronic equipment will permit the intruder to bypass a number of logical controls. Servers and network routers can be compromised through their keyboard or service ports. Every device can be disabled by physical damage. It is also possible for the intruder to install eavesdropping access by using wiretaps or special devices.

Service ports

Network equipment, routers, and servers have communication ports that can be used by maintenance personnel. A serial port provides direct access for a skilled intruder. Shorting out the hardware can create a denial of service situation. Special commands issued through a serial port may bypass the system's password security. When security is bypassed, the contents in memory can be displayed to reveal the running configuration, user IDs, and passwords.

Note

As auditors, we have observed senior maintenance personnel from the two largest router manufacturers. During one particular crisis, the skilled technicians successfully bypassed security and reconfigured a major set of changes to routers without halting network service and without knowing the actual administrator passwords.

Computer consoles

The keyboard of the server is referred to as the console. Direct access to servers and the console should be tightly controlled. A person with direct access can start and stop the system. The processes stopping the system may be crude and cumbersome, but the outcome will be the same. Direct access also provides physical access to disk drives and special communication ports. It would be impossible to ensure server security without restricting physical access.

Terminating Access

Administrative procedures are necessary to ensure that access is terminated when an employee leaves the organization. The access of existing employees should also be reviewed on a regular basis. In a poorly managed organization, the employee will be given access to one area and then to additional areas as their jobs change. Unfortunately, this results in a person with more access than their job requires. Access to sensitive areas should be limited to persons who perform a required job function in the same area. If the person is moved out of the area, that access should be terminated. The IS auditor should investigate how the organization terminates access and whether it reviews existing access levels. The concept of least privilege should be enforced. The minimum level of access is granted to perform the required job role.
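The access review described above boils down to comparing what each person has been granted against what their current role requires, and flagging the excess. The role and grant data in this sketch are invented; the point is the set difference at its center.

```python
# Least-privilege review sketch: flag access beyond what the role needs.
ROLE_NEEDS = {
    "dba":      {"computer_room", "db_console"},
    "helpdesk": {"ticket_system"},
}

grants = {
    "alice": ("dba", {"computer_room", "db_console"}),
    "bob":   ("helpdesk", {"ticket_system", "computer_room"}),  # accumulated access
}

def excess_access(grants: dict) -> dict:
    findings = {}
    for person, (role, granted) in grants.items():
        extra = granted - ROLE_NEEDS[role]   # granted but not required
        if extra:
            findings[person] = extra
    return findings

print(excess_access(grants))   # {'bob': {'computer_room'}}
```

Bob's leftover computer-room access is exactly the kind of accumulation the auditor should expect the organization's periodic review to catch and revoke.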

Incident Handling

Incident handling is an administrative process we discussed in Chapter 6, "IT Service Delivery." Physical damage or an unlocked door at the wrong time should initiate the incident-handling process. Auditors will need to investigate how the organization deals with incident handling in regard to security implications.

Auditors need to ask the following questions:

  • What are the events necessary to trigger incident response?

  • Are the user and the IT help desk trained to know when to call?

  • What is the process for activating the incident response team?

  • Does the incident response team have an established procedure to ensure a proper investigation and protect evidence?

  • Are members of the incident response team formally appointed and trained?

Violation Reporting

Policies and procedures in security plans are ineffective unless management is monitoring compliance. An effective process of monitoring will detect violations. Better control occurs when activity monitoring is separate from the person or activity performing the work. Self-monitoring is a violation of IT governance controls. The built-in reporting conflict of self-monitoring will seriously question the integrity of the reporting process. Separation of duties applies to people, systems, and violation reporting. The IS auditor needs to investigate how violations are reported to management:

  • Does a formal process exist to report possible violations?

  • Will a violation report trigger the incident response team to investigate?

The role of the IS auditor in personnel management is to determine whether appropriate controls are in place to manage the activities of people inside the organization. Now we will move on to physical protection.

Implementing Physical Protection

Physical barriers are frequently used to protect assets. A few pages ago, we discussed the creation of a map displaying access routes and locked doors. After risk assessment, the next step is to improve physical protection.

Let's review a few of the common techniques for increasing physical protection:

Closed-circuit television

Closed-circuit television can provide real-time monitoring or audit logs of past activity. Access routes are frequently monitored by using closed-circuit television. The auditor may be interested in the image quality and retention capabilities of the equipment. Some intrusions may not be detected for several weeks. Does the organization have the ability to check for events that occurred days or weeks ago?

Guards

Security guards are an excellent defensive tool. Guards can observe details that the computerized security system would ignore. Security guards can deal with exceptions and special events. In an emergency, security guards can provide crowd control and direction. Closed-circuit television can extend the effective area of the security guard. The monitoring of remote areas should reduce the potential for loss. The only drawback is that guards may be susceptible to bribery or collusion. It's a common practice for the guards to be monitored by a separate security staff in banks and casinos.

Special locks

Physical locks come in a variety of shapes and sizes. Let's look at three of the more common types of locks:

Traditional tumbler lock

An inexpensive type of lock is the tumbler lock, which uses a standard key. This is identical to the brass key lock used for your home and automobile. The lock is relatively inexpensive and easy to install. It has one major drawback: Everyone uses the same key to open the lock. It is practically impossible to identify who has opened the lock. Figure 7.3 shows a diagram of the tumbler lock.

Tumbler lock

Figure 7.3. Tumbler lock

Electronic lock

Electronic locks can be used by security systems. The electronic lock is frequently coupled with a badge reader. Each user is given a unique ID badge, which will unlock the door. This provides an audit trail of who has unlocked the door for each event. Electronic locks are usually managed by a centralized security system. Unfortunately, electronic locks will not tell us how many people went through the door while it was open. To solve that problem, it would be necessary to combine the electronic lock with closed-circuit television recording.

Cipher lock

Cipher locks may be electronic or mechanical. The purpose of the cipher lock is to eliminate the brass key requirement. Access is granted by entering a particular combination on the keypad. Low-security cipher locks use a shared unlock code. Higher-security cipher locks issue a unique code for each individual. The FBI office in Dallas has a really slick electronic cipher lock using an LCD touchpad. The user touches a combination of keys in sequence on the LCD keypad. Between each physical touch, the key display changes to prevent an observer from detecting the actual code used. This is an example of a higher-security cipher lock.

Biometrics

The next level of access control for locked doors is biometrics. Biometrics uses a combination of human characteristics as the key to the door. We discuss this later in this chapter, in the section "Using Technical Protection."

Burglar alarm

The oldest method of detecting a physical breach is a burglar alarm. Alarm systems are considered the absolute minimum for physical security. An alarm system may be installed for the purpose of signaling that a particular door has been opened. Remote or unmanned facilities frequently implement a burglar alarm to notify personnel of a potential breach. Burglar alarm systems should be monitored to ensure appropriate response in a timely manner.

Environmental sensors

Technology is not tolerant of water or contaminants. Environmental controls are used to regulate the temperature, humidity, and airflow. The failure of air-conditioning or humidity control can damage sensitive computer equipment. All the servers in the computer room would overheat and crash without continuous air-conditioning. Environmental sensors should be monitored with the same interest and response as a burglar alarm.

Data Processing Locations

We have discussed the need for security to restrict physical access. The data processing facility requires special consideration in its design. Data processing equipment is a valuable asset that needs to be protected from environmental contamination, malicious personnel, theft, and physical damage.

The data center location should not draw any attention to its true contents. This will alleviate malicious interest by persons motivated to commit theft or vandalism. The facility should be constructed according to national fire-protection codes with a 2-hour fire-protection rating for floors, ceilings, doors, and walls.

Some locations in the building are not suitable for the computer room. Basements are a poor choice because they are susceptible to flooding. In the early 1960s, computers were placed behind glass windows to showcase them as a status symbol. A series of riots in the mid-1960s made it apparent that computers needed to be rapidly moved into fortified rooms. The most expedient location was an unused basement with no windows. The standard over the past 50 years has been to place the data center on a middle floor in the building—preferably located between the second floor and one floor below the top floor.

Note

Basements are a poor choice for data centers. Ground-level floors are not a good choice because they are easy to access by both thieves and attackers. A top floor is not recommended because of the likelihood of storm damage and roof leaks. A floor just below the top floor may be acceptable. The second layer of concrete between the floors provides additional protection from roof leaks. Opaque windows are considered acceptable in some environments if the windows are shatterproof and installed by using a sturdy mount equal to the window rating.

Access to the data center should be monitored and restricted. The same level of protection should be given to wiring closets because they contain related support equipment. Physical protection should be designed by using a 3D space consideration: Intruders should not be able to gain access from above, below, or through the side of the facility.

The physical space inside the data processing facility should be environmentally controlled. Let's move on to a discussion of environmental protection.

Environmental Controls

The first concern in the data center is electrical power. Electrical power is the lifeblood of computer systems. Unstable power is the number one threat to consistent operations. At a minimum, the data center should have power conditioners and an uninterruptible power supply.

Note

You are expected to understand a few of the terms used to describe conditions that create problems for electrical power.

Figure 7.4 illustrates the different types of electrical power conditions.

Electrical power conditions

Figure 7.4. Electrical power conditions

Emergency Power Shutoff

Electricity is both an advantage and a hazard. The national fire-protection code requires an emergency power off (EPO) switch to be located near the exit door. The purpose of this switch is to kill power to prevent an individual from being electrocuted. The EPO switch is a red button, which should have a plastic cover to prevent accidental activation. The switch can be wired into the fire-control system for automatic power shutoff if the fire-control system releases water or chemicals to disable a fire.

Uninterruptible Power Supply

The uninterruptible power supply (UPS) is an intelligent power monitor coupled with a string of electrical batteries. The UPS constantly monitors electrical power. A UPS can supplement low-voltage conditions by using power stored in the batteries. During a power outage, the UPS will provide a limited amount of battery power to keep the systems running. The duration of this battery power depends on the electrical consumption of the attached equipment. Most UPS units are capable of signaling the computer to automatically shut down before the batteries are completely drained. Larger commercial UPS systems have the ability to signal the electrical standby generator to start.
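How long the batteries last is simple arithmetic: stored watt-hours divided by the attached load, derated for inverter losses. All the figures in this sketch are example assumptions, not specifications for any particular UPS.

```python
# Rough UPS runtime estimate: battery energy / load, derated for losses.
def runtime_minutes(battery_wh: float, load_w: float,
                    efficiency: float = 0.9) -> float:
    return battery_wh * efficiency / load_w * 60

# A 2,000 Wh battery string feeding a 1,500 W rack:
print(round(runtime_minutes(2000, 1500)))   # 72 (minutes)
```

Note that the estimate shrinks as batteries age, which is why the shutdown signal should fire well before the calculated runtime expires.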

Standby Generator

The standby generator provides auxiliary power whenever commercial power is disrupted. The standby generator can be connected to the UPS for an automated start. The UPS will signal the generator that power is required, and the generator will start warming up. After the generator is warmed up, a transfer circuit will switch the electrical feed from commercial power to the generator power. The UPS will filter the generator power and begin recharging batteries. The standby generator can run for as long as it has fuel. Most standby generators run on diesel fuel or natural gas:

Diesel generator

A diesel generator requires a large fuel-storage tank with at least 12 hours of fuel. Better-prepared organizations store at least three days' worth of fuel, and as much as thirty days' worth. Simple power failures are typically resolved within three days, but failures due to storm damage usually take more than one week to resolve. Smart executives never trust the business to hollow promises from a fuel delivery service. When power stops, tempers will flare because the business has been shut down. Extra fuel storage is cheaper than the outage, or than a court battle with your supplier over a late fuel delivery.

Natural gas generator

Natural gas–powered generators have the advantage of tapping a gas utility pipeline directly or using a connection through a storage tank. The natural gas generator does not require a fuel truck to refill its fuel tank. The local natural gas pipeline provides a steady supply of fuel for an extended period of time. The natural gas supply is a good idea in areas that are geologically stable.

Note

Air-conditioning equipment is the Achilles heel of generators. An enormous amount of fuel is consumed when running AC using generator power. It also takes a very big generator to run AC compressors. Realistic alternatives to AC include cycling cooler subsurface water from deep wells for convection cooling.

Note

One alternate power-conserving method uses a stored tank of liquid nitrogen for temporary cooling over days or weeks. Liquid nitrogen is used by electronic manufacturers in environmental testing for component failure analysis. The same technique could be applied by venting small amounts of liquid nitrogen through a closed-loop condenser system. Air blowing across the condenser will keep the room cool without burdening the generator.

Dual Power Leads

The best way to prevent power outages is to install power leads from two different power substations. Simply paying a utility to run special power cables would be extremely expensive. Instead, the location of the building housing the computer room should be selected according to area power grids. Power grids are usually divided along highways. Careful site selection will place your building within a quarter-mile of two power grids, which makes the cost of the dual connection affordable. Dual power leads should approach the building from different directions without sharing the same underground trench. A construction backhoe is extremely effective at destroying underground connections.

Power Transfer System

The power transfer system, known as a transfer switch, provides the connection between commercial power and the generator, with the UPS batteries bridging the changeover. The transfer switch may be manual or computerized. It is not uncommon for the power transfer switch to fail during a power outage. Therefore, manual power-transfer procedures should be in place. Automated transfer switches may not be able to react to a pair of short power failures occurring within the same 30-minute window. After the first power failure, the generator will come online to produce electrical current. After commercial electrical power is restored, the transfer switch will cut back to commercial electricity. At the same time, the generator will receive a signal to begin cooling down and finally shut off. If another power outage occurs during the generator cooling period, the transfer switch will cycle to generator power while the generator is not producing electrical current. This condition may be resolved by increasing the battery capacity of the UPS and adjusting generator start and stop times.

Figure 7.5 illustrates the basic electrical power system used for computer installations.

Power system overview

Figure 7.5. Power system overview

Heating, Ventilation, and Air-Conditioning

The computer installation requires heating, ventilation, and air-conditioning. Electronic equipment performs well in cold conditions; however, magnetic media should not be allowed to freeze. Ventilation is necessary for cooling computer equipment. Physical damage occurs if the computer circuitry sustains extended use at temperatures of 104 degrees or higher. Physical damage will also occur if the internal electronic circuitry exceeds 115 degrees during operation.

Air-conditioning is also used to control humidity. Humidity will control static electricity that could damage electrical circuits. The ideal humidity for a computer room is between 35 percent and 45 percent at 72 degrees. This will reduce the atmospheric conditions that would otherwise create high levels of static electricity.

Fire, Smoke, and Heat Detection

The data center and records storage area should be equipped for fire, smoke, and heat detection. Unheated areas may need to be monitored for freezing conditions. There are three basic types of fire detectors, using smoke detection, heat detection, or flame detection:

Smoke detection

Uses optical (photoelectric) smoke detectors or ionization detectors, which contain a small radioactive source

Heat detection

Uses a fixed temperature thermostat (which activates above 200 degrees), or rapid-rise detection (which activates the alarm if the temperature increases dramatically within a matter of minutes)

Flame detection

Relies on ultraviolet radiation from a flame or the pulsation rate of a flame

A fire-detection system activates an alarm to initiate human response. A fire-detection system may also activate fire suppression with or without the discharge of water or chemicals.

Fire Suppression

Fire suppression is the next step after fire detection. A fire-suppression system may be fully automated or mechanical. There are three basic types of fire-suppression systems:

Wet pipe system

The wet pipe system derives its name from the concept of water remaining inside the pipe. Most sprinkler heads in a ceiling-based system are mechanical. Each sprinkler head is an individual valve held closed by a meltable pin. A fire near the sprinkler head will melt the pin, and the valve will open to discharge whatever is in the pipe. This type of system can burst because of a freeze, or leak due to corrosion, which would create an unscheduled discharge. Figure 7.6 shows a wet pipe system.

Dry pipe system

The dry pipe system is an improvement over the wet pipe for two reasons. First, the pipe is full of compressed air rather than water prior to discharge. When the valve opens, there is a delay of a few seconds as the air clears from the line. The water will discharge after the air is purged. This leads us to the second advantage. The flow of rushing air can trigger a flow switch that activates the emergency power-off (EPO) switch to kill electrical power. Equipment will shut off during the few seconds before the water is discharged. This will reduce the amount of damage to computer equipment. Special computer cabinets are made to shed water away from electronic hardware mounted inside. Figure 7.7 shows a dry pipe system.

Wet pipe system

Figure 7.6. Wet pipe system

Dry pipe system

Figure 7.7. Dry pipe system

Dry chemical system

Dry chemical systems are frequently used in computer installations because dry chemicals avoid the hazards created by water. The dry chemical system uses a gas such as FM-200 or NAF-S-3 to extinguish fire.

Note

Gaseous halon is no longer used because it is a chlorofluorocarbon (CFC) that destroys the Earth's ozone layer (1994 international environmental accord). The exception is aircraft and ships. Fires occurring while in flight or at sea could be devastating with tragic loss of life. All former halon installations in computer rooms should have been converted to FM-200 or an equivalent dry chemical.

When electronic sensors detect a fire condition, the dry chemical system will discharge into the room. Figure 7.8 shows the basic design of a dry chemical system.

Dry chemical system

Figure 7.8. Dry chemical system

Special administrative controls are necessary with dry chemical systems. Maintenance personnel must never lift floor tiles or move ceiling tiles while the dry chemical system is armed. Floating particles of dust can activate a discharge of dry chemicals. Humans should not inhale the gas used in dry chemical systems because it may be lethal. A dry chemical discharge introduces a great deal of air pressure into the room within seconds. Fragile glass windows may shatter during discharge, creating a temporary airborne glass hazard.

Note

You need to be aware that a water pipe system may be required even if a dry chemical system is installed. Fire safety codes can require wet pipe systems throughout the building, without exception. Some building owners will not allow the tenant to alter existing fire-control systems. The dry chemical system would then have to be installed in parallel to the existing water-based system.

Water Detection

Water is discharged from air-conditioning or cooling systems and usually runs to a drain located under the raised floor. Water-detection systems are necessary under the floor to alert personnel of a clogged drain or plumbing backup. Water-detection sensors also may be placed in the ceiling to detect leakage from pipes in the roof above. It is common for water pipes to be located over a computer room directly above the ceiling tiles or higher floors. Water can even cascade down inside the building from the roof.

Figure 7.9 provides a simple overview of a data processing center and the various environmental control systems.

To minimize risk, the organization should have a policy prohibiting food, liquids, and smoking in the computer facility. Now it is time to discuss safe storage of assets and media records.

Typical computer room

Figure 7.9. Typical computer room

Safe Storage

Vital business records and computer media require protection from the environment, fire, theft, and malicious damage. Safe onsite storage is required. The best practice is to use fireproof file cabinets and a fireproof media safe. Standard business records are kept in the fireproof file cabinet. Computer tapes and disk media are stored in a special fire safe. The fire safe provides physical protection and ensures that the internal temperature of the safe will not exceed 130 degrees. Copies of files and archived records are transferred to offsite storage.

Offsite Storage

The offsite storage facility provides storage for a second copy of vital records and data backup files. The offsite storage facility should be used for safe long-term retention of records. The standard practice is to send backup tapes offsite every day or every other day. The offsite storage location should be a well-designed, secure, bonded facility with 24-hour security. This site must be designed for protection from flood, fire, and theft. The offsite vendor should maintain a low profile without visible markings identifying the contents of the facility to a casual passerby. Most offsite storage facilities provide safe media transport.

Media Transport

Business records and magnetic media should be properly boxed for transit. The contents in the box must be properly labeled. An inventory should be recorded prior to shipping media offsite for any reason. The tape librarian is usually responsible for tracking data media in transit. Backup tapes contain the utmost secrets of any organization. All media leaving the primary facility must be kept in secure storage at all times and tracked during transit. New regulations are mandating the use of encryption to protect the standing data on backup tapes. The tape librarian should verify the safe arrival of media at the offsite storage facility. Random media audits at the offsite facility are a good idea. The custodian must track the location and status of all data files outside the primary facility.

Disposal Procedures

Information and media will be disposed of at the end of its life cycle. This is the seventh phase of the SDLC model. A formal authorization process is required to dispose of physical and data assets. Improper controls can lead to the untimely loss of valuable assets. Let's review the common disposal procedures for media:

Paper, plastic, and photographic data

Nondurable media may be disposed of by using physical destruction such as shredding or burning.

Durable and magnetic media

Durable media may be disposed of by using data destruction techniques of overwriting and degaussing:

Overwriting

Data files are not actually erased by a Delete command. The only change is that the first character of the file's directory entry is replaced with a special marker, which tells the operating system that the space may be reused whenever it needs more storage. Undelete utilities operate by restoring that first character to a valid value, which makes the file contents readable again. To destroy files beyond recovery, it is necessary to overwrite the contents of the disk. A file-overwrite utility replaces every data bit with random or fixed patterns of meaningless values.
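The overwrite step can be sketched in a few lines of Python. This is a hedged, single-pass illustration only: the filename is hypothetical, and real sanitization tools typically make multiple passes and verify the result.

```python
# Sketch of a file-overwrite pass: replace every byte with random data
# before deletion so the contents cannot be undeleted.
import os

def overwrite_and_delete(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # overwrite every byte in place
        f.flush()
        os.fsync(f.fileno())        # force the overwrite onto the disk
    os.remove(path)

# Hypothetical file used only for this demonstration.
with open("secret.txt", "w") as f:
    f.write("confidential data")
overwrite_and_delete("secret.txt")
print(os.path.exists("secret.txt"))  # False: file gone, contents overwritten
```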

Degaussing

Degaussing is a bulk erasing process using a strong electromagnet. Degaussing equipment is relatively inexpensive. To operate, the degaussing unit is turned on and placed next to a box of magnetic media. The electromagnet erases magnetic media by changing its magnetic alignment. Erasure occurs within minutes or hours, depending on the strength of the device.

Note

As a CISA, you are required to understand the fundamental issues of physical protection. Advanced study in physical security is available from ASIS International, for the Certified Protection Professional (CPP) credential.

You're now ready to begin the last major section of this chapter, which covers technical methods of protection.

Using Technical Protection

Technical protection is also referred to as logical protection. A simple way to recognize technical protection is that technical controls typically involve a hardware or software process to operate. Let's start with technical controls, which are also known as automated controls.

Technical Control Classification

Technical protection may be implemented by using a combination of mandatory controls, discretionary controls, or role-based controls. Let's discuss each:

Mandatory access controls

Mandatory access controls (MAC) use labels to identify the security classification of data. A set of rules determines which person (subject) will be allowed to access the data (object). The security label is compared to the user access level. The comparison process requires an absolute match to permit access. Here is an example:

(User label "subject" = Data label "object") = ALLOW
Label does not match = DENY

The process is explicit. Absolutely no exceptions are made when MAC methods are in use. Under MAC, control is centrally managed and all access is forbidden unless explicit permission is specified for that user. The only way to gain access is to change the user's formal authorization level. The military uses MAC.
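The exact-match rule can be sketched in a few lines of Python. This is a hedged illustration only; the labels shown are invented and do not come from any particular MAC implementation.

```python
# Sketch of mandatory access control (MAC) label matching: access is
# allowed only on an exact match between subject and object labels.

def mac_check(subject_label: str, object_label: str) -> str:
    """Allow access only on an absolute label match; deny everything else."""
    return "ALLOW" if subject_label == object_label else "DENY"

print(mac_check("SECRET", "SECRET"))      # exact match, access allowed
print(mac_check("SECRET", "TOP SECRET"))  # any mismatch, access denied
```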

Discretionary access controls

Discretionary access controls (DAC) allow a designated individual to decide the level of user access. DAC access is usually distributed across the organization to provide flexibility for specific use or adjustment to business needs. The data owner determines access control at their discretion. The IS auditor needs to investigate how the decisions concerning DAC access controls are authorized, managed, and regularly reviewed. Most businesses use discretionary access control.

Role-based access controls

Certain jobs require a particular level of access to fulfill the job duties. Access that is granted on the basis of the job requirement is referred to as role-based access control (RBAC). A user is given the level of access necessary to complete work for their job. The system administrator position is an example of role-based access control.

Task-based access controls

Individual tasks may need to be performed for the business to operate. Whereas role-based access is used for job roles, task-based access control (TBAC) refers to the need to perform a specific task. Common examples include limited testing, maintenance, data entry, or access to a special report.

The type of access control used is based on risk, data value, and available control mechanisms. Now we need to discuss application software control mechanisms.

Application Software Controls

Application software controls provide security by using a combination of user identity, authentication, authorization, and accountability. As you will recall, user identity is a claim that must be authenticated (verified). Authorization refers to the right to perform a particular function. Accountability refers to holding a person responsible for their actions. Most application software uses access control lists to assign rights or permissions. The access control list contains the user's identity and permissions assigned.
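As an illustration, an access control list can be modeled as a simple mapping from user identities to assigned permissions. This is a hedged Python sketch; the usernames and permission names are hypothetical.

```python
# Sketch of an access control list (ACL): each entry pairs a user's
# identity with the permissions assigned to that identity.

acl = {
    "jmorris": {"read", "write"},
    "asmith":  {"read"},
}

def authorized(user: str, permission: str) -> bool:
    """Check an authenticated identity's right to perform a function."""
    return permission in acl.get(user, set())

print(authorized("jmorris", "write"))  # True: permission is on the list
print(authorized("asmith", "write"))   # False: read-only user
```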

Database Views

Data within the database can be protected by using database views. The database view is a read restriction placed on particular columns (attributes) in the database. For example, Figure 7.10 illustrates using a personnel file to create a telephone list. Data that is not to be read for the telephone list has been hidden by using the database view.

Database views for security

Figure 7.10. Database views for security
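The column restriction illustrated in Figure 7.10 can be demonstrated with an ordinary SQL view. This is a hedged Python/SQLite sketch; the table name, columns, and data are invented for illustration.

```python
# Sketch of a database view as a read restriction: the telephone-list
# view exposes only name and extension, hiding the salary column.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE personnel (name TEXT, extension TEXT, salary INTEGER)")
conn.execute("INSERT INTO personnel VALUES ('J. Morris', 'x4432', 72000)")

# The view limits which columns (attributes) a reader can see.
conn.execute("CREATE VIEW phone_list AS SELECT name, extension FROM personnel")

print(conn.execute("SELECT * FROM phone_list").fetchall())
# The salary column is not visible through the view.
```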

Restricted User Interface

Another method of limiting access is to use a restricted user interface. The restricted interface may be a menu with particular options grayed out or not displayed at all. Menu access is preferred to prevent the user from having the power of command-line arguments. The command line is difficult to restrict.

Security Labels

A major concern in security is the ability for users to bypass the security label. The security label is a control that specifies who may access the file and how the file may be used. The IS auditor should work with security managers to identify ways in which labels and security settings may be bypassed. Additional compensating controls are necessary to protect against the bypassing of labels and security.

Authentication Methods

The first step of granting access is identification of the user: A user presents a claim of identity. The second step is to authenticate the user identity claim against a known reference. The purpose of this authentication is to ensure that the correct person is granted access. Table 7.1 illustrates the difference between identification and authentication.

Table 7.1. Identification versus Authentication

Concept

Function

Identification

A claim of identity or a search process of comparing all known entries until either a match is found or the data list is exhausted. Identification is known as a one-to-many search process.

Authentication

A single match of the identity claim against reference information. If a single attempt fails, the authentication failed. Authentication is a single-try process, also known as a one-to-one process (compare only, no search).
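The one-to-many versus one-to-one distinction in Table 7.1 can be sketched as follows. This is a hedged Python illustration with invented reference data standing in for the stored credentials.

```python
# Identification searches all known entries; authentication compares a
# single claim against a single reference. Data here is illustrative.

references = {"jmorris": "template-A", "asmith": "template-B"}

def identify(sample: str):
    """One-to-many: search every known entry until a match or exhaustion."""
    for user, template in references.items():
        if template == sample:
            return user
    return None  # list exhausted, no match found

def authenticate(claimed_user: str, sample: str) -> bool:
    """One-to-one: a single compare against the claimed identity's reference."""
    return references.get(claimed_user) == sample

print(identify("template-B"))                 # search finds 'asmith'
print(authenticate("jmorris", "template-A"))  # single compare succeeds
```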

Understanding Types of Authentication

Three types of authentication are possible using discrete information. The most common type of authentication is the user password. The user password is expected to be a secret known only to the user. Unfortunately, many user passwords are poorly constructed or suffer from ineffective protection. When a user logs in with a password, the only information known is that someone has logged in with the password. That is no guarantee of who that person is. Let's take a look at the three types of information, or factors, which can be used to authenticate the user:

Type 1: Something a person knows

The login ID and password should be unique to each user. Unfortunately, the password may be discovered by observation or insufficient security by the user. Passwords should be considered weak authentication. Passwords can be forgotten, shared, discovered by observation, and broken by technical means.

Type 2: Something a person has in possession

An improvement above the password is to authenticate the user based on a unique item in their possession. This requires the user to have a login ID with a password and the unique item at the time of login. Banks use type 2 authentication with your ATM card and PIN. Another example of type 2 authentication is the smart card. A smart card contains a microchip with unique information read by a card scanner. Figure 7.11 shows a drawing of a smart card.

Smart card with embedded microchip

Figure 7.11. Smart card with embedded microchip

It is possible to use type 2 authentication without specialized hardware. Common techniques include using a hard token, USB token, or soft token.

The hard token is a card or key fob read by the user during login. The user types a username and password combined with the number that appears on the token display screen. The number displayed by the token changes every 1 to 2 minutes, thereby making each password unique. The following shows a sample login and the password containing the code from the token:

Login ID = jmorris Password = ******94328
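The rolling code on the token display can be sketched as a time-based one-time value derived from a shared secret. This is a hedged analogy only: real tokens use standardized algorithms such as TOTP (RFC 6238), and the secret, interval, and code length shown here are invented.

```python
# Sketch of a rolling token code: the same secret and the same time
# window produce the same code on both the token and the server.
import hashlib
import hmac
import time

def token_code(secret: bytes, interval: int = 60, digits: int = 5) -> str:
    """Derive a short numeric code from the secret and current time window."""
    window = int(time.time() // interval)
    digest = hmac.new(secret, str(window).encode(), hashlib.sha256).hexdigest()
    return str(int(digest, 16))[-digits:]  # keep the last few decimal digits

secret = b"shared-token-secret"       # provisioned to one user's token
print("Password = ******" + token_code(secret))
```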

Another method is to use a USB token. Authentication information is recorded in a microchip on a USB token device. The user plugs the USB token into the USB port when logging in to the computer. This type is popular in hospitals for use by doctors and nurse practitioners. Figure 7.12 shows a drawing of the two common types of hard tokens.

Hard authentication token

Figure 7.12. Hard authentication token

Another method of type 2 authentication is the use of software tokens (soft tokens), also known as digital certificates. Soft tokens are relatively inexpensive. The software token captures unique information, including the CPU's electronic serial number during the initial certificate-signing request. The resulting digital certificate contains this embedded information to prove the identity of the computer. Soft tokens are not portable because each is assigned to a specific computer.

Unfortunately, hardware and software tokens can be stolen or lost, and some can be secretly duplicated. These problems are what bring us to the third type of authentication.

Type 3: Physical characteristic

The third type of authentication is based on a unique physical characteristic. The recording of physical characteristics and the matching process is known as biometrics. Let's investigate biometrics further.

Using Biometrics

Biometrics uses unique physical characteristics to authenticate the identity claimed by the user. This is accomplished by using either physiological characteristics or behavioral characteristics. You are expected to understand the different types of biometric data used for authentication.

Using Physiological Characteristics

These are considered strong authenticators of the user's identity because they are difficult to forge. A risk still exists in the management of the biometric sample and system implementation.

Fingerprint

Fingerprints have been used for many years to identify people, especially criminal offenders. In biometrics, the fingerprint is used to authenticate (not identify) the user. Information about the user's fingerprint is recorded into a biometrics database. Rather than the actual image, only summary lists of unique feature characteristics are recorded about the fingerprint. These features include curvature, position, ridge patterns, delta (separation), combined ridges (crossover), islands, and bifurcation (ridge join). The feature data recorded in the database is called minutiae. When the user logs in, the minutiae are identified by the acquisition hardware and compared to the database. Authentication occurs when the acquired minutiae data from the biometrics scanner matches the minutiae from the database. Using fingerprint minutiae instead of capturing the image allows a smaller file size to be stored. Minutiae file size is usually 250 bytes to more than 1,000 bytes. Figure 7.13 illustrates fingerprint minutiae.

Using fingerprint data (minutiae)

Figure 7.13. Using fingerprint data (minutiae)
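Minutiae matching can be sketched as a tolerant comparison of position-and-type records. This is a hedged Python illustration; the minutiae data, tolerance, and scoring are invented and far simpler than production matchers.

```python
# Sketch of minutiae comparison: each minutia is stored as a position
# and feature type rather than the fingerprint image itself.

def match_score(sample, reference, tolerance=2):
    """Fraction of reference minutiae matched by a sample minutia of the
    same type within the position tolerance."""
    hits = 0
    for (x1, y1, t1) in sample:
        for (x2, y2, t2) in reference:
            if t1 == t2 and abs(x1 - x2) <= tolerance and abs(y1 - y2) <= tolerance:
                hits += 1
                break
    return hits / len(reference)

reference = [(10, 14, "ridge_end"), (25, 30, "bifurcation"), (40, 8, "island")]
sample    = [(11, 13, "ridge_end"), (24, 31, "bifurcation"), (70, 70, "island")]
print(match_score(sample, reference))  # 2 of 3 reference minutiae align
```

A real matcher would also handle rotation, translation, and partial prints before declaring a match against a stored template.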

Palm print

A person's palm print is as unique as a fingerprint. Like a fingerprint, the palm of the hand contains a significant number of unique minutiae plus additional wrinkle lines, blood vessel patterns, and scars. The palm offers a larger volume of data. Figure 7.14 illustrates the palm data.

Using palm print data

Figure 7.14. Using palm print data

Hand geometry

The concept of hand geometry is to measure the details of a person's hand in a three-dimensional image. The usual technique is to put your hand into a machine with your fingers spread between metal pegs. Another method in hand geometry is to grasp a metal knob or bar while sensors measure your knuckle creases or blood vessel patterns. Hand geometry is quite effective and inexpensive.

Retina scan

The retina, located at the rear of the eyeball, contains a unique pattern of tiny veins and arteries that reflect light. The red-eye in photographs is the reflection of the retina. Changes in the retina occur during a person's life. Some of these changes may signal the onset of a new medical condition, such as stroke or diabetes. Users may be concerned about physiological issues or the possible invasion of privacy. Overall, retina scanning is very reliable. Figure 7.15 illustrates retina scanning.

Retina scanning

Figure 7.15. Retina scanning

Iris scan

Iris-scanning technology is based on visible features of freckles, rings, and furrows in the color ring surrounding the eye's pupil. The iris provides stable data from one year of age through a person's entire life. The visible features and their location are combined to form an iris-code digital template. To use an iris scanner, a person is asked to look into an eyepiece and focus on a displayed image. A camera records a picture of the iris and compares it to the biometric database to ensure that the viewer is a living person. Colored contacts would fail the iris scan. Iris scanning is very dependable.

Face scan

New face-scanning technology uses a series of still images captured by video camera. The technology uses three-dimensional measurements of facial features, including the position of eye sockets, nose, mouth openings, and heat pattern thermograph. The feature data is extracted from the image to form a digital facial template of 1,000 to 1,500 bytes. Major advances are occurring in facial recognition that reduce former problems of speed and accuracy. Figure 7.16 illustrates face recognition.

Using Behavioral Characteristics

Behavioral characteristics are considered a weak type of authentication because of concerns about their authenticity. Behavior is easier to forge than physical characteristics. For example, in my childhood, I taught my little brother how to speak. As a result, it's nearly impossible to distinguish even the slightest difference in our voices. My father's handwriting impressed me when I was learning to write in cursive. After much practice, my signature in sixth grade looked exactly like his. That illustrates the problem. Let's look at using signatures and voice patterns.

Face recognition

Figure 7.16. Face recognition

Signature dynamics

Signature dynamics is a behavioral form of biometric data. The user's signature is monitored for time duration, pressure, and technique. The advantage is the low cost of implementation. The disadvantage is that many individuals do not write their signature consistently. Some individuals, such as celebrities and authors of this Study Guide, refuse to allow their signature to be digitally recorded because it is still used as the primary means of authenticating legal documents.

Voice pattern

Voice pattern recognition is an inexpensive method of identifying a person by the way they talk. Voice pattern recognition is not the same as speech recognition. Speech recognition assembles sounds into words. Voice pattern analysis checks for characteristics of pitch, tone, and sound duration. A person's voice is analyzed for unique sound characteristics, tone, inflection, and speed. The typical method is to ask a user to repeat a particular pass phrase. The characteristics of the pass phrase are converted into a digital template. Voice pattern recognition is less expensive and less accurate than other types of biometrics. Voice authentication can be fooled by recorded audio playback of a person's voice. Figure 7.17 illustrates voice recognition.

Voice pattern analysis in biometrics

Figure 7.17. Voice pattern analysis in biometrics

Note

As a CISA candidate, you are expected to understand the older biometric techniques of signature dynamics and voice pattern analysis.

Management of Biometric Systems

Biometrics comprises technology-based systems and hence requires a disciplined approach. Each biometric system follows an SDLC life cycle. As an auditor, you will encounter a growing number of clients using or planning to implement biometric systems. Auditors don't need to be technicians. Our job is to witness whether management did their job of selecting the right system and how it is governed. Let's take a walk through the basic life cycle:

Phase 1: Biometric feasibility

Management determines their need, purpose, and function for biometrics. Are they interested in using biometric technology because it looks cool? Real planning considers the following points:

  • Analysis of regulations and classification of their data to be protected.

  • Physical environment, mission, and people. What problem is biometrics going to solve?

  • Effect of biometrics on employees, customers, and business partners.

  • Data collection may be difficult because of perceptions of intrusiveness, possible misuse, implications of system failure, and moral concerns.

  • Return on investment (ROI) after analysis of initial cost, ongoing operation, maintenance, and comparison of alternatives.

The results of their feasibility study should be available for the auditor to review. Assuming that the decision was a good one, a formal review should occur to gain approval to proceed to phase 2.

Phase 2: Biometric requirements

Before rushing out to buy a product, you'll need to consider the detailed requirements. These will come from a discovery exercise of intended use, unique needs, and the specific operating environment. Everyone knows you should not go shopping without a list. Doing so results in overspending or buying the wrong item. Independent consultants may assist with the requirements phase. The independent consultant shall be barred from bidding on or selling the biometric system to ensure that the consultant remains independent. Following the SDLC model, the requirements phase should include the following:

  • Identification of the ownership roles, custodian duties, and users of the system.

  • Executive support in the form of a signed biometrics policy, budget, and delegation of authority.

  • Physical access restrictions covering both the biometric system and the area it will protect.

  • Logical (technical) access needs to be restricted to prevent direct access into the data repository. Special methods such as single sign-on (SSO) are necessary for interfacing to other applications without risk of compromising the biometric repository.

  • Solutions to design questions about biometric standards, data storage capacity, security, maintenance, backup, and restoration procedures.

  • Functionality for the intended use. Will it do the job? Is additional functionality needed, such as the ability to export data for FBI background investigations? If so, the system needs to have that capability plus the implementation of a design using the Common Biometrics Exchange Format Framework (CBEFF) with the Electronic Fingerprint Transmission Specification (EFTS).

  • Perform a risk analysis for what happens when this system fails. The two most common failures are rejecting authorized people and mistakenly accepting an attacker. One technique for compromising biometrics is to substitute the attacker's biometric data for that of a legitimate user. After access is granted, the attacker will reload the original biometric data to hide their intrusion.

Assuming that your client did a good job, it's time for the phase 2 review. All the research and planning is presented to determine whether this project should continue into phase 3, return to the drawing board to rerun the discovery process, or be cancelled. If all the issues are addressed, the project may be granted formal approval to proceed to phase 3. Evidence of phase 2 research and formal approval to proceed is expected to be available for the auditor to review.

Phase 3: System selection

We need to take a moment to explain the high-level technical concept of how a biometric system works. The two basic components in biometrics are the template generator and the template matcher. Everyone agrees that the vendor's product needs to be certified. The issue is how it's certified. Let's start with the template generator:

Biometric template generator

Biometric images are acquired during live enrollment of the user. A template generator converts images into a unique data template. This template becomes the user's biometric reference. A template generator must be laboratory tested by using very large data sets in repeatable evaluations. It will be tested offline to ensure that each unique user template is created by using the correct internal program procedures, so it is algorithmically correct and physically distinct from that of other users. Without this testing, the system would have no integrity and could accept the wrong person (false acceptance).

Biometric template matcher

A template matcher is used to compare images against the user's template. It may compare more than one image against more than one template. The goal is to determine whether the image and the template produce a similar score. Like the generator, the template matcher is certified as a software library, and template generators and template matchers must be tested and certified separately. A match indicates that personal identity is verified (PIV).

Now let's return to the issues in phase 3 regarding system selection. A vendor's product has to be properly certified to be eligible for selection. Additional points to consider in this phase are as follows:

  • Process for enrolling, re-enrolling, and removing users from the system

  • Security protection to prevent tampering, sabotage, substitution of fake templates, and compromise of biometric data (templates)

  • Available technical support and training materials

  • Operational questions of user acceptance, maintenance, hygiene, backup, and restore functions

  • Total cost of ownership (purchase, installation, maintenance) and the ROI

  • Procurement method: bid, outsource, or straight purchase

Phase 3 concludes with a formal review. The selection criteria are presented along with alternatives. All the open issues need to be fixed, rejected, or withdrawn before formal approval can be given. With formal approval, the product may be purchased and the project proceeds into phase 4.

Tip

It can be helpful to attend vendor-sponsored training during final selection, before making the purchase. The purpose is to learn the hidden points of your intended use that don't match the vendor's sales literature. Consider it a tryout before you buy.

Phase 4: System configuration

Some product training is helpful to ensure a proper installation. The auditee should be trained before the vendor sends the installation technicians out to configure the system. Too many times, the promise of knowledge transfer at a later date becomes just another failed promise. Points to consider in phase 4 include the following:

  • Installing hardware and software

  • Calibrating and recalibrating the system

  • Operating procedures for enrollment, security, transmission, processing, backup, and restoration

  • Using transaction controls

  • Monitoring the systems and logs

  • Detecting system compromise plus use of corresponding incident response procedures

  • Connecting the biometric system interface to other systems

Additional points may be added to suit the intended use. Let's assume that everyone did their job. The next step is to certify the system for production use by testing the people and the procedures.

Phase 5: Biometrics implementation

Now the system is preparing to enter production. There are still a few steps to cover before announcing that the system is ready. Here's the short list:

  • Formalizing configuration management and change control into a system baseline.

  • Training users.

  • Enrolling the user to populate the biometrics database.

  • Managing the deletion of users removed from the system.

  • User testing begins.

  • Management determines whether the installed system met their objectives. Technical tests are run to certify system performance.

  • Management determines whether the overall operation is acceptable. If so, formal accreditation is given in the form of a written sign-off to enter production. Management formally accepts the outcome and the responsibility for any possible failures. Accreditation may be granted for 90 days, 180 days, or one year.

  • The biometric system may now enter production use.

This may seem like a lot of work because it is. The goal is to ensure that the right things happen for the right reason. This biometrics example is one of our first opportunities to demonstrate how a system goes through the SDLC life cycle process. Now it's time to look ahead into the system's future.

Phase 6: Biometrics post-implementation

Each year a system must be recertified and reaccredited. The purpose is to ensure that management has looked at its historical performance, ROI (if any), and changes in both regulations and attitudes. The question is whether the system will be allowed to continue operating as is, be modified to keep up with changes, or be retired.

As auditors, we are interested in finding evidence of the post-implementation review, recertification process, and management's reaccreditation of the system for continued use in production for another 90 days, 180 days, or one year. The process repeats each year the system is in use.

Phase 7: Biometric system disposal phase

This is the same process as in Chapter 5. A review is held to determine the implications of shutting down the system. Topics considered include regulations related to the system, record retention requirements, and ways to archive the data. Formal approval is requested to shut down the system. The decision should be based on a risk analysis of the impact. After formal approval, the system is shut down, media is sanitized, and hardware is removed from capital inventory. Equipment is resold, donated, destroyed, or recycled.

As auditors, we would like to see the evidence that this disposal was authorized and properly handled. Biometric systems have several technical advantages that must be balanced against the known problems. Let's discuss some of the known problems with biometrics.

Problems with Biometrics

Using biometric systems has some drawbacks. How will the biometric results be used? Is the biometric system expected to provide identification or authentication?

Biometric systems face issues of social acceptability. The users may have concerns about sanitary health issues regarding physical contact or about invasion of privacy. Biometric data must be managed to ensure the security of initial data collection, data distribution, and processing. A biometric data policy is required to specify the data life cycle and control procedures. Biometric data must always be protected for confidentiality and integrity.

Error rates exist in all automated systems, and biometric systems are no different. It is possible for an error to occur during data collection or data processing. The following are various types of errors that can occur in a biometric system:

Enrollment

Every user provides a sample for the biometric system during the enrollment process. The sample may fail to be accepted by the system. The typical enrollment process should take only 2 to 5 minutes; any longer could lead to user dissatisfaction.

Failure to enroll

On rare occasions, the user's data will not be accepted by the system. This is referred to as failure to enroll, and the rate at which it occurs is the failure-to-enroll rate (FTER). It could be due to image quality, calibration, system problems, or the scanner not interpreting a person's physical abnormalities.

False rejection

A legitimate person could be rejected and fail authentication; the correct user fails to authenticate. This failure to accept a legitimate user is measured as the false rejection rate (FRR) and is known by convention as a type 1 error.

False acceptance

It is possible for the system to permit access to an individual who should have been rejected; the wrong person is authenticated. This failure is measured as the false acceptance rate (FAR) and is known by convention as a type 2 error.

Equal error/crossover error rate

Every biometric system involves a delicate balance between speed and accuracy. Tuning the system's sensitivity moves the FRR and FAR in opposite directions; the point at which the two error rates are equal is the equal error rate (EER), also called the crossover error rate (CER). No biometric system is perfect, so an acceptable error rate will always exist. Figure 7.18 illustrates the trade-off between speed and accuracy.

Throughput rate

How many people (samples) can the system process and still have reasonable accuracy? Lower throughput may be acceptable in situations of higher risk. Higher throughput may be better in low-risk situations such as the collection of visitors' fingerprints at turnstiles entering an amusement park.
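The relationship between FRR, FAR, and the crossover point can be demonstrated numerically. The scores below are invented for illustration; genuine scores come from legitimate users, impostor scores from everyone else, and a single threshold trades one error rate off against the other.

```python
# Hypothetical match scores illustrating FRR, FAR, and the crossover
# (equal error) point described in the text.

genuine_scores  = [0.91, 0.85, 0.78, 0.95, 0.88, 0.70, 0.82, 0.93]
impostor_scores = [0.20, 0.35, 0.55, 0.10, 0.42, 0.60, 0.28, 0.75]

def frr(threshold):
    """False rejection rate: legitimate users scoring below the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

def far(threshold):
    """False acceptance rate: impostors scoring at or above the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Sweep thresholds to find where FAR and FRR come closest -- an
# approximation of the equal error rate (EER/CER).
_, eer_threshold = min((abs(far(t) - frr(t)), t)
                       for t in [i / 100 for i in range(101)])
print(f"threshold={eer_threshold:.2f}  "
      f"FAR={far(eer_threshold):.2f}  FRR={frr(eer_threshold):.2f}")
```

Raising the threshold above the crossover point lowers FAR at the expense of FRR, and vice versa, which is the trade-off Figure 7.18 depicts.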

We have discussed the methods for authenticating a user. Now it is time to discuss the types of access that could be granted on the network.

Crossover error rate

Figure 7.18. Crossover error rate

Network Access Protection

All computer networks are prone to access control problems. It is an ongoing challenge to provide access to legitimate users while blocking access from all others. Several methods have been developed to accomplish this goal. Computer users demand ease of use, while computer custodians strive for tighter controls. Unfortunately, network access is predominantly a perimeter defense. Better controls are sorely needed at the application level.

In this section, we discuss several technologies including firewalls. We will start with single sign-on. The CISA is expected to understand the concept of single sign-on. Its purpose is to improve network access controls by implementing a higher-security system that is easier for the user. One of the most common examples is Kerberos, developed by the Massachusetts Institute of Technology.

Kerberos Single Sign-On

The Kerberos single sign-on (SSO) system was developed to improve both security and user satisfaction. The name Kerberos refers to the mythical three-headed dog guarding the gates to the underworld. Kerberos provides security when the end points of the network are safe but the transmission path cannot be trusted—for example, when the servers and workstations are trusted but the network is not.

The concept of operation is for the user to log in once to Kerberos. After login, the Kerberos system authenticates the user and grants access to all resources. The process works as follows:

  1. The user authenticates to the Kerberos workstation software. Authentication may be a password or a biometric method.

  2. The workstation software authenticates to the Kerberos server.

  3. Shared encryption keys are used. A network access ticket is created by Kerberos.

  4. A Kerberos access ticket is sent to the workstation, encrypted under the workstation's shared encryption key. All other network servers receive a similar ticket granting the workstation access to shared servers.

  5. The user is automatically signed in to all servers.
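The ticket idea in the steps above can be modeled in miniature. This is a hypothetical toy, greatly simplified: the KDC shares a long-term secret with each principal, and a "ticket" here is a session record sealed with an HMAC under the server's key. Real Kerberos encrypts tickets with symmetric ciphers and adds timestamps, lifetimes, and a ticket-granting service.

```python
# Toy model of Kerberos-style ticket issuance (illustrative only).
import hashlib
import hmac
import json
import os

# Long-term keys shared out-of-band between the KDC and each principal.
keys = {"alice_ws": os.urandom(16), "file_server": os.urandom(16)}

def seal(record: dict, key: bytes) -> dict:
    """Attach an HMAC tag so only a holder of the key can validate the record."""
    blob = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "tag": hmac.new(key, blob, hashlib.sha256).hexdigest()}

def valid(sealed: dict, key: bytes) -> bool:
    """Recompute the tag under the given key and compare in constant time."""
    blob = json.dumps(sealed["record"], sort_keys=True).encode()
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sealed["tag"], expected)

# Steps 2-4: after authenticating the workstation, the KDC issues a ticket
# that the file server can verify without contacting the KDC again.
session_key = os.urandom(16).hex()
ticket = seal({"client": "alice_ws", "service": "file_server",
               "session_key": session_key}, keys["file_server"])

# Step 5: the file server validates the ticket using its own long-term key.
print(valid(ticket, keys["file_server"]))   # ticket accepted
print(valid(ticket, keys["alice_ws"]))      # wrong key cannot validate it
```

The design point this illustrates is why a KDC compromise is catastrophic: whoever holds the long-term keys can mint valid tickets for any service.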

The belief is that a user with a strong password and strong encryption will improve overall security. Unfortunately, Kerberos works only with specially modified versions of software designed for use with Kerberos. Merely installing Kerberos will not improve security, and there are compatibility problems among different implementation versions.

Special skills and experience are required to make a Kerberos installation successful. First, a knowledgeable installer will understand how to use separate domains to partition Kerberos access for better security. Second, restoring data from tape backup is quite involved. The Kerberos system must be shut down and the date rolled back to the timestamp of the file being restored. As soon as the file is restored, the time clocks must be rolled forward again with the system resynchronized for the users. Any compromise of the Key Distribution Center (KDC) means that the entire system is compromised and must be shut down. Using Kerberos requires highly experienced system administrators. Figure 7.19 illustrates the design of a Kerberos single sign-on.

Kerberos single sign-on

Figure 7.19. Kerberos single sign-on

Network Firewalls

Computer networks can be protected from internal and external threats by using firewalls (FWs). The concept is that a specially configured firewall on the network will block unwanted access. However, this is a grossly misunderstood concept, and many organizations do not understand firewall capabilities and limitations. As a result, there can be a false sense of security. Let's consider the advantages and disadvantages of network firewalls:

Firewall advantages

Reduces external access to the network.

Firewall disadvantages

There is always a hole for traffic to pass through, whether good traffic, bad traffic, or both. A firewall can control only the traffic that passes directly through it; it does not protect modems or other access points. A firewall can be misconfigured or technically circumvented. There is no such thing as a completely safe firewall, and the firewall concept can create a false sense of security.

Network firewalls have undergone several generations of improvement. The first generation was simply a router with a primitive access list specifying the destination (to) and sender (from) network addresses. Attackers became more sophisticated, and so did the need for better firewalls. The following are the different generations of firewall technology:

First generation: Packet filter

The first generation was a packet filter. Filtering is based on the sending and receiving address combined with the service port (a packet). The advantage of this design is its low cost.

The first-generation packet filter design was prone to problems. The design was plagued with poor logging and granular rules that were difficult to implement effectively. Hackers were still able to get in.
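A packet filter's first-match rule evaluation can be sketched as follows. The rules and addresses are hypothetical, and a toy prefix match stands in for real CIDR handling; the key point is that only addresses and ports are consulted, with no notion of connection state.

```python
# Minimal first-generation packet filter sketch (illustrative only):
# ordered rules, first match wins, unmatched traffic denied by default.

RULES = [
    # (src prefix, dst prefix, dst port or None for any, action)
    ("203.0.113.", "192.168.1.10", 25,   "permit"),  # inbound mail to one host
    ("",           "192.168.1.",   80,   "permit"),  # web traffic from anywhere
    ("",           "",             None, "deny"),    # explicit deny-all
]

def filter_packet(src: str, dst: str, port: int) -> str:
    """Return the action of the first rule matching this packet."""
    for rule_src, rule_dst, rule_port, action in RULES:
        if (src.startswith(rule_src) and dst.startswith(rule_dst)
                and (rule_port is None or rule_port == port)):
            return action
    return "deny"   # default stance if no rule matches at all

print(filter_packet("203.0.113.7", "192.168.1.10", 25))   # permit
print(filter_packet("198.51.100.9", "192.168.1.10", 23))  # deny (telnet blocked)
```

Notice the filter cannot tell a reply from an unsolicited probe, which is exactly the gap the later stateful-inspection generation closes.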

Second generation: Application proxy filter

A firewall application program was added to the first-generation design of packet filtering. The second generation uses an application proxy to relay requests through the firewall. The proxy checks the inbound request to ensure that it complies with safe computing in both format and type of request. Application proxies perform user requests without granting direct access to the target software. The application proxy is also referred to as a circuit-level firewall. This is because the application proxy is required to complete the circuit; otherwise, no connection exists. This design improved event logging; however, hackers were still able to get in.

Third generation: Stateful inspection

Hackers were able to trick second-generation firewalls by sending a request that was formatted to bypass the proxy design. Application proxy firewalls relied on open connections maintained with the user. Connectionless sessions such as the User Datagram Protocol (UDP) in IP were not protected. In the third generation, UDP connectionless requests are recorded into a history table. The historic "state" of connectionless requests is now controlled by the firewall for better protection. This is referred to as stateful inspection. Stateful inspection is the de facto minimum standard for network firewall technology. However, there's still room for improvement.
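The state table described above can be sketched like this. This is a hypothetical simplification (real firewalls also track timeouts and protocol details), but it shows how connectionless UDP replies are tied back to an outbound request.

```python
# Stateful inspection sketch (illustrative only): outbound UDP flows are
# recorded, and inbound traffic is accepted only if it answers a recorded flow.

state_table = set()

def record_outbound(src, sport, dst, dport):
    """Record the outbound flow so the expected reply can be recognized later."""
    state_table.add((dst, dport, src, sport))

def inbound_allowed(src, sport, dst, dport):
    """Permit inbound traffic only if it matches a recorded outbound flow."""
    return (src, sport, dst, dport) in state_table

# A workstation sends a DNS query (UDP/53) out through the firewall...
record_outbound("192.168.1.20", 40000, "198.51.100.53", 53)

# ...so the matching reply is allowed, while an unsolicited packet is not.
print(inbound_allowed("198.51.100.53", 53, "192.168.1.20", 40000))  # allowed
print(inbound_allowed("203.0.113.66", 53, "192.168.1.20", 40000))   # blocked
```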

Fourth generation: Adaptive response

Improvements in technology allow the firewall to communicate with an intrusion detection system. This provides an adaptive response to network attacks. The firewall administrator can configure stored procedures designed to rebut many types of firewall attack. The firewall can reconfigure itself to block ports or reset connections. One drawback is that a skilled attacker may masquerade as a critical device such as a necessary server. The fourth-generation firewall could accidentally disable the critical device, which would create a denial of service problem.

Fifth generation: Kernel process

The fifth-generation firewall is actually an internal control mechanism designed into the operating system kernel. Individual processing requests are verified against an internal access control list. Those not on the list are rejected. Special military systems have been using fifth-generation firewalls for many years. Microsoft Windows XP has implemented a basic fifth-generation firewall.

The network firewall is the best defense for protecting a network. Each generation provides different levels of cost and protection. Figure 7.20 illustrates the firewalls by generation in relation to the OSI model. We covered the OSI model in Chapter 4, "Networking Technology."

Firewall generations compared to the OSI model

Figure 7.20. Firewall generations compared to the OSI model

Network firewalls can be implemented by using one of three basic designs. The first method is the screened host implementation. The screened host protects a single host through the firewall. The host computer is strongly defended. It is expected that this host may be attacked. Technical manuals may refer to this as the bastion host. Figure 7.21 illustrates the screened host implementation.

The next method of firewall implementation is to install two interface cards in the same host. This method is referred to as dual-homed. The host computer is configured with routing disabled. A special software application such as an application proxy relays appropriate communication between the two interface cards. This is the configuration of many Internet firewalls. Figure 7.22 illustrates the dual-homed host.

Firewall-screened host

Figure 7.21. Firewall-screened host

Dual-homed host

Figure 7.22. Dual-homed host

The third method of firewall implementation is known as the screened subnet, or DMZ design. DMZ is a term that refers to the demilitarized zone between enemy forces on a battlefield. The DMZ design allows for several computers to be placed in a protected subnet that is accessible from the outside and by systems inside the network. Any military veteran will tell you that it's possible to be attacked and killed in the DMZ. The same applies to computers located here. Figure 7.23 illustrates the DMZ concept.

Firewall systems should be implemented to support a separation of duties. Separation of duties is just as important for machines as for personnel. The intention is to provide additional layers of control. Separate firewalls allow tighter access-control rules. Selected data is mirrored from internal production servers to a DMZ server for access by business partners or clients. This eliminates the dangers of direct access to an internal server. In addition, the redundancy improves overall availability. An outage would affect a smaller audience. Figure 7.24 illustrates the separation of duties using a firewall.

Screened subnet, also known as DMZ subnet

Figure 7.23. Screened subnet, also known as DMZ subnet

Separation of duties with firewalls

Figure 7.24. Separation of duties with firewalls

Remote Dial-Up Access

Remote users can often access the network over standard telephone lines with modems. This method completely bypasses security mechanisms provided by the network firewall. The dial-up user may access the network through an access server modem bank or an individual modem on a networked computer. As an IS auditor, you need to determine whether the client has adequate safeguards to prevent this method of circumvention. Are the phone connections to modems properly managed considering the higher level of risk?

Remote VPN Access

Virtual private networks (VPNs) connect remote users over an insecure public network such as the Internet. The connection is virtual because it is temporary with no physical presence. VPN technology is cost-effective and highly flexible. A VPN creates an encrypted tunnel to securely pass data as follows:

  • Between two machines

  • From a machine to a network

  • From one network to another network

There are four types of VPN technology and protocol in use today. ISACA wants you to be familiar with the basic terms for each of the four types of VPN:

  • Point-to-Point Tunneling Protocol, or PPTP

  • Layer 2 Tunneling Protocol, or L2TP (OSI layer 2, Data-Link)

  • Secure Sockets Layer, or SSL (OSI layer 5, Session)

  • IP security, or IPsec, Internet protocol (OSI layer 3, Networking)

SSL and Transport Layer Security (TLS) are commonly used for confidentiality and integrity in the session between the user and the server. Both SSL and TLS operate in a similar manner. The design uses a digital certificate on the server to generate one-way authentication. A secure login to the server can be generated by using Secure Shell (SSH). Encryption occurs along the entire path between the sending and receiving computers. SSH provides end-to-end confidentiality and integrity in a terminal session with the host server.

Using the IPsec VPN

The IPsec design is the newest development in virtual private networking. IPsec uses ISP gateways with two modes of creating a VPN. Let's start with two important differences in operation of the VPN gateway:

  • IPsec VPN gateways are used for data that is entering or leaving the organization's local area network. These gateways use an external address from their ISP vendor (AT&T, Deutsche Telekom, BT, Verizon, and so forth).

  • VPN encryption is occurring between the gateways. Data transmitted between the local VPN gateway and internal computer is not encrypted.


In IPsec virtual private networks, there are two specific operating modes:

  • Transport mode is used when there is no need to hide the network address of the sender or recipient. The message payload is encrypted for security; the address header is not.

  • Tunnel mode is used to hide the identity of the sender and recipient. Both address header and payload (message) are encrypted. A new address header is added to data traveling across the Internet. This new address header will use addresses of the sender's VPN-ISP gateway and recipient's VPN-ISP gateway.
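The difference between the two modes can be shown with a toy data model. Everything here is illustrative: a placeholder encrypt() stands in for the real ESP transforms, and the addresses are invented.

```python
# Transport vs. tunnel mode sketch (toy model, not real cryptography).

def encrypt(data: str) -> str:
    return f"<encrypted:{data}>"   # placeholder only, stands in for ESP

packet = {"header": "src=10.1.1.5 dst=10.2.2.9", "payload": "invoice data"}

# Transport mode: the payload is protected, but the original header -- and
# therefore the identity of sender and recipient -- stays visible.
transport = {"header": packet["header"],
             "payload": encrypt(packet["payload"])}

# Tunnel mode: the entire original packet is encrypted, and a new header is
# added using the addresses of the two VPN gateways.
tunnel = {"header": "src=gatewayA dst=gatewayB",
          "payload": encrypt(str(packet))}

print(transport["header"])  # original addresses still exposed in transit
print(tunnel["header"])     # only the gateway addresses are visible
```

This is why tunnel mode is the choice when hiding internal addressing from observers on the Internet matters.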

Security is managed by the authentication header (AH) and security parameter index (SPI). The authentication header uniquely identifies each packet and provides sequencing information. AH provides authentication, integrity, and nonrepudiation.

The security parameter index provides information about encryption and special handling requirements. This design is based on the older X.25 communication standard you read about in Chapter 4.

Figure 7.25 illustrates the IPsec design.

VPN using IPsec transport versus tunnel mode

Figure 7.25. VPN using IPsec transport versus tunnel mode

The goal of every VPN is to grant remote access to authorized users. Data can be shared across the Internet at a very low cost with relative safety if the proper internal controls are implemented. A VPN can be combined with a firewall DMZ by using a separation of duties between internal production servers and the external accessible server. Figure 7.26 illustrates a VPN with a separation of duties between servers.

Wireless Access

User demands for wireless access to network resources increase every day. When you read the vendor ads, it appears that wireless can provide security equal to wired access. However, wireless access to networks represents an additional level of threat. Wireless security has been completely compromised by vendors to improve Plug and Play capabilities.

Setting Up a Wireless LAN

It's relatively simple to construct a wireless LAN. A wireless network can be peer to peer, which is known as ad hoc mode. Another type of wireless LAN is infrastructure mode, with access points (APs) connecting various stations. Several vendors offer low-cost wireless APs similar to a wireless hub or router. Each AP is connected to a wired network and broadcasts connectivity to handheld devices.

VPN with separation of duties

Figure 7.26. VPN with separation of duties

Usually the range of an AP is 300 feet (roughly 100 meters). Users can move freely within the broadcast range without losing connectivity. The individual broadcast area (range) is also known as a cell, comparable to the design of cellular telephone networks. Effective range can be increased by combining APs and their multiple cells (service range). WLANs are based on the IEEE 802.11 standard. Let's look at the basic components of a WLAN.

Station (STA)

The station is a wireless device, such as a PDA, notebook computer, or mobile phone that is accessing the network.

Access point (AP)

A wireless transmitter/receiver that provides basic network services, usually within 300 feet (roughly 100 meters). Higher-power transmitters with longer ranges are entering the marketplace. The AP and STA compose a basic WLAN.

Cell

Individual AP broadcast range is known as the cell or span of coverage. Multiple AP cells are linked together to increase the range and to allow roaming within the building or between buildings.

Figure 7.27 shows the basic layout of a wireless network.

Usually the AP is connected to a distribution system such as the existing wired network to provide an infrastructure with greater services for the stations. This is the most common implementation of WLAN.

Basic wireless network

Figure 7.27. Basic wireless network

Obsolete IEEE 802.11 Wireless Standards

The original IEEE 802.11 WLAN standard was very slow, operating at only 1Mbps–2Mbps. Three competing WLAN technology standards were established between 1999 and 2003. All three are considered obsolete due to security flaws:

802.11a (1999)

Performance speeds were improved by increasing to 54Mbps with transmissions in the 5GHz frequency band.

802.11b (1999)

Transmitting at speeds up to 11Mbps using the 2.4GHz–2.48GHz frequency band.

802.11g (2003)

Provided for transmission in the 2.4GHz band, with speeds up to 54Mbps.

The lack of effective security is an enormous drawback in wireless networking. IEEE's original design of Wired Equivalent Privacy (WEP) proved to be practically worthless. In response, IEEE issued a revised wireless standard.

Updated IEEE Wireless Standards

Technology is a constant series of evolving standards. IEEE helps drive technology by issuing new wireless standards offering better security or longer transmission range. Let's start with improving wireless security.

802.11i (2005)

Introduced the concept of a robust security network (RSN) with better implementation of encryption keys.

Another IEEE standard worth mentioning is Worldwide Interoperability for Microwave Access (WiMAX) for metropolitan broadband wireless.

802.16 WiMAX (2003)

Mobile broadband using cellular-based networks. This standard allows roaming Internet access without any real data security. WiMAX is becoming increasingly popular because of its low-cost availability in metropolitan areas. WiMAX should always be considered an insecure network.

WLAN Transmission Security

Emerging demand for wireless connectivity is pushing the boundaries of information security. Moore's law observes that computing power doubles roughly every 18 months; a practical consequence is that present methods of security lose roughly half their effectiveness over the same period.

The challenge of protecting a WLAN can be solved only by a multilevel approach. Every subset of confidentiality, integrity, and availability (CIA) is at risk in WLAN implementations. There is no individual technology that will provide enough protection against the full spectrum of threats. WLAN security requirements can be summarized as follows:

Authentication

A third party must be able to authenticate the sender of a message as genuine. Authentication is the process of testing a claimed identity to verify an exact match, so there must be accurate methods of testing the claim to ensure that a legitimate match occurs. The weakest method of WLAN authentication is a wireless device merely transmitting the same shared key as the AP. The AP never proves its identity to the wireless device, yet the two are communicating with each other. This is why everyone should be worried about a rogue AP.

Three methods of WLAN authentication are as follows:

Shared key (poor cryptographic authentication)

A wireless client device transmits a copy of the shared symmetric key belonging to the AP. Typical implementations use a 128-bit key in an RC4 stream cipher. It may be called a preshared key, wireless-PSK, or WPA-PSK by the vendor to gloss over the problem. Overall, this design is very weak and should be considered rudimentary. It is not safe unless a separate VPN is used during transmission over any form of shared-key wireless equipment.

Open system (default, no authentication)

A wireless client device simply transmits its MAC address to the AP to establish communication. The open system method is notorious for falling victim to the infamous man-in-the-middle attack.

Robust 802.11i (strong cryptographic authentication)

Uses 802.1x port-based access control with stronger key generation and authentication. Manual key management is still an important task to prevent compromise.

Tip

Only two methods are acceptable for security in wireless networks. The first choice is 802.11i (robust security networks). The second choice is a separate VPN implementation using strong authentication with digital certificates and tokens. All existing WEP equipment has been designated as unsafe under all conditions by current government standards and the information security industry at large.

Source: National Institute of Standards and Technology April 2005

Nonrepudiation

To deny involvement is equivalent to repudiation. Nonrepudiation means the other party cannot deny their participation. It is used to prove who sent and who received a message, or who took part in a transaction. You must have strong authentication to provide nonrepudiation.

Accountability

Individual actions must be traceable back to each unique individual entity. Accountability is difficult or impossible without strong authentication and nonrepudiation.

Integrity

Both sender and receiver need a method to ensure integrity for messages transmitted between the wireless client and AP. We need proof that contents of a message have not been tampered with or changed in transit. All versions of IEEE 802.11a/b/g specifications provided poor authentication between wireless devices.

The IEEE design prior to version 802.11i used a simple cyclic redundancy check (CRC) approach. A CRC-32 frame check sequence is computed on each payload prior to transmission. Then, the CRC-32 frame check is encrypted by using the RC4 stream cipher to create a ciphertext message. The message recipient decrypts the CRC-32 frame check by using a shared key. The CRC-32 results are compared, and messages are discarded if the CRCs are not equal. Unfortunately, the intrinsic flaw is that a CRC is not cryptographically secure, unlike a hashed message authentication code (HMAC).

WEP's integrity scheme is still considered vulnerable to certain attacks regardless of key size. RSN 802.11i networks use temporal keys for much stronger authentication. RSN is a trusted technology approved for government use.
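The CRC-versus-HMAC weakness can be demonstrated directly. The sketch below is illustrative: a CRC needs no secret, so anyone who alters a message can simply recompute a valid checksum, whereas forging an HMAC requires the shared key.

```python
# Why CRC-32 is a poor integrity check compared to an HMAC (sketch).
import hashlib
import hmac
import zlib

key = b"shared-secret-key"   # known only to sender and receiver

def crc_check(message: bytes, checksum: int) -> bool:
    """Receiver-side CRC verification: no key involved at all."""
    return zlib.crc32(message) == checksum

def hmac_check(message: bytes, tag: str) -> bool:
    """Receiver-side HMAC verification: requires the shared key."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# An attacker intercepts and alters a message in transit...
tampered = b"transfer $999 to account 5678"

# ...and can trivially attach a matching CRC, so the check still passes:
print(crc_check(tampered, zlib.crc32(tampered)))   # forgery is free

# ...but cannot compute a valid HMAC without knowing the shared key:
forged_tag = hmac.new(b"attacker-guess", tampered, hashlib.sha256).hexdigest()
print(hmac_check(tampered, forged_tag))            # forgery fails
```

Encrypting the CRC, as WEP does, does not repair this flaw because RC4's XOR structure lets controlled bit flips carry through to the checksum.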

Privacy issues

Many misleading advertisements claim that wireless systems have privacy equivalent to a wired connection. As noted earlier in this chapter, the term used is Wired Equivalent Privacy (WEP). The concept of WEP is to use the RC4 symmetric key in a stream cipher to generate a pseudorandom data sequence. WEP's operating technique is simply an exclusive OR (XOR) function using modulo-2 math for data transmitted in the Network layer and above. The XOR function compares two bits for an equal match: if the bits match (for example, 1 and 1), XOR yields 0; if they don't match (for example, 1 and 0), XOR yields 1. It's a simple binary comparison for the computer. Relying on XOR alone for security functions is a primitive design.
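The XOR stream-cipher idea can be shown in a few lines. The keystream below is a fixed toy sequence for illustration, not real RC4 and not secure; the point is that XOR with the same keystream both encrypts and decrypts, and that reusing a keystream cancels it out entirely.

```python
# XOR stream-cipher sketch (toy keystream; NOT RC4, NOT secure).

keystream = bytes([0x3A, 0xC5, 0x1F, 0x88, 0x91, 0x4D, 0x62, 0x7E] * 8)

def xor_bytes(data: bytes, stream: bytes) -> bytes:
    """XOR each data byte with the corresponding keystream byte."""
    return bytes(d ^ s for d, s in zip(data, stream))

plaintext = b"attack at dawn"
ciphertext = xor_bytes(plaintext, keystream)
recovered = xor_bytes(ciphertext, keystream)   # XOR twice restores the data

print(recovered == plaintext)

# Keystream reuse: XOR of two ciphertexts cancels the keystream completely,
# leaving only the XOR of the two plaintexts for an attacker to analyze.
c1 = xor_bytes(b"message one!!!", keystream)
c2 = xor_bytes(b"message two!!!", keystream)
print(xor_bytes(c1, c2) == xor_bytes(b"message one!!!", b"message two!!!"))
```

Keystream reuse under WEP's short initialization vectors is one of the reasons the design failed regardless of key size.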

Standard WEP

The 802.11 standard uses a 40-bit cryptographic key for WEP. Individual vendors offer nonstandard extensions of WEP with keys from 128 bits to 256 bits. The intention was based on the theory that larger key sizes would be more difficult to break. Unfortunately, several attacks have been identified that make WEP vulnerable regardless of key size.

Enhanced WEP

An enhanced version of WEP was created in an attempt to overcome the original 802.11 security shortfalls. The enhanced version is still susceptible to several of the WEP subversion attacks.

802.11i replaces WEP

All WEP implementations should be considered unsafe and obsolete. The current standard for wireless security is RSN. This may force the replacement of WEP/WPA products with newer RSN 802.11i equipment. The only acceptable alternative is using end-to-end client VPN.

Figure 7.28 provides a simple depiction of the difference between WEP and 802.11i using 802.1X. Notice the changes improving our CIA objectives.

Robust security compared to earlier versions

Figure 7.28. Robust security compared to earlier versions

Achieving RSN Wireless Security

Securing wireless networks and their associated devices requires significant effort, coupled with proper resources and vigilance. It is imperative that management reassess wireless risks at least monthly. The only effective way to maintain security is to constantly test and evaluate system security controls wherever wireless technology is deployed.

Firewall Protection for Wireless Networks

All wireless access points should traverse a network firewall for security reasons. New regulations such as the joint Payment Card Industry (PCI) security policy governing Visa, MasterCard, Discover Card, and American Express have mandated that firewalls be used when merchant organizations process credit transactions on the network. The wireless firewall itself should be separate from the existing Internet firewall. It would be difficult or impossible to successfully combine the two functions into a single firewall. Figure 7.29 provides an overview of using a firewall with wireless access points.

We've spent quite a bit of time discussing firewalls and wireless access controls. It is time to discuss methods for detecting intrusion to the network. Intrusion detection systems have been in the marketplace for more than 10 years. Every organization should have intrusion detection systems in place.

Intrusion Detection

Network intrusion detection systems (IDS) function in a manner similar to virus detection or a burglar alarm. The objective is to inform the administrator of a suspected intrusion or attack occurring. Constant monitoring is necessary in order to receive the benefit of intrusion detection; otherwise, intrusion detection is no more valuable than an audit log of past history. An improved version of intrusion detection is the intrusion prevention system (IPS). The IPS concept is to ensure that the attack is blocked immediately upon detection. New standards refer to IDS and IPS systems as one combined intrusion detection and prevention system (IDPS).

Firewall protecting wireless access points

Figure 7.29. Firewall protecting wireless access points

There are two types of intrusion detection systems:

Host based

The host-based system monitors activity on a particular computer host or device such as a router. Attacks on other devices will not be seen by the host-based IDPS. To avoid confusion, the host-based IDS (HIDS) is now referred to as a host-based intrusion detection and prevention system (HIDPS).

Network based

Network IDS systems observe traffic in a manner similar to a packet sniffer. Network-based IDS (NIDS) is now referred to as a network intrusion detection and prevention system (NIDPS). The network IDPS monitors activity across a network link. The IDPS can see attacks on promiscuous connections, but not across discrete switched network connections. The design of a network switch can prevent an IDPS from detecting attacks occurring on systems connected to the other switch ports.

There are three technical methods of detecting a network intrusion:

Statistical

The statistical system uses a calculation of network traffic, CPU, and memory loading to determine whether an attack is occurring. Statistical systems are prone to false alarms because the traffic patterns of most networks are sporadic. The statistical system offers the advantage of being able to detect new attacks that might otherwise go unnoticed if a signature-based system were in use.

Signature

Signature-based IDPS relies on a database of attack techniques. The signature-based IDPS is similar in design to a signature-based virus scanner. The IDPS is looking for behaviors that indicate a particular type of known attack. Unfortunately, the signature-based IDPS cannot detect attacks that are not listed in its database.

Neural

Neural-based learning networks are being implemented on intrusion detection systems. The objective is to create a learning system that is a hybrid between statistical- and signature-based methods.
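The first two detection methods can be sketched in a few lines. The following toy example is illustrative only (the signature patterns and traffic numbers are invented for the sketch, not taken from any real IDPS product): signature-based detection matches traffic against a database of known attack patterns, while statistical detection flags traffic that deviates sharply from a learned baseline.

```python
import statistics

# Hypothetical signature database: byte patterns tied to known attacks.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
}

def signature_check(packet: bytes):
    """Signature-based: flag traffic that matches a known attack pattern."""
    return [name for pattern, name in SIGNATURES.items() if pattern in packet]

def statistical_check(history, current, threshold=3.0):
    """Statistical: flag traffic volume far outside the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0
    return abs(current - mean) / stdev > threshold

alerts = signature_check(b"GET /../../etc/passwd HTTP/1.0")
spike = statistical_check([100, 110, 95, 105, 90], 900)  # sudden traffic surge
```

Note how the sketch mirrors the trade-off described above: `signature_check` misses anything absent from its database, while `statistical_check` fires on any anomaly, including harmless ones.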

Figure 7.30 illustrates an IDPS on the network.

Intrusion detection and prevention system

Figure 7.30. Intrusion detection and prevention system

Intrusion detection systems are helpful for identifying network attacks in progress. Some of the more successful techniques make a server or subnet appear to be an enticing target for the attacker. The purpose of these systems is to serve as a decoy target. An attack on the decoy provides early warning to the appropriate personnel. The decoy can be a high-interaction system simulating a production environment or a low-interaction static host. There are two basic styles of decoys:

Honey pot

The honey pot is a sacrificial server placed in such a manner as to attract the interest of the attacker. The honey pot server has no legitimate business value other than alerting the organization of an attack. The honey pot utilizes host-based IDPS or network-based IDPS.

Honey net

A honey net is a sacrificial subnet with a few machines designed to attract the interest of the attacker. All traffic from the honey net is considered suspicious because no real production activity is taking place. The purpose of this design is to allow security personnel the opportunity for advance notice of a potential attack against real production.

We have discussed firewalls, remote access, and intrusion detection. Now it is time to discuss the encryption methods used to hide data from prying eyes. Encryption provides a method of hiding data from other people.

Encryption Methods

Encryption systems provide a method of converting clear, readable text into unintelligible gibberish. Decryption converts the gibberish back into a readable message. Encryption and decryption systems have been in use for thousands of years. As a CISA, you are expected to understand the two basic types of encryption systems: private-key systems and public-key systems.

Private Key

Private-key encryption systems use a secret key, which is shared between the authorized sender and the intended receiver. Private-key systems contain two basic components. The first component is the mathematical algorithm for scrambling and unscrambling the message (encrypting and decrypting). The second component is the mathematical key used as a randomizer in the encryption algorithm. The longer the key length, the stronger the security provided.

A single secret key is carefully shared between the sender and receiver. This is referred to as symmetric-key cryptography (see Figure 7.31). Symmetric-key cryptography is very fast, because the same key is used on both ends. The drawback is that the key must be protected with the highest possible diligence. Anybody who has a copy of the secret key can read the message. Examples of symmetric-key (secret key) cryptography include Data Encryption Standard (DES), which is now obsolete, and the new Advanced Encryption Standard (AES) designed by two Belgian researchers, Joan Daemen and Vincent Rijmen.

Because of the secret-key design of symmetric cryptography, encryption and decryption is very fast and efficient. The drawback is that the secret-key design cannot be used for digital signatures. To do so would expose the key to outsiders. Exchanging the secret key in symmetric cryptography is a major problem.
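The defining property of symmetric cryptography — one shared key both encrypts and decrypts — can be shown with a deliberately simplified sketch. This is a toy XOR keystream cipher built from a hash function, invented purely for illustration; it is not AES or any production algorithm and must never be used to protect real data.

```python
import hashlib

def keystream(secret: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared secret (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def symmetric_crypt(secret: bytes, data: bytes) -> bytes:
    """XOR with the keystream: the very same call encrypts and decrypts."""
    ks = keystream(secret, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"shared secret key"                       # must be protected!
ciphertext = symmetric_crypt(secret, b"Attack at dawn")
plaintext = symmetric_crypt(secret, ciphertext)     # same key reverses it
```

The sketch also makes the stated drawback concrete: anyone holding `secret` can run the identical call and read the message, which is why the key exchange is the weak point.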

Public Key

Asymmetric cryptography is referred to as public-key cryptography. This design utilizes a separate pair of keys for encryption and decryption. The key pair is composed of a secret key protected by the owner and a second public key that is freely distributed. These two keys are mathematically related to each other. The secret key and public key are generated from supersized prime numbers. It is practically impossible for a cryptographic hacker to derive the prime numbers or related keys in use.

The strength of public-key cryptography depends on the algorithm used and the encryption key length. To encrypt a message, the sender would use their own secret key and public key, plus the public key of the intended recipient. Figure 7.32 shows the process of encrypting by using public-key (asymmetrical) cryptography.
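The mathematical relationship between the two keys can be demonstrated with textbook RSA. The primes below (61 and 53) are classroom-sized values chosen only for illustration; real systems use enormous primes plus padding schemes. The sketch shows the simplest case of the pairing: what the public key encrypts, only the matching private key can decrypt.

```python
def make_keypair(p: int, q: int, e: int = 17):
    """Build a toy RSA key pair from two primes (real keys use huge primes)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)       # modular inverse (Python 3.8+): ties d to e
    return (e, n), (d, n)     # (public key, private key)

public, private = make_keypair(61, 53)
message = 42
ciphertext = pow(message, public[0], public[1])      # encrypt: public key
recovered = pow(ciphertext, private[0], private[1])  # decrypt: private key
```

Even in this toy, recovering the private exponent requires factoring n back into its primes, which is the hard problem that protects full-sized keys.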

Symmetric-key cryptography

Figure 7.31. Symmetric-key cryptography

Encryption using public-key system

Figure 7.32. Encryption using public-key system

The process of decrypting the message returns it to readable text. To decrypt, the recipient uses their secret key and public key, plus the public key of the sender. The basic concept is that you need only three keys to unlock the data (decryption). If any of these values were missing, decrypting the file would be impossible. Figure 7.33 shows the process of decrypting by using public-key cryptography.

Note

Although there are more-complex systems, all it takes for confidentiality is to encrypt by using the sender's keys plus the receiver's public key (three keys). The receiver can decrypt by using the sender's public key with their own private key (both public keys plus their own private key). You never use, or need, all four keys to encrypt or decrypt.

Note

Data in storage is seldom encrypted with public key systems because it is 1,000 times slower. Symmetric (same key) systems are used for storage. Public-key systems are used during data transmission.

The design of public-key cryptography eliminates the need to exchange a secret key between the sender and receiver. Public-key cryptography is designed to allow digital signing of files.

Decryption using public-key system

Figure 7.33. Decryption using public-key system

Control of Encryption Systems

The greatest vulnerability in encryption systems is the inherent lack of control. Encryption algorithms are well designed; however, their application or management may be poor. Encryption software and encryption keys should be managed as software applications under the system development life cycle (SDLC). Here are a few tips:

  • Use of encryption should be implemented for intellectual property and regulatory compliance.

  • Each application of encryption (use) must be formally authorized by management. A user must never be allowed to encrypt files that management cannot decrypt without the user.

  • Encryption keys must be individually managed and unique to each task.

  • The encryption keys need to be generated on a system that is physically and logically isolated from other systems (separation of duties). Key transfer is via read-only media.

  • Encryption keys should be stored in a different encrypted format. The term key wrapping refers to encrypting an encryption key by using a different algorithm. This concept is intended to prevent threats to the encryption key itself being exposed as standing data.

  • Encryption keys need to be tracked, similar to checking out books at the library.

  • The users should never have direct access to encryption keys (separation of duties).

  • The use of specific encryption keys should be limited to prevent overexposure or violating the separation of duties. For example, the encryption keys used for data backups cannot be used for encrypting email correspondence.

  • The use, archiving, and destruction of encryption keys requires a formal review. Destruction of encryption keys cannot occur without formal approval of management.
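The key-wrapping tip above can be sketched as follows. This is a deliberately minimal illustration of the concept — storing a data-encryption key only in encrypted form under a separate key-encryption key. Real systems use a purpose-built algorithm such as AES Key Wrap; the XOR pad here is a stand-in invented for the sketch.

```python
import hashlib
import secrets

def toy_wrap(kek: bytes, data_key: bytes) -> bytes:
    """Wrap (encrypt) a data key under a key-encryption key.

    Illustration only: production systems use AES Key Wrap or similar.
    """
    pad = hashlib.sha256(kek).digest()[:len(data_key)]
    return bytes(a ^ b for a, b in zip(data_key, pad))

kek = b"key-encryption key, held offline"
data_key = secrets.token_bytes(16)   # key that encrypts the actual data
stored = toy_wrap(kek, data_key)     # only the wrapped form is ever stored
unwrapped = toy_wrap(kek, stored)    # XOR wrapping is its own inverse
```

The point of the design survives even in the toy: an attacker who steals the stored (wrapped) key has nothing useful without the separately guarded key-encryption key.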

Just imagine what would happen if the organization was backing up data using encryption, and the encryption keys were lost. It could get even worse when you consider the implications to business continuity. Heaven forbid that the backup was required for disclosure under a court order for electronic discovery. Each of these conditions indicates a management failure in which integrity of the organization is lost. Failure to control the use of encryption leads to dramatic consequences.

Digital Signatures

A digital signature is intended to be an electronic version of a personal signature. The purpose is to indicate that the message was sent by a uniquely identified individual. A digital signature is created by running a hash utility against a file of any size, which generates a fixed 128-bit or 160-bit output. Figure 7.34 illustrates the process of generating a hash file and digital signature.
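The fixed-size hash outputs are easy to verify for yourself with Python's standard library: MD5 produces a 128-bit digest and SHA-1 a 160-bit digest, regardless of input size. (The document bytes below are an invented example.)

```python
import hashlib

document = b"Contract: pay the bearer $100"   # input can be any size

md5_digest = hashlib.md5(document).digest()   # 128-bit (16-byte) output
sha1_digest = hashlib.sha1(document).digest() # 160-bit (20-byte) output

# The small fixed-size digest, not the whole file, is what gets signed.
print(len(md5_digest) * 8, len(sha1_digest) * 8)
```

Signing the digest rather than the file keeps the signature small and fast to compute even for very large files.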

Generating a digital signature

Figure 7.34. Generating a digital signature

The digital signature is attached to the message file, and both are sent to the recipient. The receiver must use the message file, digital signature file, and the sender's public key to test the validity of the signature. Without testing the digital signature, there is no indication of its authenticity. Figure 7.35 illustrates the process of verifying a digital signature.

Verifying a digital signature

Figure 7.35. Verifying a digital signature

A digital envelope offers a method for sending an encrypted message and its key together. The message is encrypted with a session key. The session key is then encrypted a second time, using the recipient's public key. After transmission, the recipient decrypts the session key by using their own private key, and then decrypts the message by using the recovered session key. The double-step encryption ensures that no one else is able to decrypt the key in transit. Figure 7.36 illustrates the process of using a digital envelope.

Using a digital envelope

Figure 7.36. Using a digital envelope

Elliptic-Curve Cryptography

The newest approach to encryption algorithms is the elliptic curve. The concept of an elliptic curve is to generate a three-dimensional space, with the encryption key referring to a reference point within those dimensions. It would be extremely difficult to calculate keys generated from an elliptic curve. A small elliptic-curve key is exponentially stronger than one generated by linear math.

Elliptic-curve cryptography is used with wireless encryption. The implementation within wireless encryption has been completely compromised; however, the elliptic-curve algorithm itself is essentially strong. Unfortunately, a 97-bit elliptic-curve key has been cracked by a distributed effort over the Web, by the same group that cracked the 512-bit RSA encryption key. It took twice as long, but the cracking attack was successful.

Quantum Cryptography

Quantum cryptography is based on polarization metrics of random photon light pulses. This promising technology is not yet available as a commercial product. However, the overall design appears to be quite strong.

Public-Key Infrastructure

The process of sharing encrypted files between various parties is referred to as public-key infrastructure (PKI). You should have a basic understanding of this process for the CISA exam. Public-key infrastructure is designed to provide a level of trust and authentication between users. This infrastructure is built by using the public-key encryption system we discussed earlier in this chapter. Public-key infrastructure comprises four basic components:

Certificate authority (CA)

A user contacts a certificate authority to procure a digital certificate. The digital certificate contains the user's contact information along with unique identifying characteristics. The certificate authority will vouch for the authenticity of the user after the certificate is issued. Certificate authorities typically follow the X.509 exchange standard.

Registration authority (RA)

Some large customers, such as IBM, Novell, and Microsoft, issue certificates from a block of certificates acquired through a certificate authority such as Entrust, Trustwave, or VeriSign. The RA is delegated bookkeeping and issuing functions by the CA. The certificate authority maintains the certificates that have been issued and verifies their authenticity.

Certificate revocation list (CRL)

Digital certificates are checked to ensure that they are valid at the time of use. A certificate revocation list is maintained by the certificate authority to indicate that certificates have expired or are revoked. This process allows invalid certificates to be cancelled.

Certification practice statement (CPS)

A certification practice statement (CPS) is a disclosure document that specifies how a certificate authority will issue certificates. The CPS specifically states how PKI participants will issue, manage, use, renew, and revoke digital certificates. It does not govern interoperation between certificate authorities; rather, it documents the practices (procedures) of a single certificate authority.
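The validity check a relying party performs against the CA can be sketched in a few lines. The certificate fields, serial numbers, and CRL contents below are invented for illustration; a real implementation parses X.509 structures and checks cryptographic signatures as well.

```python
from dataclasses import dataclass

@dataclass
class Certificate:
    serial: int
    subject: str
    expires: int          # expiry year, kept simple for the sketch

# The CA's certificate revocation list: serials no longer trusted.
crl = {1002, 1007}

def is_valid(cert: Certificate, now: int) -> bool:
    """Accept a certificate only if it is unexpired and not on the CRL."""
    return cert.expires >= now and cert.serial not in crl

good = Certificate(serial=1001, subject="alice@example.com", expires=2030)
revoked = Certificate(serial=1002, subject="mallory@example.com", expires=2030)
```

Note that both tests matter independently: an unexpired certificate can still be revoked, and an expired one is rejected even if it never appeared on the CRL.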

Figure 7.37 shows the concept of registering and acquiring a PKI digital certificate from the certificate authority.

Getting a PKI digital certificate

Figure 7.37. Getting a PKI digital certificate

After acquiring a digital certificate, the user will present it during a transaction. The receiver will check the certificate against the certificate authority's database. If the certificate is valid, the transaction will continue. Certificates can be checked against other authorities, using a cross-verification process. Figure 7.38 shows the concept of presenting and using a PKI digital certificate.

Using a PKI certificate

Figure 7.38. Using a PKI certificate

Practical Example of Digital Certificates

Consider the process for flying on an airline. The airline issues an electronic ticket to the buyer. The electronic airline ticket is a digital certificate in a human-readable form, while the computer's digital certificate is not readable by humans. Let's walk through the amazing similarity between processing an airline e-ticket and the computer using a PKI digital certificate:

  1. The airline issues an electronic certificate, as long as the tickets are paid for and the buyer can prove their identity. This means that the airline is the certificate authority (CA).

  2. A travel agent may handle the transaction to get an electronic airline ticket issued (certificate). The travel agent is acting as a registration authority (RA) for the airline (CA).

  3. The buyer receives an electronic copy of their airline ticket (electronic certificate). The original data is held by the airline in their certificate repository (database of tickets sold).

  4. Prior to boarding the aircraft, the electronic ticket code is verified against the airline's database repository to determine whether the ticket is valid. The certificate is checked by the CA to see whether it is valid. If it checks out as valid, you board the airplane.

    Tickets issued through other airlines can be verified by using cross-certification between the two database repositories.

    Use of the electronic ticket is governed by the ticket revocation list and airline statement of policies. This is synonymous with the certificate revocation list (CRL) and certification practice statement (CPS).

Once again, the only real difference between the airline ticket example and your computer is the terminology—plus, a human can actually read an airline ticket. Now we need to move into email security.

Secure Multipurpose Internet Mail Extension

Secure Multipurpose Internet Mail Extensions (S/MIME) was developed so that people could send email across the Internet without having to worry about whether the recipient could read it. The original design was Privacy Enhanced Mail (PEM), developed in 1993. S/MIME was created in 1999 and incorporated several enhancements including support for the newer SHA-1 hash and MD5 hash. Additional support was added for the RSA encryption system and signing time attributes. S/MIME provides authentication of the sender and receiver and also verifies message integrity. S/MIME is the current standard for secure email and attachments. Figure 7.39 illustrates the reason why email security is so important.

Email security issues

Figure 7.39. Email security issues

Encryption-Key Management

Encryption-key management is critical to ensure confidentiality and integrity of encrypted data. There are several risks with regard to encryption keys. First, the key itself must be protected from theft or illegal copying. Second is the requirement to use several different keys, one for each purpose. It is important not to overuse any key, because each use exposes it to compromise. Unfortunately, this multitude of keys also creates a significant administrative burden, which if mishandled can turn into a catastrophe.

Note

Every auditor needs to investigate how management is controlling and governing the use of encryption. The critical success factor in encryption is to physically and logically protect the secret key. All keys undergo a full seven-phase life cycle, just like application software.

Encryption keys must be stored with a great deal of care. The keys will need to be managed throughout their life cycle. At a future date, a key will be marked for destruction. Creating, managing, storing, and destroying the keys is of particular concern to the auditor. Few organizations do a good job.

Network Security Protocols

The world of safe e-commerce is built on a handful of network security protocols. These security protocols include the following:

  • Pretty Good Privacy (PGP) for personal file encryption.

  • Secure Sockets Layer (SSL), which is used by most Internet websites for HTTPS sessions. The newer implementation is called Transport Layer Security (TLS).

  • Secure Hypertext Transfer Protocol (HTTPS), which uses SSL.

  • IP Security (IPsec) protocol, using the authentication header (AH) and encapsulated security payload (ESP).

Without these protocols, it would be impossible for businesses, individuals, and the government to conduct confidential transactions. Figure 7.40 shows where these protocols fall within the OSI model.

Security protocols and where they fall within the OSI model

Figure 7.40. Security protocols and where they fall within the OSI model

E-commerce is becoming more important in today's IT landscape. Figure 7.41 illustrates several types of e-commerce in use today.

There is one special type of payment protocol that has been developed to protect credit card accounts. The Secure Electronic Transaction (SET) protocol provides a method for purchasing over the Internet without disclosing the credit card number to the merchant.

The merchant opens an account with a payment system such as PayPal's Payflow. A customer makes a purchase by using the merchant's shopping cart. At the appropriate time, the shopping cart passes the transaction to the SET payment gateway. The customer enters their credit card number on the SET gateway, completely out of sight of the merchant. The SET payment gateway sends the merchant a transaction authorization, which the merchant treats as approval to ship the product. The SET gateway system deposits the funds into the merchant's bank account. This prevents a questionable merchant from viewing a customer's credit card number. It also prevents credit card numbers from being retained in insecure shopping cart databases.

Types of e-commerce

Figure 7.41. Types of e-commerce

Design for Redundancy

There is more to protecting information assets than just encryption. Communication networks must be designed for redundancy. One method of improving redundancy is to use alternate telecommunications routing. We discussed meshed networks in Chapter 4. Alternate routing provides multiple communication paths in case the normal path fails. Alternate routing can be used in a local area network or a wide area network.

Another technique to ensure availability and integrity of information assets is the use of mirrored servers. Two servers acting as one are said to be mirrored. Mirrored servers are also known as high-availability servers. One of the servers acts as a primary server. The second server is the failover server, which runs in the background until the primary server dies. The second failover server assumes all processing responsibilities if the primary should fail for any reason. Figure 7.42 shows the basic design of a mirrored, or high-availability, server pair.

Computer disk systems are known to experience failures. The loss of a disk system could result in the loss of valuable data. A tape backup system can restore the data at the cost of additional downtime. A solution to this problem is the implementation of redundant hard disks. A Redundant Array of Independent—or Inexpensive—Disks (RAID) provides an excellent method of protecting information assets. Special software drivers copy the data files onto different hard disk controllers on two separate sets of hard disks. Either set of hard disks is capable of running the system without data loss. Figure 7.43 shows the basic layout of a RAID system.
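The mirroring idea behind both high-availability servers and RAID can be sketched with two stand-in "disks." This toy uses dictionaries in place of hardware, purely to show the principle: every write lands on both copies, so either copy alone can serve reads after a failure.

```python
# Two independent "disks" represented as block-number -> data mappings.
disk_a: dict[int, bytes] = {}
disk_b: dict[int, bytes] = {}

def mirrored_write(block: int, data: bytes) -> None:
    """Every write goes to both disks, keeping the copies identical."""
    disk_a[block] = data
    disk_b[block] = data

def read_with_failover(block: int) -> bytes:
    """Read from the primary disk; fall back to the mirror if it is gone."""
    primary = disk_a.get(block)
    return primary if primary is not None else disk_b[block]

mirrored_write(0, b"ledger page 1")
disk_a.clear()                       # simulate total failure of one disk
survivor = read_with_failover(0)     # the data survives on the mirror
```

Real RAID adds hardware controllers, parity levels, and hot-swap rebuilds, but the availability argument is exactly this one: no single disk failure loses data.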

Mirrored, or high-availability, servers

Figure 7.42. Mirrored, or high-availability, servers

RAID system

Figure 7.43. RAID system

Telephone Security

The security of the telephone system is a major concern. Telephone hackers, known as phreakers, are notorious for attempting to steal telephone service.

The telephone PBX—which stands for Private Branch Exchange—needs to be protected by using the same techniques as those used to protect a network server or router. Care must be given to the life cycle controls of the PBX. Maintenance accounts and unauthorized access are major concerns.

Newer phone systems use voice-over-IP networks to save money. This introduces the problems of network security controls to telephone systems. As an auditor, you should be aware of the issues regarding IP networks for both data and voice.

Technical Security Testing

Clients should undergo a regular schedule of security assessment by using the control self-assessment (CSA) and technical tools such as port scanners. We discussed the issue of how all tests must be performed under controlled procedures. You should recall that access to testing tools and authority to run the test must be tightly controlled.

Management needs to promote a regimented approach to discovery and resolution of vulnerabilities. Several types of testing are needed to ensure that security weaknesses are corrected. Let's look at the common types of tests:

Network scanning

This is a fast method for discovering the hosts on the network, also called host enumeration. Most scanners are fully automated, yet they can also overload systems with service requests until they crash. Network scanning does not find all vulnerabilities. It should be conducted weekly.
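Host and service discovery boils down to attempting connections and recording which ports answer. The following minimal sketch scans a handful of TCP ports; to stay self-contained it opens its own local listener as the "discovered" service (the host, ports, and timeout are illustrative choices, not fixed values).

```python
import socket

def scan_ports(host: str, ports, timeout: float = 0.2):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means connected
                open_ports.append(port)
    return open_ports

# Open a local listening socket so the scan has something to find.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))       # 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

found = scan_ports("127.0.0.1", [port, port + 1])
listener.close()
```

Even this toy hints at the caution above: an aggressive loop with no timeout or rate limit can flood a fragile target with connection requests.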

Vulnerability scanning

Individual systems can be polled on all available service ports. This is known as vulnerability scanning. Versions of software are checked, and particular types of service requests are sent by the scanning software to the target computer. This type of test usually triggers an intrusion detection alarm and may crash the target computer or cause software damage. This type of testing should be conducted monthly.

Password cracking

A special tool is used to test the strength of user passwords. It works by hashing a large set of candidate passwords built from dictionaries in many languages. Each hash output is matched against the system password file. When a match is found, the word used to create the match equals the actual password. Password-cracking utilities perform all the simple conversions, anagrams, palindromes, and common character substitutions just as a user would, so weak passwords are easily identified. Care must be taken to ensure that the right people perform this test under totally supervised conditions. Otherwise, password abuse will become a problem.
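The hash-and-compare loop at the heart of a password cracker is short enough to sketch. The "stolen" password file, usernames, and word list below are invented for the illustration, and unsalted MD5 is used only because it keeps the example tiny; it also shows why such weak storage is dangerous.

```python
import hashlib

# Hypothetical stolen password file: username -> unsalted MD5 hash.
password_file = {
    "alice": hashlib.md5(b"sunshine").hexdigest(),
    "bob": hashlib.md5(b"Tr0ub4dor&3").hexdigest(),
}

dictionary = ["password", "letmein", "sunshine", "dragon"]

def crack(hashes: dict, words: list) -> dict:
    """Hash each candidate word and look for matches in the password file."""
    found = {}
    for word in words:
        digest = hashlib.md5(word.encode()).hexdigest()
        for user, stored in hashes.items():
            if digest == stored:
                found[user] = word
    return found

recovered = crack(password_file, dictionary)  # weak passwords fall instantly
```

Only the dictionary word falls; the stronger password survives because it never appears in the candidate list, which is the entire argument for password-strength policies.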

Log review

All those great log files are useless if nobody reads the contents. Log reviews identify deviations in change control and security policies. This underrated jewel of data can be the auditor's best friend for determining whether operations are following their organizational policies. Unauthorized activities, security violations, and capacity problems can be quickly discovered. Log reviews should be continuous.

Penetration testing

Penetration tests are commonly used to uncover vulnerabilities that an attacker might discover. It may take hours or days to penetrate the target. The U.S. National Security Agency has created a certification program for the level 1 INFOSEC Assessment Methodology (IAM) to be used for all systems depended on by the government, including those of service providers and prime contractors. Use in smaller areas of private industry is recommended but not required. The NSA level 2 INFOSEC Evaluation Methodology (IEM) provides the planning details and software tools to run a penetration test, and then report the results against the government-approved NSA baseline. The NSA IAM and IEM techniques will be of significant interest to auditors specializing in regulatory compliance. The NSA certification is the only technical assessment method approved and endorsed by the U.S. government. Penetration tests should be run annually.

Summary

As an IS auditor, you should be extremely interested in the implementation of information asset protection mechanisms by the client. There are numerous threats that could compromise administrative, physical, and technical controls.

You should understand how these controls have been implemented by the customer and what level of monitoring is occurring. Implementing controls without constant monitoring would be a waste of effort. Without effective monitoring processes, the client would be negligent.

This chapter has covered several technical methods that the CISA is expected to know. Be sure to read this chapter at least twice and study the definitions carefully.

Exam Essentials

Be able to evaluate the effectiveness of technical (logical) access controls.

Technical controls include access control mechanisms, encryption, firewalls, and intrusion detection and prevention capabilities. Technical access control mechanisms include passwords, access control lists, and biometrics for authentication.

Understand the perimeter defense mechanisms.

The network security infrastructure must provide sufficient perimeter defenses along with mechanisms to minimize loss from hackers, viruses, and worms. But the network is susceptible to attack by hacking, spoofing, spamming, and denial of service, along with other threats such as social engineering.

Know the purpose of the environmental controls used in the IT environment.

Environmental controls are necessary to prevent an interruption to the system's availability and to protect assets from loss. Environmental systems include power, water detection, heating and air-conditioning, fire detection, fire control, and humidity to prevent the buildup of static electricity.

Recognize the different types of technical attacks.

Passive attacks collect information to be used later in an active attack. Active attacks are designed to break down the defenses and execute the will of the attacker.

Understand the different motives of the malicious attacker.

Internal controls are used to prevent or detect most crimes committed by strangers and internal personnel. Remain aware that most theft is committed by someone known within the organization, because of access, motive, and time. The police refer to this concept as MOM: motive, opportunity, and means.

Understand how biometrics are used to judge the authenticity of a user.

Physical access controls are used to prevent loss of assets during safe storage, retrieval, operations, transport, and disposal.

Understand the need to implement physical access controls.

The goal of all perimeter controls is to ensure that only trusted and honest individuals are allowed access. The weakest method of authentication is using a password (type 1 authentication). A better method is type 2 authentication, based on possession of a device (ATM card, smart card, hard token) combined with a password (secret). It is possible for legitimate users to be denied access by the system (type 1 error, or false rejection). Conversely, illegitimate users may gain access by mistake (type 2 error, or false acceptance).

Understand the differences between public-key and private-key encryption systems.

Public-key infrastructure (PKI) provides authentication, integrity, and confidentiality between parties. A public-key system uses asymmetric cryptography with a public key that is shared and a secret private key that must be protected from disclosure. Secret-key systems use symmetric cryptography with a single shared secret key. The symmetric system is faster but fails to provide authentication. A compromised secret key will destroy confidentiality.

Recognize that management must exercise control over encryption.

Encryption algorithms do not manage themselves. It is the responsibility of management to control the use of encryption and encryption keys under a System Development Life Cycle (SDLC) model. Special handling is required with encryption keys, including methods of safe storage with separation of duties.

Recognize technical mechanisms.

Technical mechanisms such as server mirroring and RAID disk systems can be used to increase redundancy in order to promote better system availability. The redundant hardware increases fault tolerance for conditions involving hardware failure and possibly intruder attacks.

Understand the purpose of different VPNs to protect remote access.

Virtual private networks use encryption for secure communication between systems on different networks. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) implement fully encrypted sessions running from the sending computer to the receiving computer. SSL and TLS require the use of digital certificates. An IPsec VPN uses VPN gateways for transmission over a wide area network. With IPsec, the encryption is between the gateways, not the individual systems. IPsec gateways use the address assigned by their ISP on the external interface.
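A TLS session of the kind described above can be sketched with Python's standard `ssl` module. The host name below is a placeholder; the point is that the default context loads trusted CA certificates and enforces certificate validation, which is how TLS authenticates the remote endpoint.

```python
import socket
import ssl

# Default context: loads the system's trusted CA certificates and
# requires a valid certificate matching the server's host name.
context = ssl.create_default_context()

def tls_version(host: str, port: int = 443) -> str:
    """Open a TLS-protected connection and return the negotiated protocol.

    "host" is a placeholder -- any HTTPS server would do.
    """
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

print(context.verify_mode == ssl.CERT_REQUIRED)  # certificate checks enforced
print(context.check_hostname)                    # host name verification on
```

Contrast this with IPsec: here the encrypted session terminates inside the application on each end system, whereas an IPsec tunnel terminates at the gateways, leaving the local network legs unencrypted.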

Understand intrusion detection and prevention systems.

Intrusion detection and prevention systems are designed to function as a computer-based burglar alarm. The system can be implemented using either a host-based or a network-based method. An IDPS will react only to those perceived attacks that occur on a system with host-based IDPS installed, or that are transmitted down a network link actively monitored by a network-based IDPS. Attacks on all other systems are invisible to the IDPS. The IDPS identifies attacks by using one of three methods: comparing traffic against a database of known attack signatures, comparing changes against a statistical baseline, or using a neural network with knowledge-based rules.
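The first of the three methods, signature matching, reduces to scanning observed payloads for known attack patterns. A minimal sketch, with deliberately simplistic placeholder signatures:

```python
# Minimal sketch of signature-based intrusion detection. The signatures
# below are illustrative placeholders, not a real rule set.
SIGNATURES = {
    "sql-injection": b"' OR 1=1",
    "path-traversal": b"../../",
}

def detect(payload: bytes) -> list:
    """Return the names of all signatures that match the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(detect(b"GET /../../etc/passwd"))  # ['path-traversal']
print(detect(b"GET /index.html"))        # []
```

The limitation is visible in the code: anything not in the signature database passes silently, which is why statistical-baseline and neural-network methods exist to catch novel attacks.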

Review Questions

  1. What is the best method for an organization to allow its business partners to access the company intranet across the Internet?

    1. Shared virtual private network

    2. Shared lease line

    3. Internet firewall

    4. Network router with MLSP

  2. Digital signatures are primarily designed to provide additional protection with electronic messages in order to ensure which of the following?

    1. Message deletion

    2. Message read by unauthorized party

    3. Sender verification

    4. Message modification

  3. Internet communication requires more security. To audit Internet security and access control, the IS auditor will first need to examine what?

    1. Validity of password changes

    2. Architecture of the client/server application

    3. Network architecture and design

    4. Virus protection and firewall servers

  4. Which of the following is the most appropriate method to ensure confidentiality in data communications?

    1. Secure hash algorithm (SHA-1)

    2. Virtual private network (VPN)

    3. Digital signatures

    4. Digital certificates with public-key encryption

  5. What is the most effective method for preventing or limiting the damage caused by a software virus attack?

    1. Access control software configured for restricted setting

    2. Updated virus signatures

    3. Antivirus policies and standards

    4. Data download standards with administrative review

  6. What is the primary purpose of a network firewall?

    1. Protect company systems from attack by external systems

    2. Protect downstream systems from all the internal attacks

    3. Protect all modem-connected systems from Internet attacks

    4. Protect attached systems from attacks running through the firewall

  7. Which of the following is the least dependable form of biometrics?

    1. Hand geometry

    2. Facial recognition

    3. Signature analysis

    4. Iris scanning

  8. The IS auditor has just completed a review of an organization. Which of the following weaknesses would be considered the most serious?

    1. Lack of separation of duties for critical functions.

    2. Weak password controls without effective policy enforcement.

    3. Business continuity plans include noncritical applications.

    4. Network server is not backed up regularly.

  9. What is the purpose of the DMZ (demilitarized zone) concept for Internet communications?

    1. Demilitarized refers to a safe zone that is protected from all Internet attacks.

    2. Subnet that is semiprotected and allows external access.

    3. Protected subnet implemented using a fifth-generation firewall.

    4. Safeguard control for communication allowing access to internal production servers.

  10. An e-commerce website needs to be monitored to detect possible hacker activity. What would be the best security component to perform this function?

    1. Third-generation firewall

    2. Honey net ACL router with built-in sniffer software

    3. Elliptic data encryption for privileged files

    4. Statistical or signature-based detection software

  11. The auditee organization decided to implement single sign-on (SSO) for all their users. Their implementation will be using logon ID and passwords for access control. What situation should they be concerned about?

    1. Password aging must be set to force unique password changes every 30 to 60 days using alphanumeric characters.

    2. The user's system access will have protection; however, password changes will be more difficult because of synchronization issues between servers.

    3. Unauthorized login would have access to the maximum resources available on the network.

    4. The servers will need memory and CPU upgrades to handle the extra workload generated by SSO.

  12. What is the primary purpose of intrusion detection protection systems (IDS/IDPS) when compared to firewall systems?

    1. A firewall blocks all attacks; IDS informs us if the firewall was successful.

    2. IDS will notify the system administrator at every possible attack that has occurred, whether successful or unsuccessful.

    3. A firewall reports all attacks to the IDS.

    4. IDS logs and notifies the system administrator of any suspected attacks but may not recognize every attack.

  13. Which of the following statements is true concerning asymmetric-key cryptography?

    1. The sender and receiver have different keys.

    2. The sender and receiver use the same key.

    3. The sender encrypts the files by using the recipient's private key.

    4. Asymmetric keys cannot be used for digital signatures.

  14. The IS auditor is auditing the controls related to employee termination. Which of the following is the most important aspect to be reviewed?

    1. Company staff is notified about the termination.

    2. All login accounts of the employee are terminated.

    3. Details of the employee have been removed from active payroll files.

    4. Company property provided to the employee has been returned.

  15. Which is the most important responsibility of the IS security person?

    1. Controlling and monitoring data security policies

    2. Promoting security awareness within the organization

    3. Establishing new procedures for IT and reviewing their legal accuracy

    4. System administration of the servers and database

  16. What method provides the best level of access control to confidential data being processed on a local server?

    1. Writing a history of all transaction activity to the system log for auditing.

    2. Processing of sensitive transactions requires a separate login and password.

    3. Application software uses internal access control rules to implement least privilege.

    4. System login access is restricted to particular stations or hours of operation.

  17. What is the primary purpose for using database views?

    1. Allow the user access into the database

    2. Provide a method for generating reports

    3. Allow the system administrator access to maintain the database

    4. Restrict the viewing of selected data

  18. Which of the following statements is true concerning an Internet worm?

    1. Able to travel independently through the systems

    2. Self-replicates and attaches itself to files during execution

    3. Uses a backdoor to access system resources

    4. Can be dormant until triggered by a particular date or time

  19. What is the issue with regard to the use of source routing?

    1. Source routing is a diagnostic tool used with firewalls.

    2. No issue with compensating controls.

    3. Source routing can bypass network defenses.

    4. Is a desired feature for network monitoring.

  20. The equal error rate (EER) or crossover error rate (CER) refers to which of the following?

    1. Firewalls

    2. Biometrics

    3. Encryption

    4. Separation of duties

  21. Which of the following is the best definition of minutiae?

    1. Characteristics data

    2. Detailed log data

    3. High-definition scan

    4. Minutes of meeting

  22. Complete the following statement with the best available choice: A ________ will subvert the kernel and bypass operating system security. It is ________ inside downloads and installs itself without the knowledge of the user.

    1. Denial of service, hidden

    2. Worm, impossible to detect

    3. Root kit, hidden

    4. Virus, impossible to detect

  23. Which of the following represents a control failure of IT governance?

    1. Independent contractors

    2. External audit

    3. Internal audit

    4. Self-monitoring

  24. Which of the following has the lowest correlation to the separation of duties concept?

    1. Change control

    2. Violation reporting

    3. Network firewalls

    4. Theft

  25. Why should the transportation and tracking of backup media be given a high priority?

    1. Backup media has a limited shelf life.

    2. Backups should be transported in a locked storage box.

    3. Backup media contains the organization's secrets.

    4. Use of encryption eliminates transportation and tracking issues.

  26. Which of the following techniques is used to prevent the encryption keys from being susceptible to an attack against standing data?

    1. Key wrapping

    2. Key generation

    3. Symmetric-key algorithm

    4. Asymmetric-key algorithm

  27. Complete the following statement with the best answer. The ________ access controls are ________ managed with _______ approval requirements for the highest possible level of security.

    1. Mandatory, locally, manager

    2. Discretionary, centrally, formal

    3. Discretionary, individually, manager

    4. Mandatory, centrally, formal

  28. Which of the following VPN methods will transmit data across the local network in plain text without encryption?

    1. Secure Sockets Layer (SSL)

    2. IPsec

    3. Transport Layer Security (TLS)

    4. Layer 2 Tunneling Protocol (L2TP)

  29. Complete the following statement: The auditor can use ________ as a fast method for discovering the hosts on the network, and ________ to identify all available service ports.

    1. Vulnerability scanning, log review

    2. Penetration testing, host enumeration

    3. Host enumeration, vulnerability scanning

    4. File mount logs, vulnerability scanning

  30. Which of the following VPN methods is used to transmit the payload and hide internal network addresses with encryption?

    1. IPsec tunnel

    2. Secure Sockets Layer (SSL)

    3. IPsec transport

    4. Transport Layer Security (TLS)

  31. Which encryption system is primarily used in private industry for transportation rather than storage?

    1. Symmetric-key encryption

    2. Asymmetric-key encryption

    3. Secret keys

    4. Public keys

  32. What is the best definition of stateful inspection?

    1. History and nature of connectionless requests

    2. Packet-filtering firewall with application proxy service

    3. Internal control mechanism designed into the operating system kernel

    4. History and nature of connection-oriented requests

  33. Which of the following access control models is used for distributed management?

    1. Discretionary

    2. Mandatory

    3. Explicit

    4. Formal

  34. What is the most important reason for management to control encryption by using a System Development Life Cycle model?

    1. To build better encryption algorithms.

    2. Poor management is the biggest threat.

    3. Encryption systems are complex.

    4. Cost overruns are common with encryption systems.

  35. Which of the following situations is the most important topic for an auditor to spend extra time investigating?

    1. The system configuration is being dictated by security staff.

    2. A firewall is configured to disable source routing.

    3. Dual power leads are being installed to the computer room.

    4. An all-in-one device controls security and processes user requests.

Answers to Review Questions

  1. A. The virtual private network (VPN) is the most flexible and least expensive solution for accessing company resources across the Internet.

  2. C. Digital signatures provide authentication assurance of the email sender. Digital signatures use the private key of the sender to verify identity.

  3. C. The IS auditor will need to understand the network architecture and design before being able to evaluate the security and access controls. Later, the architecture of the client/server application and virus protection will be of interest.

  4. B. The virtual private network (VPN) would ensure data confidentiality. A secure hash algorithm would identify that a file has been changed but will not provide confidentiality. Digital signatures are used to assess the identity of the sender but do not provide confidentiality.

  5. B. Maintaining updated virus signature files. Access control software is not directly responsible for limiting the virus attack. Antivirus policies and standards should require updated virus signature files in order to be effective. Data download standards will help; however, updated virus signatures are the best choice.

  6. D. The network firewall can protect only those systems that route communication through the firewall. The firewall cannot protect systems attached via modem. Insecure wireless networks are also a major threat.

  7. C. Signature analysis is the most undependable form of biometrics. Hand geometry and iris scanning are very dependable. Facial recognition is improving.

  8. D. The network server not being backed up regularly is the most significant threat to data integrity and availability. Weak passwords would be a lesser concern. The lack of separation of duties may be offset by compensating controls.

  9. B. The DMZ is a subnet that is semiprotected by the firewall and allows for external access.

  10. D. An IDPS with statistical or signature-based detection software would be the best choice.

  11. C. Any unauthorized logon would have access to all the server resources on the network. Password aging with unique passwords is a good idea anyway.

  12. D. The IDS keeps the transaction log and alerts the system administrator of any suspected attacks. The IDS can use statistical behavior or signature files to determine whether an attack has occurred.

  13. A. The sender and receiver each have their own public and private (secret) key pair. All the other statements are false. Asymmetric keys are definitely used for creating digital signatures. The sender would never use the recipient's private key, only the recipient's public key.

  14. B. The system access and login accounts of the employee should be terminated immediately. Company property is important, but a lesser concern than system access.

  15. A. Controlling and monitoring data security policies is the highest priority of the IS security person.

  16. C. Application controls should use internal access control lists to implement least privilege. System login restrictions are of less importance by comparison.

  17. D. Database views are used to implement least privilege and restrict the data that can be viewed by the user.

  18. A. An Internet worm is able to travel independently through systems, unlike a virus. The virus self-replicates and attaches itself to files during execution.

  19. C. Source routing should not be allowed to operate on network firewalls. It presents a significant risk by allowing a hacker to bypass router security settings.

  20. B. In biometrics, the trade-off between the false acceptance rate (FAR) and the false rejection rate (FRR) is known as the equal error rate (EER) or crossover error rate (CER).

  21. A. Minutiae are the collection of characteristics used in biometric data about a specific user (a user's biometric template). The process converts a high-resolution scan into a small set of unique characteristics.

  22. C. Root kits are malicious software designed to subvert the operating system security. Root kits compromise system security and use stealth to hide their presence. After a root kit is installed, the system is completely compromised.

  23. D. Self-monitoring represents a violation in separation of duties. A built-in reporting conflict exists without external monitoring by another individual. A person in the same job function cannot monitor their own work.

  24. D. Theft is not part of the separation of duties concept because anyone at any level can commit crimes of theft. Separation of duties applies to computers, people, processes, and facilities.

  25. C. Backup media must be tracked because it contains the utmost secrets of any organization. Media leaving the facility must be kept in locked storage boxes at all times. Tracking is required during transit to confirm its departure time and arrival. Some regulations require the use of encrypted backup tapes to protect the standing data. Remember, encrypting data increases security. Managing encryption requires more-involved handling procedures.

  26. A. Key wrapping is used to protect encryption keys from disclosure. Otherwise, encryption keys would be susceptible to the same attacks as standing data.

  27. D. Mandatory access controls are always centrally managed, with formal approval required to increase that individual's level of access.

  28. B. IPsec uses encryption between the VPN gateways. Data transmitted from the gateway to the local computer is not encrypted.

  29. C. Host enumeration provides a fast method for discovering all the hosts on the network. Vulnerability scanning will identify all the available service ports on the host computers. Neither of these processes should be performed during production hours. These scanning methods will activate intrusion detection alarms. Vulnerability scanning may crash the target computer.

  30. A. Tunnel mode of IPsec will encrypt both the payload and local network addresses. This hides the messages and prevents identification of the sender and recipient while the messages travel across the public Internet.

  31. B. Asymmetric-key encryption, also known as public-key encryption, is typically used for the transmission of data (electronic transportation). The other options are closely related distracters.

  32. A. Connectionless UDP requests were almost impossible to track in older firewalls. Stateful inspection collects the history and nature of connectionless requests to determine whether the remote request should be transmitted to the destination computer or discarded as hazardous.

  33. A. Distributed security uses discretionary access control. Decisions are based on local assessment of requirements and intended use. Distributed security is notorious for having consistency problems.

  34. B. The System Development Life Cycle is used to control the use of encryption and encryption keys. Poor management is the number one cause of failure.

  35. D. An all-in-one device may be violating the separation of duties. This creates undue risk in the interest of cutting costs (greed).
